
Edited by: Carlos Figuera, Universidad Rey Juan Carlos, Spain

Reviewed by: Andreu Climent, Fundación Hospital Gregorio Marañon, Spain; Olaf Doessel, Karlsruher Institut für Technologie (KIT), Germany

This article was submitted to Cardiac Electrophysiology, a section of the journal Frontiers in Physiology

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

The electrocardiographic imaging inverse problem is ill-posed, so regularization has to be applied to stabilize the problem and obtain a realistic solution. In this study, we assess (i) zero-order Tikhonov regularization (ZOT) in conjunction with the Method of Fundamental Solutions (MFS), (ii) ZOT regularization using the Finite Element Method (FEM), and (iii) L1-Norm regularization of the current density on the heart surface combined with FEM. Moreover, we apply different approaches for computing the optimal regularization parameter, all based on the Generalized Singular Value Decomposition (GSVD): Generalized Cross Validation (GCV), Robust Generalized Cross Validation (RGCV), ADPC, U-Curve, and the Composite REsidual and Smoothing Operator (CRESO) method. Both simulated and experimental data are used for this evaluation. Results show that the RGCV approach provides the best determination of the optimal regularization parameter with both FEM-ZOT and FEM-L1-Norm. For MFS-ZOT, however, GCV outperformed all the other regularization parameter choice methods in terms of relative error and correlation coefficient. Regarding the epicardial potential reconstruction, FEM-L1-Norm clearly outperforms the other methods on the simulated data but, on the experimental data, the FEM-based methods perform as well as MFS. Finally, FEM-L1-Norm combined with RGCV provides robust results for pacing site localization.

Non-invasive electrocardiographic imaging (ECGI) is a technique that reconstructs the electrical activity of the heart from electrocardiograms and a patient-specific heart-torso geometry. This clinical tool is used by electrophysiologists to understand the mechanisms underlying arrhythmias and to localize targets for ablation therapy, such as for atrial fibrillation (Haissaguerre et al.). Given the extracellular electrical potential u_{H} on the epicardial heart boundary Γ_{H}, the distribution of the electrical potential u_{T} in the torso domain Ω_{T}, and in particular at electrodes distributed on the body surface Γ_{ext}, can be obtained by solving the following Laplace equation:
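The equation itself did not survive extraction; a standard statement consistent with the surrounding description (the symbols u_{T}, u_{H} for the torso and epicardial potentials are assumed names) is:

```latex
\begin{cases}
\nabla \cdot \left( \sigma_T \nabla u_T \right) = 0 & \text{in } \Omega_T, \\
u_T = u_H & \text{on } \Gamma_H, \\
\sigma_T \nabla u_T \cdot n_T = 0 & \text{on } \Gamma_{ext}.
\end{cases}
```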

where σ_{T} stands for the torso conductivity tensor and n_{T} is the outward unit normal to the torso external boundary Γ_{ext}. This is what we call the forward problem. Now, given a body surface potential distribution, and knowing that the flux of potential over the body surface is zero, can we recover the distribution of the electrical potential on the heart surface? This is what we call the inverse problem in electrocardiography. In almost all of the works reported in the literature, the mathematical approach used for solving the inverse problem is based on a transfer matrix, which was first formulated by Barr et al. (

where

A recent work by Barnes and Johnston (

In this work we compare not only different methods for computing the transfer matrix, but also different regularization operators and different methods for optimizing the regularization parameter to assess how they perform on two sets of data: simulated and experimental.

To date, the regularization approach most commonly used to solve the electrocardiographic imaging inverse problem is the Tikhonov regularization defined by the following objective function:
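The objective function did not survive extraction; the usual Tikhonov form, with assumed symbols A (transfer matrix), x (epicardial potentials), b (body-surface potentials), and L (regularization matrix), reads:

```latex
\min_{x} \; \| A x - b \|_2^2 + \lambda^2 \| L x \|_2^2
```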

where

In the case where L = I (zero-order Tikhonov), the solution can be expressed using the singular value decomposition (SVD) of the transfer matrix A:

where u_{i}, v_{i}, and σ_{i} are, respectively, the columns of U, the columns of V, and the singular values of A.

The regularized solution can then be written as (Hansen,

It can be shown that the two terms of (5) can be written as (Johnston and Gulrajani,

and

where x_{LSS} denotes the least-squares solution.
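As an illustration of the SVD-based expression above, a minimal Python sketch of the zero-order Tikhonov solution (the names `A`, `b`, `lam` are our assumptions for the transfer matrix, body-surface potentials, and regularization parameter; this is not the study's own code):

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Zero-order Tikhonov solution via the SVD of A.

    The filter factors f_i = s_i^2 / (s_i^2 + lam^2) damp the
    contributions associated with small singular values.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)      # Tikhonov filter factors
    coeffs = U.T @ b                # Fourier coefficients u_i^T b
    return Vt.T @ (f * coeffs / s)

# sanity check on a well-conditioned, noise-free system:
# with a tiny lam the solution matches the least-squares one
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x = tikhonov_svd(A, b, lam=1e-8)
```

For a tiny λ all filter factors are close to 1 and the formula reduces to the pseudo-inverse solution; increasing λ progressively suppresses the small-singular-value components.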

In the case where L ≠ I, the generalized singular value decomposition (GSVD) of the pair {A, L} is used:

where U and V have orthonormal columns (U^{T}U = V^{T}V = I), Σ = diag{σ_{1} … σ_{n}}, and M = diag{ν_{1} … ν_{n}}. The diagonal elements satisfy 0 ≤ σ_{1} ≤ … ≤ σ_{n} ≤ 1 and 1 ≥ ν_{1} ≥ … ≥ ν_{n} ≥ 0, with σ_{i}^{2} + ν_{i}^{2} = 1. The matrix X is non-singular, and the generalized singular values are defined as γ_{i} = σ_{i}/ν_{i}.

Using the generalized singular value decomposition, the solution of the problem expressed by Equation (3) can be written as (Chung et al.,

where Φ is a

It can be shown that the two terms of (3) can be written in terms of generalized singular values as (Chung et al.,

and (Ghista,

Several regularization techniques can be applied to the ill-posed inverse problem of electrocardiography. In this study, we focus on two methods.

Using the zero order Tikhonov regularization, the objective function can be expressed by (5). This type of regularization places a constraint on the magnitude of the reconstructed epicardial potentials which is known to provide a smooth solution but may lead to the loss of meaningful information.

Previous studies have shown that using the L1-Norm can provide a better reconstruction when applied in different fields (Wolters et al.,

This will yield less smoothed potentials than zero-order Tikhonov. The use of current density in the regularization of the inverse problem in electrocardiography was first introduced by Khoury (

The objective function using L1-Norm based regularization is given by:

where n_{H} is the outward unit normal to the epicardial surface. Using the Finite Element Method, and thanks to the linearity of the solution of problem (1) with respect to its boundary conditions, we can define the Dirichlet-to-Neumann operator

where (x_{1}, x_{2}, …, x_{n}) are the coordinate tuples of the heart mesh vertices. Note that the operator D is different from the surface gradient used for total variation regularization: the gradient over the heart surface, ∇_{Γ_H}, depends only on the epicardial surface Γ_{H}, whereas the normal-derivative operator D depends on the solution of problem (1) in the whole torso domain.

The L1-Norm regularization of the current density leads to a non-linear problem. Following Karl (

with β a small constant satisfying β > 0, and t_{i} the i^{th} component of the vector t.

This approximation leads to an interesting formulation of the L1-Norm regularization problem in the form of a set of equations whose resolution as β → 0 gives an estimate of the solution of (16). The linear problem to be solved is then:

where the diagonal weighting matrix W_{β}(x) is given by (19).

We notice that (19) has an effect on the variation of the normal derivative penalty. In fact, when the local normal derivative is too small, the weight goes to larger values imposing greater smoothness on the solution. When the local normal derivative is large, the weight goes to small values allowing larger gradients in the solution in these regions.

The above formulation can be further simplified so that it can be seen as a first-order Tikhonov regularization. In fact, thanks to the diagonality of W_{β}(x),

which leads to:

where

Computationally, Equation (21) is still non-linear, since the weighting matrix W_{β}(x) depends on the unknown solution. It is therefore solved iteratively:

where x_{0} is the zero-order Tikhonov solution determined by the Finite Element Method.
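The fixed-point scheme can be sketched in Python. This is a minimal illustration under our own assumptions (dense matrices, a simple first-difference operator `D` standing in for the normal-derivative operator), not the study's implementation:

```python
import numpy as np

def l1_current_density_irls(A, D, b, lam, beta=1e-5, iters=20):
    """Iteratively reweighted scheme for L1-norm regularization of D x.

    Each iterate solves the weighted first-order Tikhonov problem
        (A^T A + lam^2 D^T W_beta(x) D) x = A^T b,
    with W_beta(x) = diag(1 / sqrt((D x)_i^2 + beta)), starting from
    the zero-order Tikhonov solution x0.
    """
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    x = np.linalg.solve(AtA + lam**2 * np.eye(n), Atb)  # x0: ZOT solution
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ x) ** 2 + beta)          # IRLS weights
        x = np.linalg.solve(AtA + lam**2 * D.T @ (w[:, None] * D), Atb)
    return x

# toy example: identity forward operator, first-difference D,
# piecewise-constant target -- the jump should be preserved
A = np.eye(5)
D = np.diff(np.eye(5), axis=0)                          # shape (4, 5)
b = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
x = l1_current_density_irls(A, D, b, lam=0.1)
```

Large weights in flat regions enforce smoothness there, while the weight at the jump stays small, so the edge is retained, which is the qualitative behavior described for (19).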

In this section, we detail the formulation of several methods used for choosing the optimal regularization parameter, in terms of both the singular value decomposition (for zero-order Tikhonov regularization) and the generalized singular value decomposition (for L1-Norm regularization of the current density treated as a first-order Tikhonov regularization). It is fundamental for a good regularization parameter λ to satisfy the Discrete Picard Condition (DPC), expressed in terms of the singular values σ_{i} and the generalized singular values γ_{i}.

The U-Curve is a plot of the sum of the inverse of η_{1}(λ) (respectively, η_{2}(λ)) and the inverse of the corresponding residual ρ_{1}(λ) (respectively, ρ_{2}(λ)) in the case where L = I (respectively, L ≠ I).

The U-Curve method was proposed by Krawczyk-Stańdo and Rudnicki (

According to Krawczyk-Stańdo and Rudnicki, the optimal regularization parameter lies in the interval [δ_{n}^{2/3}, δ_{1}^{2/3}], where δ_{1} and δ_{n} are, respectively, the biggest and the smallest singular values (generalized singular values in the case where L ≠ I). Minimizing the U-Curve over this interval yields λ_{u}, the optimum value of λ.
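As a sketch of the U-Curve selection for zero-order Tikhonov via the SVD (the criterion U(λ) = 1/ρ(λ) + 1/η(λ) and the scan interval are taken as described above; variable names are our assumptions):

```python
import numpy as np

def u_curve_lambda(A, b, num=200):
    """Pick lambda by minimizing U(lam) = 1/rho(lam) + 1/eta(lam),
    where rho is the squared residual norm and eta the squared solution
    norm of the zero-order Tikhonov solution, scanning lambda over
    [s_min^(2/3), s_max^(2/3)].
    """
    U_, s, _ = np.linalg.svd(A, full_matrices=False)
    coeffs = U_.T @ b
    rho0 = b @ b - coeffs @ coeffs       # residual outside range(A)
    best, best_u = None, np.inf
    for lam in np.geomspace(s[-1] ** (2 / 3), s[0] ** (2 / 3), num):
        f = s**2 / (s**2 + lam**2)
        eta = np.sum((f * coeffs / s) ** 2)           # ||x_lam||^2
        rho = rho0 + np.sum(((1 - f) * coeffs) ** 2)  # ||A x_lam - b||^2
        u = 1 / rho + 1 / eta
        if u < best_u:
            best, best_u = lam, u
    return best

# noisy overdetermined toy problem
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
b = A @ rng.standard_normal(8) + 0.05 * rng.standard_normal(30)
lam_u = u_curve_lambda(A, b)
```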

As mentioned above, the optimal regularization parameter should satisfy the DPC, and ADPC is a regularization parameter choice method based on this condition. The idea is to look for the last index i before the Fourier coefficients become smaller than the singular values: for each time step t, we seek α_{t} = σ_{max i} such that log(σ_{i}) ≥ log(|u_{i}^{T}b_{t}|). The ADPC regularization parameter is then λ = α_{t}.

The Composite REsidual and Smoothing Operator (CRESO) method was introduced by Colli-Franzone et al.; it selects the smallest λ that produces a local maximum of the derivative, with respect to λ, of the difference between the smoothing term λ^{2}η(λ) and the residual term ρ(λ).

In terms of the singular value decomposition, this can be written as (Johnston and Gulrajani,

where

The Generalized Cross Validation (GCV) method (Wahba,

The GCV function is based on the leave-one-out principle: for the k^{th} data point, the regularized solution computed without that point should predict it well.

It is known that the GCV method has good asymptotic properties as the number of data points tends to infinity.

In Lukas (

where

Here, γ is called a robustness parameter, γ ∈ [0, 1].

The RGCV method is based on the average influence of each data point on the regularized solution. Note that when γ = 1, RGCV reduces to the standard GCV criterion.
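GCV and its robust variant can be sketched together via the SVD. This is a minimal illustration under assumed standard forms of the criteria (GCV(λ) = m·ρ(λ)/(m − Σf_i)² and RGCV(λ) = (γ + (1 − γ)μ(λ))·GCV(λ) with μ(λ) = Σf_i²/m), not the authors' code; γ = 1 recovers plain GCV:

```python
import numpy as np

def rgcv_lambda(A, b, gamma=0.8, num=200):
    """Pick lambda by minimizing the RGCV criterion
    (gamma + (1 - gamma) * mu(lam)) * GCV(lam), computed via the SVD.
    """
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    m = A.shape[0]
    coeffs = U.T @ b
    rho0 = b @ b - coeffs @ coeffs            # residual outside range(A)
    best, best_v = None, np.inf
    for lam in np.geomspace(s[-1] * 1e-3, s[0], num):
        f = s**2 / (s**2 + lam**2)
        rho = rho0 + np.sum(((1 - f) * coeffs) ** 2)  # ||A x_lam - b||^2
        gcv = m * rho / (m - f.sum()) ** 2            # GCV criterion
        mu = np.sum(f**2) / m                          # average influence
        v = (gamma + (1 - gamma) * mu) * gcv
        if v < best_v:
            best, best_v = lam, v
    return best

# noisy toy problem; gamma = 1 gives plain GCV
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 10))
b = A @ rng.standard_normal(10) + 0.1 * rng.standard_normal(40)
lam_rgcv = rgcv_lambda(A, b, gamma=0.95)
lam_gcv = rgcv_lambda(A, b, gamma=1.0)
```

The robustness factor γ + (1 − γ)μ(λ) penalizes small λ values, which counteracts the flatness of GCV near λ → 0 mentioned in the results below.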

ECGI reconstructions were performed on two different sets of data:

Simulated data obtained by considering a realistic 3D heart-torso geometry segmented from CT-Scan images as illustrated in Figure

Experimental data were obtained using an

Tank and sock unipolar electrograms were recorded at 2 kHz (BioSemi, the Netherlands) and referenced to a Wilson's central terminal defined using tank electrodes. A multi-lead signal-averaging algorithm was used to remove noise and non-synchronized P-waves from the recordings. In most cases, retrograde VA conduction was present, with P-waves only present during the non-analyzed ST-segment. The tank mesh contains 1,177 nodes and the epicardium 761 nodes. For the application of the described inverse methods, potential recordings need to be available for all the mesh nodes. To do so, a linear interpolation was applied to the

_{ext} (green).

For all the tests carried out using the L1-Norm regularization, β is kept fixed and equal to 10^{−5}.

The choice of γ for the RGCV tests is based on the study made by Barnes and Johnston (

The RGCV criterion plotted in terms of λ and γ. The red markers are the grid points where RGCV(λ,γ) is minimum when γ is fixed.

To assess the accuracy of the results obtained by the different approaches, we define the relative error (RE) and the correlation coefficient (CC):

where x^{c} and x^{e} denote, respectively, the computed epicardial potential and the known one, and x̄^{c} and x̄^{e} are the means of x^{c} and x^{e} over the heart surface nodes.
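These two metrics are straightforward to implement; a sketch in Python (the vector arguments are assumed to be potentials flattened over the heart nodes at one time step):

```python
import numpy as np

def relative_error(xc, xe):
    """RE = ||x_c - x_e||_2 / ||x_e||_2."""
    return np.linalg.norm(xc - xe) / np.linalg.norm(xe)

def correlation_coefficient(xc, xe):
    """Pearson correlation between computed and reference potentials."""
    xc_c, xe_c = xc - xc.mean(), xe - xe.mean()
    return (xc_c @ xe_c) / (np.linalg.norm(xc_c) * np.linalg.norm(xe_c))

# toy illustration with a reference and a slightly perturbed estimate
xe = np.array([1.0, 2.0, 3.0, 4.0])
xc = np.array([1.1, 1.9, 3.2, 3.8])
re = relative_error(xc, xe)
cc = correlation_coefficient(xc, xe)
```

Note that CC is invariant to scaling and offset of the reconstruction, while RE is not, which is why the two metrics are reported together.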

First, we assessed the regularization techniques and numerical methods using simulated data. The five regularization parameter choice criteria described above were assessed with each of the suggested numerical methods (MFS, FEM-ZOT, and FEM-L1), which makes 15 different algorithms.

Figure

For all the simulations run with FEM, GCV and ADPC fail to compute the optimal regularization parameter. In fact, GCV tends to be flat for small values of λ, which makes it difficult to pick a minimum. RGCV was suggested to help with this difficulty. We observe here that it outperforms U-Curve by nearly 30% using the zero order Tikhonov and

Figure

Bar graphs of means of relative errors and correlation coefficients with the standard deviations for simulated data.

Simulated

Simulated

Preprocessing of the experimental data revealed a few localized sites of ischemia caused by electrode pressure on the epicardium, which produced monophasic action potential-like signals. These electrodes were identified when the potential was greater than a fixed threshold equal to

For the sake of completeness, detailed statistical results of RE and CC in time and space on the reconstructed potential for all cases are reported in the

Spatial mean relative errors and correlation coefficients and their standard deviations for reconstructed epicardial potentials with all the algorithms for three paced rhythms:

For the localization of pacing sites, we used three different experiments: two of them provide LV, RV, and BiV pacing data sets, and the other provides only RV and LV pacing. In summary, we have 3 cases of LV pacing, 3 cases of RV pacing, and 2 cases of BiV pacing. In Figure

Real

Real

Mean errors and standard deviations of the localization of pacing sites for the three paced rhythms (RV, LV, and BiV) using the 3 numerical methods MFS-ZOT, FEM-ZOT, and FEM-L1 combined with the regularization parameter choice methods.

| Pacing | Method | CRESO | GCV | RGCV | U-Curve | ADPC |
|---|---|---|---|---|---|---|
| RV | MFS-ZOT | 2.8 ± 1.2 | 2.4 ± 1.1 | 1.9 ± 0.9 | 2.4 ± 0.8 | 2.5 ± 0.8 |
| | FEM-ZOT | 2.7 ± 0.8 | N.A | 2.7 ± 0.9 | 2.0 ± 0.1 | N.A |
| | FEM-L1 | 1.9 ± 0.5 | N.A | 1.8 ± 0.3 | 1.8 ± 0.4 | N.A |
| LV | MFS-ZOT | 1.7 ± 0.7 | 2.1 ± 0.3 | 2.0 ± 1.1 | 1.3 ± 0.6 | 2.1 ± 0.2 |
| | FEM-ZOT | 2.1 ± 0.4 | N.A | 2.8 ± 1.0 | 3.0 ± 0.2 | N.A |
| | FEM-L1 | 1.3 ± 0.5 | N.A | 1.2 ± 0.6 | 1.3 ± 0.6 | N.A |
| BiV | MFS-ZOT | 2.5/N.A | 2.3/1.5 | 0/N.A | 2.3/N.A | 2.7/2.0 |
| | FEM-ZOT | 1.8/N.A | N.A | 1.8/2.1 | 2.5/N.A | N.A |
| | FEM-L1 | 2.5/N.A | N.A | 1.3/1.4 | 1.4/N.A | N.A |

For the LV-pacing (respectively, RV-pacing) case, we observe that L1-norm regularization of the current density combined with RGCV provides the best localization with an error of

For both LV and RV pacing, we observe that none of the methods clearly outperforms the others.

In the case of bi-ventricular pacing (BiV), not all the methods were able to locate both pacing sites. Only MFS-ZOT combined with GCV, and FEM-ZOT and FEM-L1 combined with RGCV, succeeded in detecting the two pacing sites with reasonably good accuracy. Figure

Real

It is important to mention that, in this work, the use of simulated data provides an optimal knowledge of the transfer matrix

where x_{ex} is the exact solution, whether the simulated epicardial potential or the measured one.

The error RE_{d} is almost equal to zero using the simulated transfer matrix. However, it increases for the experimental data, reaching, for some time steps, RE_{d} ≈

Obviously, the experimental conditions have an important impact on the quality of the data obtained from experiments. One of the limitations of this study is the dataset of epicardial signals. In fact, the experimental protocol described in Bear et al. (

In this paper, we numerically assessed 15 different algorithms for the resolution of the inverse problem of electrocardiography based on the Generalized Singular Value Decomposition of the pair {Transfer matrix, Regularization matrix} combined with different regularization parameter choice methods. Although the L1-Norm of the normal derivative regularization method has been presented before (Khoury,

The evaluation of the different approaches studied in this paper is based on the reconstruction of the epicardial potential maps and the localization of pacing sites. For that, we used 3 different cardiac paced rhythms: left-ventricular, right-ventricular and bi-ventricular pacing.

Unlike the work presented by Barnes and Johnston (

However, for the experimental data, all the methods perform nearly the same, with slight differences in terms of both spatial and temporal relative error and correlation coefficient when comparing the epicardial potential distributions. We think that this is mainly due to the magnitude of the recorded potentials, but also to the noise and other experimental uncertainties. Results also show that L1-Norm regularization of the potential normal derivative generally yields the best solution. For the purpose of benchmarking, the presented algorithms were evaluated against the data set used in the paper (Figuera et al.,

Regarding the pacing site localization, Table

AK is the main author of the paper; she participated in the implementation of the methods and in the analysis of the results. PM participated in the implementation of the different methods and in the analysis of the results. LB performed the

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The Supplementary Material for this article can be found online at: