Generating Contours

Although several studies have been performed to correlate activation times to extracellular electrogram features, little has been published about the mechanics of contour generation for activation maps. Some investigators use a visual approach and manually draw the maps. Others use "canned" contour programs available in scientific subroutine libraries, and still others use custom-written software. Only a few references exist on the mechanics of contour generation as applied to cardiac activation.1-4 The field of cartography has evolved around, and is usually applied to, geophysical problems,10,11 and it includes methods employing least-squares fitting. It is difficult to determine to what extent these methods have been used in cardiac applications, either in published reports or in commercial systems.

It is our premise that many conclusions about activation sequences have been deduced from poorly constructed contour maps. Some general issues have been discussed by Ideker et al.12 In essence, a fundamental assumption about the underlying structure of activation is implicitly or explicitly made without regard to problems concerning spatial sampling or the assumption of spatial continuity. However, there have been few formal attempts to define the spatial sampling necessary for cardiac activation maps.13 Spatial continuity is the two-dimensional analog of time-domain continuity. Most linear mathematical approaches to signal processing require that there be no abrupt changes in the values of the measured quantity; that is, the time derivatives are not infinite. The same holds for the two-dimensional problem, where the spatial derivatives must not be infinite. Put another way, no point in the spatial representation can be multivalued. Unfortunately, mapping in infarct regions almost ensures discontinuous regions. The inhomogeneity in conduction properties is well known, and the presence of dead tissue (i.e., nonconducting regions) must be accounted for in the contour generation process. In geophysical terms, such discontinuities are called faults, and generating contours around a fault region should be considered in cardiac map generation.

Contour generation can be done by hand. This obviously introduces an element of bias, but more importantly it does not allow for a mathematical description of the data. Such descriptions permit the use of transformations such as directional derivatives, smoothing with two-dimensional filters, and the measurement of error in the contours when compared with the actual underlying data. The first assumption in contour generation is that the values chosen from the individual raw data waveforms are unambiguous. The second assumption is that the spatial sampling is adequate; as the previous discussion implied, there is at present no known way to ensure this, since the minimum wavefront length is not known. The third assumption is that the data fit some underlying mathematical structure. The simplest structure is the linear model assumed by simple triangulation methods. In essence, this assumes that if a straight line connects two sample data points, the values under the line vary linearly between the two points. Triangulation has several drawbacks: there is no physiologic basis for the linear assumption, and, depending on how the data points are linked, there is no unique solution. For unevenly spaced recording sites, there are many ways in which the data points can be linked. There are no a priori

Table 1: Example Data Points and Coordinates

            X      Y      D      V      V/D
Grid        2      3      -      -      -
Data A      1.5    3.6    0.78   6      7.69
Data B      3.0    3.0    1.0    6      6.00
Data C      2.0    2.4    0.6    7      11.67
Data D      1.0    2.9    1.0    7      7.00

(X and Y are the coordinates of each point, D is the distance from each data point to the grid point, V is the measured value, and V/D is the distance-weighted value used in (1).)

restrictions on the formation of triangles as the data in the entire region are linked together. Older software algorithms were even susceptible to the order of data entry. Hence, in the early 1970s, triangulation fell into disuse among cartographers because the method is not well defined. More recently, some efforts have been made to regularize triangulation methods, but other, more mathematically based methods are favored.
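To make the linear assumption concrete, the following short Python sketch (our own illustration, not taken from any of the cited methods) interpolates a value inside a single triangle using barycentric weights; the function name and sample numbers are hypothetical.

    # Linear (planar) interpolation inside one triangle, the model implicitly
    # assumed by triangulation: the value varies linearly between vertices.
    def barycentric_interpolate(p, tri, values):
        (x1, y1), (x2, y2), (x3, y3) = tri
        det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
        w1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
        w2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
        w3 = 1.0 - w1 - w2  # weights sum to one inside the triangle
        return w1 * values[0] + w2 * values[1] + w3 * values[2]

    # Hypothetical activation times (ms) at three electrode sites; estimate
    # the time at a point lying between them.
    print(barycentric_interpolate((1.0, 1.0), ((0, 0), (3, 0), (0, 3)),
                                  (10.0, 40.0, 25.0)))  # -> 25.0

Note that the result depends entirely on which three sites are chosen as a triangle, which is precisely the nonuniqueness objection raised above.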

Gridding is a method whereby the data points (usually unevenly spaced) are converted to an underlying, evenly spaced set of points. The grid can be defined to have many more points than there are sample points. The value assigned to each grid intersection can be a linear combination (e.g., an average) of the nearest data points. The term "nearest" can be defined in a radial sense (e.g., all data points within 5 mm). Alternatively, still using the radial search criterion, the grid value can be weighted by a distance measure, so that data points closer to the grid point have a larger influence on the calculated grid value than data points farther away. Much of what has been said can be stated more succinctly in mathematical terms:

    V_G = ( Σ_{i=1..4} V_i/D_iG ) / ( Σ_{i=1..4} 1/D_iG )    (1)

Here, V_G is the value computed at the gridded data point, the V_i are the values at the four original data points, and the D_iG are the distances from those original data points to the new grid point. For a more intuitive approach, however, these concepts can be described graphically with a set of simulated data points. Figure 7 has six panels. Figure 7a shows a set of irregularly spaced data points with values ranging from 5 to 8. Figure 7b is a regularly spaced grid, with the open circles at the line intersections representing the new underlying grid points. Figure 7c demonstrates the variable spacing between the data and grid points by overlaying Fig. 7a and 7b. Figure 7d is an example of one grid point and its four surrounding nearest neighbors. The simplest way to evaluate the grid point is to average the surrounding data points; this value is 6.5. However, one can also use the weighting approach described by (1). Table 1 shows the actual coordinates and data values for this problem.

Substituting values into (1), V_G = (7.69 + 6.00 + 11.67 + 7.00)/4.95 = 6.54. In this example, the calculation is very close to the simple average of 6.5. The rest of the grid points, to one decimal place, are shown in Fig. 7f. Now that the data have been converted to a regularized grid, many types of operations can be performed.
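As a concrete check of this arithmetic, a minimal Python sketch of the inverse-distance weighting in (1) follows; the function and variable names are our own illustration, not part of the chapter.

    import math

    # Inverse-distance-weighted grid value, Eq. (1):
    # V_G = sum(V_i / D_iG) / sum(1 / D_iG) over the nearest data points.
    def grid_value(grid_pt, data_pts):
        num = den = 0.0
        for x, y, v in data_pts:
            d = math.hypot(x - grid_pt[0], y - grid_pt[1])  # D_iG
            num += v / d
            den += 1.0 / d
        return num / den

    # The four nearest neighbors from Table 1 around the grid point at (2, 3).
    neighbors = [(1.5, 3.6, 6.0), (3.0, 3.0, 6.0), (2.0, 2.4, 7.0), (1.0, 2.9, 7.0)]
    print(round(grid_value((2.0, 3.0), neighbors), 2))  # -> 6.54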


Figure 7: A graphical depiction of generating a grid of evenly spaced data points from a set of nonuniformly spaced data points (see the text for a full description)

Figure 8: An activation map overlying the set of unipolar electrograms recorded from a grid of electrodes (4 mm spacing in the central 10 x 10 grid, greater spacing along each edge) in a canine infarct model in vivo. This set of electrodes was analyzed with late potential activation times as the primary mapped isochrones

The method of deriving the contours can be based on one of many different schemes, such as cubic spline fitting or even linear interpolation. Many schemes can also be used in forming the grid; for example, one could require that there be data points in all four quadrants surrounding the grid point, except in the case of boundary grid points. Krige14 proposed a statistically based method that minimizes the variance at data points that coincide with or are very close to the grid points.

Figure 9: Visual representation of the spatial autocorrelation function with increasing analysis grid size (a = 3 x 3 set of data points, b = 5 x 5, c = 7 x 7). Higher relative amplitudes represent higher levels of correlation among the underlying data points

This minimum variance method, often referred to as Kriging, allows for estimates of error in the map. A detailed discussion is beyond the scope of this chapter, but in essence such a statistical approach allows for the generation of an optimal map and a clear delineation of the regions with the highest uncertainty.
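For readers who want a concrete starting point, here is a minimal ordinary-kriging sketch at a single grid point. It is an illustration only, not the chapter's implementation: the linear variogram gamma(h) = h and all names are our own assumptions, and a real application would fit the variogram to the data.

    import numpy as np

    # Ordinary kriging at one grid point under an assumed linear variogram.
    def krige(grid_pt, pts, vals):
        pts = np.asarray(pts, dtype=float)
        n = len(pts)
        gamma = lambda h: h  # assumed variogram model
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gamma(np.linalg.norm(pts[:, None] - pts[None, :], axis=2))
        A[n, n] = 0.0  # Lagrange-multiplier row/column enforces unbiased weights
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(pts - grid_pt, axis=1))
        sol = np.linalg.solve(A, b)
        w, mu = sol[:n], sol[n]
        estimate = w @ np.asarray(vals, dtype=float)
        variance = w @ b[:n] + mu  # kriging variance: the per-point error estimate
        return estimate, variance

    # The Table 1 neighborhood: estimate the grid point at (2, 3) with an error bar.
    est, var = krige(np.array([2.0, 3.0]),
                     [(1.5, 3.6), (3.0, 3.0), (2.0, 2.4), (1.0, 2.9)],
                     [6.0, 6.0, 7.0, 7.0])
    print(round(est, 2), round(var, 3))

The kriging variance returned alongside the estimate is what permits the "clear delineation of the regions with the highest uncertainty" described above.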

An example of gridding to generate an activation map over the infarct is shown in Fig. 8. The underlying electrograms are shown at each recording site, spaced approximately 4 mm apart. The isochronal lines were derived from a uniform, Kriged grid in which the data from each electrogram were assumed to be spatially continuous (no dead, inactive regions) and each electrogram was assigned a unique activation time.15,16 Each isochrone represents 10 ms, with early activation on the left and late activation on the right. The specifics of timing are not considered here; the emphasis is on the actual generation of the contour lines. The map is a first-order polynomial approximation derived from the grid and assumes uniform, constant-velocity conduction.

The creation of a contour map that takes into consideration a faulted region (e.g., dead, nonconducting tissue) is another example of how a gridded structure can be used. It is not enough simply to declare that an electrogram site generates no activation time: without a clear definition of a fault zone, most algorithms will simply interpolate across the dead tissue. Drawing an isochrone, or including a data point in the interpolation, "across" the fault should be considered an invalid approach. It is not known how this has been dealt with in prior studies.
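One way such an exclusion might be implemented, sketched below in Python purely as an illustration (the chapter prescribes no particular algorithm), is to drop any data point whose straight line of sight to the grid point crosses the fault.

    # Fault-aware neighbor selection: exclude any data point whose straight
    # path to the grid point crosses a fault segment (e.g., a dead-tissue border).
    def _ccw(a, b, c):
        # Twice the signed area of triangle abc; the sign gives orientation.
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def segments_cross(p1, p2, q1, q2):
        # True if segment p1-p2 properly intersects segment q1-q2.
        return (_ccw(p1, p2, q1) * _ccw(p1, p2, q2) < 0 and
                _ccw(q1, q2, p1) * _ccw(q1, q2, p2) < 0)

    def usable_neighbors(grid_pt, data_pts, fault):
        # Keep only (x, y, value) points not separated from grid_pt by the fault.
        return [d for d in data_pts
                if not segments_cross(grid_pt, (d[0], d[1]), fault[0], fault[1])]

    # A vertical fault at x = 3 separates the grid point from the site at (4, 1).
    fault = ((3.0, -10.0), (3.0, 10.0))
    print(usable_neighbors((2.0, 1.0), [(1.0, 1.0, 6.0), (4.0, 1.0, 7.0)], fault))
    # -> [(1.0, 1.0, 6.0)]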

One solution to this uncertainty is to understand the degree to which the data points to be contoured are related. Theoretically, only data that are highly correlated can be interpolated with a high degree of confidence; if the data points are not correlated, then interpolating among them is not a valid exercise. Correlation functions are widely used in many applications for comparing data sets. A similar two-dimensional approach for studying the spatial correlation of data points is also possible; in particular, the spatial autocorrelation function can be used to assess the degree to which the surrounding data are correlated. The details of this approach in cardiac mapping are beyond the scope of this chapter.13 Figure 9 was obtained from the data that were contoured in Fig. 8. There are three panels in this figure, each obtained with a different regional resolution: Fig. 9a computed the spatial autocorrelation function over a 3 x 3 set of data points, Fig. 9b used a 5 x 5 set, and Fig. 9c used a 7 x 7 set. The feature to recognize in these panels is the lower amplitudes in the right-center portion of the figure. This is the region of the late potentials and was the region where the late activation times were most uncertain to define. Hence, the spatial autocorrelation function can be used to highlight the regions of greatest uncertainty in a contour map. As one attempts to interpret contour-generated data, the information from the autocorrelation function can bolster or temper the level of confidence in the interpretation, much as the correlation coefficient does for one-dimensional data.
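As a hedged sketch of how such a surface might be computed (a plain lag-by-lag Pearson correlation is our assumption here, not necessarily the formulation of reference 13), consider:

    import numpy as np

    # Spatial autocorrelation surface for a grid of activation times.
    # max_lag = 1 yields a 3 x 3 surface, as in Fig. 9a; 2 and 3 yield the
    # 5 x 5 and 7 x 7 surfaces of Fig. 9b and 9c.
    def spatial_autocorr(grid, max_lag):
        size = 2 * max_lag + 1
        out = np.zeros((size, size))
        for dy in range(-max_lag, max_lag + 1):
            for dx in range(-max_lag, max_lag + 1):
                # Overlapping sub-grids shifted by the lag (dy, dx).
                a = grid[max(0, dy):grid.shape[0] + min(0, dy),
                         max(0, dx):grid.shape[1] + min(0, dx)]
                b = grid[max(0, -dy):grid.shape[0] + min(0, -dy),
                         max(0, -dx):grid.shape[1] + min(0, -dx)]
                out[dy + max_lag, dx + max_lag] = np.corrcoef(a.ravel(),
                                                              b.ravel())[0, 1]
        return out

    # Synthetic smooth activation-time grid; correlation stays high at small lags.
    rng = np.random.default_rng(0)
    times = rng.normal(size=(10, 10)).cumsum(axis=1)
    print(spatial_autocorr(times, 1).round(2))

Low values in such a surface flag neighborhoods where interpolation, and hence the drawn contours, deserve less confidence.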
