**1. Introduction**

The primary basis for microwave breast imaging is that there is considerable dielectric property contrast between malignant and normal tissue. This contrast is a complicated issue and its understanding has evolved substantially over time. At the most simplistic level, microwaves are especially useful as "water detectors." At low microwave frequencies, water typically has relative permittivity values between 75 and 80, while that for fat ranges between 5 and 10. The earliest assumptions about breast tissue were that it was primarily fatty and that tumors contained far more water on account of their rapid replication and proliferation [1]. However, recent studies have provided a more nuanced appreciation of normal breast tissue composition. In a study assessing the physical content of tissue for radiographic purposes, Woodard and White [2] found that the two constituents of breast tissue, adipose and mammary tissue, varied considerably from woman to woman, and concomitantly, so did their water content. In fact, the water content ranges were: adipose, 11.4–30.5%; mammary, 30.2–72.6%. More recent studies have attempted a more complete characterization of the properties. Studies by Lazebnik et al. [3], Sugitani et al. [4], Martellosio et al. [5], and Cheng and Fu [6] have all presented values for the three tissue types: adipose, fibroglandular, and tumor. Relatively consistently, the studies have shown the adipose properties to be quite low, those for fibroglandular tissue to be substantially elevated, and those for tumors to be the highest.

However, there is considerable variation between results. As has been pointed out by Meaney et al. [7, 8] and Salahuddin et al. [9], there are important weaknesses and even flaws in the methodologies of these studies which could easily skew the results. Two of the more critical problems revolve around the use of the ubiquitous open-ended dielectric probe and the frequency sampling regimen used in algorithms for fitting the data to established Cole-Cole curves for broadband parameter estimation [3]. With regard to the former, it is well known that the penetration depth, or sampling volume, in front of the dielectric probe is on the order of 1/6th of the probe diameter [10, 11]. Given that the most common commercial probes (Keysight Technologies, Santa Clara, CA) have diameters of about 2 mm, this suggests that the penetration depth is on the order of 0.3 mm. However, various reports explicitly state the assumed sampling volume to be 3 mm deep or more [12]. The significance is that it is relatively straightforward for a pathologist to perform a tissue analysis over a sample that is 3 mm thick; for a volume as small as 0.3 mm, the analysis is substantially less informative. In these cases, the probe measurements are really only assessing superficial dielectric properties. In addition to compositional issues near the surface, this is also where the greatest temperature (cooling after excision) and moisture (drying after excision) gradients appear, especially during measurements of excised tissue. Both temperature and water content can have a dramatic influence on the tissue properties. As a whole, these factors are not adequately addressed in recent reports and open the door for substantial variation.
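The arithmetic behind this concern is simple enough to state as a short sketch (the 1/6 rule of thumb and the ~2 mm probe diameter are as cited above; the function name is ours):

```python
# Sampling depth of an open-ended coaxial dielectric probe is roughly
# 1/6 of the probe diameter [10, 11] -- a rule of thumb, not an exact figure.
def probe_sampling_depth_mm(probe_diameter_mm: float) -> float:
    return probe_diameter_mm / 6.0

depth = probe_sampling_depth_mm(2.0)  # common commercial probe, ~2 mm diameter
# ~0.33 mm -- an order of magnitude less than the 3 mm sampling volume
# assumed in some reports [12].
```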

With respect to the frequency sampling regimens, the problem relates primarily to how the properties are fitted to the Cole-Cole equation to determine broadband coefficients. The norm, established by Cole and Cole [13], is to acquire data at logarithmically spaced frequencies. The reason is that when the dielectric property data is plotted in the Cole-Cole plane, the sample points form nearly a circle, and the coefficients are derived from the circle's features such as the radius and center location. When the data is collected logarithmically, the points end up spaced nearly evenly around the arc of the circle, while for linearly spaced frequencies, the data points are highly concentrated in a localized corner of the plot. For fitting purposes, the evenly spread data points produce more accurate results. The problem becomes further compounded when the data spans multiple relaxation zones (the most prominent crossover point for tissue properties occurs within the range of about 1.5–2.5 GHz) [14]. When a linear sampling regimen is used, the lower relaxation zone is dramatically under-sampled compared to the higher zone, such that the Cole-Cole fitting is grossly skewed to bias the results toward the higher zone. While many of the reports properly sampled the data logarithmically with respect to frequency, some have not [3].
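To illustrate the sampling argument, a short sketch (with purely illustrative single-pole Cole-Cole parameters, not values from any of the cited studies) compares how logarithmic and linear frequency grids distribute points around the ~2 GHz crossover:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def cole_cole(f, eps_inf, d_eps, tau, alpha, sigma_s=0.0):
    """Single-pole Cole-Cole model; returns complex relative permittivity."""
    w = 2 * np.pi * f
    eps = eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))
    if sigma_s:
        eps = eps - 1j * sigma_s / (w * EPS0)  # static-conductivity term
    return eps

# Illustrative tissue-like parameters (NOT values from [3]-[6]).
pars = dict(eps_inf=4.0, d_eps=45.0, tau=1e-11, alpha=0.1)

# 101 points from 0.5 to 20 GHz, spaced two ways:
f_log = np.logspace(np.log10(0.5e9), np.log10(20e9), 101)
f_lin = np.linspace(0.5e9, 20e9, 101)

# Fraction of sample points landing below the ~2 GHz crossover region:
frac_log = np.mean(f_log < 2e9)  # ~0.38 of the points
frac_lin = np.mean(f_lin < 2e9)  # ~0.08 of the points
```

Logarithmic spacing devotes several times more points to the lower relaxation zone than linear spacing does, which is what the circle fit in the Cole-Cole plane needs to avoid biasing the higher zone.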

More recently there have been several clinical breast imaging studies [15–17]. The Preece study was based on the University of Bristol radar imaging technique, which only produces intensity maps in which localized hot spots are indicative of tumors. Even so, the clinical data suggests that their system is capable of distinguishing tumor from normal tissue, even in dense breasts, suggesting that there must be property contrast

#### *Theoretical Premises and Contemporary Optimizations of Microwave Tomography DOI: http://dx.doi.org/10.5772/intechopen.103011*

between the tumor and fibroglandular tissue. The reports by Poplack et al. [16] and Meaney et al. [17] suggest that there is contrast in both the permittivity and conductivity between malignant and normal tissue. The more interesting aspect is that the contrast is only statistically significant for the conductivity images. Given that there are questions related to the absolute dielectric property levels of the different tissue types for all ex vivo studies, we have attempted to develop a hypothesis based on previous literature. In particular, extensive studies by Foster et al. showed distinctly different tissue permittivity and conductivity relationships as functions of water content [18]. In this picture, the fat and fibroglandular tissue have either no water or low levels of free water (i.e., the water in the fibroglandular tissue is primarily bound to long sugars and proteins). In these situations, the ionic content of the solutions cannot produce high conductivity because there is little to no free water available for conduction. The tumor, however, has considerable free water and exhibits high conductivity. This mechanism is one means of explaining the relatively high conductivity contrast between tumor and normal tissue. Conversely, the permittivity acts more as a measure of bulk water. Here, the fibroglandular tissue has relatively high water content compared with fat, but not as high as that for tumor. The percentage difference between tumor and fibroglandular tissue is typically in the range of 10–20%, compared with what can amount to factors of 2 or more for the conductivity. This can explain the more subdued permittivity contrast observed in the imaging studies between normal and malignant tissue. These features need to be explored further, preferably in clinical studies.

As was alluded to above, there are multiple near-field microwave imaging techniques. These consist primarily of (a) radar techniques, (b) tomography or inverse-problem techniques, (c) thermoacoustic imaging, and (d) holography. The radar techniques are generally synthetic aperture radar (SAR) methods that typically utilize either backscatter or transmission data. For the backscatter approaches, a large amount of wideband data is acquired at many positions around the object surface, and time delays are then synthetically added or subtracted from each measurement to focus sequentially at each pixel within the imaging domain. The contributions from each measurement are summed at each pixel, and the resulting intensity maps are displayed as the images [19]. Multiple simulation efforts have been developed, with the most advanced phantom and patient experiments performed by the Fear group at the University of Calgary [20]. Transmission techniques have also been developed, primarily by the group at the University of Bristol [21]. Their system utilizes a fixed array of wideband antennas which directly contact the breast, with a coupling gel used to enhance coupling and minimize unwanted contributions from multi-path signals. This has advanced from simulation and phantom experiments to more extensive clinical trials [15].
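The delay-and-sum focusing described above can be sketched minimally as follows (the monostatic-backscatter geometry, propagation speed, and function names are illustrative assumptions, not the Calgary implementation):

```python
import numpy as np

def delay_and_sum(signals, t, antenna_xy, pixels_xy, c=2e8):
    """Confocal delay-and-sum intensity map from monostatic backscatter.

    signals    -- (n_ant, n_t) time-domain backscatter traces
    t          -- (n_t,) time axis in seconds
    antenna_xy -- (n_ant, 2) antenna positions in meters
    pixels_xy  -- (n_pix, 2) image pixel positions in meters
    c          -- assumed propagation speed in the coupling medium (m/s)
    """
    image = np.zeros(len(pixels_xy))
    for ia, ant in enumerate(antenna_xy):
        d = np.linalg.norm(pixels_xy - ant, axis=1)  # antenna-to-pixel distance
        delay = 2 * d / c                            # round-trip time delay
        image += np.interp(delay, t, signals[ia])    # sample each trace at focus
    return image ** 2                                # display as intensity
```

Summing each trace at the round-trip delay for a given pixel means contributions from a true scatterer add coherently there, while clutter adds incoherently elsewhere.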

Tomographic and inverse-problem approaches have been studied extensively in simulation, with only very few translating to phantom and clinical work. These approaches typically utilize mostly transmission data and require nonlinear inverse algorithms to produce actual maps of the tissue permittivity and conductivity [22]. Simulation work includes studies by Rocca et al. [23], Fhager and Persson [24], Catapano et al. [25], and Shea et al. [26]. Extensive phantom and ex vivo animal studies have been performed by Semenov et al. [27]. The most comprehensive clinical work has been performed by Meaney et al., including studies of normal patients [28, 29], a diagnostic study comparing images of patients with and without tumors [16], and a study monitoring the progression of tumors during neoadjuvant chemotherapy [17]. In these studies, the imaging technique distinguished tumors from normal tissue and benign lesions to a level of significance for lesions 1 cm and larger. For the therapy monitoring study, the technique was able to determine whether a tumor was responding to treatment within the first 30 days of the chemotherapy regimen.
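The iterative core common to these nonlinear inversion algorithms can be sketched generically; the toy forward model below merely stands in for a full electromagnetic field solve, and the regularization weight is an arbitrary assumption:

```python
import numpy as np

def gauss_newton(forward, jacobian, d_meas, x0, lam=1e-3, n_iter=20):
    """Regularized Gauss-Newton iteration -- the core of nonlinear
    tomographic property reconstruction, shown for a generic forward model.

    forward(x)  -> predicted data vector for property estimate x
    jacobian(x) -> sensitivity matrix d(forward)/dx
    lam         -> Levenberg-Marquardt-style damping weight (assumed)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = d_meas - forward(x)                # residual: measured - computed
        J = jacobian(x)
        lhs = J.T @ J + lam * np.eye(len(x))   # regularized normal equations
        x = x + np.linalg.solve(lhs, J.T @ r)  # property update
    return x

# Toy "forward model": data = x2 * exp(-x1 * t), standing in for a field solve.
t = np.linspace(0.0, 1.0, 50)
fwd = lambda x: x[1] * np.exp(-x[0] * t)
jac = lambda x: np.column_stack([-x[1] * t * np.exp(-x[0] * t),
                                 np.exp(-x[0] * t)])
x_true = np.array([2.0, 3.0])
x_hat = gauss_newton(fwd, jac, fwd(x_true), x0=[1.0, 1.0])
```

In an actual reconstruction, `forward` is the expensive electromagnetic solve and `x` holds the pixelwise permittivity and conductivity estimates; the update structure is the same.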

The thermoacoustic techniques generally apply a low duty cycle, high power microwave pulse which is selectively absorbed by the malignant tissue and subsequently causes a mechanical vibration that can be detected by ultrasound transducers. The images are produced by synthetically combining the signals from the different ultrasound transducers. The technique has seen limited success in both phantom and clinical work [30]. The holography approach is being studied primarily at McMaster University by Dr. Nikolova; to date, they have developed an initial prototype which has produced promising phantom results [31].

For this chapter, we focus on tomographic or inverse-problem approaches. Microwave tomography and microwave inverse problems have now been studied at great length for several decades [25, 26, 32]. The preponderance of efforts has been in simulation, with very few advancing to actual implementation and/or clinical exams. While factors such as cost, exam time, and image reconstruction complexity are often cited as prime reasons for failure, our experience has led us to focus on two factors that inhibit progress: the need to explicitly contend with the problem of multi-path signal corruption [33] and the need for variance stabilizing transformations in the reconstruction process [34]. Viewing microwave imaging in the context of these two challenges clearly illustrates flaws in conventional approaches that inhibit overall progress.

Multi-path signal corruption has been acknowledged for decades. In the context of classic radar and telecommunications applications, signals such as ground clutter constructively and destructively add to the desired signals and excite unwanted artifacts such as ghosting [35]. For far-field applications such as radar, these artifacts are often little more than a minor nuisance. However, for near-field imaging situations, the corruption can be extreme, to the degree that the multi-path signal can easily and completely overwhelm the desired one [33]. For far-field situations, the primary mechanism for multi-path is reflection off of neighboring structures or surfaces that recombines with the original signal [36]. However, for near-field cases, multi-path signals very often propagate as surface waves along interfaces between support structures and the coupling medium, or along the outside of antenna feedlines. One likely reason why this phenomenon goes unconsidered is that these structures are simply not included in the models for numerical simulations [26, 37]. For many implementations, the computational costs are already enormous when including just the antennas, coupling medium, and target. Adding complexities such as the feedlines, support structure, and coupling medium tank would simply overwhelm the capabilities of modern computers. Consequently, for most simulation efforts, these structures are simply ignored; in fact, even neighboring antennas are usually eliminated in the name of computational speed and cost. The unfortunate result is essentially a precise rendering of an unrealistic scenario.

There are few options for compensating for this challenge. The primary factor is that the multi-path signals originate from the same desired signal, i.e. they are at the same frequency. Because of this, sophisticated filtering approaches are not effective. Techniques such as time-gating have been proposed in different implementations but have not resulted in any published results for microwave tomography [38]. One of the more challenging aspects of time-gating is the need for a very broadband signal with fine sampling between frequencies. Given that the measured microwave data is most often acquired in the frequency domain, this can add dramatically to the acquisition time. In addition, many of the proposed antennas simply do not operate over sufficient bandwidth to make this possible.

One technique employed by the Dartmouth group is the use of a lossy coupling medium [39]. This poses unique challenges, but when the imaging problem is considered comprehensively, there is considerable merit to it. The main drawback is that propagation across even a short span in a lossy imaging medium can easily exceed the measurement dynamic range of high-quality commercial vector network analyzers (VNAs). This is not a trivial concern, but when properly understood, it is possible to devise systems that adequately accommodate it. Dynamic range considerations are discussed in more detail in the Methods section. The primary benefit is that the highly attenuating medium dramatically suppresses the unwanted surface waves. **Figure 1** shows a set of simulations for a monopole antenna radiating into a coupling medium where the active part of the antenna is positioned several centimeters above a Plexiglas plate [40]. For the low-attenuation liquids (low conductivity), the surface waves easily reach the Plexiglas via coaxial modes traveling on the outsides of the feedlines. Once sufficiently powerful signals reach the low-loss plate, planar modes are excited and the waves propagate unimpeded everywhere. However, as the conductivity of the liquid is increased, the coaxial modes are dramatically reduced, to the point that in the last example no signal reaches the Plexiglas. In this instance, the desired signal still propagates in a well-behaved beam pattern from the active part of the monopole antenna. In effect, we have traded a nearly impossible problem, i.e. uncontrolled propagation from damaging multi-path signals, for the need for a high dynamic range VNA, which is merely a difficult problem. It should be noted that multiport VNAs with the necessary dynamic range are now commercially available from some vendors, albeit at a significant cost.
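To see why dynamic range becomes the limiting factor, a short sketch estimates plane-wave attenuation in a lossy bath (the permittivity, conductivity, frequency, and path length are assumed, glycerin-water-like figures, not measured Dartmouth parameters):

```python
import numpy as np

EPS0 = 8.854e-12       # vacuum permittivity (F/m)
MU0 = 4e-7 * np.pi     # vacuum permeability (H/m)

def attenuation_db_per_cm(f, eps_r, sigma):
    """Plane-wave attenuation constant in a lossy dielectric, in dB/cm."""
    w = 2 * np.pi * f
    loss_tan = sigma / (w * eps_r * EPS0)
    alpha = w * np.sqrt(MU0 * eps_r * EPS0 / 2.0) * \
            np.sqrt(np.sqrt(1.0 + loss_tan ** 2) - 1.0)  # nepers/m
    return 8.686 * alpha / 100.0                          # convert to dB/cm

# Illustrative glycerin:water-like bath at 1.3 GHz (values assumed):
loss = attenuation_db_per_cm(1.3e9, eps_r=25.0, sigma=1.0)  # ~3 dB/cm
span_db = 15.0 * loss  # a 15 cm transmit-receive span costs ~45-50 dB
```

That path loss is consumed before any scattering loss is counted, which is why the receiver chain must supply dynamic range well beyond what a free-space link budget would suggest.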

As a side note, in explicitly dealing with the multi-path issue, we have been able to realize several opportunities that would not have been possible had we taken a more conventionally intuitive approach. For instance, we have found that the monopole antennas are ideal in this setting. First, the naturally occurring resistive loading of the lossy bath dramatically improves the monopole antenna bandwidth [41]. These can be used from roughly 500 MHz to 3 GHz with a 10 dB return loss across the band, well in excess of that for most conventional antennas. While these antennas are essentially isotropic radiators, their low profile allows them to be packed tightly around the target, which dramatically reduces the propagation distances compared with more

#### **Figure 1.**

*Simulations of the field patterns and associated surface waves for differing coupling bath conductivities (S/m) (Permission granted to re-print images by Human Press [40]).*

conventional aperture antennas. This shortening of the propagation distance easily compensates for the signal loss suffered due to the lower directivity; ultimately, the close-packing feature is a substantial advantage that outweighs the loss of directivity. In addition, because of the lossy bath, the antennas' low physical profile, and the isotropic radiation pattern, there is essentially no mutual coupling between antennas even when they are spaced as closely as 2 cm apart [42]. Mutual coupling can significantly degrade antenna array performance. Finally, the combination of the lossy medium and the low-profile antennas makes the broadcast waves appear to be propagating in a mostly dielectric medium. This allows us to exploit the discrete dipole approximation (DDA) as an efficient means of computing the forward solution, which is recognized as the largest time cost in the reconstruction process [43]. We are now able to recover 2D images in 6 s and estimate that fully 3D images can be reconstructed in a few minutes, all without the aid of parallel processors or graphical processing units (GPUs). Overall, the advantages of using a lossy coupling medium have led to dramatic innovations.

Finally, while much has been written about the mathematics and algorithms of microwave tomography and inverse problems, the most common approach is a nonlinear, iterative one which broadly falls under the category of multi-parameter estimation problems. These problems have been studied extensively within the probability and statistics community, for which a host of definitions and techniques have been developed to optimize algorithmic performance. In particular, work by Box and Cox in the 1960s devised ways to assess the performance of different single-step and iterative algorithms and derived a wide range of suitable transformations to improve performance [44]. Their focus centered on characterizing problems where the data was inherently heteroscedastic and developing transformations to make the data more homoscedastic, i.e. amenable to standard least squares multi-parameter estimation techniques. Two of the more ubiquitous examples are the log transforms used in X-ray CT and optical coherence tomography (OCT) [45, 46]. The basic assumption is that the residual differences between the measured and computed field values should have zero mean and a normal distribution. A convenient way to test this is to simply examine the residual vector after the reconstruction [47]. For the X-ray CT case, the image reconstructions are simply not possible without the transformation. This is also the case for OCT, for which the log transform is now widely adopted [48].
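The variance-stabilizing effect of the log transform can be demonstrated on synthetic data with multiplicative noise (a crude stand-in for measured field amplitudes; the noise level and dynamic range are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Field amplitudes spanning ~5 orders of magnitude, corrupted by
# multiplicative (proportional) noise.
true = np.logspace(0, 5, 2000)
meas = true * (1.0 + 0.05 * rng.standard_normal(true.size))

# Untransformed residuals scale with signal strength (heteroscedastic);
# after a log transform the spread is uniform (homoscedastic), which
# least squares estimation implicitly assumes.
res_lin = meas - true
res_log = np.log(meas) - np.log(true)

# Ratio of residual spread at the strong-signal end vs the weak-signal end:
spread_lin = res_lin[true > 1e4].std() / res_lin[true < 1e1].std()
spread_log = res_log[true > 1e4].std() / res_log[true < 1e1].std()
```

Here `spread_lin` is several orders of magnitude while `spread_log` is near unity, mirroring the residual histograms of **Figure 2a** and **b**.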

In earlier work, we demonstrated that the residual data for the microwave case was highly heteroscedastic when applied to an algorithm operating directly on the complex field data (**Figure 2a**) [34]. Our interpretation was that this was largely due to the wide dynamic range of the field values. However, once we applied the log transformation, the residuals were significantly more normal with a near-zero mean (**Figure 2b**). The main challenge here is that the log transform of a complex number separates into the log magnitude and the phase [49]. The phase term immediately implies that some form of unwrapping may be necessary. For the X-ray CT case, the detected signals are all real numbers, so there is no phase term. In OCT, the governing equation for the light is the transport equation, and the phase is generated by harmonically modulating the light with a 100 MHz signal [46]. The major point here is that the wavelength associated with the 100 MHz modulation is very large, such that there is never enough scattering to generate phase changes greater than ±180 degrees. In effect, the data is always unwrapped. However, for the microwave breast imaging case, the dielectric scatterers are often physically on the same order of size as the wavelengths, and the scattered fields can quite frequently change phase by values well

**Figure 2.**

*Histograms of the residual error vector after reconstruction for the (a) non-transformed and (b) log transformed algorithms (Permission granted by Wiley Publishers to re-print graphs [34]).*

in excess of ±180 degrees. For parameter estimation problems utilizing complex data, it is critical that all of the measured and computed phase pairs be on the same Riemann sheet for the algorithms to work properly. We briefly describe methods for unwrapping the phase in Section 2.2; they are covered in more detail in Meaney et al. [50].
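The complex log transform and the resulting unwrapping requirement can be sketched as follows (the delay value and frequency sweep are hypothetical, standing in for one transmit/receive pair measured across the band):

```python
import numpy as np

# The log transform of a complex field separates into log-magnitude and phase:
#   log(E) = log|E| + j*arg(E)
# arg() only returns principal values in (-pi, pi], so a field whose true
# phase retards by more than 180 degrees lands on the wrong Riemann sheet.
f = np.linspace(0.5e9, 3.0e9, 201)        # hypothetical frequency sweep
true_phase = -2 * np.pi * f * 1.2e-9      # smooth delay-like phase, several cycles
E = 1e-3 * np.exp(1j * true_phase)        # synthetic measured field

wrapped = np.angle(E)                      # principal values only
unwrapped = np.unwrap(wrapped)             # restore continuity vs frequency
log_E = np.log(np.abs(E)) + 1j * unwrapped # transformed data for the solver
```

Unwrapping along frequency works only when adjacent samples differ by less than 180 degrees, which is one reason wideband antennas with fine frequency sampling make the transformed algorithm practical.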

Operationally, the key is that when starting from a baseline of a homogeneous medium, adding a contrasting target to the imaging zone can change the phases measured or computed for the different transmit/receive pairs substantially, often easily exceeding the single Riemann sheet bounds of −180 to +180 degrees. Depending on the antenna orientations, there are nominally some cases where the measured and computed phases are already on the same Riemann sheet at the start of the algorithm, but there can also be many that are not. In cases where they are not, mapping all of the measured and computed phases to the baseline Riemann sheet of −180 to +180 degrees (which is effectively what happens when using the non-transformed algorithm) essentially transforms that element into a "bad" data point. While not universally viewed in this context, the various solutions that have been proposed generally work with the net effect of forcing the starting measured and computed phase values onto the same Riemann sheet. These include: (1) introducing a priori information [24], (2) frequency hopping [51], and (3) simply adding more data [25, 27]. A priori information works because it effectively assumes a sample target at the start of the image reconstruction process that has characteristics similar to the actual target. The desired result is that it generally positions the measured and computed phases for all or most transmit/receive pairs on the same Riemann sheet. There are several reports utilizing this technique in simulation that appear promising; however, these techniques have generally not advanced beyond simulation studies. The results unfortunately end up being biased by the quality of the initial guess.
Frequency hopping acts in a similar manner to applying a priori information in that images are reconstructed at progressively increasing frequencies, with the results from lower frequencies used as starting estimates for the subsequent higher frequency reconstructions. At the lowest frequency, the phase changes can be assumed to be modest, so that all of the data is effectively unwrapped. While the lower frequency images can be quite blurred because of the associated larger wavelengths, the algorithms converge more reliably. The property images that are carried forward to the next higher frequency can be close enough to the actual images at the increased frequency that they essentially position the phases for all transmit/receive pairs on the appropriate Riemann sheet. The notion is that the resolution increases at each successively higher frequency until one reaches the highest frequency. Finally, a number of groups have advocated utilizing substantially more measurement data than that prescribed by the Dartmouth team [25, 27]. While these often advance the notion that there needs to be as much measurement data as there are unknowns, our own experience, based on a wealth of literature, suggests that this may not be necessary [52]. An equally valid interpretation in the context of this discussion is that they hope to increase the amount of "good" data so that it simply outweighs the amount of "bad" data, allowing a least squares imaging process to achieve a reasonable image. For many real situations, each of these solutions may be unrealistic. Accurate a priori information may be difficult to generate at the time of or before the actual imaging session. Frequency hopping may be unrealistic because it is difficult to devise antennas with sufficient bandwidth to accommodate the algorithm. Finally, adding more data inevitably implies that the measurement system will require more channels, which inherently leads to increases in algorithm complexity and hardware costs.

The Dartmouth team has devised robust unwrapping techniques for both the measured and computed phases, which are briefly summarized in Section 2.2 [50]. These generally exploit the wide operating bandwidth of the monopole antennas and even the nature of the algorithm's convergence. One important consequence of these developments is that the amount of measurement data can be kept to a minimum (typically 16 antennas configured in a circle for 2D images), which dramatically reduces overall system cost and complexity [52]. The algorithm is fast and does not suffer from convergence to unwanted local minima, even when starting from an initial estimate of the coupling bath properties. These approaches have been developed in the context of utilizing actual measurement data to maximize the benefit from each measurement while not imposing Riemann sheet criteria that essentially transform "good" data into "bad". The approach is summarized in Section 2.2 and demonstrates excellent convergence behavior using real measurement data.
