**4. Registration and segmentation in ultrasound-based navigation**

Ultrasound-Based Guidance and Therapy http://dx.doi.org/10.5772/55884 45

**Figure 13.** Different approaches to integrating (3D) ultrasound and navigation. A) A two-rack solution and examples of one-rack solutions (B and C).

Registration is the process of transforming an image into the coordinate system of a patient, or of another image. After registration, the same anatomical features have the same coordinates in both the image and the patient, or in both images. Image-to-patient registration is one of the cornerstones of any navigation system and is necessary for navigation using pre-operative images such as MR and/or CT. Image-to-image registration is useful to align pre-operative images before registration to the patient, and also to update the pre-operative images during surgery using, for example, intra-operative US. Only the latter involves US and will be the focus of this section, but image-to-patient registration is important for proper initialization of the MR/CT-to-US registration. The main motivation behind image-to-image registration is that different images contain different and complementary information about the patient at a given point in time. When we bring the images into the same coordinate system, and into the coordinate system of the patient, we can take advantage of more of the useful information in the different images. Such information can be the size and location of the surgical target, important blood vessels, critical structures that should be avoided, etc. The registration method used in each case depends heavily on the type of images we want to register. The type of spatial transformation, the way the similarity between the images is measured and the way this measure is optimized are the key components of any registration procedure.
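The three components just named — the spatial transform, the similarity measure and the optimizer — can be illustrated with a deliberately minimal sketch (our own toy example, not code from the chapter): a translation-only transform, sum of squared differences (SSD) as the similarity measure, and exhaustive grid search as the optimizer.

```python
import numpy as np

def ssd(a, b):
    """Similarity measure: sum of squared differences (lower is better)."""
    return float(np.sum((a - b) ** 2))

def register_translation(fixed, moving, max_shift=5):
    """Toy registration loop: the transform model is an integer 2D
    translation and the optimizer is an exhaustive grid search."""
    best, best_score = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ssd(fixed, shifted)
            if score < best_score:
                best_score, best = score, (dy, dx)
    return best

# Synthetic data: a bright square, displaced by (-2, 3) in the moving image
fixed = np.zeros((32, 32))
fixed[10:20, 12:22] = 1.0
moving = np.roll(np.roll(fixed, -2, axis=0), 3, axis=1)
print(register_translation(fixed, moving))  # → (2, -3), the correcting shift
```

Real systems swap each component independently: rigid or B-spline transforms instead of pure translation, mutual information or correlation instead of SSD, and gradient-based optimizers instead of exhaustive search.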

#### **4.1. Registration of preoperative images to the patient**

Image-to-patient registration is a necessary and crucial step in order to use pre-operative images for guidance. Intraoperative ultrasound only shows a limited portion of the surgical field and might require some experience to appreciate. Preoperative data can therefore be used for overview and interpretation. In neurosurgery, for example, it is not possible to acquire ultrasound images before opening of the dura. Pre-operative images are therefore necessary for planning the craniotomy.

One of the most frequently used registration methods consists of attaching self-adhesive markers, also called fiducials, to the patient's skin before MR or CT imaging. The markers can be identified in the images, and the corresponding markers can be identified on the patient using a tracked pointer once the patient is immobilized on the operating table (figure 14). A spatial transformation can then be computed that brings the image into the coordinate system of the patient. The surgeon can then point at the patient using a tracked pointer and see the corresponding location in the images on the computer screen. The use of markers for image-to-patient registration presents some limitations for both the patient and the hospital staff. First, fiducial-based registration requires an imaging session shortly before surgery to minimize the risk that markers fall off or are displaced. In many cases this imaging session comes in addition to an initial session needed for diagnosis. Any displacement of the fiducial markers between the imaging session and surgery will compromise the image-to-patient registration accuracy. The placement of fiducials also represents an inconvenience for patients and hospital staff in the preparations for the procedure.
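The transformation from corresponding point pairs can be computed in closed form. The sketch below is our own minimal illustration (the fiducial coordinates are synthetic) of the standard SVD-based least-squares solution for a rigid transform; it is not code from any of the navigation systems discussed here.

```python
import numpy as np

def fit_rigid(image_pts, patient_pts):
    """Least-squares rigid transform (R, t) mapping image-space fiducials
    onto patient-space fiducials, via SVD of the cross-covariance matrix.
    Both inputs are (N, 3) arrays of corresponding points, N >= 3."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ ci
    return R, t

# Synthetic check: four "fiducials" under a known pose
rng = np.random.default_rng(0)
pts = rng.random((4, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -2.0, 5.0])
R, t = fit_rigid(pts, pts @ R_true.T + t_true)
fre = np.linalg.norm(pts @ R.T + t - (pts @ R_true.T + t_true), axis=1).mean()
print(round(fre, 6))  # → 0.0 (fiducial registration error on noise-free data)
```

With real, noisy fiducial positions the residual of this fit is the fiducial registration error; it is only an indirect indicator of the accuracy at the surgical target.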


44 Advancements and Breakthroughs in Ultrasound Imaging

**Figure 14.** Image-to-Patient registration using corresponding points between image space (A) and physical space (B).

In order to avoid the use of fiducial markers, natural anatomical landmarks can be used for patient registration. Typical features in the context of neurosurgery are the medial and lateral corners of the eyes, the nose and the ears. Like fiducial-based registration, an image-to-patient registration framework using natural anatomical landmarks requires identification of points in the pre-operative images. The typically used landmarks are almost coplanar, and they are all located in a relatively small area around the face and ears. This might compromise the registration accuracy in other parts of the head, and possibly close to the surgical target [66]. A number of groups have presented surface matching techniques to address this issue. The skin surface of the patient is segmented from pre-operative data and registered to a set of surface points acquired in the operating room. Techniques to acquire surface points in the operating room include cameras [67, 68], laser surface scanners [69-71] and tracked pointers [72]. The accuracy of the different methods has been evaluated and compared [71, 73-75]. Both landmark- and surface-based registration alone are less accurate than fiducial-based registration. Different approaches combining registration based on anatomical landmarks and alignment of surfaces have therefore been developed.
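Such a surface match is commonly computed with the iterative closest point (ICP) algorithm. The following is a minimal sketch under simplifying assumptions (both surfaces given as 3D point clouds, point-to-point matching, brute-force nearest neighbours, synthetic grid data); clinical implementations add outlier rejection and accumulate the full transform rather than returning moved points.

```python
import numpy as np

def icp_rigid(moving, fixed, iters=30):
    """Minimal point-to-point ICP: pair every moving point with its nearest
    fixed point, solve the rigid fit in closed form (SVD), and repeat.
    Returns the moving points aligned onto the fixed surface."""
    def best_fit(a, b):
        ca, cb = a.mean(axis=0), b.mean(axis=0)
        U, _, Vt = np.linalg.svd((a - ca).T @ (b - cb))
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cb - R @ ca
    src = moving.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((src[:, None, :] - fixed[None, :, :]) ** 2).sum(axis=-1)
        R, t = best_fit(src, fixed[d2.argmin(axis=1)])
        src = src @ R.T + t
    return src

# Demo: a grid "skin surface" and a slightly rotated/translated copy of it
fixed_pts = np.array([[x, y, z] for x in range(5)
                      for y in range(5) for z in range(5)], dtype=float)
c, s = np.cos(0.05), np.sin(0.05)
R_small = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
moving_pts = fixed_pts @ R_small.T + np.array([0.1, -0.05, 0.08])
aligned = icp_rigid(moving_pts, fixed_pts)
print(np.abs(aligned - fixed_pts).max() < 1e-6)  # → True
```

ICP only converges to the correct alignment from a reasonable starting pose, which is why a coarse landmark-based registration is typically used for initialization.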

As surgery proceeds, tissue will shift and deform due to gravity, retraction, resection and the administration of drugs. Consequently, the pre-operative images no longer correspond to the patient. In this case, intraoperative ultrasound can be used for direct guidance and to update the location of the pre-operative data according to the surgical reality at a given point in time.

#### **4.2. Ultrasound-based update of preoperative data**

As surgery proceeds, the pre-operative images no longer reflect reality, and updated information is necessary for accurate navigation. Intra-operative ultrasound can be acquired when needed during the procedure and used for direct guidance and resection control, but also as a registration target for pre-operative images in order to update their position. This is particularly important for images such as functional MRI (fMRI) and diffusion tensor imaging (DTI) in neurosurgery, because the information contained in these images cannot easily be re-acquired during the procedure. By performing MR/CT-to-US registration, the information contained in the pre-operative images can be shifted to the correct position at any given point in time (figure 15). Registration of MR/CT to US is a challenging task due to differences in image appearance and noise characteristics. The existing methods can be divided into two main categories:


**•** *Intensity-based methods*: These methods take the original images (MR/CT and B-mode US) as input, and the optimization of the registration parameters is computed from the image intensities, either directly or indirectly (blurring, gradients, etc.). Some of the existing methods use well-known similarity measures such as mutual information and cross-correlation, while others have developed similarity measures particularly adapted to the registration of MR/CT and ultrasound [76-82].

**•** *Feature-based methods*: These methods require segmentation or "enhancement" of particular features in the images to be registered. The registration algorithm then aligns the corresponding features in each image. In MR/CT-to-US registration such a feature might be the vascular tree [83-85]. Blood vessels are relatively easy to identify and segment in both MR angiography and Doppler ultrasound images, and are present in nearly any region of interest. A centerline or skeleton can be computed from the segmented vessels and used for registration. The most commonly used method for feature-based registration is the iterative closest point (ICP) algorithm [86]. In the case of vessel registration, all the points in the moving dataset are paired with the closest points in the fixed dataset. Based on these point correspondences, the registration parameters can be computed using the least-squares method. The resulting transformation is then applied to the moving dataset and new point correspondences can be computed. The process is then iterated until convergence.

**Figure 15.** Ultrasound-based shift correction of preoperative MR data during an AVM operation. The top and bottom rows show the situation before and after the MR-to-US registration, respectively. A) Ultrasound. D) MR. MR (gray) and US (green) before (B) and after (E) registration. Centerlines from US (green) and MR (red) before (C) and after (F) registration.

Several methods within the two main categories have been validated using retrospective clinical data [12, 14, 15]. So far, no automatic method has been thoroughly validated intraoperatively (figure 15). The use of automatic registration methods in the operating room requires high-quality data and straightforward, accurate, robust and fast image processing. With all this in place, image registration using intraoperative ultrasound will be able to correct the position of pre-operative data and thereby provide updated and reliable information about anatomy, pathology and function during surgery.

#### **4.3. Motion correction using 4D ultrasound**

Intensity-based registration of ultrasound images can also be used to track the motion of an organ of interest. In the case of high-intensity focused ultrasound (HIFU or FUS) or radiotherapy, the organ can be imaged using 4D ultrasound (3D + time, or real-time 3D) in order to monitor the temporal changes in anatomy during the imaging, planning and delivery of treatment. The consecutive 3D images can then be registered in order to estimate the organ motion (figure 16). The positioning of the HIFU or radiation beam can then be modified accordingly in order to hit the target at any point in time. We have validated automatic motion estimation from 4D ultrasound in the liver using a non-rigid registration algorithm and a group-wise optimization approach as part of an ongoing study to be published in the near future. The offline analysis was performed using a recently published non-rigid registration algorithm that was specifically designed for motion estimation from dynamic imaging data [87]. The method registers the entire 4D sequence in a group-wise optimization fashion, thus avoiding a bias towards a specifically chosen reference time point. Both spatial and temporal smoothness of the transformations are enforced by using a 4D free-form B-spline deformation model. For the evaluation, three healthy volunteers were scanned over several breathing cycles from three different positions and angles on the abdomen (nine 4D scans in total). A skilled physician performed the scanning and manually annotated well-defined anatomical landmarks for assessment of the automatic algorithm. Four engineers each annotated these points in all time frames, and the mean of their annotations was taken as a gold standard. The error of the automatic motion estimation method was compared with the inter-observer variability. The registration method estimated liver motion better than the individual observers and had an error (75th percentile over all datasets) of 1 mm. We conclude that the methodology was able to accurately track the motion of the liver in the 4D ultrasound data.

This methodology may be used intraoperatively to guide ablation of moving targets in the abdomen if the registration method can be run in real-time and the ultrasound probe can be made MR compatible (required for MR-guided HIFU).
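As a much simplified illustration of intensity-based motion estimation between two frames, the sketch below estimates a purely translational displacement by phase correlation. This is our own toy stand-in, not the non-rigid, group-wise B-spline method of [87]; the volumes are synthetic.

```python
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Estimate the integer voxel translation of frame_a relative to
    frame_b: the normalized cross-power spectrum of the two frames has
    an inverse FFT that peaks at the displacement."""
    Fa, Fb = np.fft.fftn(frame_a), np.fft.fftn(frame_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12               # keep phase only
    corr = np.fft.ifftn(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half of each axis to negative shifts
    return tuple(int(p - n) if p > n // 2 else int(p)
                 for p, n in zip(peak, corr.shape))

# Synthetic "organ" blob displaced by (2, -1, 3) voxels between two frames
vol = np.zeros((16, 16, 16))
vol[4:9, 5:10, 6:11] = 1.0
moved = np.roll(vol, (2, -1, 3), axis=(0, 1, 2))
print(phase_correlation_shift(moved, vol))  # → (2, -1, 3)
```

Replacing this single global translation with local, smoothly varying displacements — for example a free-form B-spline model optimized over the whole 4D sequence — is what separates such a toy from the method validated in the study above.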


**Figure 16.** A) 4D (3D+t) ultrasound of the liver. Example image before (top row, B) and after (bottom row, C) registration. The middle (B-C) and right panels, respectively, show the evolution over time (vertical axis) of the horizontal and vertical profiles indicated by the cross in the left panel. After registration, the motion has been successfully removed from the image (straight vertical lines).

#### **4.4. Segmentation of ultrasound data**

Fully automatic segmentation of structures from B-mode ultrasound images is a challenging task. The clarity and contrast of structure boundaries depend heavily on their orientation relative to the sound wave and on the acoustic properties of the surrounding tissues. Consequently, the boundaries of interest are often broken, or at least unclear, in parts of the image volume. It is therefore necessary to use *a priori* knowledge about the shape and appearance of the structure of interest in order to obtain reliable segmentation results. This *a priori* knowledge can be obtained by manually segmenting the structure of interest in a set of training data. Then, shape and appearance statistics can be used to segment the structure in new datasets. Akbari et al. [88] and Zhan et al. [89] used this approach for segmentation of the prostate in 3D ultrasound images, and Xie et al. [90] used a similar approach for segmentation of the kidneys from 2D ultrasound images. The disadvantage of this method is the requirement for a database of training data with manual segmentations. The method can also be difficult to employ if the shape and appearance of the structure are unknown or present large variations, as with tumors and other pathologies. Several groups have also presented segmentation algorithms for ultrasound images of bone surfaces, and particularly the spine [91-94]. In these cases, the purpose of the segmentation process is to extract the bone surface from intraoperative ultrasound images for registration to pre-operative CT images. The ultrasound images are filtered in order to highlight the bone surface, and in some cases the characteristic shadow behind the bone surface can be used for segmentation purposes, as shown by Yan et al. [94], who used backwards scan-line tracing to extract the bone surface from ultrasound images of the spine.

One of the great advantages of ultrasound is real-time dynamic imaging. Methods based on shape and appearance statistics are in general not able to run fast enough to capture the dynamics of a moving organ such as the heart. Orderud et al. [95] proposed a method for real-time segmentation of the beating heart. They fitted the control points of a model of the left ventricle to 4D ultrasound data (figure 17). The fitting process was run in real time using a state estimation approach and a Kalman filter. When the shape, appearance and localization of the structure are unknown, semi-automatic or manual segmentation by an expert might be the only way to obtain satisfactory results. Segmentation of Doppler ultrasound images, on the other hand, is usually straightforward using simple thresholding methods. Vascular structures, however, often appear with a diameter that is too large in the Doppler ultrasound images, causing neighboring vessels to be smeared together. Reliable segmentation of the vascular tree can therefore be challenging due to the spatial resolution of the images.

**Figure 17.** A 3D model of the left ventricle (A) matched in real-time to 4D ultrasound, shown here as slices in 3D (A) and 2D (B and C). Source: Orderud [95].

**5. Ultrasound-based visualization and navigation**

The amount of image data available for any given patient is increasing and may include preoperative structural data such as CT and MRI (T1, T2, FLAIR, MR angiography etc.), preoperative mapping of important gray (fMRI) and white matter (DTI), functional data from PET, and intra-operative 3D ultrasound (B-mode and Doppler), in addition to images from microscopes, endoscopes and laparoscopes. Not all these sources of information are equally important at all times during the procedure, and a selection has to be made in order to present only those images that are relevant for the surgeon at that particular point in time.

There are various ways to classify the different visualization techniques that exist. For medical visualization of 3D data from modalities like CT, MRI and US, it is common to refer to three approaches:

**•** *Slicing*: Slicing means extracting a 2D plane from the 3D data and can be further classified according to how the 2D slice data are generated and how this information is displayed. The sequence of slices acquired by the modality and used to generate a regular image volume is often referred to as the raw or natural slices. From the reconstructed volume we can extract both orthogonal (figure 18A) and oblique (figure 18B) slices. Orthogonal slicing is often used in systems for pre- and postoperative visualization, as well as in intraoperative
