**4.4. Segmentation of ultrasound data**

**Figure 16.** A) 4D (3D+t) ultrasound of the liver. Example image before (top row, B) and after (bottom row, C) registration. The middle and right panels (B, C) show the evolution over time (vertical axis) of the horizontal and vertical profiles, respectively, as indicated by the cross in the left panel. After registration, the motion has been successfully removed from the image (straight vertical lines).
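The motion removal shown in Figure 16 amounts to registering every time frame of the 4D sequence to a common reference. The following is a minimal, hypothetical sketch, assuming 1D intensity profiles and pure translational motion (far simpler than registering full 3D volumes); the per-frame shift is estimated from the peak of the cross-correlation:

```python
import numpy as np

def alignment_shift(ref, frame):
    """Roll amount (in samples) that best aligns `frame` with `ref`,
    estimated from the peak of their cross-correlation."""
    corr = np.correlate(frame - frame.mean(), ref - ref.mean(), mode="full")
    # The peak lands at index (len(ref) - 1 + d) when `frame` is `ref`
    # shifted by d, so the roll that undoes the motion is -d.
    return (len(ref) - 1) - int(np.argmax(corr))

def register_sequence(profiles):
    """Align every time frame of a (time, depth) array to the first frame."""
    ref = profiles[0]
    aligned = [ref]
    for frame in profiles[1:]:
        aligned.append(np.roll(frame, alignment_shift(ref, frame)))
    return np.stack(aligned)
```

After registration, a bright feature traces a straight vertical line through time, as in the right-hand panels of the figure.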

Fully automatic segmentation of structures from B-mode ultrasound images is a challenging task. The clarity and contrast of structure boundaries depend heavily on their orientation relative to the sound wave and on the acoustic properties of the surrounding tissues. Consequently, the boundaries of interest are often broken, or at least unclear, in parts of the image volume. It is therefore necessary to use *a priori* knowledge about the shape and appearance of the structure of interest in order to obtain reliable segmentation results. This *a priori* knowledge can be obtained by manually segmenting the structure of interest in a set of training data; shape and appearance statistics derived from the training data can then be used to segment the structure in new datasets. Akbari et al. [88] and Zhan et al. [89] used this approach for segmentation of the prostate in 3D ultrasound images, and Xie et al. [90] used a similar approach for segmentation of the kidneys from 2D ultrasound images. The disadvantage of this method is the requirement for a database of training data with manual segmentations. The method can also be difficult to employ if the shape and appearance of the structure are unknown or exhibit large variations, as is the case for tumors and other pathologies. Several groups have also presented segmentation algorithms for ultrasound images of bone surfaces, and particularly the spine [91-94]. In these cases, the purpose of the segmentation process is to extract the bone surface from intraoperative ultrasound images for registration to preoperative CT images. The ultrasound images are filtered in order to highlight the bone surface, and in some cases the characteristic shadow behind the bone surface can be used for segmentation purposes, as shown by Yan et al. [94], who used backward scan-line tracing to extract the bone surface from ultrasound images of the spine.
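The idea behind shadow-based backward scan-line tracing can be sketched as follows. This is an illustrative reconstruction, not the published algorithm of Yan et al.; the fixed brightness threshold and the image layout (rows correspond to depth, columns to scan lines) are assumptions made for the example:

```python
import numpy as np

def trace_bone_surface(bmode, threshold=0.5):
    """For each scan line (column), walk backwards from the deepest sample
    towards the transducer and return the depth index of the first sample
    brighter than `threshold`, i.e. the echo just above the acoustic shadow.
    Columns containing no bright echo yield -1."""
    rows, cols = bmode.shape
    surface = np.full(cols, -1, dtype=int)
    for c in range(cols):
        for r in range(rows - 1, -1, -1):  # deep -> shallow
            if bmode[r, c] > threshold:
                surface[c] = r
                break
    return surface
```

Starting from the deep (shadowed) end of each scan line ensures that soft-tissue echoes above the bone are not mistaken for the bone surface itself.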

One of the great advantages of ultrasound is real-time dynamic imaging. Methods based on shape and appearance statistics are in general not able to run fast enough to capture the dynamics of a moving organ such as the heart. Orderud et al. [95] therefore proposed a method for real-time segmentation of the beating heart: they fitted the control points of a model of the left ventricle to 4D ultrasound data (figure 17), and the fitting process ran in real time.
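Real-time model fitting of this kind iteratively displaces each control point towards image evidence. The sketch below is a deliberately naive illustration of one such iteration, searching for the strongest intensity step along a profile sampled through each control point; it is not the Kalman-filter formulation used by Orderud et al.:

```python
import numpy as np

def edge_offset(profile):
    """Offset (in samples) from the profile centre to the strongest
    intensity step along a normal sampled through a control point."""
    grad = np.abs(np.diff(profile))
    return int(np.argmax(grad)) - len(profile) // 2

def fit_step(points, normals, profiles, gain=0.5):
    """One naive fitting iteration: move each control point along its unit
    normal a fraction `gain` of the way towards the detected edge."""
    offsets = np.array([edge_offset(p) for p in profiles], dtype=float)
    return points + gain * offsets[:, None] * normals
```

Repeating such an update for every incoming frame, with the previous contour as the starting point, is what makes frame-rate tracking of a moving boundary feasible.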



**5. Ultrasound-based visualization and navigation**

48 Advancements and Breakthroughs in Ultrasound Imaging

The amount of image data available for any given patient is increasing and may include preoperative structural data such as CT and MRI (T1, T2, FLAIR, MR angiography, etc.), preoperative mapping of important gray matter (fMRI) and white matter (DTI) structures, functional data from PET, and intraoperative 3D ultrasound (B-mode and Doppler), in addition to images from microscopes, endoscopes and laparoscopes. These sources of information are not all equally important at all times during the procedure, and a selection of data has to be made in order to present only those images that are relevant to the surgeon at that particular point in time.
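This selection problem can be made concrete with a purely illustrative data structure; the phase names and dataset labels below are invented for the example and are not taken from any particular navigation system:

```python
# Purely illustrative display plan: which of the available, co-registered
# datasets to present during each phase of a procedure. Phase names and
# dataset labels are invented for this example.
DISPLAY_PLAN = {
    "planning":  ["MRI T1", "fMRI", "DTI", "PET"],
    "approach":  ["MRI T1", "MR angiography"],
    "resection": ["intraoperative 3D ultrasound", "power Doppler", "MRI T1"],
}

def datasets_to_display(phase, default=("MRI T1",)):
    """Return the datasets considered relevant for the given surgical phase."""
    return DISPLAY_PLAN.get(phase, list(default))
```

In a real system the mapping would be configurable per procedure and per surgeon, but the principle is the same: the display is driven by the current phase rather than by everything that happens to be available.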

There are various ways to classify the different visualization techniques that exist. For medical visualization of 3D data from modalities like CT, MRI and US, it is common to refer to three approaches:

**•** *Slicing*: Slicing means extracting a 2D plane from the 3D data and can further be classified according to how the 2D slice data are generated and how this information is displayed. The sequence of slices acquired by the modality and used to generate a regular image volume is often referred to as the raw or natural slices. From the reconstructed volume we can extract both orthogonal (figure 18A) and oblique (figure 18B) slices. Orthogonal slicing is often used in systems for pre- and postoperative visualization, as well as in intraoperative navigation systems, where the tip of the tracked instrument determines the three extracted slices. The slices can also be orthogonal relative to the tracked instrument or the surgeon's view (i.e., oblique slicing relative to the volume axis or patient), and this is becoming an increasingly popular option in navigation systems. When a surgical tool cuts through multiple volumes, several slices are generated. These slices can then be combined in different ways using various overlay and fusion techniques.


Ultrasound-Based Guidance and Therapy http://dx.doi.org/10.5772/55884 51


**•** *Direct volume rendering*: Volume and geometric rendering techniques are not easily distinguished. Often the two approaches can produce similar results, and in some cases one approach may be considered both a volume rendering and a geometric rendering technique. Still, the term volume rendering is used to describe a direct rendering process applied to 3D data where information exists throughout a 3D space, instead of only on 2D surfaces defined in (and often extracted from) such a 3D space. The two most common approaches to volume rendering are volumetric ray casting and 2D/3D texture mapping (figure 17 A, B, D, E, G). In ray casting, each pixel in the image is determined by sending a ray into the volume and evaluating the voxel data encountered along the ray using a specified ray function (e.g., maximum, isovalue or compositing). Using 2D texture mapping, polygons are generated along the axis of the volume that is most closely aligned with the viewing direction. The data are then mapped onto these quads and projected into a picture using standard graphics hardware.

**•** *Geometric surface rendering*: The technique used to render the texture-mapped quads is essentially the same technique that is used to render geometric surface representations of relevant structures (figure 17 A-F). However, the geometric representations must first be extracted from the image information. While it is possible in some cases to extract a structure and generate a 3D model of it directly, using an isosurface extraction algorithm [96], the generation of an accurate geometric model from medical data often requires a segmentation step first. The most common surface representation consists of a large number of simple geometric primitives (e.g., triangles), though other possibilities exist. Furthermore, the surfaces can be made transparent so that it is possible to see what lies beneath the structure.

The challenge is to combine the available data and visualization methods into an optimal integrated multimodal scene that shows only the relevant information at any given time to the surgeon. Multimodal visualization and various image fusion techniques can be very beneficial when trying to take advantage of the best features of each modality. It is easier to perceive an integration of two or more volumes in the same scene than to mentally fuse the same volumes presented in separate display windows. This also offers an opportunity to pick relevant and necessary information from the most appropriate of the available datasets. Ideally, relevant information should include not only anatomical structures for reference and pathological structures to be targeted, but also important structures to be avoided. Finally, augmented reality techniques can be used to mix the virtual representation of the patient provided by 3D medical data (and models extracted from these) with the real representation provided by, for example, a microscope or a laparoscope, giving an even more realistic picture of the treatment delivered through small incisions in minimally invasive procedures.

**Figure 18.** Multimodal visualization. Orthogonal (A) and oblique (B) slicing; the position, and the position and orientation, of the tool are used to extract the slices, respectively. The three basic visualization types are shown in each image: the head is volume rendered in a 3D view that also shows geometric representations of both the tool and slice indicators, and corresponding slices are shown in a 2D view to the right. C) Display during freehand 3D ultrasound acquisition: real-time 2D ultrasound to the left and an indication of the US scan plane relative to MR data in a 3D and a 2D view to the top and bottom right, respectively. D) Overview of the probe relative to the head. E) Detailed view of real-time 2D ultrasound relative to MRA (red) and 3D power Doppler data (gray). F) Slice from ultrasound (top part) and MR (bottom part), with a surface model from MR shown in red (middle part); the mismatch between the US slice and the MR tumor model is clearly visible. G) 3D ultrasound (gray) is used to correct MRA (moved from the red to the green position) during an aneurysm operation.

**6. Ultrasound-based navigation accuracy**

The delicacy, precision and extent of the work the surgeon can perform based on image information rely on his or her confidence in the overall clinical accuracy and the anatomical or pathological representation. The overall clinical accuracy in image-guided surgery is the difference between the location of a surgical tool relative to some structure as indicated in the image information, and its location relative to the same structure in the patient. This accuracy is difficult to assess in a clinical setting due to the lack of fixed and well-defined landmarks inside the patient that can be accurately reached with a pointer. Common practice is therefore to estimate the system's overall accuracy in a controlled laboratory setting using precisely built phantoms. In order to draw conclusions about the potential clinical accuracy, the differences between the clinical and the laboratory settings must be carefully examined.

**6.1. Error sources and key points**

A comprehensive analysis of the error sources involved in neuronavigation based on intraoperative ultrasound as well as preoperative MRI can be found in Lindseth et al. [97]. The overall accuracy is often referred to as the Navigation System Accuracy (NSA), and the essential points to remember can be summarized as follows:
