is computed, a threshold is empirically selected to obtain the optimal-size ROIs. **Figure 2** illustrates some examples of saliency maps generated from the pre-processed images and of the ROI segmentation obtained from the saliency maps.

18 Medical Imaging and Image-Guided Interventions

## **4. Feature extraction**

Feature extraction plays a key role in our breast cancer diagnosis framework; it can only be carried out once the suspicious breast-mass regions have been appropriately defined. In selecting effective features from mammogram lesions, great research effort has focused on capturing the texture of images and improving correlation with human visual similarity. Among these approaches, the Curvelet transform, Gabor wavelets, the Discrete Wavelet Transform (DWT), the Spherical Wavelet Transform (SWT), the Contourlet Transform (CT), and the local binary pattern (LBP) have been extensively investigated and compared, in addition to other popular texture features derived from co-occurrence matrices and the Fourier transform [39–44]. Since clinically and visually similar lesions or disease patterns can appear at different locations of a mammogram with different orientations, the selected features should be invariant to linear shift and rotation of the targeted lesions. To satisfy these criteria, NSCT- and HOG-based approaches are used for feature extraction, in addition to traditional shape, mass, and GLCM-based features computed from regions of interest (ROIs) whose size is adaptively adjusted based on the actual mass-region segmentation results.

## **4.1. Non-subsampled contourlet transform**

Despite the many applications of the Wavelet Transform in medical image analysis, it has limitations in capturing directional image information such as smooth contours and directional edges. For example, orthogonal wavelets consider only horizontally, vertically, and diagonally directed discontinuities. These directions do not effectively express the edges and textures of medical images such as breast mammograms, which contain smooth curves representing benign and malignant masses, micro-calcifications, and so on. To express contour-like smooth edges effectively and directly in the discrete domain, Do and Vetterli introduced the Contourlet Transform [45], an extension of the wavelet transform built as a multi-scale transform that combines the Laplacian pyramid with directional filter banks (DFB); in addition to the properties of the Wavelet Transform, it offers directionality and anisotropy. Although the Contourlet Transform represents images more effectively than the Wavelet Transform, it is not shift-invariant because of its down-sampling and up-sampling stages. The Non-Subsampled Contourlet Transform (NSCT) was proposed by Cunha et al. [21] to compensate for this limitation, and owing to its beneficial features, the NSCT is used in this work to represent the breast masses by their features.

In the NSCT, to avoid the frequency aliasing of the CT and to obtain shift-invariance, the non-subsampled Laplacian pyramid (NSLP) and the non-subsampled directional filter banks (NSDFB) are utilized, based on the idealized frequency partitioning obtained with the structure proposed in [21]. In addition, the multi-scale and directional decomposition stages are independent of each other. The number of decomposition directions is adjustable and can be set to any value 2<sup>*l<sub>j</sub>*</sup>, where *l<sub>j</sub>* is the number of direction levels at scale *j*, 1 ⩽ *j* ⩽ *J*, and *J* is the number of decomposition scales. Unlike the classical CT, all subbands of the NSCT have the same resolution; that is, the NSCT coefficients of each subband are in one-to-one correspondence with the original surface in the spatial domain. For feature extraction, a combination of the mean, variance, energy, entropy, skewness, and kurtosis of the coefficients of a 4-level non-subsampled contourlet transform is examined. An example of the NSCT of a mass is shown in **Figure 3**. The image is decomposed into four pyramidal levels, resulting in one, two, and eight sub-bands.
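The six per-subband statistics can be sketched as follows. This is a minimal illustration, not the chapter's implementation: it assumes the NSCT subband coefficient arrays have already been produced by a separate NSCT decomposition, and all function names are ours.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def subband_features(coeffs):
    """Six first-order statistics of one NSCT subband (any 2-D coefficient array):
    mean, variance, energy, entropy, skewness, kurtosis."""
    c = np.asarray(coeffs, dtype=float).ravel()
    energy = np.sum(c ** 2)
    # Shannon entropy of the normalized coefficient-magnitude distribution
    p = np.abs(c) / (np.sum(np.abs(c)) + 1e-12)
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return np.array([c.mean(), c.var(), energy, entropy, skew(c), kurtosis(c)])

def nsct_feature_vector(subbands):
    """Concatenate the statistics of every subband into one texture descriptor."""
    return np.concatenate([subband_features(s) for s in subbands])
```

With a 4-level decomposition, `subbands` would simply be the list of all directional subbands, so the descriptor length is six times the number of subbands.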

**Figure 3.** Non-subsampled contourlet transform of an ROI with four-level decomposition.

## **4.2. Eig(Hess)-HOG features**

The HOG is computed for each key point from a block; the key point denotes the center of the central cell of the block. The adjacent area of each key point is partitioned into cells, and a one-dimensional histogram of gradient orientations is accumulated for each cell. The histograms of all the cells together form the feature of each key point [22, 46]. A simple 1-D [−1, 0, 1] mask is used for the gradient computation. In conventional HOG, the grayscale image is first filtered with this mask to obtain the *x* and *y* derivatives of the image, as in Eq. (4).

$$\begin{aligned} f_x(x,y) &= I(x+1,y) - I(x-1,y) \qquad \forall x,y \\ f_y(x,y) &= I(x,y+1) - I(x,y-1) \qquad \forall x,y \end{aligned} \tag{4}$$

where *f<sub>x</sub>* and *f<sub>y</sub>* are the *x* and *y* components of the image gradient and *I*(*x*, *y*) is the intensity at position (*x*, *y*). The magnitude and orientation are calculated as in Eqs. (5) and (6):

$$m(x,y) = \sqrt{f_x(x,y)^2 + f_y(x,y)^2} \tag{5}$$

$$\theta(x,y) = \tan^{-1}\left(\frac{f_y(x,y)}{f_x(x,y)}\right) \tag{6}$$

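Eqs. (4)–(6), together with the magnitude-weighted eight-bin voting used by the descriptor, can be sketched in NumPy as follows. This is an illustrative single-cell version under our own naming; `arctan2` stands in for tan<sup>−1</sup> so that pixels with a zero *x* derivative do not cause a division by zero.

```python
import numpy as np

def gradient_histogram(img, n_bins=8):
    """Eqs. (4)-(6): central differences with the 1-D [-1, 0, 1] mask,
    gradient magnitude and orientation, then a magnitude-weighted
    orientation histogram for one cell."""
    I = np.asarray(img, dtype=float)
    fx = np.zeros_like(I)
    fy = np.zeros_like(I)
    # Eq. (4): here the first array index plays the role of x (interior pixels only)
    fx[1:-1, :] = I[2:, :] - I[:-2, :]
    fy[:, 1:-1] = I[:, 2:] - I[:, :-2]
    m = np.sqrt(fx ** 2 + fy ** 2)                  # Eq. (5)
    theta = np.arctan2(fy, fx) % (2 * np.pi)        # Eq. (6), mapped to [0, 2*pi)
    # vote each pixel's magnitude into its orientation bin
    bins = np.minimum((theta / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=m.ravel(), minlength=n_bins)
    # L2 normalization, as applied per spatial block
    return hist / (np.linalg.norm(hist) + 1e-12)
```

A full descriptor would repeat this per cell of the block around each key point and concatenate the normalized histograms.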
The gradient orientations are partitioned into eight bins, and the gradient magnitude *m*(*x*, *y*) of each pixel is voted into that pixel's orientation bin. The orientation histogram of every cell and the spatial blocks are then normalized.

Eig(Hess)-HOG uses the Hessian matrix instead of the Gaussian derivative filters to compute the eigenvalues of the image surface. The Hessian matrix of an image is defined as the matrix of second-order partial derivatives of the grayscale image, and it contains more differential information than the conventional gradient: the second-order differentials provide a more accurate and detailed analysis of the function curves in breast masses [22].

The number of possible orientation bins is referred to as *N<sub>o</sub>* and the number of cells per direction as *N<sub>c</sub>*. We set the parameters of the descriptor to *N<sub>c</sub>* = 4 cells and *N<sub>o</sub>* = 8 bins, resulting in a total of *N<sub>o</sub>* × *N<sub>c</sub>*<sup>2</sup> = 128 elements in a HOG feature. **Figure 4** demonstrates the computation of the eigenvalues of the Hessian matrix for a mass ROI.

**Figure 4.** Computation of the eigenvalues of the Hessian matrix for an ROI as a breast mass.

A Decision Support System (DSS) for Breast Cancer Detection Based on Invariant Feature…
http://dx.doi.org/10.5772/intechopen.81119

In addition, as shown in **Figure 5**, a 9-D shape feature and a 7-D mass feature are extracted, representing the mass boundary and the average contrast, smoothness, orientation, uniformity, entropy, perimeter, and circularity [17]. Finally, a 6-D texture feature representing the energy, correlation, entropy, inverse difference moment, contrast, and homogeneity is obtained from the gray-level co-occurrence matrix (GLCM).

**Figure 5.** Feature extraction stage with an example mammogram.

## **5. Classification and similarity matching**

For classification of breast masses as either normal or abnormal (two-class separation) or as normal, benign, or malignant cases (three-class study), we used SVM and ELM classifiers for their excellent generalization performance and minimal human intervention. The SVM carries out classification between two classes by determining a hyperplane in feature space based on the most informative points of the training set [47]. The ELM, on the other hand, is a single-hidden-layer feedforward neural network.
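As an illustrative sketch of the two-class separation, and not the chapter's actual classifier configuration, the hyperplane idea can be shown with a minimal linear SVM trained by sub-gradient descent on the hinge loss; the feature vectors below are synthetic stand-ins for the extracted descriptors.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM: minimize lam/2*||w||^2 + mean hinge loss
    by sub-gradient descent. X: (n, d) features; y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                      # margin violators contribute a sub-gradient
        if viol.any():
            w -= lr * (lam * w - (y[viol, None] * X[viol]).mean(axis=0))
            b -= lr * (-y[viol].mean())
        else:
            w -= lr * lam * w                   # only the regularizer remains
    return w, b

# synthetic two-class data standing in for normal / abnormal feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 16)), rng.normal(+1, 1, (50, 16))])
y = np.array([-1] * 50 + [+1] * 50)
w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

In practice a kernel SVM (and the ELM) would be used as described in the text; the linear version only illustrates how a separating hyperplane is fit from the most informative (margin-violating) training points.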
