*Visual Impairment and Blindness - What We Know and What We Have to Know*
*Efficient Computer-Aided Techniques to Detect Glaucoma*
*DOI: http://dx.doi.org/10.5772/intechopen.89799*

**Figure 9.**
*Correlation: (a) correlation filter; (b) intensity distribution of correlated fundus.*

**Figure 10.**
*Segmentation. (a) Segmented optic disc. (b) Segmented optic cup. (c) Neuroretinal rim thickness.*

Further, the optic cup and disc are easily segmented, as they differ in their correlated magnitudes. Binary images of the segmented optic disc, cup, and rim are shown in **Figure 10**. **Figures 11** and **12** show the segmentation of the optic cup, optic disc, and neuroretinal rim as binary images for sample images from the Hospital dataset and the HRF dataset, respectively. The values of CDR, RDR, and the superior and inferior rim thicknesses are calculated from the extracted image.

**Figure 11.**
*Hospital dataset sample. (a) Original sample. (b) Segmented optic disc. (c) Segmented optic cup. (d) Binary image showing neuroretinal rim.*

**Figure 12.**
*HRF dataset sample. (a) Original sample. (b) Segmented optic disc. (c) Segmented optic cup. (d) Binary image showing neuroretinal rim.*
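The structural measures above can be computed directly from the binary disc and cup masks. A minimal sketch, assuming CDR is the vertical cup-to-disc diameter ratio, RDR the rim-to-disc area ratio, and the superior/inferior rim thicknesses the vertical margins between the cup and disc boundaries (the chapter's precise definitions appear in Section 2.2):

```python
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Largest vertical extent (in pixels) of a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return int(rows.max() - rows.min() + 1) if rows.size else 0

def cup_disc_features(disc: np.ndarray, cup: np.ndarray):
    """CDR, RDR, and superior/inferior rim thicknesses from binary masks.

    Assumed definitions: CDR = vertical cup-to-disc diameter ratio,
    RDR = rim-to-disc area ratio, rim thicknesses = vertical gaps
    between the cup and disc boundaries (top = superior, bottom = inferior).
    """
    cdr = vertical_diameter(cup) / vertical_diameter(disc)
    rdr = (disc.sum() - cup.sum()) / disc.sum()     # rim area / disc area
    disc_rows = np.where(disc.any(axis=1))[0]
    cup_rows = np.where(cup.any(axis=1))[0]
    superior = int(cup_rows.min() - disc_rows.min())  # rim above the cup
    inferior = int(disc_rows.max() - cup_rows.max())  # rim below the cup
    return cdr, rdr, superior, inferior
```

For a synthetic disc of radius 30 px and a concentric cup of radius 15 px, this yields a CDR of about 0.5 and an RDR of about 0.75.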

#### **2.3 Glaucoma detection using vessel segmentation**

According to the literature, blood vessel diameter decreases as Glaucoma progresses [19–22], and the smaller-diameter vessels around the optic disc start diminishing and disappearing. We employ these aspects of the vessels as features to detect Glaucoma at an early stage; **Figure 13** clearly shows these aspects. The small area around the optic disc where these significant vessels of interest are present is considered the region of interest (ROI) for Glaucoma detection.

**Figure 13.**
*Vessels in fundus image showing ROI: (a) vessels in the normal eye with smaller vessels around the optic disc; (b) the absence of smaller vessels around the optic disc in the glaucoma eye.*

#### *2.3.1 ROI extraction*

In the previous section, we presented the methodology of segmenting the optic disc by correlating the input fundus image with a designed template using Pearson-r correlation. The value of the Pearson-r correlation is maximum at the center of the optic disc, which corresponds to the brightest spot in the fundus image. After segmenting the disc as discussed in Section 2.2.2, its boundary points on the top, bottom, left, and right sides are identified, thus localizing the disc. A sample fundus image from our dataset and its corresponding identified ROI image part are shown in **Figure 14**.

**Figure 14.**
*(a) Input image. (b) Extracted ROI.*
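The localization idea can be sketched as follows, assuming a simple bright-blob template for illustration (the chapter's designed template is described in Section 2.2): the Pearson-r value is computed between the template and every same-sized image patch, and the ROI is cropped around the correlation peak.

```python
import numpy as np

def pearson_map(img: np.ndarray, tmpl: np.ndarray) -> np.ndarray:
    """Pearson-r between the template and every same-sized image patch."""
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    out = np.full((img.shape[0] - th + 1, img.shape[1] - tw + 1), -1.0)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            p = img[y:y + th, x:x + tw] - img[y:y + th, x:x + tw].mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            if denom > 0:
                out[y, x] = (p * t).sum() / denom
    return out

def extract_roi(img: np.ndarray, tmpl: np.ndarray, half: int = 12):
    """Crop a square ROI centered on the correlation peak (the optic disc)."""
    r = pearson_map(img, tmpl)
    y, x = np.unravel_index(np.argmax(r), r.shape)
    cy, cx = y + tmpl.shape[0] // 2, x + tmpl.shape[1] // 2
    return img[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half], (cy, cx)
```

The half-width of the cropped ROI is a free parameter here; in practice it would be tied to the expected optic disc size.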

In signal processing applications, analysis in the frequency domain is often preferred over the time domain, since the frequency representation carries more information about the signal. This requires converting the signal from the time domain to the frequency domain. The Fourier transform yields the frequency content of a signal, but it does not reveal at which time instant each frequency component occurs. The wavelet transform, on the other hand, reveals this information and conveys a more detailed description of the signal or image. In image processing techniques such as segmentation, image decomposition using undecimated biorthogonal wavelet transforms is employed, as these transforms also facilitate reconstruction of the images [23]. For the segmentation of astronomical and biological images containing isotropic objects, the isotropic undecimated wavelet transform (IUWT) can be applied. Since the vessels in fundus images are isotropic in nature, the IUWT can be used to segment them. Decompositions of the ROI at different wavelet levels under the IUWT are shown in **Figure 15**.

**Figure 15.**
*(a) Extracted ROI. (b) Image with low wavelet level of 3. (c) Image with high wavelet level of 4. (d) Image with wavelet level of 5.*
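The IUWT is the à trous (starlet) transform: each level smooths the previous approximation with an upsampled B3-spline kernel, and each wavelet plane is the difference of successive approximations. A minimal numpy sketch (wrap-around boundaries for brevity; the chapter's exact implementation and thresholding follow [23] and are not reproduced here):

```python
import numpy as np

# Cubic B3-spline scaling kernel used by the starlet/IUWT.
_B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def _smooth(c: np.ndarray, step: int) -> np.ndarray:
    """Separable a-trous smoothing: B3 kernel with (step - 1) zeros
    inserted between taps (wrap-around boundary for simplicity)."""
    for axis in (0, 1):
        acc = np.zeros_like(c)
        for w, k in zip(_B3, (-2, -1, 0, 1, 2)):
            acc += w * np.roll(c, k * step, axis=axis)
        c = acc
    return c

def iuwt(img: np.ndarray, levels: int = 3):
    """Return wavelet planes w_1..w_levels and the final smooth image.

    Because each plane is a difference of approximations, the transform
    is exactly reconstructive: img == sum(planes) + smooth.
    """
    c = img.astype(float)
    planes = []
    for j in range(levels):
        c_next = _smooth(c, 2 ** j)
        planes.append(c - c_next)
        c = c_next
    return planes, c
```

Vessels, being thin dark structures, are typically obtained by thresholding selected wavelet planes; the specific levels and threshold used here would be an assumption and are therefore omitted.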

#### *2.3.2 Vessel localization*

For further processing, the image at wavelet level 3 is considered. To obtain the centerline of each segmented vessel, the vessels are subjected to a thinning process that reduces each vessel to a thin line one pixel thick. This thin line lies approximately at the center of the vessel and represents its centerline. A sample of the segmented vessels of the ROI is shown in **Figure 16(a)**. The output of the thinning process is shown in **Figure 16(b)**, which depicts the thinned centerlines of the vessels. **Figure 16(c)** shows the identified thinned branches after removing the branch pixels; each thin line in this image now represents a separate vessel.

**Figure 16.**
*(a) Vessels from the segmented image. (b) Thinned centerlines and (c) centerlines with branches separated.*
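The branch-removal step can be sketched as follows, assuming a one-pixel-wide skeleton is already available from a standard thinning algorithm: junction pixels are identified by their 8-neighbor count and removed, so that each remaining thin line is a separate vessel segment.

```python
import numpy as np

def neighbor_count(skel: np.ndarray) -> np.ndarray:
    """Number of 8-connected neighbors at each skeleton pixel
    (np.roll wraps at the border; pad the image in practice)."""
    s = skel.astype(int)
    n = np.zeros_like(s)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                n += np.roll(np.roll(s, dy, axis=0), dx, axis=1)
    return n * s  # counts only where the skeleton is present

def split_branches(skel: np.ndarray) -> np.ndarray:
    """Remove branch pixels (more than two neighbors) so each remaining
    thin line is a separate vessel segment."""
    return skel & (neighbor_count(skel) <= 2)
```

Note that pixels directly adjacent to a junction can also exceed two neighbors and are removed with it, which cleanly disconnects the crossing arms.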

Using the connected component technique, each line (vessel) in the image is indexed and can be accessed with that index. Using this index information, the coordinates of the centerline pixels of each vessel are determined, thereby localizing the vessels. A vessel data structure is then created and maintained which contains the entire information regarding the vessels. The vessel-based feature, the vessel risk index (VRI), is extracted from the information held in this vessel data structure [18].
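The indexing step can be sketched with a simple breadth-first connected-component labeling that also builds a minimal vessel data structure (per-vessel centerline coordinates and length). The VRI computation itself is defined in [18] and is not reproduced here.

```python
import numpy as np
from collections import deque

def label_vessels(centerlines: np.ndarray) -> dict:
    """Index each 8-connected thin line and record its centerline pixels.

    Returns {index: {"pixels": [(y, x), ...], "length": n}}, a minimal
    stand-in for the chapter's vessel data structure.
    """
    visited = np.zeros_like(centerlines, dtype=bool)
    vessels = {}
    index = 0
    H, W = centerlines.shape
    for y0, x0 in zip(*np.nonzero(centerlines)):
        if visited[y0, x0]:
            continue
        index += 1
        pixels = []
        q = deque([(y0, x0)])
        visited[y0, x0] = True
        while q:  # breadth-first flood fill over 8-neighbors
            y, x = q.popleft()
            pixels.append((y, x))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < H and 0 <= nx < W
                            and centerlines[ny, nx] and not visited[ny, nx]):
                        visited[ny, nx] = True
                        q.append((ny, nx))
        vessels[index] = {"pixels": pixels, "length": len(pixels)}
    return vessels
```

Each vessel can then be accessed by its index, and per-vessel quantities (length, position relative to the disc, and so on) read off from the stored pixel lists.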

#### **2.4 Glaucoma detection methodology using wavelet texture features**

The block diagram of the methodology is shown in **Figure 17**. The input fundus image is first preprocessed to remove background noise. The preprocessed image is then decomposed using wavelet filters to obtain the approximation and detail coefficients of the image, and texture features are generated from these coefficients. The texture features are fed to an ANN classifier to detect Glaucoma. In the preprocessing stage, an arithmetic mean filter [20] is used to remove the background noise of the fundus image.
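The preprocessing step can be sketched as a plain k x k arithmetic mean filter (wrap-around boundaries for brevity; the kernel size is an assumption, as the chapter does not state it):

```python
import numpy as np

def mean_filter(img: np.ndarray, k: int = 3) -> np.ndarray:
    """k x k arithmetic mean filter: each output pixel is the average
    of its k x k neighborhood (np.roll wraps at the border)."""
    out = np.zeros_like(img, dtype=float)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img.astype(float), dy, axis=0), dx, axis=1)
    return out / (k * k)
```

The filter preserves the total intensity while spreading isolated noise pixels over the neighborhood, which suppresses background noise before decomposition.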

#### *2.4.1 Image decomposition and texture feature extraction*

The preprocessed image is decomposed by applying the wavelets. The average and energy values of a set of 14 wavelet features are taken as the texture features of an image, which can be used to classify the image as normal or Glaucoma.

**Figure 17.**
*Block schematic of glaucoma detection using texture features.*

We apply three wavelet families, Daubechies (db3), symlets (sym3), and biorthogonal (bio3.3, bio3.5, and bio3.7), separately to the fundus image and first obtain the approximation and detail coefficients. We then compute the average and energy values from these coefficients.

Among the average and energy values, we use only the average of the horizontal information and the energy of the vertical information for all the wavelets (db3, sym3, bio3.3, bio3.5, and bio3.7). In addition, we use the diagonal energy values for bio3.3, bio3.5, and bio3.7 and the horizontal energy value for bio3.7, giving 14 texture values in total across all the wavelets. The remaining average and energy values are not used, since their values for normal and Glaucoma images lie in the same range, which we verified by analyzing the values; this selection has also been reported in [24]. The texture values extracted from a fundus image are fed to the neural classifier for classification.
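The 14-value selection rule can be sketched as follows. For portability, this sketch substitutes a single-level Haar decomposition for the actual db3/sym3/biorthogonal wavelets (with a wavelet library such as PyWavelets one would decompose once per wavelet name), so the per-"wavelet" values coincide here; only the selection logic is illustrated.

```python
import numpy as np

def haar_dwt2(img: np.ndarray):
    """Single-level 2-D Haar decomposition (a stand-in for db3/sym3/bior).
    Returns approximation plus horizontal, vertical, diagonal details."""
    a = img[0::2, :] + img[1::2, :]
    d = img[0::2, :] - img[1::2, :]
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    H = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    V = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    D = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return LL, H, V, D

average = lambda c: np.mean(np.abs(c))
energy = lambda c: np.mean(c ** 2)

def texture_features(img: np.ndarray) -> np.ndarray:
    """Assemble the 14 selected texture values: horizontal average and
    vertical energy for all five wavelets, diagonal energy for the three
    biorthogonal wavelets, horizontal energy for bio3.7 only."""
    feats = []
    for name in ("db3", "sym3", "bior3.3", "bior3.5", "bior3.7"):
        _, H, V, D = haar_dwt2(img)   # one decomposition per wavelet name
        feats.append(average(H))
        feats.append(energy(V))
        if name.startswith("bior"):
            feats.append(energy(D))
        if name == "bior3.7":
            feats.append(energy(H))
    return np.array(feats)            # 5*2 + 3 + 1 = 14 values
```

The resulting 14-element vector is the input to the neural classifier described next.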

#### *2.4.2 Neural classifier*

Classification using an ANN is a state-of-the-art technique for efficient classification. We implement the classifier as a feedforward ANN trained with a modified backpropagation technique. Feedforward multilayer perceptron (MLP) networks are one of the important types of neural networks and are widely used in recognition systems due to their good generalization property. An MLP consists of multiple layers, as shown in **Figure 18**: one input layer, one output layer, and one or more hidden layers in between. The number of neurons in the input layer depends on the number of inputs fed to the network, and the number of neurons in the output layer depends on the number of outputs to be generated for the final classifier output. The hidden layers can have any number of neurons. In a fully connected network, the output of each neuron in the input or a hidden layer is connected to the input of every neuron in the next layer, as shown in **Figure 19**.

#### *2.4.3 Glaucoma neural classifier architecture*

We used a four-layer MLP with an input layer, two hidden layers, and an output layer. For each input fundus image, 14 wavelet features are extracted; to feed these 14 features, the input layer has 14 neurons. As the output should indicate either normal or Glaucoma, the output layer consists of a single neuron. We used two hidden layers with 14 neurons each, as this gave better results than a single hidden layer. The architecture of the neural classifier is shown in **Figure 18**.

Different training methods can be used to train an MLP for classification. We employed the popular backpropagation training algorithm, modified to use the minimum mean square error as the performance function for training our neural classifier, with the Levenberg–Marquardt optimization method [25, 26] used to train the classifier.
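A minimal numpy sketch of the 14-14-14-1 classifier with sigmoid activations and a plain backpropagation step on squared error (the Levenberg-Marquardt training used in the chapter is omitted here, and the 0.5 decision threshold is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# 14-14-14-1 architecture: 14 texture features in, one Glaucoma score out.
sizes = [14, 14, 14, 1]
W = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes, sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward pass; returns the activations of every layer."""
    acts = [x]
    for Wi, bi in zip(W, b):
        acts.append(sig(acts[-1] @ Wi + bi))
    return acts

def backprop_step(x, t, lr=0.1):
    """One gradient-descent step on squared error (plain backprop)."""
    acts = forward(x)
    delta = (acts[-1] - t) * acts[-1] * (1 - acts[-1])
    for i in range(len(W) - 1, -1, -1):
        grad_W = np.outer(acts[i], delta)
        if i:  # propagate the error before this layer's weights change
            delta_prev = (W[i] @ delta) * acts[i] * (1 - acts[i])
        W[i] -= lr * grad_W
        b[i] -= lr * delta
        if i:
            delta = delta_prev
    return acts[-1]

def classify(x):
    """Assumed decision rule: score >= 0.5 means Glaucoma."""
    return "Glaucoma" if forward(x)[-1][0] >= 0.5 else "Normal"
```

Repeated calls to `backprop_step` over the training set drive the squared error down; in the chapter the same network is instead fitted with Levenberg-Marquardt, which converges much faster for small networks.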

**Figure 18.**
*Architecture of glaucoma neural classifier.*

**Figure 19.**
*Feedforward neural network.*

#### **3. Results and discussion**

Our final proposed integrated Glaucoma classifier system considers all of the proposed features for classification. The structural features related to the optic disc and cup, such as CDR, RDR, and the superior and inferior rim thicknesses, are extracted using the template methodology; VRI, the vessel structural feature, and the texture features are obtained using wavelet filters. Incorporating more features in this way improves the efficiency of Glaucoma detection, as can be seen from the comparison of results in **Table 1**. In total, 120 Glaucoma images and 60 normal images from the datasets were used for the comparison.
