**2. Proposed comprehensive efficient integrated glaucoma detection system**

The block schematic of our proposed comprehensive efficient integrated Glaucoma detection system, which identifies Glaucoma using image processing techniques, is shown in **Figure 7**.

The retinal image of the eye is captured using a fundus camera. The captured fundus image is then subjected to various image processing techniques to extract different features of the image.

Our proposed system extracts and uses the following three sets of fundus image features for the detection of Glaucoma:

1. fundus structure-based features (CDR, RDR, and neuroretinal rim thicknesses);
2. vessel structure-based features;
3. texture-based features.

Structure-based features of the fundus image are extracted using a template-based approach. The template aids segmentation of the optic cup and disc from the fundus image: it is correlated with the fundus image using Pearson-r correlation, and segmentation is performed on the basis of the correlation levels. Structure-based features such as CDR, RDR, and the superior and inferior rim thicknesses are then extracted.

**Figure 7.**

*Comprehensive efficient integrated glaucoma detection system.*

Vessel structure-based features of the fundus image, such as vessel count, vessel diameters, maximum vessel diameter, and the count of vessels with smaller diameters, are extracted using the isotropic undecimated wavelet transform (IUWT).
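The IUWT underlying these vessel features is the à trous ("with holes") transform with a B3-spline kernel: each wavelet plane is the difference of two successive smoothings, and vessels respond strongly in the middle planes. The sketch below is a minimal numpy illustration; the choice of planes and the −0.05 threshold are illustrative assumptions, not the chapter's tuned values, and the actual vessel count and diameters would then be measured from the mask (e.g. via connected components and a distance transform).

```python
import numpy as np

def iuwt(image, levels=3):
    """Isotropic undecimated ('a trous') wavelet transform with the
    B3-spline scaling kernel, as used in IUWT-based vessel analysis."""
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    c = image.astype(float)
    planes = []
    for j in range(levels):
        # Dilate the kernel by inserting 2**j - 1 zeros between taps.
        step = 2 ** j
        k = np.zeros(4 * step + 1)
        k[::step] = kernel
        # Separable smoothing along rows, then along columns.
        smooth = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, c)
        smooth = np.apply_along_axis(lambda col: np.convolve(col, k, mode='same'), 0, smooth)
        planes.append(c - smooth)   # wavelet plane w_j = c_{j-1} - c_j
        c = smooth                  # coarser approximation c_j
    return planes, c

# Vessels appear as thin dark structures; summing a band of wavelet planes
# and thresholding the negative response gives a rough vessel mask.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
planes, residual = iuwt(img, levels=3)
vessel_response = planes[1] + planes[2]
vessel_mask = vessel_response < -0.05   # threshold chosen for illustration only
```

A useful sanity check of the transform is its exact reconstruction property: the sum of all wavelet planes plus the final smooth residual recovers the input image.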

Texture features of the fundus image are extracted using three families of wavelet filters: Daubechies (db3), Symlets (sym3), and biorthogonal (bior3.3, bior3.5, and bior3.7). A trained neural classifier fed with the texture features classifies the given test image as a normal or Glaucoma image in the first stage of classification.
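As a rough illustration of wavelet texture features, the sketch below computes the normalized energy of the detail subbands of a one-level 2-D decomposition. A Haar-style filter pair is used here only to keep the example dependency-free; the chapter's db3/sym3/bior3.x filters would be applied the same way (e.g. via a wavelet toolbox), and the resulting energy vector is what would be fed to the neural classifier.

```python
import numpy as np

def haar_subbands(img):
    """One level of a 2-D Haar-style wavelet decomposition (LL, LH, HL, HH)."""
    a = img[0::2, :] + img[1::2, :]   # row-wise lowpass
    d = img[0::2, :] - img[1::2, :]   # row-wise highpass
    LL = (a[:, 0::2] + a[:, 1::2]) / 4.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 4.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 4.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return LL, LH, HL, HH

def texture_features(img):
    """Mean energy of each detail subband -- one common texture descriptor."""
    _, LH, HL, HH = haar_subbands(img.astype(float))
    return [float(np.mean(s * s)) for s in (LH, HL, HH)]
```

A flat (constant-intensity) region yields zero detail energy, while a finely textured region yields high diagonal-subband energy, which is why such energies discriminate texture.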

Based on the literature survey and the suggestions of ophthalmologists, the final Glaucoma classifier is developed. The extracted feature sets above, together with the neural classifier output, are fed to this final classifier, which classifies the given test image as (i) normal, (ii) Glaucoma, (iii) acute Glaucoma, or (iv) an early-Glaucoma-symptom image.
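The final-stage decision could be sketched as a simple rule combining the structural features with the first-stage neural output. All thresholds and the rim-notching flag below are hypothetical placeholders for illustration; the chapter's classifier is built from the literature survey and ophthalmologists' suggestions, not from these values.

```python
def classify(cdr, neural_says_glaucoma, rim_notching):
    """Illustrative final-stage rule combining a structural feature (CDR)
    with the first-stage neural classifier output.
    All thresholds are placeholders, not the chapter's trained values."""
    if cdr >= 0.8:
        return "acute Glaucoma"
    if cdr >= 0.5 and neural_says_glaucoma:
        return "Glaucoma"
    if cdr >= 0.5 or neural_says_glaucoma or rim_notching:
        return "early Glaucoma symptom"
    return "normal"
```

The ordering matters: the most severe category is tested first, so a very high CDR is never downgraded by a disagreeing texture classifier.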

The extracted features (CDR, RDR, superior and inferior rim thicknesses, vessel count, vessel diameters, maximum vessel diameter, and the count of vessels with smaller diameters) are stored in a per-patient database. These features can be used to assess the progression of the disease at the patient's next visit, which assists ophthalmologists in better monitoring of patients. The database is also very useful for mass screening programs and plays an important role in detecting Glaucoma at an early stage in at-risk patients who have a genetic background of Glaucoma.
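One way such a per-patient feature database could look, sketched with Python's built-in sqlite3; the table layout, patient IDs, and feature values are illustrative assumptions, not the system's actual schema or data.

```python
import sqlite3

# In-memory database for illustration; a deployment would use a file per clinic.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE visits (
        patient_id TEXT, visit_date TEXT,
        cdr REAL, rdr REAL,
        superior_rim REAL, inferior_rim REAL,
        vessel_count INTEGER, max_vessel_diameter REAL
    )""")
# Two hypothetical visits for one patient.
conn.execute("INSERT INTO visits VALUES ('P001','2019-01-10',0.42,0.31,0.12,0.11,38,9.5)")
conn.execute("INSERT INTO visits VALUES ('P001','2019-07-22',0.55,0.36,0.10,0.09,35,9.1)")

# Progression check: has CDR increased since the first recorded visit?
rows = conn.execute(
    "SELECT visit_date, cdr FROM visits WHERE patient_id='P001' ORDER BY visit_date"
).fetchall()
progressed = rows[-1][1] > rows[0][1]
```

Comparing the stored feature vectors across visits is exactly what supports the monitoring and mass-screening uses described above.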

#### **2.1 Sources of image dataset used for the development of proposed glaucoma detection system**

For our Glaucoma detection system experiments, analysis, and testing, we used glaucomatous and normal fundus image datasets from the following sources:

1. The High-Resolution Fundus (HRF) image dataset, publicly available at https://www5.cs.fau.de/research/data/fundus-images/. This dataset was established by a collaborative research group to support research and comparative studies on retinal fundus images [13, 14] and is used by many researchers for their experiments and testing. It comprises a set of 15 Glaucoma and 15 normal images.
2. Fundus images available at KLE Dr. Prabhakar Kore Hospital, Belagavi, India, captured using a Canon CF1 high-resolution fundus camera with a 50° field of view. Each image was captured at 24 bits per color plane with dimensions of 2534 × 2301 pixels.

We refer to these two datasets in this chapter as (i) the HRF dataset and (ii) the Hospital dataset, respectively.

#### **2.2 Glaucoma detection using template**

This chapter presents an efficient methodology developed for the automatic localization and segmentation of the optic cup and disc in retinal images followed by extraction of some structural features for Glaucoma detection. Localization of the optic disc in the retinal image and extraction of the features are done by correlating the fundus image with a newly developed template using Pearson-r correlation. Segmentation of the optic cup and disc is done on the basis of correlation levels [15–18].

*Efficient Computer-Aided Techniques to Detect Glaucoma*
*DOI: http://dx.doi.org/10.5772/intechopen.89799*


*Visual Impairment and Blindness - What We Know and What We Have to Know*


#### *2.2.1 Preprocessing*

Our proposed methodology is based on correlating the fundus image with a designed template. The template is designed based on the intensity distribution of the fundus image; hence, the intensity component of the input image must be determined. The true-color (RGB) format captured by the fundus camera does not reveal the intensity component directly, whereas the hue, saturation, and intensity (HSI) representation of an image provides the intensity component directly (refer to **Figure 8**). Therefore, the captured RGB fundus image is converted into HSI format in the preprocessing stage.

Conversion from RGB to HSI is achieved using the following equations:

$$\text{I(intensity)} = \frac{\text{R(red)} + \text{G(green)} + \text{B(blue)}}{3} \tag{1}$$

$$\text{H(hue)} = \cos^{-1} \left\{ \frac{\frac{1}{2} \left[ (\text{R} - \text{G}) + (\text{R} - \text{B}) \right]}{\left[ (\text{R} - \text{G})^2 + (\text{R} - \text{B})(\text{G} - \text{B}) \right]^{\frac{1}{2}}} \right\} \tag{2}$$

$$\text{S(saturation)} = 1 - \frac{3}{(\text{R} + \text{G} + \text{B})} \left[ \min(\text{R}, \text{G}, \text{B}) \right] \tag{3}$$

A sample RGB fundus image and its corresponding converted HSI image are shown in **Figure 8**.

#### *2.2.2 Glaucoma detection using Pearson-r coefficient extraction*

Further, the intensity component of the fundus image is correlated, using Pearson-r correlation, with a designed template to localize the optic disc in the image. The template is designed keeping in view the general structure of the optic disc and cup. The size of the optic disc varies significantly across fundus images; we observed optic disc widths ranging from 60 to 100 pixels in the images of our dataset. The optic disc consists of a rim and a cup: the intensity of the rim is higher than that of the image outside the rim, and the cup is the brightest part of the fundus image. We therefore designed a square image template containing a disc with a rim and cup, as shown in **Figure 9(a)**. The cup has the highest intensity, and the intensity decreases toward the rim and the outer parts of the template, mimicking the intensity distribution pattern of a fundus image. A Laplacian of Gaussian distribution is used to generate this intensity distribution pattern.

Pearson-r coefficients are extracted by correlating the template with the preprocessed image. The correlated image encodes the optic cup and disc as variations in correlation magnitude with respect to the template. As seen for a sample correlated image (**Figure 8(b)**), the optic disc and cup can be easily separated on an intensity plane.
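A minimal numpy sketch of this localization step: build a bright-center template from an inverted Laplacian of Gaussian and slide it over the image, computing the Pearson-r coefficient at each position. The template size and σ below are illustrative, not the chapter's design values, and a real implementation would use an FFT-based normalized correlation for speed.

```python
import numpy as np

def log_template(size=61, sigma=10.0):
    """Bright-center template from an inverted Laplacian of Gaussian:
    highest intensity at the cup, falling off toward the rim, like a disc."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    log = (r2 / sigma ** 2 - 2.0) * np.exp(-r2 / (2.0 * sigma ** 2))
    return -log  # invert so the centre is the brightest point

def pearson_r(patch, template):
    """Pearson-r coefficient between an image patch and the template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def correlate(image, template):
    """Dense Pearson-r map; its maximum localizes the optic disc."""
    h, w = template.shape
    H, W = image.shape
    out = np.full((H - h + 1, W - w + 1), -1.0)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = pearson_r(image[i:i + h, j:j + w], template)
    return out

# Synthetic check: embed the template in a blank image and recover its position.
img = np.zeros((100, 100))
t = log_template(size=31, sigma=5.0)
img[20:51, 30:61] = t
r_map = correlate(img, t)
loc = np.unravel_index(np.argmax(r_map), r_map.shape)
```

Because Pearson-r is invariant to local brightness and contrast, the peak of the map marks the disc even when illumination varies across the fundus image.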

**Figure 8.**
*Preprocessing stage: (a) input fundus RGB image; (b) converted HSI image.*

Further, the optic cup and disc are easily segmented as they differ in their correlated magnitudes. Binary images of the segmented optic disc, cup, and rim are shown in **Figure 10**.

**Figures 11** and **12** show the segmentation of the optic cup, disc, and neuroretinal rim in binary images for a sample image from the Hospital dataset and the HRF dataset, respectively.

The values of CDR, RDR, and superior and inferior rim thicknesses are calculated from the extracted image.
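Assuming the segmented cup and disc are available as binary masks, CDR and the superior/inferior rim thicknesses could be computed as below. Vertical diameters are used as a simplifying assumption, and RDR is omitted since its exact definition is not restated in this section.

```python
import numpy as np

def vertical_extent(mask):
    """Row indices of the first and last rows containing True pixels."""
    rows = np.flatnonzero(mask.any(axis=1))
    return rows[0], rows[-1]

def structural_features(disc, cup):
    """CDR and superior/inferior rim thicknesses (in pixels) from segmented
    binary masks; vertical diameters are an illustrative choice."""
    d0, d1 = vertical_extent(disc)
    c0, c1 = vertical_extent(cup)
    disc_diam = d1 - d0 + 1
    cup_diam = c1 - c0 + 1
    cdr = cup_diam / disc_diam
    superior_rim = c0 - d0        # rim thickness above the cup
    inferior_rim = d1 - c1        # rim thickness below the cup
    return cdr, superior_rim, inferior_rim

# Synthetic concentric disc and cup: disc rows 10-49, cup rows 20-39.
disc = np.zeros((60, 60), bool); disc[10:50, 10:50] = True
cup = np.zeros((60, 60), bool);  cup[20:40, 20:40] = True
features = structural_features(disc, cup)
```

For the synthetic masks above the cup diameter is half the disc diameter, so CDR is 0.5 with 10-pixel superior and inferior rims; in the real system these values come from the template-segmented masks of Figures 10–12.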
