The first prototype of the IR device was released in 1981. It had an eye-optic camera that illuminated the eye with IR radiation. The camera was attached to an ordinary personal computer that analyzed the captured image using a simple correlation comparison algorithm.

*EyeDentification System 7.5* was launched four years later by EyeDentify Inc. Its verification is done using the retina image and the PIN entered by the user, together with the user data stored in the database [8, 21].

*ICAM 2001* was the last known retinal scanning device made by EyeDentify Inc. The device could store a maximum of 3,000 subjects and 3,300 history transactions [8]. Unfortunately, the product was withdrawn due to low user acceptance and high price. Other companies, such as Retica Systems Inc., worked on a retinal acquisition device prototype for biometric purposes that was intended to be more user friendly and easier to integrate into commercial applications. Unfortunately, this device was also a failure in the market.

High-security areas such as nuclear and arms development and manufacturing, government and military facilities, and other critical infrastructure can make use of retinal recognition.

**3.2 Limitations**

The limitations of retinal biometrics discourage its further use as a biometric system.

*Fear of eye damage -* due to a myth about the devices damaging the retina. The level of infrared illumination used by these devices is low and has proven to be completely harmless. This information must be shared with people so that they will not be afraid of using these devices.

*Outdoor and indoor use -* the return beam of the light passing through the pupil twice (once inward, then outward of the eye) can be greatly weakened if the subject's pupil is too small. This can result in an increase in the false rejection rate.

*Ergonomics -* the subject must be close to the sensor, which may cause discomfort.

*Severe astigmatism -* the eye must be focused on a point. This may be difficult for those with visual impairments such as astigmatism, which can negatively affect the template generation.

*High price -* the cost of optical devices is always higher than the price of other biometric systems such as fingerprint or voice recognition capturing devices.

There are still no acceptable solutions found for these shortcomings [21].

**3.3 Recognition schemes**

Several schemes can be used for recognizing retinal images; retina image biometric recognition has different approaches. Farzin [8] and Hill [21] segmented the blood vessels to generate features and store, at maximum, 256 12-bit samples, which are then shrunk to a reference record containing 40 bytes for each eye. The time domain stores the contrast information. Fuhrmann and Uhl [22] extracted the vessels to obtain the retina code, a binary code describing the vessels surrounding the optical disc.

**3.4 Verification phase**

In order to be able to use the proposed algorithm universally, and therefore also for the verification phase, it is necessary to choose the parameters with regard to the verification steps. During the verification phase, when recognizing samples that should be identical, we encounter the problem of inaccuracy in imaging.

**4. Our recognition method**

The distribution of vascular lines in the retina of the human eye is unique (as shown in Chapter 3.1), similar to the papillary lines on human fingers. Currently, there is no single approach to retinal recognition. Our procedure follows dactyloscopy, where the bifurcations, terminations, positions, and directions of detected points are stored. We look for "anomalies" on the vessels in the retina - the places of visible crossings and bifurcations - and also record their position within the retina. In images, it is not easy to recognize whether a vessel crosses or bifurcates, as the two phenomena often overlap. Therefore, we are only interested in the feature itself and not in its specific type. The termination of a vessel takes place gradually, "until lost", so a specific termination point cannot and will not be detected. We locate the points according to their position relative to the optical disk and the fovea. Therefore, we also store their position within the image, as will be further described in Chapter 4.2 - the coordinate system. The result is a set of vectors such that the system is not affected by changes in retinal scanning (different rotations, zooms, or chamfers).

Recognition becomes problematic in the presence of diseases that manifest as changes in the retina, such as bleeding. As with other biometric features, a relatively large amount of information about human health can be read from retinal manifestations. Therefore, it is appropriate that the biometric facility manager guarantees that this sensitive data is neither misused nor stored, for example under the GDPR [9, 23].

## **4.1 Statistical evaluation of the crossings and bifurcations frequency**

If we take a brief look at a few images of the human eye retina, we discover that crossings and bifurcations are not equally frequent in various areas. The probability of their occurrence is higher in some areas and almost zero in others. It should be noted from the beginning that the ability to mark crossings and bifurcations strongly depends on the quality and contrast of the image. The statistically empty parts contain only very small capillaries that are undetectable in the image by automatic or manual search.

When we create the frequency map, the points can be assigned different weights for pattern recognition. Matching points found in rarely occurring sites of the two compared retinas may score higher than matching points in other areas. Therefore, we tried to statistically evaluate several hundred retinal images and create our own frequency scheme, which we will later use to adjust the evaluation when comparing two retinas.
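
As a sketch of how such a frequency scheme can be computed, the following Python fragment accumulates feature positions into a coarse polar grid and turns the cell counts into the 1-4 region weights used later in Chapter 4.3. The grid resolution, the radial limit, and the quartile-based weighting are our own illustrative assumptions, not values taken from the chapter.

```python
import numpy as np

def build_frequency_scheme(features, grid=(36, 10), r_max=2.5):
    """Accumulate (r, psi) feature positions into a coarse polar grid and derive
    1-4 region weights from the cell frequencies (rarer cell -> lower weight).
    Grid size, radial limit and quartile thresholds are illustrative assumptions."""
    counts = np.zeros(grid)
    for r, psi in features:
        a = int((psi % 360.0) / 360.0 * grid[0])        # angular bin
        b = min(int(r / r_max * grid[1]), grid[1] - 1)  # radial bin, clipped at r_max
        counts[a, b] += 1
    occupied = counts[counts > 0]
    # Quartile thresholds of the occupied cells split the frequencies into four classes.
    thresholds = np.quantile(occupied, [0.25, 0.5, 0.75]) if occupied.size else np.zeros(3)
    weights = 1 + np.searchsorted(thresholds, counts)   # values in 1..4
    return counts, weights

# Hypothetical usage with three features expressed in the coordinate system of Chapter 4.2.
counts, weights = build_frequency_scheme([(0.8, 30.0), (0.8, 31.0), (1.6, 210.0)])
```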

## **4.2 Coordinate system**

In order to be able to work uniformly with all retinas without major complications, we have introduced a polar coordinate system in which two values can be used to align a retinal image to the same coordinate system as the others. Our coordinate system assumes that the distances between the optical disk and the fovea are approximately similar in different retinas. We also assume that, if the physical structure of the retina differs significantly, its development proceeded by similar rules. For example, if the distance between the optical disk and the fovea is smaller than average, the entire retinal structure will be smaller, and this will not affect our system.

The main point of the entire coordinate system is the center of the optical disk. In the records of individual retinas, its position in a particular image is stored as the distance from the left and top edges of the image. In addition, the width and height of the optical disk area (1st line of the output text file) are stored here as well. The second record is the center of the fovea (2nd line). The width and height of the fovea are not stored because its boundaries are difficult to ascertain by a simple look. The distance *r* between these two points is the basic unit of length for our coordinate system in each retina. This value may differ for every single image but is always valid for one retina. The second value is the angle *ψ* of the given point's direction from the optical disk. An angle of 0° lies on the line to the fovea and the value increases clockwise. This means that the fovea will have coordinates (1, 0°) in all retinas according to our coordinate system. Each bifurcation or crossing is expressed by its *r* and *ψ* and stored on the next lines of the output file.
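
A minimal sketch of writing one retina record in the layout just described (optical disc area on the first line, fovea center on the second, one *r*, *ψ* pair per following line) might look as follows; the separators, precision, and example values are our assumptions rather than a published file specification.

```python
def save_retina_record(path, disc_box, fovea_center, features):
    """Write one retina record: optical disc area on line 1, fovea center on line 2,
    then one (r, psi) feature per line. Separators and precision are illustrative only."""
    with open(path, "w") as f:
        f.write("{} {} {} {}\n".format(*disc_box))     # x, y, width, height of the optical disk
        f.write("{} {}\n".format(*fovea_center))       # x, y of the fovea center
        for r, psi in features:
            f.write("{:.4f} {:.2f}\n".format(r, psi))  # r in disc-fovea units, psi in degrees

# Hypothetical example values
save_retina_record("retina_001.txt", (612, 488, 90, 95), (815, 510),
                   [(1.0, 0.0), (0.62, 134.5), (1.37, 242.0)])
```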

We convert the found bifurcations and crossings back from the polar to the Cartesian coordinate system when we need to display or evaluate the entered points globally. To do this, we need the center of the blind spot (CBS) and the center of the yellow spot (CYS). Then, we calculate their Euclidean distance (d) and the angle (α) between the two centers according to Eq. (1).

$$\alpha = \operatorname{arctg2}\left(y.C_{YS} - y.C_{BS},\ x.C_{YS} - x.C_{BS}\right) \tag{1}$$

The bifurcation/crossing distance from the blind spot is calculated using Eq. (2):

$$v = r \cdot d \tag{2}$$

Then, the coordinates *dx* and *dy* are calculated using Eqs. (3) and (4):

$$dx = v \cdot \cos(\psi + \alpha) \tag{3}$$

$$dy = v \cdot \sin(\psi + \alpha) \tag{4}$$

Lastly, calculate the point that resulted from the bifurcation/crossing in the Cartesian system [*dx* + *x.C*BS; *dy* + *y.C*BS].
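
The following Python sketch performs this back-conversion as we read Eqs. (1)-(4), using *v* from Eq. (2) as the radius; the function name and the example coordinates are ours.

```python
import math

def polar_to_cartesian(r, psi_deg, c_bs, c_ys):
    """Convert a stored feature (r, psi) back to image coordinates per Eqs. (1)-(4).

    c_bs, c_ys: (x, y) centers of the blind spot (optical disk) and the yellow spot (fovea).
    r is given in units of the disc-fovea distance; psi_deg is the clockwise angle in degrees
    measured from the disc-fovea direction. Names and example values are illustrative.
    """
    # Eq. (1): angle of the disc -> fovea direction in the image.
    alpha = math.atan2(c_ys[1] - c_bs[1], c_ys[0] - c_bs[0])
    # Euclidean distance d between the two centers (the basic unit of length).
    d = math.hypot(c_ys[0] - c_bs[0], c_ys[1] - c_bs[1])
    # Eq. (2): absolute distance of the feature from the blind spot.
    v = r * d
    # Eqs. (3) and (4): offsets of the feature from the blind spot.
    psi = math.radians(psi_deg)
    dx = v * math.cos(psi + alpha)
    dy = v * math.sin(psi + alpha)
    # Final point in the Cartesian (image) coordinate system.
    return (dx + c_bs[0], dy + c_bs[1])

# The fovea has polar coordinates (1, 0 degrees), so it must map back onto c_ys.
print(polar_to_cartesian(1.0, 0.0, c_bs=(620, 480), c_ys=(800, 500)))  # ~ (800.0, 500.0)
```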

#### **4.3 Recognition scheme**

The algorithm for determining the grade of conformity of two retinas works by converting all points from the polar coordinate system (described earlier) to the Cartesian coordinate system. It is not necessary to align or rotate the two images. Due to the chosen system, which is based on the positions of the optical disk and the fovea, these points in two retinas will always exactly overlap.

For the first crossing or bifurcation point in the first retina, it is determined which point from the set of points in the second retina is the closest. Then, the same procedure is repeated for the point found in the second retina - finding the closest point back in the first retina. This determines whether the two points are really each other's closest. Without this check in both directions, as in the case of a point marked in only one retina, a point could be matched to one that already has its pair in the first retina.

**Figure 5** shows the scenario without the two-way control. Green points from the first retina and blue points from the second retina are combined into one image. For the point in the red circle, we look for the nearest point, but that point already belongs to a pair with another point.

If the distance between the two found points is greater than the specified limit, then they are not considered close. If they are "close", they are removed from the lists of both retinas and their distance is saved. Before saving, the distance value is converted to a percentage, where 0% means zero distance between the points and 99% is the maximum allowed distance for two points to still be considered close. This value is then squared to better differentiate between near and far points.

The percentage value of the distance is then adjusted according to the statistical model described above. The value is multiplied by a number from 1 to 4, where a lower number means a lower frequency in the statistical model. The reason is that a missing nearby point in a high-frequency region is much worse for retinal conformity than a missing point in a sparse region far from the optical disk. In addition, places with higher frequencies are usually closer to the center of the coordinate system, where the localization is more accurate.
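
A compact sketch of this matching and scoring step is given below. It follows the description above under our own assumptions - the points are already converted to the common Cartesian system, `weight_of` supplies the 1-4 region weight, and a lower aggregate penalty means higher conformity - and it is not the authors' exact implementation.

```python
import math

def conformity_penalty(points_a, points_b, weight_of, max_dist):
    """Mutual nearest-neighbour pairing of feature points from two retinas.

    A sketch only: points_a/points_b are (x, y) tuples in the common Cartesian system,
    weight_of(point) returns the 1-4 statistical weight of the point's region, and
    max_dist is the limit below which two points are still considered "close".
    """
    a, b = list(points_a), list(points_b)
    penalties = []
    while a and b:
        p = a[0]
        q = min(b, key=lambda s: math.dist(p, s))
        # Two-way check: is p also the nearest point of q back in the first retina?
        if min(a, key=lambda s: math.dist(q, s)) is not p:
            a.pop(0)                      # no mutual pair for p; skip it
            continue
        a.pop(0)
        b.remove(q)
        dist = math.dist(p, q)
        if dist >= max_dist:
            continue                      # a pair, but too far apart to count as "close"
        pct = 99.0 * dist / max_dist      # 0 % = identical position, 99 % = allowed maximum
        penalties.append((pct ** 2) * weight_of(p))
    # Lower aggregate penalty = higher conformity; no close pairs at all = no similarity.
    return sum(penalties) / len(penalties) if penalties else float("inf")

# Hypothetical usage with a constant region weight of 2.
score = conformity_penalty([(10, 10), (50, 60)], [(11, 11), (80, 20)], lambda p: 2, max_dist=12.0)
```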

### **4.4 Used databases**

For testing purposes, we used several public databases and our internal one: Messidor [24], e-ophtha [10], High-Resolution Fundus (HRF) [25], and Retina EBD STRaDe (EBD).

The first of them is the publicly available *Messidor* database [24] from the ADCIS team. The *Messidor* database has 1,200 eye fundus color numerical images of the posterior pole. The images were captured by three ophthalmologic departments using a color video 3CCD camera on a Topcon TRC NW6 non-mydriatic retinograph with a field of view of 45°. The captured images use 8 bits per color plane at 1440 × 960, 2240 × 1488, or 2304 × 1536 pixels. There are 800 images that were captured with the pupil dilated (one drop of Tropicamide at 0.5%) and 400 without it.

The second one is the *e-ophtha* database, also from the ADCIS team. This database has 47 images with exudates and 35 images without lesions. For our purposes, we use only the images of healthy retinas.

**Figure 5.** *Illustration of finding the nearest point.*

Next is the High-Resolution Fundus image database from the German Friedrich-Alexander University. The *HRF* database has 15 images of healthy patients, 15 images of patients with diabetic retinopathy, and 15 images of glaucomatous patients. Each image has a binary gold standard vessel segmentation image. Moreover, particular datasets are provided with masks to determine the field of view (FOV). A group of experts from the retinal image analysis field and the medical staff from the cooperating ophthalmology clinics generated the gold standard data.

The EBD is an internal set of iris and retina images from our research group STRaDe (Security Technology Research and Development at the Faculty of Information Technology, Brno University of Technology (CZ), focused on security in IT and biometric systems). The database contains 684 images of both retinas from 110 distinct people, totaling 220 distinct samples. Unfortunately, a significant part of this set consists of very low-quality pictures. However, in this database all persons have several images of each eye.

For additional checking of our algorithms, we use the retinal fundus camera that we have been developing in our laboratory for the past several years. We use 30 images from students, captured during the Biometric Systems course. Some images have bad quality, which is useful for testing the applications under worse conditions. Several images are from the same person's eye (further referred to as the "school database").

#### **4.5 Developed applications**

We developed several application software modules to determine some properties of the retina, which will then be used to find out the degree of similarity of the two entered retina patterns.

#### *4.5.1 Manual marking program*

The first program (SW1) was developed for manual retina marking by our students. First, the edges of the optical disk are marked. The program stores the top-left position and the width and height of the ellipse around the optical disk. Then, the fovea is marked. Both positions are stored in Cartesian coordinates, which are based on the image properties and resolution. After both main structures of the retina, each feature is marked. These points are stored in polar coordinates. Data from the images are stored as a plain text file. Using this program, we marked all retinal images from the Messidor, e-ophtha, and HRF databases.

#### *4.5.2 Automatic marking program*

The second program (SW2) stores the same information about the image as SW1, except that it performs the steps automatically. Details of the overall work of the program, its steps, and further development are summarized in work [5]. The program is developed in Python and was used on the same images as SW1. An average of 48 features was found in these retinal images.
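
The detection pipeline itself is described in [5]; purely to illustrate one common way such branch points can be found automatically, the sketch below skeletonizes a binary vessel mask and keeps skeleton pixels with three or more skeleton neighbours. This is a generic technique and not necessarily the method used by SW2.

```python
import numpy as np
from skimage.morphology import skeletonize

def find_branch_points(vessel_mask):
    """Return (row, col) candidates for bifurcations/crossings in a binary vessel mask.

    A generic sketch, not the algorithm of SW2: skeletonize the vessels and keep
    skeleton pixels that have three or more skeleton neighbours.
    """
    skel = skeletonize(vessel_mask.astype(bool)).astype(np.uint8)
    # Count the 8-neighbours of every skeleton pixel using shifted, zero-padded copies.
    padded = np.pad(skel, 1)
    neighbours = sum(
        padded[1 + dr:padded.shape[0] - 1 + dr, 1 + dc:padded.shape[1] - 1 + dc]
        for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)
    )
    return np.argwhere((skel == 1) & (neighbours >= 3))
```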

#### *4.5.3 Compare program*

SW3 compares the detection accuracy between the results marked manually with SW1 and those marked automatically by SW2. The algorithm is designed to compare the manually selected bifurcations/crossings with the automatically detected set. The paired bifurcations/crossings are found automatically through a method like that in chapter 4.3.

The converted points are stored in a list, with their position in it used as an ID for compiling disjoint sets. A placeholder ID with the value −1 is assigned. The problem is converted to an integer programming problem [26] to calculate the minimum pairing. The edges are then determined between the individual vertices of the graphs. The number of bifurcations/crossings that were manually found and paired can then be calculated.
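
The minimum pairing itself can be obtained with an off-the-shelf assignment solver. The sketch below uses SciPy's Hungarian-method implementation rather than the integer programming formulation of [26], the distance limit is only illustrative, and unmatched points are simply left unpaired (the −1 placeholder in the text).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_manual_automatic(manual, automatic, max_dist=5.0):
    """Pair manually marked points with automatically detected ones at minimum total distance.

    A sketch only: SciPy's assignment solver stands in for the integer programming step,
    and max_dist (in pixels) is an illustrative acceptance limit.
    """
    manual = np.asarray(manual, dtype=float)
    automatic = np.asarray(automatic, dtype=float)
    # Cost matrix of Euclidean distances between every manual and every automatic point.
    cost = np.linalg.norm(manual[:, None, :] - automatic[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    # Keep only assignments below the distance limit; the rest remain unpaired.
    pairs = [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
    return pairs, len(pairs)

# Hypothetical example: three manual points, two automatically detected points.
print(pair_manual_automatic([(10, 10), (40, 42), (90, 15)], [(11, 9), (41, 44)]))
# -> ([(0, 0), (1, 1)], 2)
```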

#### *4.5.4 Visualization program*

SW4 collects the data marked by SW1 and SW2 in the previously described text file format into one picture. It aggregates individual pixels into a grid of adjustable size. For our purposes, a summary grid of 5 × 5 pixels was chosen as the most suitable. In the images, the fields with a higher frequency of occurrences are colored darker.

#### *4.5.5 Recognition program*

SW5 works according to the recognition principle described in chapter 4.3. For this purpose, we used the EBD database, which has several images of the same retina. For practical tests we used our school database of retina images, which was captured by students of our faculty.

**5. Results**

**5.1 Accuracy of manual and automatic marking**

A representative sample of data was obtained by randomly choosing 460 images from Messidor, 160 images from e-ophtha, and 50 images from HRF. The chosen retinal images have both the left and right eye images available. They were shrunk to a resolution of around 1 Mpx to fit them on the screen. FIT BUT students in 2016 and 2017 did the manual marking of bifurcations and crossings with SW1.

After the points were marked manually via SW1, SW2 also searched for the same points automatically. With this principle, we determine the accuracy of the point positions from manual marking. The resulting average deviation of a minutia was about 5 pixels [5].

Simultaneously, the automatic algorithm, tested on the VARIA database [27] containing 233 images from 139 individuals, was enhanced. The resultant image emphasizing the comparison is shown in **Figure 6**.

In contrast, we assume that the manual marking accurately highlights the blind and yellow spots. SW3 also checks the correctness of the locations found by the automatic search. The success rates were 92.97% for the blind spot and 94.05% for the yellow spot. Wrong localization of the spots was mainly caused by too much brightness or darkness in the image.

**5.2 Frequency of features occurrences**

**Figure 7** shows the frequencies for manual marking by SW1. The program SW4 also shows the outer circle area in which at least some points occur, and the axes of the retina and the inner ellipses indicate areas with minimal occurrences of points around the yellow (yellow) and blind (black) spots.
