Eye-Gaze Input System Suitable for Use under Natural Light and Its Applications Toward a Support for ALS Patients

In: *Current Advances in Amyotrophic Lateral Sclerosis*. http://dx.doi.org/10.5772/56560

**1. Introduction**


Recently, eye-gaze input systems have been developed as novel human–machine interfaces [1-10]. Their operation requires only eye movements by the user. Based upon such systems, many communication aids have been developed for people with severe physical disabilities, such as those caused by amyotrophic lateral sclerosis (ALS). Eye-gaze input systems commonly employ non-contact eye-gaze detection, for which an incandescent, fluorescent, or LED lamp can serve as the source of infrared or natural light. Detection based on infrared light can locate the eye gaze with a high degree of accuracy [1-3] but requires an expensive device. Detection based on natural light uses ordinary devices and is therefore cost-effective [4,5]; however, eye-gaze input systems based on natural light have a lower degree of accuracy.

We have previously developed an eye-gaze input system for people with severe physical disabilities [8-10]. The system uses a personal computer (PC) and a home video camera to detect eye gaze under natural light. The camera (e.g., a DV camera) can easily be connected to the PC through an IEEE 1394 interface, and the captured frames can be analyzed in real time using Microsoft's DirectShow library. We developed image analysis software to detect eye gaze, and our eye-gaze input system runs this software on Windows. The system requires no special devices and is easily customizable; it is therefore not only cost-effective but also versatile. Moreover, it can be operated under natural light, which makes it suitable for personal use.
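The capture-and-analyze pipeline described above (camera frames delivered to analysis software and processed in real time) can be sketched as follows. This is a minimal stand-in, not the authors' implementation: synthetic numpy frames replace the DirectShow camera feed, and `analyze` is a hypothetical placeholder for the actual gaze-detection step.

```python
import numpy as np

def frame_source(n_frames=5, height=120, width=160, seed=0):
    """Stand-in for the camera: yields grayscale frames as numpy arrays.
    In the authors' system, frames arrive from a DV camera over IEEE 1394
    via the DirectShow library; the per-frame loop below applies equally."""
    rng = np.random.default_rng(seed)
    for _ in range(n_frames):
        yield rng.integers(0, 256, size=(height, width), dtype=np.uint8)

def analyze(frame):
    """Placeholder per-frame analysis; the real software locates the eye
    in the frame and estimates the gaze direction."""
    return float(frame.mean())  # e.g., overall brightness

# Real-time loop: each frame is analyzed as soon as it is captured.
gaze_estimates = [analyze(f) for f in frame_source()]
print(len(gaze_estimates))  # -> 5
```

The same structure holds whatever the frame source is; only `frame_source` would change when a real capture API is plugged in.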

© 2013 Kiyohiko et al.; licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**2. Current eye-gaze input systems**

Many systems and devices have been developed as communication aids for ALS patients. For example, the E-tran (eye transfer) frame is used for communication between ALS patients and others. The E-tran frame is a conventional device with a very simple structure: a transparent plastic board on which characters, such as the letters of the alphabet, are printed. When using the E-tran frame, a communication partner (helper) holds it up in front of the user's face. The user gazes at the position of the character that he or she wishes to communicate, and the helper moves the E-tran frame until the user's eye gaze and the helper's own line up. The helper can thus determine the character from the user's eye gaze, so any user who can gaze at the characters on the E-tran frame can communicate with others. In addition, the E-tran frame requires no power supply and is therefore highly portable. However, considerable skill is required to use it.

The row–column scanning system is also used to aid the communication of ALS patients. This system can be operated with a single switch; in other words, the user can input characters or operate a PC with whatever physical function remains. The row–column scanning system requires only simple hardware. For example, using the on-screen keyboard supplied with Windows, the user can operate many Windows applications. However, input through row–column scanning takes considerable time, because the system scans the rows and columns of the keyboard with only one switch. To improve upon this situation, a new method for row–column scanning has been reported [11]; it optimizes the scanning speed by using a Bayesian network for machine learning. Even so, a patient with severe ALS cannot use the row–column scanning system, despite its single switch.

Our eye-gaze input system mitigates these weaknesses. In a general eye-gaze input system, the icons displayed on the PC monitor are selected by the user gazing at them, as shown in Figure 1. These icons, called indicators, are assigned to characters or to functions of the application program. The system must detect the user's gaze in order to determine the selected indicator. Many eye-gaze detection methods have been developed. Several systems use the EOG (electro-oculogram) method [7], which detects eye gaze from the difference in electrical potential between the cornea and the retina; it is a contact method that places electrodes around the eye. Although cost-effective, some users find long-term wear of the electrodes uncomfortable. Therefore, many systems detect eye gaze by non-contact methods [1-10], in which the user's gaze is determined by analyzing images of the eye (and the surrounding skin) captured by a video camera. To classify many indicators, most conventional systems use special devices such as infrared light sources [1-3] or multiple cameras [6]. Nevertheless, to be suitable for personal use, a system should be inexpensive and user-friendly, so a simple system using a single camera under natural light is desirable [4-6]. However, natural-light systems often have low accuracy and can classify only a few indicators [4], which makes it difficult for users to perform tasks that require many functions, such as text input. To solve these problems, a simple eye-gaze input system that can classify many indicators is needed.

**Figure 1.** Overview of eye-gaze input (indicators such as "A"–"I" and "Next" are arranged in the input area on the screen)

**3. Eye-gaze detection by image analysis**

Eye gaze is defined as a unit vector in a three-dimensional coordinate space whose origin is the center of the eyeball. In practice, the user's gaze is detected on a two-dimensional plane and has horizontal and vertical components. Tracking the iris (the colored part of the eye) is the most popular method for eye-gaze detection by image analysis under natural light [4-6]. For example, if the edge between the iris and the sclera (the white part of the eye) is estimated by image processing, an ellipse fitted to that edge gives the location of the iris. However, it is difficult to distinguish the iris from the sclera by image analysis, because the edge between them is not sharp. In addition, if a large part of the iris is hidden by the upper and lower eyelids, the measurement errors increase, because the occlusion of the iris by the eyelids causes estimation errors in its delineation. To resolve these issues, we propose a new image analysis method that detects eye gaze in both the horizontal and vertical directions. This detection method is based on the limbus tracking method, and it can obtain the coordinates of the user's gaze point.

The limbus tracking method is an eye-gaze detection method that uses the difference in reflectance between the iris and the sclera. Eye gaze can be estimated with relative ease by this method, which has therefore been in use since the 1960s [12]. A typical eye-gaze detection system based on limbus tracking irradiates the subject's eyeball with infrared light and detects the gaze by measuring the reflected light with optical sensors such as photodiodes.

In our eye-gaze detection method, the video camera records images of the user's eye from a distance (approximately 70 cm between the user and the camera), and the recorded image of the eye is then enlarged. Head movements of the user would induce a large error in the detected gaze; we compensate for them by tracing the location of a corner of the eye.

**3.1. Horizontal gaze detection**
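The limbus tracking idea described above — the iris reflects less light than the sclera, so the balance of brightness across the eye region shifts as the iris moves horizontally — can be sketched on a synthetic image. Everything here (image size, intensity values, the sign convention relating the cue to gaze direction) is an illustrative assumption, not the authors' actual implementation.

```python
import numpy as np

def make_eye(width=60, height=30, iris_x=30, iris_r=10):
    """Synthetic eye strip: bright sclera with a dark iris disk at iris_x."""
    img = np.full((height, width), 200.0)  # sclera brightness
    yy, xx = np.mgrid[0:height, 0:width]
    img[(xx - iris_x) ** 2 + (yy - height // 2) ** 2 <= iris_r ** 2] = 40.0  # iris
    return img

def horizontal_gaze(img):
    """Limbus-style cue: compare total brightness of the left and right
    halves of the eye region.  When more of the dark iris lies in the
    right half, that half is dimmer and the value is positive."""
    half = img.shape[1] // 2
    left = img[:, :half].sum()
    right = img[:, half:].sum()
    return (left - right) / (left + right)

print(horizontal_gaze(make_eye(iris_x=15)) < 0)  # iris in left half -> True
print(horizontal_gaze(make_eye(iris_x=45)) > 0)  # iris in right half -> True
```

A real system would first crop the eye region and normalize for illumination; the point of the sketch is only that a single brightness comparison already yields a monotone horizontal cue, which is why limbus tracking needs no sharp iris–sclera edge.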

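The indicator-selection step shown in Figure 1 — the user gazes at an icon and the system decides which indicator was selected — can be sketched under the assumption that the gaze point on the screen has already been estimated. The grid layout mirrors the labels in Figure 1; the simple cell-lookup rule is an assumption for illustration, not the paper's classification method.

```python
def select_indicator(gaze_x, gaze_y, labels, screen_w=800, screen_h=200):
    """Map an on-screen gaze point to the indicator whose grid cell
    contains it.  `labels` is a row-major grid of indicator labels."""
    rows, cols = len(labels), len(labels[0])
    col = min(max(int(gaze_x * cols / screen_w), 0), cols - 1)
    row = min(max(int(gaze_y * rows / screen_h), 0), rows - 1)
    return labels[row][col]

# Indicator layout as in Figure 1 (hypothetical screen size 800 x 200).
grid = [["A", "B", "C", "D", "E"],
        ["F", "G", "H", "I", "Next"]]
print(select_indicator(90, 40, grid))    # -> A
print(select_indicator(750, 150, grid))  # -> Next
```

The number of cells this lookup can distinguish is limited directly by gaze-detection accuracy, which is why a natural-light system with low accuracy can classify only a few indicators [4].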