**Meet the editor**

Dr Ken-Shiung Chen received his M.A. and Ph.D. degrees from the University of Texas at Austin, USA, and held the position of Assistant Professor at the Baylor College of Medicine, Houston, USA. Currently, he is an Associate Professor at the School of Biological Sciences, Nanyang Technological University, Singapore. Dr Chen's research interests focus on the elucidation of the molecular mechanisms of chromosomal deletion/duplication syndromes, as well as on epigenetic regulation in genomic imprinting. The Human Genome Project and the concomitant effort to map and sequence the mouse genome have created the basis for interspecies comparisons. Together with recent advances in mouse mutagenesis, conditional knockout and chromosomal engineering techniques, these resources give us the ability to define gene function and to model human disease in a mammalian system.

## Contents

**Preface XI**

**Part 1 Bioengineering in Neurological Disorders 1**

Chapter 1 **Image Analysis for Automatically-Driven Bionic Eye 3**  F. Robert-Inacio, E. Kussener, G. Oudinet and G. Durandau

Chapter 2 **Methods of Measurement and Evaluation of Eye, Head and Shoulders Position in Neurological Practice 25**  Patrik Kutilek, Jiri Hozman, Rudolf Cerny and Jan Hejda

Chapter 3 **Mesenchymal Stromal Cells to Treat Brain Injury 45**  Ciara C. Tate and Casey C. Case

Chapter 4 **Development of Foamy Virus Vectors for Gene Therapy for Neurological Disorders and Other Diseases 79**  Yingying Zhang, Guoguo Zhu, Yu Huang, Xiaohua He and Wanhong Liu

Chapter 5 **Real-Time Analysis of Intracranial Pressure Waveform Morphology 99**  Fabien Scalzo, Robert Hamilton and Xiao Hu

**Part 2 Proteomic Analysis in Neurological Disorders 127**

Chapter 6 **Developing Novel Methods for Protein Analysis and Their Potential Implementation in Diagnosing Neurological Diseases 129**  Olgica Trenchevska, Vasko Aleksovski, Dobrin Nedelkov and Kiro Stojanoski

Chapter 7 **Angelman Syndrome: Proteomics Analysis of an** *UBE3A* **Knockout Mouse and Its Implications 159**  Low Hai Loon, Chi-Fung Jennifer Chen, Chi-Chen Kevin Chen, Tew Wai Loon, Hew Choy Sin and Ken-Shiung Chen

**Part 3 Migraine and the Use of Herbal Medicine as an Alternative Treatment 185**

Chapter 8 **Migraine: Molecular Basis and Herbal Medicine 187**  Mohammad Ansari, Kheirollah Rafiee, Solaleh Emamgholipour and Mohammad-Sadegh Fallah

**Part 4 Neuropsychiatry of Drug and Alcohol Dependence 215**

Chapter 9 **Substance Dependence as a Neurological Disorder 217**  William Meil, David LaPorte and Peter Stewart

## Preface

The study of the brain and its associated neurological disorders represents one of the most fascinating frontiers in the biomedical sciences. Recent studies using multidisciplinary approaches not only provide a molecular understanding of disease mechanisms but also suggest novel tools for therapeutic intervention. *Advanced Topics in Neurological Disorders* presents reports on a wide range of areas in the field of neurological disorders, including bioengineering, stem cell transplantation, gene therapy, proteomic analysis, and alternative treatments for neuropsychiatric illnesses.

Chapter 1 reviews the history of the bionic eye and provides an update on the current state and future prospects of the field. Chapter 2 describes an accurate method for the simultaneous evaluation of eye, head and shoulder positions in neurological practice; possible applications and perspectives for clinical practice are also described. Chapter 3 reviews the clinical application of mesenchymal stem cells to treat brain injury, discussing ongoing clinical trials, issues of delivery timing, route and donor source, dosage, and the mechanisms of action underlying beneficial effects. Chapter 4 reviews the use of foamy virus vectors as carriers to deliver therapeutic genes for neurological disorders and other diseases. Chapter 5 reviews existing methods for extracting the morphology of the intracranial pressure (ICP) waveform; a novel probabilistic framework for tracking ICP morphology in real time is introduced, and a successful study on the real-time prediction of ICP hypertension is described. Chapter 6 introduces methods for protein analysis and profiling techniques in clinical practice, and further discusses the application of new methods to the analysis of cerebrospinal fluid as a primary biological medium for the study of neurological diseases. Chapter 7 describes the use of a proteomic approach to study the Angelman syndrome (AS) mouse model; the results indicate that proteins involved in neuronal cell differentiation, learning processes, energy production, actin disassembly and neuronal signal transduction are affected in AS. Chapter 8 discusses the molecular basis, metabolites, biomarkers, triggers and polymorphisms associated with migraine, and further discusses migraine pharmacotherapy and the effects of herbal medicine in migraine treatment. Chapter 9 argues that substance dependence should be considered a brain disease associated with cellular and neurocircuitry dysfunction, comparing the relationship between anatomical and functional changes in the prefrontal cortex, executive abilities, and substance dependence across multiple classes of drugs and polysubstance dependence.


I would like to sincerely thank all the authors who contributed to this book. In preparing this book, we have included information from a wide range of fields involved in neurological research that not only provides researchers, clinicians and graduate students with insight on the most recent advances in neurological research but also highlights the importance of applying multidisciplinary approaches to the study of neurological disorders.

> **Ken-Shiung Chen Ph.D.**  Associate Professor Nanyang Technological University, School of Biological Sciences, Singapore



## **Part 1**

## **Bioengineering in Neurological Disorders**


## **Image Analysis for Automatically-Driven Bionic Eye**

F. Robert-Inacio1,2, E. Kussener1,2, G. Oudinet2 and G. Durandau2 *1Institut Materiaux Microelectronique et Nanosciences de Provence, (IM2NP, UMR 6242) 2Institut Superieur de l'Electronique et du Numerique (ISEN-Toulon) France* 

## **1. Introduction**

In many fields, such as the health and robotics industries, reproducing the behavior of the human visual system (HVS) is a widely pursued aim. Indeed, a system able to reproduce the HVS, even partially, could be very helpful, on the one hand, for people with vision diseases and, on the other hand, for autonomous robots.

Historically, the earliest reports of artificially induced phosphenes were associated with direct cortical stimulation [Tong (2003)]. Since then, devices have been developed that target many different sites along the visual pathway [Troyk (2003)]. These devices can be categorized, according to their site of action along the visual pathway, into cortical, sub-cortical, optic nerve and retinal prostheses. Although the earliest reports involved cortical stimulation, with the advancements in surgical techniques and bioengineering the retinal prosthesis, or artificial retina, has become the most advanced visual prosthesis [Wyatt (2011)].

In this chapter, both applications will be presented after the theoretical context, the state of the art and motivations. Furthermore, a full system will be described including a servo-motorized camera (acquisition), specific image processing software and artificial intelligence software for exploration of complex scenes. This chapter also deals with image analysis and interpretation.

#### **1.1 Human visual system**

The human visual system is made of different parts: eyes, nerves and brain. In a coarse way, eyes achieve image acquisition, nerves data transmission and brain data processing (Fig. 1).

The eye (Fig. 2) acquires images through the pupil, and the visual information is processed by the retina's photoreceptors. There are two kinds of photoreceptors: rods and cones. Rods are dedicated to light-intensity acquisition and are efficient in scotopic and mesopic conditions. Cones are specifically sensitive to colors and require a minimal light level (photopic and mesopic conditions). There are three different types of cones, sensitive to different wavelengths.
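As a minimal sketch of the regimes just described, the mapping from light level to active photoreceptor class can be written as a small lookup. The luminance thresholds used here are rough, commonly cited values, not figures from this chapter:

```python
def active_photoreceptors(luminance_cd_m2):
    """Return which photoreceptor classes operate at a given luminance.

    The thresholds (in cd/m^2) are rough, illustrative values for the
    scotopic / mesopic / photopic regimes, not figures from the text.
    """
    if luminance_cd_m2 < 0.001:   # scotopic: starlight-level scenes, rods only
        return {"regime": "scotopic", "rods": True, "cones": False}
    elif luminance_cd_m2 < 3.0:   # mesopic: both classes contribute
        return {"regime": "mesopic", "rods": True, "cones": True}
    else:                         # photopic: cones (L, M, S) dominate
        return {"regime": "photopic", "rods": False, "cones": True}

print(active_photoreceptors(0.0001)["regime"])  # scotopic
print(active_photoreceptors(100.0)["regime"])   # photopic
```

In this toy model the mesopic band is simply the overlap where both rods and cones contribute, mirroring the description above.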

Fig. 3a shows the photoreceptor responses and Fig. 3b their distribution across the retina, from the foveal area (at the center of gaze) to the peripheral area. At the top of Fig. 3b, small patches of retina are shown with cones in green and rods in pink, illustrating that the distribution of cones and rods over the retinal surface varies with the distance to the center of gaze. Most of the cones are located in the fovea (the retina center), whereas rods are essentially present in the periphery. Light energy is then turned into electrochemical signals that are carried to the visual cortex through the optic nerves. The two optic nerves converge at a point called the optic chiasm (Fig. 4), where fibers from the nasal side cross to the other side of the brain, whereas fibers from the temporal side do not. The optic nerves then become the optic tracts, which reach the lateral geniculate nucleus (LGN). Here begins the processing of visual data, with back-and-forth exchanges between the LGN and the visual cortex.

Fig. 1. Human visual system

Fig. 2. Human eye

Fig. 3. Rods and cones features: (a) rods (R) and cones (L, M, S) responses; (b) rods and cones distribution across the retina (from http://improveeyesighttoday.com/improveeyesight-centralization.htm)

#### **1.2 Why a bionic eye?**

Blindness affects over 40 million people around the world. In the medical field, providing a prosthesis to blind or quasi-blind people is an ambitious task that requires a huge sum of knowledge in different fields such as microelectronics, computer vision, and image processing and analysis, but also in the medical field: ophthalmology and neurosciences. Cognitive studies determining human behavior when facing a new scene are conducted in parallel, in order to validate methods by comparing them to a human observer's abilities. Several solutions exist for plugging an electronic device into the visual system (Fig. 4). First of all, retina implants can

be plugged either to the retina or to the optic nerve. Such a solution requires image processing in order to integrate data and make them understandable by the brain. No image analysis is necessary, as the data will be processed by the visual cortex itself. However, the patient must be free of pathology at least at the optic nerve, so that data transmission to the brain can be achieved. Alternatively, retina implants can directly stimulate the retina photoreceptors, which means that the retina too must be in working order. Secondly, when either the retina or the optic nerve is damaged, only cerebral implants can be considered, as they directly stimulate neurons. In this context, image analysis is required in order to mimic at least the LGN behavior.

Fig. 4. Human visual system and solutions for electronic device plugins

#### **1.3 Why now?**

The development of biological implantable devices incorporating microelectronic circuitry requires advanced fabrication techniques, which are now available. The importance of device stability stems from the fact that the microelectronics have to function properly within the relatively harsh environment of the human body. This represents a major challenge in developing implantable devices with long-term system performance while reducing their overall size.

Biomedical systems are one example where ultra-low-power electronics is paramount, for multiple reasons [Sarpeshkar (2010)]. For example, these systems are implanted within the body and need to be small and lightweight, with minimal dissipation in the tissue that surrounds them. In order to obtain an implantable device, some constraints have to be taken into account, such as:

• The type of the technology (flexible or not), in order to be accepted by the human body
• The size of the device
• The circuit consumption, in order to optimize the battery life
• The circuit performance

The low power hand reminds us that the power consumption of a system is always defined by five considerations, as shown in Fig. 5:

Fig. 5. Low power hand for low power applications
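As a side illustration, the implant-placement logic described in Section 1.2 (retina, optic nerve or cortex, and whether image analysis is needed) can be condensed into a hypothetical triage function; the function name, arguments and return structure are invented for this sketch and do not come from any implant API:

```python
def choose_implant(retina_ok: bool, optic_nerve_ok: bool) -> dict:
    """Sketch of the implant-placement logic from Section 1.2.

    Returns the implant site and whether full image analysis
    (mimicking the LGN) is needed, as opposed to image processing
    only for data integration.
    """
    if retina_ok and optic_nerve_ok:
        # Photoreceptors can be stimulated directly; the intact pathway
        # carries the data to the cortex, so no image analysis is needed.
        return {"site": "retina (epi-/subretinal)", "image_analysis": False}
    if optic_nerve_ok:
        # Retina damaged but nerve intact: stimulate the optic nerve.
        return {"site": "optic nerve", "image_analysis": False}
    # Retina or optic nerve damaged: only cerebral implants remain, and
    # the device must mimic at least the LGN processing.
    return {"site": "visual cortex (cerebral implant)", "image_analysis": True}

print(choose_implant(False, False)["site"])  # visual cortex (cerebral implant)
```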

## **2. State of the art: Overview**

Supplying visual information to blind people is a goal that can be reached in several ways, by more or less efficient means. Classically, blind people can use a white cane, a guide-dog or more sophisticated aids. The white cane is perceived as a symbol that warns other people and makes them more attentive to blind people; it is also very useful for obstacle detection. A guide-dog is also of great help, as it interprets the scene context at a dog's level. The dog is trained to guide the person in an outdoor environment, and it can inform the blind person and warn of danger through its reactions. In recent decades, electronics has come to reinforce environment perception. On the one hand, several non-invasive systems have been developed, such as GPS for the visually impaired [Hub (2006)], which can assist blind people with orientation and navigation; talking equipment, which provides audio descriptions in a basic way for thermometers, clocks or calculators, or in a more elaborate way as audio-description that narrates the visual aspects of television movies or theater plays; and electronic white canes [Faria (2010)].

On the other hand, biomedical devices can be implanted in an invasive way, which requires surgery and clinical trials. As presented in Fig. 4, such devices can be plugged in at different spots along the visual data processing path. In a general way, the principle is the same for retinal and cerebral implants: two subsystems are linked, the first achieving data acquisition and processing, the second electrostimulation. A camera (or two, for stereovision) is used to acquire visual data. These data are processed by the acquisition processing box in order to obtain data that are transmitted to the image processing box via a wired or wireless connection (Fig. 6). Impulses then stimulate the cells where the implant is connected.
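The two linked subsystems described above can be sketched as a minimal pipeline, with each stage a stand-in for the corresponding box; the function names, frame sizes and the intensity-to-current mapping are all assumptions made for illustration:

```python
import numpy as np

def acquire_frame(height=64, width=64, seed=0):
    """Stand-in for the camera: returns a grayscale frame in [0, 255]."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=(height, width), dtype=np.uint8)

def process_frame(frame, n_electrodes=(16, 16)):
    """Acquisition/image-processing box: reduce the frame to one
    intensity value per electrode by block averaging."""
    h, w = frame.shape
    eh, ew = n_electrodes
    blocks = frame[: h - h % eh, : w - w % ew].reshape(eh, h // eh, ew, w // ew)
    return blocks.mean(axis=(1, 3))

def stimulate(electrode_image, max_current_ua=100.0):
    """Electrostimulation box: map intensities to per-electrode currents."""
    return electrode_image / 255.0 * max_current_ua

currents = stimulate(process_frame(acquire_frame()))
print(currents.shape)  # (16, 16)
```

In a real device the two subsystems would run on separate hardware linked by a wired or wireless channel, as the text notes; here they are simply chained function calls.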

Fig. 6. General principle of an implant

#### **2.1 Retina implant**

For retinal implants, there exist two different ways to connect the electronic device: directly to the retina (epiretinal implant) or behind the retina (subretinal implant). Several research teams work on this subject worldwide. The target diseases mainly are:

• retinitis pigmentosa, which is the leading cause of inherited blindness in the world,
• age-related macular degeneration, which is the leading cause of blindness in the industrialized world.

#### **2.1.1 Epiretinal implants**

The development of an epiretinal prosthesis (the Argus Retinal Prosthesis) was initiated in the early 1990s at the Doheny Eye Institute and the University of California (USA) [Horsager (2010); Parikh (2010)]. This prosthesis was implanted in patients at Johns Hopkins University in order to demonstrate proof of principle. The company Second Sight<sup>1</sup> was then created in the late 1990s to develop the prosthesis. The first generation (Argus I) has 16 electrodes and was implanted in 6 patients between 2002 and 2004. The second generation (Argus II) has 60 electrodes, and clinical trials have been planned since 2007. Argus III is still in development and will have 240 electrodes.

VisionCare Ophthalmic Technologies and the CentralSight Treatment Program [Chun (2005); Lane (2004); Lane (2006)] have created an implantable miniature telescope in order to provide central vision to people with degenerative macular disease. This telescope is implanted inside the eye behind the iris and projects magnified images onto healthy areas of the central retina.

#### **2.1.2 Subretinal implants**

At the University of Louvain, a subretinal implant (MIVIP: Microsystem-based Visual Prosthesis) made of a single electrode has been developed [Archambeau (2004)]. The optic nerve is directly stimulated by this electrode, using electric signals received from an external camera.

In the late 1980s, Dr. Joseph Rizzo and Professor John Wyatt performed a number of proof-of-concept epiretinal stimulation trials on blind volunteers before developing a subretinal stimulator, and co-founded the Boston Retinal Implant Project (BRIP)<sup>2</sup>, a collaboration between the Massachusetts Eye and Ear Infirmary, Harvard Medical School and the Massachusetts Institute of Technology. The mission and core strategy of the BRIP are to develop novel engineering solutions that restore vision and improve the quality of life of patients who are blind from degenerative diseases of the retina, for which there is currently no cure and which elude other forms of treatment; its early results remain a reference for this approach. The specific goal is an implantable microelectronic prosthesis that delivers direct electrical stimulation to the cells that carry visual information, offering a special opportunity for visual rehabilitation to patients with certain forms of retinal blindness.

The Artificial Silicon Retina (ASR)<sup>3</sup> is a microchip containing 3500 photodiodes, developed by Alan and Vincent Chow. Each photodiode detects light and transforms it into electrical impulses that stimulate retinal ganglion cells (Fig. 8).

In France, at the Institut de la Vision, the team of Prof. Picaud has developed a subretinal implant [Djilas (2011)] and has also set up clinical trials.

Likewise, in Germany [Zrenner (2008)], a subretinal prosthesis has been developed. A microphotodiode array (MPDA) acquires incident light and sends the information to a chip located behind the retina. The chip transforms the data into electrical signals that stimulate the retinal ganglion cells.

In Japan [Yagi (2005)], a subretinal implant has been designed at the Yagi Laboratory<sup>4</sup>. Experiments are mainly directed toward obtaining new biohybrid micro-electrode arrays.

<sup>1</sup> 2-sight.eu/

<sup>2</sup> http://www.bostonretinalimplant.org

<sup>3</sup> http://optobionics.com/asrdevice.shtml

<sup>4</sup> http://www.io.mei.titech.ac.jp/research/retina/

(a) Silicon wafer with a flexible polyimide iridium-oxide electrode array. (b) Close-up of a flex circuit to which the IC will be attached.

Fig. 7. BRIP Solution

Fig. 8. ASR device implanted in the retina

At Stanford University, a visual prosthesis<sup>5</sup> (Fig. 9) has been developed [Loudin (2007)]. It includes an optoelectronic system composed of a subretinal photodiode array and an infrared image-projection system. A video camera acquires visual data, which are processed and displayed on video goggles as infrared (IR) images. The photodiodes in the subretinal implant are activated when the IR image reaches the retina through the natural optics of the eye, and the resulting electric pulses stimulate the retinal cells.

Fig. 9. Stanford University visual prosthesis

In Australia, the Bionic Vision system<sup>6</sup> consists of a camera, attached to a pair of glasses, which transmits high-frequency radio signals to a microchip implanted in the retina. Electrical impulses then stimulate retinal cells connected to the optic nerve. Such an implant improves the perception of light.

#### **2.2 Cortex implant**

William H. Dobelle initiated a project to develop a cortical implant [Dobelle (2000)] in order to partially restore vision to blind volunteers [Ings (2007)]. His experiments began in the early 1970s with cortical stimulation of 37 sighted volunteers. Four blind volunteers were then implanted with permanent electrode arrays; the first volunteers were operated on at the University of Western Ontario, Canada. A 292 × 512 CCD camera is connected to a sub-notebook computer in a belt pack. A second microcontroller, also included in the belt pack, is dedicated to brain stimulation. The stimulus generator is connected to the electrodes implanted on the visual cortex through a percutaneous pedestal. With this system a vision-impaired person is able to count fingers and recognize basic symbols.

In Canada, the research team of Pr Sawan [Sawan (2008)] at the Polystim Neurotechnologies Laboratory<sup>7</sup> has begun clinical trials of an electrode array providing images of 256 pixels (Fig. 10). Such images are not very accurate, but they allow the patient to guess shapes. Furthermore, these clinical trials have proved that it is possible to directly stimulate neurons in the primary visual cortex.

Fig. 10. Principle of the Polystim Laboratory visual prosthesis

<sup>5</sup> http://www.stanford.edu/~palanker/lab/retinalpros.html

<sup>6</sup> http://bionicvision.org.au/eye

<sup>7</sup> http://www.polystim.ca
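As an illustration of how a camera image could be reduced to such a coarse electrode pattern, the sketch below downsamples a grayscale image to a 16 × 16 (256-"pixel") on/off stimulation map. This is a minimal illustration only, not the chapter's or Polystim's actual encoding; the function name and the threshold value are hypothetical.

```python
import numpy as np

def to_phosphene_map(gray, grid=(16, 16), threshold=0.5):
    """Downsample a grayscale image (values in [0, 1]) to a coarse
    grid of on/off phosphenes, one per electrode."""
    h, w = gray.shape
    gh, gw = grid
    # average each block, then threshold to an on/off stimulation map
    blocks = gray[: h - h % gh, : w - w % gw].reshape(
        gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return (blocks >= threshold).astype(np.uint8)

# a bright vertical bar on a dark background
img = np.zeros((64, 64))
img[:, 28:36] = 1.0
stim = to_phosphene_map(img)   # 16 x 16 on/off map (256 "pixels")
```

Each entry of `stim` would correspond to one electrode being driven or not; any grayscale-to-stimulation mapping used in practice is of course far more elaborate.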

(a) Bench in a park. (b) Tree in a town.

Fig. 11. Image context and points of interest

## **3. Bionic eye**

Such a system has to mimic several abilities of the human visual system in order to make visual information available to blind people. It consists of a camera acquiring images, an electronic device processing the data and a mechanical system driving the camera. Outputs can be provided on cerebral implants, in other words electrode matrices plugged into the primary visual cortex. When discovering a new scene, the human eye proceeds by saccades and the gaze is successively focused on different points of interest. The sequence of focusing points makes it possible to scan the scene in an optimized way according to the degree of interest. The degree of interest is a very complex criterion to estimate because it depends on the context and on the nature of the elements included in the scene. Geometrical features of objects, as well as color and structure, are important in the interest estimation (Fig. 11). For example, a tree (b) is of great interest in an urban landscape, whereas a bench (a) is salient information in a countryside scene. In the first case, the lack of geometrical particularities and the color difference make the tree interesting; in the second case, the structure and the geometrical features of the bench make it interesting in comparison to trees or meadows.

Several steps are carried out, successively or in parallel, to process the data and drive the camera. First of all, a detection of points of interest is performed on a regular image, in other words on an image as usually provided by a camera. One of the best-scoring points of the detector is chosen as the first focusing point. The image is then re-sampled in a radial way in order to obtain a foveated image: the resulting image is blurred according to the distance to the focusing point [Larson (2009)]. A detection of points of interest is then performed on the foveated image in order to determine the second focusing point. These two steps are repeated as many times as necessary to discover the whole scene (Fig. 12). This gives the computed sequence of points of interest. In parallel, a human observer faces the primary image while an eye-tracker follows his eye movements in order to determine the observer's sequence of points of interest when exploring the scene by saccades [Hernandez (2008)]. Afterwards, the two sequences are compared in order to quantify and qualify the computer vision process in terms of position and order.
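The loop described above (detect points of interest, foveate around the chosen point, detect again) can be sketched as follows. This is a minimal sketch with simplified stand-ins: `corner_response` is a toy Harris-like score and `foveate` a naive distance-dependent box blur, not the chapter's hexagonal re-sampling; all names are illustrative.

```python
import numpy as np

def corner_response(img):
    """Toy Harris-like response from finite-difference gradients
    (illustrative only, not the full detector)."""
    gy, gx = np.gradient(img.astype(float))
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy   # structure-tensor terms
    k = 0.04
    return ixx * iyy - ixy * ixy - k * (ixx + iyy) ** 2

def foveate(img, cy, cx, foveal_radius=8):
    """Blur each pixel with a box whose size grows with the distance
    to the focusing point (sharp central vision, blurred periphery)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - cy, xx - cx)
    out = img.astype(float).copy()
    for y in range(h):
        for x in range(w):
            r = int(dist[y, x] // foveal_radius)
            if r > 0:
                out[y, x] = img[max(0, y - r):y + r + 1,
                                max(0, x - r):x + r + 1].mean()
    return out

def explore(img, n_points=3, foveal_radius=8):
    """Iteratively pick the best-scoring point, then re-detect on the
    foveated image, as described in the text."""
    points = []
    current = img.astype(float)
    for _ in range(n_points):
        resp = corner_response(current)
        for (py, px) in points:        # suppress already-chosen points
            resp[py, px] = -np.inf
        cy, cx = np.unravel_index(np.argmax(resp), resp.shape)
        points.append((int(cy), int(cx)))
        current = foveate(img, cy, cx, foveal_radius)
    return points

img = np.zeros((32, 32))
img[8:20, 8:20] = 1.0
pts = explore(img, n_points=3)   # three successive focusing points
```

A real implementation would use the Harris detector of Section 5.2 and the hexagonal foveation of Section 5.1; the loop structure, however, is the same.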

## **4. Circuit and system approach**

#### **4.1 Principle and objective**

The proposed solution is based on Pr Sawan's research [Coulombe (2007); Sawan (2008)]. The implementation is a visual prosthesis implanted into the human cortex. The principle of this application consists in stimulating the visual cortex with a silicon micro-chip implanted on a network of electrodes made of biocompatible materials [Kim (2010); Piedade (2005)], in which each electrode injects a stimulating electrical current in order to make a series of luminous points (an array of pixels) appear in the field of vision of the sightless person [Piedade (2005)]. This system is composed of two distinct parts:

• The battery-operated external controller includes a micro-camera which captures the image, as well as a processor and a command generator. They process the imaging data in order to:

1. select and translate the captured images,
2. generate and manage the electrical stimulation process,
3. oversee the implant.

• The implant lodged in the visual cortex wirelessly receives dedicated data and the associated energy from the external controller. This electro-stimulator generates the electrical stimuli and oversees the changing microelectrode/biological tissue interface.

Fig. 12. Scene exploration process

The topology is based on the schematic of Fig. 13. An analog signal captured by the camera provides information to the DSP (Digital Signal Processor) component. The image is transmitted through the FPGA, which performs a first image pre-processing. A DMA (Direct Memory Access) controller is placed at the input of the DSP card in order to transfer the pre-processed image to the SDRAM. The DSP then performs the image processing in order to reproduce the eye behavior and part of the cortex operation. An LCD screen is added in order to debug the image processing; it will be removed in the final version. The FPGA drives two motors in two axis directions (horizontal, vertical) in order to reproduce the eye movements. We will now focus on the different components of this bionic eye topology.

Fig. 13. Schematic principle of the bionic eye

#### **4.2 Camera component**

With the development of the mobile phone, CMOS cameras have become more compact and lower-powered, with higher resolutions and quicker frame rates. As the constraints of biomedical systems tend to be the same, this solution retained our attention. For example, Omnivision has created a 14-megapixel CMOS camera with a frame rate of 60 fps for a 1080p frame and a package of 9 mm × 7 mm. In this project, we have chosen a 1.3-megapixel camera at a frame rate of 15 fps, mainly for two reasons: the package, which is easy to implement, and the large number of available outputs thanks to the internal registers of the camera. These registers allow us to output many standard resolutions (SXVGA, VGA, QVGA, etc.), output formats (RGB or YUV) and frame rates (15 fps or 7.5 fps). They are initialized by the I2C controller of the DSP, which allows a dynamic configuration of the camera by the DSP. The camera outputs 8-bit parallel data, allowing a datastream of up to 0.3 Gb/s, with 3 control signals (horizontal, vertical and pixel clocks). For the prototype we output at VGA resolution in RGB 565 at 15 fps.
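As a sanity check on these figures, the VGA RGB 565 stream at 15 fps is far below the quoted 0.3 Gb/s peak of the 8-bit port (assuming the usual VGA resolution of 640 × 480):

```python
# Back-of-the-envelope check of the camera datastream.
width, height = 640, 480      # assumed VGA resolution
bits_per_pixel = 16           # RGB 565
fps = 15
stream = width * height * bits_per_pixel * fps   # bits per second
print(stream / 1e6)           # 73.728 Mb/s, well under the 0.3 Gb/s port limit
```

So the chosen mode uses roughly a quarter of the port's peak bandwidth, leaving headroom for higher resolutions or frame rates.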

In order to reproduce the eye movements, two analog servo motors (horizontal and vertical) have been used, mounted on a steel frame and controlled by the FPGA.

#### **4.3 FPGA (Field-Programmable Gate Array) component**

The FPGA performs two processes in parallel. The first one consists in controlling the servo motors: the FPGA transforms an angle into a pulse width with a refresh rate of 50 Hz (Fig. 14). The angle is incremented or decremented by two pulse updates at the signal of a new frame (Fig. 15). At 15 fps, a pulse is 2 degrees for a use at the maximal speed of the servo motor (0.15 s per 60°).
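The angle-to-pulse-width mapping can be sketched in software as follows. The 50 Hz refresh and the 2-degree step per frame come from the text; the 1–2 ms pulse range and the 180° span are the usual hobby-servo convention, assumed here rather than taken from the chapter.

```python
PERIOD_MS = 20.0             # 50 Hz refresh rate, as in the text
MIN_MS, MAX_MS = 1.0, 2.0    # assumed standard servo pulse range
RANGE_DEG = 180.0            # assumed mechanical range

def angle_to_pulse_ms(angle_deg):
    """Map a servo angle (0..180 degrees) to a pulse width in ms."""
    angle_deg = max(0.0, min(RANGE_DEG, angle_deg))
    return MIN_MS + (MAX_MS - MIN_MS) * angle_deg / RANGE_DEG

# a 2-degree step per frame at 15 fps gives a 30 deg/s sweep rate
sweep_deg_per_s = 2 * 15
```

On the FPGA the same mapping is done in fixed point, with the pulse width counted in clock cycles of the 20 ms period.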

Fig. 14. Time affectation of the pulse width

Fig. 15. New frame: increment/decrement signal

The second process is the image pre-processing. It consists in transforming a 16-bits-per-pixel image with two clocks per pixel into a 24-bits-per-pixel image with one clock per pixel: the pixel clock is divided by two, and the 5- or 6-bit pixel color components are interpolated to 8-bit components.
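One common way to perform this 5/6-bit to 8-bit expansion is bit replication, which maps full-scale values exactly to 255; the chapter says "interpolate" without giving details, so this particular scheme is an assumption.

```python
def rgb565_to_rgb888(pix):
    """Expand a 16-bit RGB 565 pixel to 24-bit RGB 888 by bit
    replication, so full-scale 5/6-bit values map exactly to 255."""
    r5 = (pix >> 11) & 0x1F
    g6 = (pix >> 5) & 0x3F
    b5 = pix & 0x1F
    r8 = (r5 << 3) | (r5 >> 2)   # copy the top bits into the low bits
    g8 = (g6 << 2) | (g6 >> 4)
    b8 = (b5 << 3) | (b5 >> 2)
    return r8, g8, b8
```

In hardware this is just wiring (shifts and ORs), which is why it suits an FPGA pre-processing stage.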


#### **4.4 DSP (Digital Signal Processor) component**

For a fully embedded product, we need a core that can sustain the heavy load of real-time image processing. This is why we focused our attention on a DSP solution, and more precisely on a DSP with an integrated ARM core by Texas Instruments: the OpenCV library<sup>8</sup> is not optimized for DSP cores (the mainline OpenCV development targets the x86 architecture) but has been successfully ported to ARM platforms<sup>9</sup>. Nevertheless, several algorithms require floating-point computation, and the DSP is the most suitable core for this thanks to its native floating-point unit (Fig. 16).


Fig. 16. Operation time execution

Moreover, the parallelism due to the dual core speeds up the image processing (Fig. 17). Finally, we use a pipeline architecture for an efficient use of the CPU, thanks to the multiple controllers included in the DSP. The first controller used is the direct memory access (DMA) controller, which records a frame from the FPGA into a ping-pong buffer without using the CPU. The ping-pong buffer records the second frame at a different address, which makes it possible to work on the first frame while the second frame is being recorded, without the problem of two simultaneous uses of the same buffer.
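The ping-pong scheme can be sketched as follows; this is a schematic software model of the buffering pattern, not the DSP's actual DMA interface.

```python
# Schematic ping-pong buffering: the DMA records frame i into one
# buffer while the CPU processes frame i-1 from the other buffer.
buffers = [None, None]
processed = []

for i, frame in enumerate(["f0", "f1", "f2", "f3"]):
    buffers[i % 2] = frame                       # DMA side: alternate buffers
    if i > 0:
        processed.append(buffers[(i - 1) % 2])   # CPU side: previous frame
```

Because writer and reader always address different buffers, neither has to wait for the other within a frame period.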


Fig. 17. Dual Core operation time execution

The second controller used is the SDRAM controller, which controls two external 256 Mb SDRAMs; it manages the priority of SDRAM accesses, the SDRAM refresh and the control signals. The third controller used is the LCD controller, which displays the frame at the end of the image processing in order to verify the result and for presentation of the product. This architecture leaves the CPU exclusively dedicated to the image processing (Fig. 18).

<sup>8</sup> www.opencv.com

<sup>9</sup> www.ti.com

Fig. 18. Image processing


#### **4.5 Electronic prototype**

A prototype has been realized, as shown in Fig. 19. As introduced before, this prototype is based on: (i) a camera, (ii) an FPGA card, (iii) a DSP card and (iv) an LCD screen.

Its size is 20 × 14 × 2 cm, which is due to the use of development cards. For the FPGA and DSP cards we chose, respectively, a Xilinx<sup>10</sup> Virtex-5 XC5VLX50 and a Spectrum Digital<sup>11</sup> EVM OMAP-L137. Of these two cards, we only need the FPGA, the DSP, the memories and the I/O ports; indeed, the objective is to validate the software image processing. The LCD screen on the left of Fig. 19 is added to see the resulting image and will not be present on the final product. For the tests we chose a Sharp LQ043T3DX02 TFT.

The target size of the final product will first be reached by a large reduction obtained by removing the unused parts of these two cards (80%), and then by using an integrated-circuit solution. The support technology will be a standard 0.35 μm CMOS technology, which provides low current leakage [Flandre (2011)] and thus reduced consumption.

Another advantage of this technology is the possibility of developing analog and digital circuits on the same wafer. In this case, it is possible to realize powerful functions with low consumption and small size.

Fig. 19. Bionic Eye prototype

#### **5. Image processing and analysis**

The two main steps of human visual system (HVS) data processing that will be mimicked are focus of attention and detection of points of interest. Focus of attention enables the gaze to be directed at a particular point: the image around the focusing point is very clear (central vision) and becomes more and more blurred as the distance to the focusing point increases (peripheral vision).

<sup>10</sup> www.xilinx.com

<sup>11</sup> www.spectrumdigital.com

Fig. 20. Hexagonal cell distribution.

**5.3 Sequence of points of interest**

Short sequences of points of interest are studied: the first one has been computed and the second one is the result observed on a set of 7 people. Fig. 23 shows the sequences of points of interest on the original image of Fig. 21a and Table 1 gives the point coordinates. Sequences are made of points numbered from 1 to 4. The observer sequence in white goes from the pink flower heart to the bottom left plant, whereas the computed sequence in cyan goes from the pink flower heart to the end of the branch. Another difference concerns the point in the red flower. The observers chose to look at the flower heart whereas the detector focused at the border between the petal and the leaf. This is explained by the visual cortex behavior. Actually the detector is attracted by color differences whereas the human visual system is also sensitive to geometrical features such as symmetry. In this case the petals around the heart are quite arranged in a symmetrical way aroud the flower heart. That is why the observers chose to gaze at this point. In this example the computed sequence is determined without computing again a new foveated image for each point of interest, but by considering each significant point from the foveated image with the central point as focusing point. Furthermore for equivalent points of interest the distance between two consecutive points is chosen as great as possible in order to cover a maximal area of the scene with a minimal number of eye movements.

Image Analysis for Automatically-Driven Bionic Eye 19

Table 1 gives the distance between two equivalent points from the two sequences. This distance varies from 8 to 32.249 with an average value of 18.214. This means that computed points are not so far from those of the observers. But the algorithm determining the sequence

must be refined in order to prevent errors on point order.

Detection of points of interest is the stage where a sequence of focusing points is determined in order to explore a scene.

#### **5.1 Focus of attention**

As a matter of fact, the role played by cones in diurnal vision is preponderant. Cones are much less numerous than rods in most parts of the retina, but greatly outnumber rods in the fovea. Furthermore cones are arranged in a concentric way inside the human retina [Marr (1982)]. In this way focus of attention may be modelized by representing cones in the fovea area and its surroundings. The general principle is the following. Firstly a focusing point is chosen as the fovea center (gaze center) and a foveal radius is defined as the radius of the central cell. Secondly an isotropic progression of concentric circles determines the blurring factor according to the distance to the focusing point. Thirdly integration sets are defined to represent cones and an integration method is selected in order to gather data over the integration set to obtain a single value. Integration methods can be chosen amongst averaging, median filtering, morphological filtering such as dilation, erosion, closing, opening, and so on. Then re-sampled data are stored in a rectangular image in polar coordinates. This gives the encoded image. This image is a compressed version of the original image, but the compression ratio varies according to the distance to the focusing point. The following step can be the reconstruction of the image from the encoded image. This step is not systematically achieved as there is no need of duplicating data to process them [Robert-Inacio (2010)]. When necessary it works by determining for each point of the reconstructed image the integration sets it belongs to. Then the dual method of the integration process is used to obtain the reconstructed value. When using directly the encoded image instead of the original or the reconstructed images, customized processing algorithms must be set up in order to take into account that data are arranged in a polar way. In this case a full pavement of the image is defined with hexagonal cells [Robert (1999)]. 
The hexagons are chosen so that they do not overlap each others and so that they are as regular as possible. A radius sequence is also defined as follows:

This hexagonal pavement is as close as possible to the biological cone distribution in the fovea. Furthermore data are taken into account only once in the encoded image because of non-overlapping.

Fig. 21 illustrates the type of results provided by previous methods on an image of the Kodak database12 (Fig. 21). Firstly Fig. 21 shows the encoded image (on the right) for a foveal radius of 25 pixels and with hexagonal cells. The focusing point is chosen at (414, 228), ie: at the central flower heart. Secondly the reconstructed image is given after re-sampling of the original image. In the following, the hexagonal pavement is chosen to define foveated images as it is the closest one to the cone distribution in the fovea.

#### **5.2 Detection of points of interest**

The detection of points of interest is achieved by using the Harris detector [Harris (1988)]. Fig. 22 shows the images with the detected points of interest. Points of interest detected as corners are highlighted in red whereas those detected as edges are in green. Fig. 22 illustrates the Harris method when using a regular image (a), in other words, an image sampled in a rectangular way, and a foveated image (b).

<sup>12</sup> http://r0k.us/graphics/kodak/

16 Will-be-set-by-IN-TECH

**5.1 Focus of attention**

As a matter of fact, the role played by cones in diurnal vision is preponderant. Cones are much less numerous than rods in most parts of the retina, but greatly outnumber rods in the fovea. Furthermore, cones are arranged in a concentric way inside the human retina [Marr (1982)]. Focus of attention may therefore be modelled by representing the cones in the fovea area and its surroundings. The general principle is the following. Firstly, a focusing point is chosen as the fovea center (gaze center), and a foveal radius is defined as the radius of the central cell. Secondly, an isotropic progression of concentric circles determines the blurring factor according to the distance to the focusing point. Thirdly, integration sets are defined to represent cones, and an integration method is selected in order to gather the data over each integration set into a single value. Integration methods can be chosen amongst averaging, median filtering, morphological filtering (dilation, erosion, closing, opening), and so on. The re-sampled data are then stored in a rectangular image in polar coordinates, which gives the encoded image. This image is a compressed version of the original image, but the compression ratio varies according to the distance to the focusing point. The following step can be the reconstruction of the image from the encoded image. This step is not systematically carried out, as there is no need to duplicate data in order to process them [Robert-Inacio (2010)]. When necessary, it works by determining, for each point of the reconstructed image, the integration sets it belongs to; the dual of the integration process is then used to obtain the reconstructed value. When the encoded image is used directly, instead of the original or reconstructed images, customized processing algorithms must be set up in order to take into account that the data are arranged in a polar way. In this case a full pavement of the image plane is defined with hexagonal cells [Robert (1999)].

The hexagons are chosen so that they do not overlap each other and so that they are as regular as possible; a radius sequence for the successive rings of cells is defined accordingly. This hexagonal pavement is as close as possible to the biological cone distribution in the fovea. Furthermore, because the cells are non-overlapping, data are taken into account only once in the encoded image.

#### Fig. 20. Hexagonal cell distribution.

Fig. 21 illustrates the type of results provided by the previous methods on an image of the Kodak database<sup>12</sup>. Firstly, Fig. 21 shows the encoded image (on the right) for a foveal radius of 25 pixels and with hexagonal cells. The focusing point is chosen at (414, 228), i.e. at the central flower heart. Secondly, the reconstructed image is given after re-sampling of the original image. In the following, the hexagonal pavement is chosen to define foveated images, as it is the closest one to the cone distribution in the fovea.

<sup>12</sup> http://r0k.us/graphics/kodak/

**5.2 Detection of points of interest**

Detection of points of interest is the stage where a sequence of focusing points is determined in order to explore a scene. The detection is achieved by using the Harris detector [Harris (1988)]. Fig. 22 shows the images with the detected points of interest. Points of interest detected as corners are highlighted in red, whereas those detected as edges are in green. Fig. 22 illustrates the Harris method when using a regular image (a), in other words an image sampled in a rectangular way, and a foveated image (b).
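To make the re-sampling pipeline concrete, here is a minimal sketch of a polar foveated encoding using circular ring/sector cells and plain averaging as the integration method. The geometric radius growth and all parameter names are illustrative assumptions; the chapter's actual pavement uses hexagonal cells.

```python
import numpy as np

def foveate(img, cx, cy, r0=25, n_rings=20, n_sectors=64, growth=1.2):
    """Polar foveated encoding sketch: average the image over
    ring/sector integration sets centred on the focusing point
    (cx, cy). Ring radii grow geometrically from the foveal
    radius r0, so compression increases with eccentricity."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(xs - cx, ys - cy)                 # distance to focusing point
    theta = np.mod(np.arctan2(ys - cy, xs - cx), 2 * np.pi)
    bounds = r0 * growth ** np.arange(n_rings + 1)  # ring boundaries
    ring = np.searchsorted(bounds, r)               # 0 = foveal disk
    sector = (theta / (2 * np.pi) * n_sectors).astype(int)
    encoded = np.zeros((n_rings, n_sectors))
    for i in range(n_rings):
        for j in range(n_sectors):
            cell = img[(ring == i) & (sector == j)]
            if cell.size:
                encoded[i, j] = cell.mean()         # averaging integration
    return encoded
```

The encoded array is the rectangular polar image described above: one row per ring, one column per sector, with a compression ratio that grows with the distance to the focusing point.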

#### **5.3 Sequence of points of interest**

Short sequences of points of interest are studied: the first one has been computed, and the second one is the result observed on a set of 7 people. Fig. 23 shows the sequences of points of interest on the original image of Fig. 21a, and Table 1 gives the point coordinates. Sequences are made of points numbered from 1 to 4. The observers' sequence, in white, goes from the pink flower heart to the bottom-left plant, whereas the computed sequence, in cyan, goes from the pink flower heart to the end of the branch. Another difference concerns the point in the red flower: the observers chose to look at the flower heart, whereas the detector focused on the border between the petal and the leaf. This is explained by the behavior of the visual cortex. The detector is attracted by color differences, whereas the human visual system is also sensitive to geometrical features such as symmetry; in this case the petals are arranged almost symmetrically around the flower heart, which is why the observers chose to gaze at this point. In this example the computed sequence is determined without computing a new foveated image for each point of interest, but by considering each significant point of the foveated image obtained with the central point as focusing point. Furthermore, for equivalent points of interest, the distance between two consecutive points is chosen as great as possible, in order to cover a maximal area of the scene with a minimal number of eye movements.

Table 1 gives the distance between two equivalent points from the two sequences. This distance varies from 8.544 to 32.249 pixels, with an average value of 18.214 pixels. This means that the computed points are not far from those of the observers, but the algorithm determining the sequence must be refined in order to prevent errors in point order.
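As an illustration of this spacing rule, the following sketch computes a plain Harris response with finite differences and then greedily picks strong, well-separated points. The 3×3 smoothing window, the constant `k`, and the `min_dist` parameter are assumptions for the sketch, not the chapter's exact settings.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Plain Harris corner/edge response: positive for corners,
    negative for edges, near zero on flat areas."""
    gy, gx = np.gradient(img.astype(float))

    def box3(a):                       # 3x3 mean as the local window
        p = np.pad(a, 1, mode='edge')
        h, w = a.shape
        return sum(p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0

    sxx, syy, sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace ** 2

def select_sequence(resp, n=4, min_dist=50):
    """Greedily keep the n strongest responses that stay at least
    min_dist pixels apart: a maximal covered area for a minimal
    number of eye movements."""
    pts = []
    order = np.argsort(resp, axis=None)[::-1]   # strongest first
    for idx in order:
        y, x = np.unravel_index(idx, resp.shape)
        if all(np.hypot(x - px, y - py) >= min_dist for px, py in pts):
            pts.append((int(x), int(y)))
            if len(pts) == n:
                break
    return pts
```

The greedy distance constraint is what spreads consecutive fixation points over the scene, as described above.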

Fig. 21. Focus of attention on a particular image: from the original image to the reconstructed image, passing by the encoded image (foveated image).

(a) On a regular image (b) On a foveated image

Fig. 22. Detection of points of interest.

(a) Observers' sequence (b) Computed sequence

Fig. 23. Sequences of points of interest.

| Point | Regular detection | Foveated detection | Distance |
|-------|-------------------|--------------------|----------|
| 1 | (191, 106) | (194, 114) | 8.544 |
| 2 | (279, 196) | (275, 164) | 32.249 |
| 4 | (99, 118) | (109, 103) | 18.028 |
| 3 | (24, 214) | (38, 215) | 14.036 |

Table 1. Distances between points of interest.

#### **6. Applications**

There exist two great families of applications: on the one hand, applications in the biomedical and health field, and on the other hand, applications in robotics.

In the biomedical field, a system such as the bionic eye can be very helpful for different tasks:

• light perception,
• color perception,
• contextual environment perception,
• reading,
• pattern recognition,
• face recognition,
• autonomous moving,
• etc.

These different tasks are achieved very easily by sighted people, but they can be impossible for visually impaired people. For example, color perception cannot be carried out by touch, hearing, taste or smell. It is a purely visual sensation, unreachable to blind people. That is why the bionic eye must be able to replace the human visual system for such tasks.

In robotics, such a system, able to explore an unknown scene by itself, can be of great help for autonomous robots. For example, AUVs (Autonomous Underwater Vehicles) can become even more autonomous by being able to decide by themselves what path to follow. By mimicking the detection of points of interest, the bionic eye can determine obstacle positions and then compute a path avoiding them. Furthermore, application fields are numerous:

• in archaeology and exploration in environments inaccessible to humans,
• in environmental protection and monitoring,
• in ship hull and infrastructure inspection,
• in infrastructure inspection of nuclear power plants,
• in military applications,
• etc.

Each time it is impossible for humans to reach a place, the bionic eye can be used to make decisions, or to help make them, in order to drive a robot.

#### **7. Conclusion**

In this chapter, the bionic eye principle has been presented in order to demonstrate how powerful such a system is. Different approaches can be considered to stimulate either the retina or the primary visual cortex, but all the presented systems use a separate system for image acquisition. Images are then processed, and the data are turned into electrical pulses stimulating either retinal cells or cortical neurons.

The originality of our system lies in the fact that images are not only processed but analyzed, in order to determine a sequence of focusing points. This sequence makes it possible to explore a complex scene automatically. This principle is directly inspired by the behavior of the human visual system. Furthermore, foveated images are used instead of classical images (sampled at a constant step in two orthogonal directions). Consequently, every image processing algorithm, even a basic one, has to be redefined to fit foveated images.

In particular, an algorithm for the detection of points of interest on foveated images has been set up in order to determine sequences of points of interest. These sequences are compared to those obtained from human observers by eye-tracking in order to validate the computational process. A comparison between detection of points of interest on regular images and on foveated images has also been made. Results show that detection on foveated images is more efficient, because it suppresses noise that is far enough from the focusing point while still detecting the significant points of interest. This is particularly interesting as the amount of data to process is greatly decreased by the radial re-sampling step.

In future work the two sequences of points of interest must be compared more accurately and their differences analyzed. Furthermore, the computed sequence is the basis for the animation of the bionic eye in order to discover a new scene dynamically. Such a process assumes that the bionic eye is servo-controlled in several directions.

#### **8. References**

Archambeau C.; Delbeke J.; Veraart C. & Verleysen M. (2004). Prediction of visual perceptions with artificial neural networks in a visual prosthesis for the blind. *Artificial Intelligence in Medicine*, 32(3), pp 183-194

Chun DW.; Heier JS. & Raizman MB. (2005). Visual prosthetic device for bilateral end-stage macular degeneration. *Expert Rev Med Devices*, 2(6), pp 657-665

Coulombe J.; Sawan M. & Gervais J. (2007). A highly flexible system for microstimulation of the visual cortex: Design and implementation. *IEEE Transactions on Biomedical Circuits and Systems*, Vol. 1, No. 4, (Dec. 2007), pp 258-269, ISSN 1932-4545

Djilas M.; Oles C.; Lorach H.; Bendali A.; Degardin J.; Dubus E.; Lissorgues-Bazin G.; Rousseau L.; Benosman R.; Ieng SH.; Joucla S.; Yvert B.; Bergonzo P.; Sahel J. & Picaud S. (2011). Three-dimensional electrode arrays for retinal prostheses: modeling, geometry optimization and experimental validation. *J Neural Eng*, 8(4)

Dobelle WH. (2000). Artificial vision for the blind by connecting a television camera to visual cortex. *ASAIO Journal*, 46, pp 3-9

Faria J.; Lopes S.; Fernandes H.; Martins P. & Barroso J. (2010). Electronic white cane for blind people navigation assistance. *World Automation Congress (WAC'10)*, 19-23 Sept. 2010, pp 1-7, Kobe, Japan

Flandre D.; Bulteel O.; Gosset G.; Rue B. & Bol D. (2011). Disruptive ultra-low-leakage design techniques for ultra-low-power mixed-signal microsystems. *Faible Tension Faible Consommation (FTFC)*, pp 1-4

Harris C. & Stephens M. (1988). A combined corner and edge detector. *Proceedings of the 4th Alvey Vision Conference*, pp 147-151, Aug.-Sept. 1988, Manchester, UK

Hernandez T.; Levitan C.; Banks M. & Schor C. (2008). How does saccade adaptation affect visual perception? *Journal of Vision*, Vol. 8, No. 3, Jan. 2008, pp 1-16

Horsager A.; Greenberg RJ. & Fine I. (2010). Spatiotemporal interactions in retinal prosthesis subjects. *Invest Ophthalmol Vis Sci*, 51, pp 1223-1233

Hub A.; Hartter T. & Ertl T. (2006). Interactive tracking of movable objects for the blind on the basis of environment models and perception-oriented object recognition methods. *Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility (Assets '06)*, Portland, Oregon, USA, pp 111-118, ACM, New York, NY, USA

Ings S. (2007). Making eyes to see. *The Eye: A Natural History*, London: Bloomsbury, pp 276-283

Kim D.-H.; Viventi J. et al (2010). Dissolvable films of silk fibroin for ultrathin conformal bio-integrated electronics. *Nature Materials*, Vol. 9, Apr. 2010, pp 511-517

Lane SS.; Kuppermann BD.; Fine IH.; Hamill MB.; Gordon JF.; Chuck RS.; Hoffman RS.; Packer M. & Koch DD. (2004). A prospective multicenter clinical trial to evaluate the safety and effectiveness of the implantable miniature telescope. *Am J Ophthalmol*, 137(6), pp 993-1001

Lane SS. & Kuppermann BD. (2006). The Implantable Miniature Telescope for macular degeneration. *Curr Opin Ophthalmol*, 17(1), pp 94-98

Larson A. & Loschky L. (2009). The contributions of central versus peripheral vision to scene gist recognition. *Journal of Vision*, Vol. 9, No. 6, Jan. 2009, pp 1-6

Loudin J.; Simanovskii D.; Vijayraghavan K.; Sramek C.; Butterwick A.; Huie P.; McLean G. & Palanker D. (2007). Optoelectronic retinal prosthesis: system design and performance. *J Neural Engineering*, 4(1), pp 572-584

Marr D. (1982). *Vision: A Computational Investigation into the Human Representation and Processing of Visual Information*, W.H. Freeman, San Francisco

Parikh N.; Itti L. & Weiland J. (2010). Saliency-based image processing for retinal prostheses. *J Neural Eng*, 7

Piedade M.; Gerald J.; Sousa LA.; Tavares G. & Tomas P. (2005). Visual neuroprosthesis: a non-invasive system for stimulating the cortex. *IEEE Transactions on Circuits and Systems I*, Vol. 52, No. 12, (Dec. 2005), pp 2648-2662, ISSN 1549-8328

Robert F. & Dinet E. (1999). Biologically inspired pavement of the plane for image encoding. *Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation (CIMCA'99)*, pp 1-6, Feb. 1999, Vienna, Austria

Robert-Inacio F.; Stainer Q.; Scaramuzzino R. & Kussener E. (2010). Visual attention simulation in RGB and HSV color spaces. *Proceedings of the 4th IS&T International Conference on Colour in Graphics, Imaging and Vision (CGIV 2010)*, pp 19-26, Joensuu, Finland

Sarpeshkar R. (2010). *Ultra Low Power Bioelectronics: Fundamentals, Biomedical Applications and Bio-inspired Systems*, Cambridge University Press, ISBN 978-0-521-85727-7

Sawan M.; Gosselin B. & Coulombe J. (2008). Learning from the primary visual cortex to recover vision for the blind by microstimulation. *Proceedings of the Norchip Conference*, pp 1-4, ISBN 978-1-4244-2492-4, Tallinn, Estonia

Tong F. (2003). Primary visual cortex and visual awareness. *Nature Reviews: Neuroscience*, 4, pp 219-229

Troyk P.; Bak M.; Berg J.; Bradley D.; Cogan S.; Erickson R.; Kufta C.; McCreery D.; Schmidt E. & Towle V. (2003). A model for intracortical visual prosthesis research. *Artificial Organs*, 27(11), pp 1005-1015

Wyatt J. (2011). The retinal implant project. *Research Laboratory of Electronics (RLE) Report, Massachusetts Institute of Technology*, Chapter 19, 20 March, pp 1-11

Yagi T. (2005). Vision prosthesis (artificial retina). *The Cell*, 37(2), pp 18-21

Zrenner E. (2008). Visual sensations mediated by subretinal microelectrode arrays implanted into blind retinitis pigmentosa patients. *Proc. 13th Ann. Conf. of the IFESS*, Freiburg, Germany



## **Methods of Measurement and Evaluation of Eye, Head and Shoulders Position in Neurological Practice**

Patrik Kutilek<sup>1</sup>, Jiri Hozman<sup>1</sup>, Rudolf Cerny<sup>2</sup> and Jan Hejda<sup>1</sup>
*<sup>1</sup>Czech Technical University in Prague, Faculty of Biomedical Engineering*
*<sup>2</sup>Charles University in Prague, Department of Neurology, 2nd Faculty of Medicine*
*Czech Republic*

## **1. Introduction**


The position of the eye, head and shoulders can be negatively influenced by many diseases of the nervous system, particularly by visual and vestibular disorders (Cerny R. et al, 2006). Disturbances of the cervical vertebral column are another frequent cause of abnormal head position. In this chapter we describe advanced methods of measuring the precise position of the eye, head and shoulders in space. The systems and methods are designed for use in neurology, to discover relationships between some neurological disorders (such as disorders of the vestibular system) and postural head alignment. We have designed a system and a set of procedures for evaluating the inclination (roll), flexion (pitch) and rotation (yaw) of the head, and the inclination (roll) and rotation (yaw) of the shoulders, with resolution and accuracy from 1° to 2° (Hozman et al, 2007). We also deal with systems designed for parallel measurement of eye and head positions, and a new portable system for studying eye and head movements at the same time is described as well (Charfreitag et al, 2008). The main goal of this study is to describe new systems and the possibilities of the present methods intended for diagnostics and therapy support in clinical neurology. Furthermore, we describe the benefits of each method for diagnosis in neurology.
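As a toy example of how such roll, flexion and rotation angles can be evaluated, the function below decomposes a 3×3 head-orientation rotation matrix using the generic Z-Y-X Euler convention. This is an assumed, illustrative decomposition, not the specific algorithm of the systems described in this chapter, whose axis conventions may differ.

```python
import math

def head_angles(r):
    """Decompose a 3x3 rotation matrix r (row-major nested lists)
    into roll (inclination), pitch (flexion) and yaw (rotation),
    in degrees, using the Z-Y-X Euler convention."""
    yaw = math.degrees(math.atan2(r[1][0], r[0][0]))
    # clamp guards against rounding slightly outside [-1, 1]
    pitch = math.degrees(math.asin(max(-1.0, min(1.0, -r[2][0]))))
    roll = math.degrees(math.atan2(r[2][1], r[2][2]))
    return roll, pitch, yaw
```

For the identity matrix (head in the reference posture) all three angles are zero; a pure rotation about the vertical axis changes only the yaw value.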

## **2. Background and related works**

The measurement of eye position is an important diagnostic instrument in both clinical and experimental examination of the human vestibular system (Cerny R. et al, 2006). Simultaneous measurement of head (Murphy et al, 1991) and shoulder position (Raine et al, 1997) could also contribute to a better definition of diseases affecting vestibular (labyrinthine) function in man.

#### **2.1 Clinical significance of head posture measurement**

Abnormal head posture (AHP) is an important clinical sign of disease in many medical specialities. AHP is a consequence of dysfunction of musculoskeletal, visual and vestibular systems (Brandt et al, 2003). AHP is of particular importance in childhood, when

Methods of Measurement and Evaluation of

& Wojtowicz, 1996).

Eye, Head and Shoulders Position in Neurological Practice 27

horizontal plane are also easily appreciated even by naked eye during examination. Little is known about head turn in vestibular syndromes. This type of deviation is hard to assess by observation only, indeed, only gross deviations in cases of ocular torticollis are used in clinical practice and regularly cited in literature. Vestibular imbalance due to unilateral labyrinthine failure causes vestibulospinal deviations towards the weaker labyrinth (Hautant reaction, Romberg deviation in standing with closed eyes etc.). It is reasonable to expect head turns of several degrees due to the functional imbalance between the activity horizontal channels. In contrast to the tilt reaction such a finding was not well described until now. Probably, this type of vestibular rotatocollis is compensated by spatial visual clues with open eyes and can be easily overlooked. In this situation, precise technique for

Last, but not least, precise 3D head position measurement has many potential implications for physical medicine and rehabilitation, particularly in the management and diagnosis of disorders affecting cervical spine. Head position in the sagittal plane is very variable and influenced by many factors, particularly habitual holding of the spine as a whole. Habitual head anteflexion with chronic overload of cervical and upper thoracic spine and muscle imbalance is typical consequence of uncompensated sedentary way of life, starting already in school age. Main reference for sagittal plane is so called Frankfort horizontal (line connecting meatus acusticus with the orbital floor or line connecting tragus with the outer eye canthus), see Figure 1. In most subjects this line is inclined forward bellow the space horizontal, in the extensor type of cervical positions is reclined backwards. The real position of Frankfort horizontal can vary more than 20 in the normative population, in comparison, the position of the eyes in frontal plane is held tightly within several degrees only (Harrison

a) Anatomical horizontal

b) Anatomical axis

Precise measurement of head position in rehabilitation and physical medicine is important not only for objective diagnosis of the cervical spine abnormalities, but also as a means of cervical kinesthesia assessment. In this test the ability of the tested subject to assume exact position in space without visual clues is examined (Palmgren et al, 2009). Normal subjects are able to attain desired position with precision of several degrees. Again, these differences are below the discrimination capacity of simple observation or protractor measurement. It is hypothesized, that abnormal setting of cervical proprioception can play important role in many conditions like whiplash injury syndrome, chronic tension headache, cervicogennic

Fig. 1. Anatomical Frankfort horizontal and axis.

head rotation measurement would be of paramount importance.

developmental abnormalities of different origin can manifest with AHP as a main clinical symptom. The differential diagnosis is broad and quantitative assessment of head position in space it is important for both treatment and evaluation of disease evolution. In an Italian study 73 children referred by paediatricians the most common cause of AHP was orthopaedic disease (congenital muscular torticollis, 35 cases) followed by ocular motor palsy (mostly superior oblique palsy, 25 cases). Neurological disease was found in 5 cases, in 8 cases no underlying disease was indentified (Nucci et al, 2005).

Most peculiar forms of AHP are due to cervical dystonia, a movement disorder due to the disturbance of motor control of cervical muscles. Exact pathophysiology of this disabling and hard to treat condition is not known and includes local, suprasegmental and psychological factors. It can be classified according to the abnormal positioning of the head and spine into ante/retrocollis (sagittal plane), laterocollis (frontal plane) and rotatocollis (horizontal plane), pure forms are rare, typical is combination (torticollis). The pattern of muscles involved in generation of the AHP can be inferred from the head position. Objective and quantitative measurement of head position is of great importance, as treatment with botulotoxin (nowadays first choice) requires exact identification of muscles involved in AHP generation and follow up of treatment efficacy with objective head positions recordings is important for choosing optimal long term treatment strategy. Standard assessment scales for torticollis use semiquantative clinical scores or simple goniometers with low precision (Galardi et al, 2003; Novak et al, 2010).

Blockades and disease of cervical spine due to spondylosis or trauma are very common cause of AHP in clinical practice. Here the quantitative head posture measurement is not imperative, but simple objective recording of abnormality evolution can be useful in chronic cases and when cervical spine surgery is considered.

AHP is a frequent and important sign in ophthalmology, particularly in childhood. It represents compensation of abnormal eye position and/or motility. Paralyses of eye muscles are compensated by a tilt of the head in direction of the weakened muscle. In congenital nystagmus the AHP tends to shift gaze direction in the null zone of the nystagmus. As a result of the compensatory head position, the vision acuity is enhanced or restored, but unbalanced muscle activation can lead to cervical spine disorders in the long term. Surgical procedures aimed at correction of the eyeball position are effective in repairing the AHP and are considered treatment of choice (artificial divergence, Kestenbaum surgery). The dosage of ocular muscle retroposition/resection depends on the angle of AHP with fixation of distant target. The reduction of abnormal head turn with 1mm muscle resection was 1.4 head turn on average in one study (Gräf et al, 2001).

Ocular tilt reaction is a well established symptom of dysfunction of the graviceptive pathways starting from the otholithic maculae of the inner ear to the vestibular nuclei and paramedian thalamus. This syndrome is defined by the triad of signs – head tilt, ocular globe rotation a deviation of the subjective visual vertical. All deviations directs towards the weak labyrinth, or to the contralateral side after crossing at the pontine level, in the case of brain stem lesions. Head tilt in the frontal plane is usually quickly compensated, after the acute phase is over, but more subtle signs (ocular rotation and subjective vertical) can last for weeks and months. Horizontal eyes alignment is precisely regulated within narrow range of several degrees (Halmagyi et al, 1991), (Brandt & Dieterich, 1994). Deviations in the

developmental abnormalities of different origin can manifest with AHP as a main clinical symptom. The differential diagnosis is broad and quantitative assessment of head position in space it is important for both treatment and evaluation of disease evolution. In an Italian study 73 children referred by paediatricians the most common cause of AHP was orthopaedic disease (congenital muscular torticollis, 35 cases) followed by ocular motor palsy (mostly superior oblique palsy, 25 cases). Neurological disease was found in 5 cases, in

Most peculiar forms of AHP are due to cervical dystonia, a movement disorder caused by disturbed motor control of the cervical muscles. The exact pathophysiology of this disabling and hard-to-treat condition is not known and involves local, suprasegmental and psychological factors. According to the abnormal positioning of the head and spine, it can be classified into ante-/retrocollis (sagittal plane), laterocollis (frontal plane) and rotatocollis (horizontal plane); pure forms are rare, and a combination (torticollis) is typical. The pattern of muscles involved in generating the AHP can be inferred from the head position. Objective and quantitative measurement of head position is of great importance: treatment with botulinum toxin (nowadays the first choice) requires exact identification of the muscles involved in generating the AHP, and follow-up of treatment efficacy with objective head-position recordings is important for choosing the optimal long-term treatment strategy. Standard assessment scales for torticollis use semiquantitative clinical scores or simple goniometers with low precision (Galardi et al, 2003; Novak et al, 2010).

Blockades and diseases of the cervical spine due to spondylosis or trauma are a very common cause of AHP in clinical practice. Here quantitative head-posture measurement is not imperative, but a simple objective record of the evolution of the abnormality can be useful in chronic cases and when cervical spine surgery is considered.

AHP is a frequent and important sign in ophthalmology, particularly in childhood; in one reported series of children, in 8 cases no underlying disease was identified (Nucci et al, 2005). It represents compensation of abnormal eye position and/or motility. Paralyses of eye muscles are compensated by a tilt of the head in the direction of the weakened muscle. In congenital nystagmus the AHP tends to shift the gaze direction into the null zone of the nystagmus. As a result of the compensatory head position, visual acuity is enhanced or restored, but unbalanced muscle activation can lead to cervical spine disorders in the long term. Surgical procedures aimed at correcting the eyeball position are effective in repairing the AHP and are considered the treatment of choice (artificial divergence, Kestenbaum surgery). The dosage of ocular muscle retroposition/resection depends on the angle of AHP during fixation of a distant target; the reduction of abnormal head turn with 1 mm of muscle resection was 1.4° on average in one study (Gräf et al, 2001).

Ocular tilt reaction is a well-established symptom of dysfunction of the graviceptive pathways running from the otolithic maculae of the inner ear to the vestibular nuclei and paramedian thalamus. This syndrome is defined by a triad of signs: head tilt, ocular globe rotation and deviation of the subjective visual vertical. All deviations are directed towards the weak labyrinth or, in the case of brain stem lesions, to the contralateral side after crossing at the pontine level. Head tilt in the frontal plane is usually quickly compensated once the acute phase is over, but more subtle signs (ocular rotation and subjective vertical) can last for weeks and months. Horizontal eye alignment is precisely regulated within a narrow range of several degrees (Halmagyi et al, 1991), (Brandt & Dieterich, 1994). Deviations in the horizontal plane are also easily appreciated even by the naked eye during examination. Little is known about head turn in vestibular syndromes. This type of deviation is hard to assess by observation alone; indeed, only gross deviations in cases of ocular torticollis are used in clinical practice and regularly cited in the literature. Vestibular imbalance due to unilateral labyrinthine failure causes vestibulospinal deviations towards the weaker labyrinth (Hautant reaction, Romberg deviation when standing with closed eyes, etc.). It is reasonable to expect head turns of several degrees due to the functional imbalance between the activity of the horizontal canals. In contrast to the tilt reaction, such a finding has not been well described until now. Probably this type of vestibular rotatocollis is compensated by spatial visual clues with open eyes and can be easily overlooked. In this situation, a precise technique for head rotation measurement would be of paramount importance.

Last but not least, precise 3D head position measurement has many potential applications in physical medicine and rehabilitation, particularly in the management and diagnosis of disorders affecting the cervical spine. Head position in the sagittal plane is highly variable and influenced by many factors, particularly the habitual posture of the spine as a whole. Habitual head anteflexion, with chronic overload of the cervical and upper thoracic spine and muscle imbalance, is a typical consequence of an uncompensated sedentary way of life, starting already at school age. The main reference for the sagittal plane is the so-called Frankfort horizontal (the line connecting the meatus acusticus with the orbital floor, or the line connecting the tragus with the outer eye canthus), see Figure 1. In most subjects this line is inclined forward below the spatial horizontal; in the extensor type of cervical posture it is reclined backwards. The actual position of the Frankfort horizontal can vary by more than 20° in the normative population; in comparison, the position of the eyes in the frontal plane is held tightly within several degrees only (Harrison & Wojtowicz, 1996).

#### Fig. 1. Anatomical Frankfort horizontal (a) and anatomical axis (b).

Precise measurement of head position in rehabilitation and physical medicine is important not only for objective diagnosis of cervical spine abnormalities, but also as a means of assessing cervical kinesthesia. In this test the ability of the subject to assume an exact position in space without visual clues is examined (Palmgren et al, 2009). Normal subjects are able to attain the desired position with a precision of several degrees. Again, these differences are below the discrimination capacity of simple observation or protractor measurement. It is hypothesized that an abnormal setting of cervical proprioception can play an important role in many conditions such as whiplash injury syndrome, chronic tension headache, cervicogenic vertigo, anteflexion headache etc. (Raine & Twomey, 1997). Evidence of abnormal cervical proprioception would be an important step towards a better understanding of these common clinical problems.

#### **2.2 Monitoring head and shoulders movements**

At present, the orthopedic goniometer is the standard, widely used way to measure angles simply and rapidly in clinical practice. However, it has limitations, especially for head and shoulder posture measurement: because three movement components combine in three-dimensional space, measurement with a single goniometer is clearly insufficient. The following overview enumerates applications of the technology available during recent years. The enumeration is not exhaustive, but the most important works in the area are included; each method is characterized by the particular tools or technology it uses.
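The claim that a single goniometer cannot capture combined head movement can be illustrated numerically: finite rotations about different axes do not commute, so one planar angle cannot summarize a 3D posture. The sketch below makes this concrete; the axis convention and the specific angles are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def rot_x(a):  # inclination (roll) about the x-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # flexion (pitch) about the y-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # rotation (yaw) about the vertical z-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Illustrative head posture: 5 deg roll, 20 deg pitch, 10 deg yaw.
roll, pitch, yaw = np.radians([5.0, 20.0, 10.0])

# Composing the same three angles in two different orders produces two
# different orientations -- one planar angle cannot describe the result.
R1 = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
R2 = rot_x(roll) @ rot_y(pitch) @ rot_z(yaw)
print(np.allclose(R1, R2))  # -> False
```

This is why the specialized systems reviewed below measure all three planes at once rather than one angle at a time.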

Young, 1988, designed a new method to study head position using mirrors. The main principle of the approach is based on three mirrors and special head markers. The resulting images are taken by one camera. A set of vertical or horizontal lines is then drawn with respect to the reference points, i.e. the markers, and finally the relevant angles are measured with a protractor. Head tilt (inclination), head turn (rotation) and chin elevation or depression (flexion/extension) are evaluated. One drawback is that the evaluation is based on vertical or horizontal lines defined by the reference markers, and thus suffers from the wide variation in cranial configuration found between patients and associated with age.

Murphy et al, 1991, described a system for measuring and recording cranial posture in a dynamic manner. Declination and inclination were measured by inclinometers, widely used instruments that measure angles of elevation or inclination of an object with respect to gravity, based on accelerometers. The inclinometer was attached to spectacle rims, and its voltages were processed by a modified universal data logger. The inclinometer was calibrated with a plastic visor and a perpendicular spirit level. However, the inclinometer principle does not allow measurement of head rotation.
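The inclinometer principle, and its limitation, can be sketched in a few lines: a static 3-axis accelerometer reads the gravity vector, from which pitch and roll follow directly, while rotation about the vertical (head rotation, yaw) leaves gravity unchanged and is therefore unobservable. The axis convention and function name below are assumptions for illustration, not Murphy et al's implementation.

```python
import math

def inclination_from_accelerometer(ax, ay, az):
    """Estimate pitch (ante/retroflexion) and roll (lateral inclination),
    in degrees, from a static 3-axis accelerometer reading in m/s^2.

    Assumed axis convention: x forward, y left, z up. Yaw (head rotation)
    does not change the measured gravity vector and cannot be recovered.
    """
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# A level sensor reads pure gravity on the z-axis:
print(inclination_from_accelerometer(0.0, 0.0, 9.81))  # -> (0.0, 0.0)
```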

Ferrario et al, 1994, integrated photographic and radiographic techniques with cephalometric and photographic measurements. The subjects were photographed and X-rayed in the same room, and a set of standardized marks was traced on all the records. On all photographs the soft tissues were traced, and the angle between the soft-tissue marks and the true vertical was calculated. The same angle was calculated on the cephalometric films, and the difference between the two measurements was used to compute the relative position of the soft and hard tissues. These new values were compared with the values previously observed. The main drawbacks are the exposure of patients to X-rays and the relatively time-consuming procedure.

Ferrario et al, 1995, developed a method based on television technology that was faster than conventional analysis. The subject's body and face were identified by 12 points. All subjects were pictured using a standardized technique for frontal views of the total body and lateral views of the neck and face. After 20 seconds of standing, two 2-second films were taken of each subject. The specified angles were calculated by an image analysis program after digitizing the recorded films.


Galardi et al, 2003, developed an objective method for measuring posture and voluntary movements in patients with cervical dystonia using Fastrack, a widely used commercial electromagnetic system consisting of a stationary transmitter station and four sensors placed on the patient's head. The head position in space was reconstructed from the sensor signals, and exact angle values were obtained in the axial, sagittal and coronal planes. One drawback is inaccuracy in determining the exact position in space: the relatively large sensors placed on the patient's head make the determination of the anatomical axes imprecise. A second drawback is that the accuracy of an electromagnetic system is negatively affected by other laboratory equipment.

Hozman et al, 2004, proposed a new method based on three digital cameras placed on a stand and appropriate image processing software. The method was designed for use in neurology to discover relationships between some neurological disorders (such as disorders of the vestibular system) and postural head alignment. The objective was to develop a technique for precise head posture measurement, in other words for measuring the native position of the head in 3D space, aimed at determining the differences between the anatomical coordinate system (ACS) and the physical coordinate system (PCS). Pictures of the head, marked on the tragus and the outer eye canthus, are taken simultaneously by three digital cameras aligned by a laser beam. Head position was measured with a precision of 0.5° in three planes (rotation-yaw, flexion-pitch and inclination-roll). Hozman et al, 2005, described a modified version of the system and reported results measured on normal subjects. The disadvantages are the complicated calibration and the impossibility of a frontal view of the measured subject (Hozman et al, 2007).

Cerny et al, 2006, described the second, advanced generation of the system. Head position was measured with a precision of 0.5° in three planes. The mean values of head position (100 healthy controls) were: retroflexion 21.7°; inclination to the right 0.2°; head rotation to the left 1.7°.

Meers et al, 2008, developed accurate methods for pinpointing the position of infrared LEDs using an inexpensive USB camera, with low-cost algorithms for estimating the 3D coordinates of the LEDs based on their known geometry. The LEDs are mounted in the frame of eye-glasses, yielding an accurate, low-cost head-pose tracking system. Experimental results demonstrate a head-pose tracking accuracy of better than 0.5° when the user is within one meter of the camera. However, the system does not define the anatomical axis of the head, and it cannot be adapted to measure anatomical angles.

Recently a number of instruments and tools based on commercial systems have been developed for evaluating the position of the head and shoulders. An example is the Zebris motion analysis system (zebris Medical GmbH). Its special instruments primarily allow studying the ranges of motion of the head and spine and the coordination of movement. A modified Zebris also allows studying the movement of the jaw and detects small misalignments of the lower jaw. The three-dimensional coordinates of the ultrasonic markers can be recorded with an overall scanning rate of 200 measurements per second. The modified system consists of a face bow with an integrated receiver module and an optimally positioned mandible measurement sensor close to the mandibular joint. Unfortunately, this system also does not define the anatomical axis of the head, and its adaptation is complicated.


There are other modified commercial diagnostic systems based on ultrasonic measurement (sonoSens Monitor), cameras (Vicon motion systems, LUKOtronic AS100/AS200), gyro-accelerometer sensors (Xsens motion trackers), etc. However, these systems have similar disadvantages, such as complex preparation, very large sensors, or the inability to accurately define the anatomical coordinate system.

#### **2.3 Monitoring eye and head movements**

Monitoring eye movements and plotting their trajectories have a long tradition in medical practice. The measurement of eye position is an important examination tool for understanding the human vestibular system, and it is used as a diagnostic tool in neurology and psychology (Brandt et al, 2003). Eye tracking is a widely used method of measuring the point of gaze or the motion of an eye relative to the head, and an eye tracker is a device for measuring eye position and movement. There are a number of methods of measuring eye movement. Eye trackers fall into three categories:

One type uses an attachment to the eye, such as a special contact lens with an embedded mirror or magnetic field sensor. Measurements with contact lenses have provided extremely sensitive recordings of eye movement. However, mechanical elements attached to the eye can negatively affect the patient's eye.

The second category uses electric potentials measured by electrodes placed around the eyes. The eyes are the origin of a steady electric potential field, and the signal derived from two pairs of contact electrodes placed on the skin around the eye is called the electrooculogram (EOG). The EOG is sensitive to the saccadic spike potentials from the ocular muscles. The electric potential field can be detected even in total darkness and when the eyes are closed.

The third category uses non-contact optical methods for measuring eye motion, known as videooculography (VOG). Optical methods are widely used for gaze tracking and are favoured for being non-invasive and inexpensive. Looking at the eye we can see its elements: the outer filamentous layer called the sclera, then the cornea, the iris and the pupil. Light, typically infrared, is reflected from the eye and sensed by a camera. Video-based eye trackers usually track the corneal reflection and the centre of the pupil over time. Videooculography in the IR spectrum usually uses infrared light produced by an LED (light emitting diode) with a wavelength of approximately λ = 880–940 nm. The VOG method in the IR spectrum detects the pupil using illumination that makes it appear completely black. The advantage of this method is relatively easy pupil detection and a good-quality reflection, most often using an IR LED. The disadvantage and limitation is the need to measure without access of visible light, i.e. in conditions that do not correspond to the patient's real situation.
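In its simplest form, the dark-pupil principle described above reduces to thresholding and a centroid. The sketch below (pure NumPy; the function name, threshold value and synthetic image are illustrative assumptions, not a real VOG implementation) shows the idea:

```python
import numpy as np

def pupil_centre(ir_image, threshold=40):
    """Locate the pupil centre in an IR eye image (2D uint8 array).

    Under IR illumination the pupil appears almost completely black, so a
    simple intensity threshold followed by a centroid of the dark pixels
    is often sufficient. The threshold is an assumed value that would be
    tuned per camera and illumination setup.
    """
    dark = ir_image < threshold                 # mask of pupil candidates
    ys, xs = np.nonzero(dark)
    if xs.size == 0:
        return None                             # no sufficiently dark region
    return float(xs.mean()), float(ys.mean())   # (x, y) centre in pixels

# Synthetic example: a dark 'pupil' disk on a bright background.
img = np.full((120, 160), 200, dtype=np.uint8)
yy, xx = np.mgrid[0:120, 0:160]
img[(xx - 80) ** 2 + (yy - 60) ** 2 < 15 ** 2] = 10
print(pupil_centre(img))  # -> (80.0, 60.0)
```

A production VOG system would add blob filtering, corneal-reflection detection and ellipse fitting on top of this basic step.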

Eye analysis in the visible light spectrum is far more complicated. The method is called passive, because the eye is scanned in the visible spectrum using diffuse visible light. The method without supplementary IR light is not only safer for the patient (IR light undesirably warms up the eye), but also much more practical, because it does not necessarily require suppression of background light. Detection can be based on the sclera–iris interface. Disadvantages of these methods are the uncontrolled lighting from scattered sources, considerable luminous artefacts and the high computational power required. Also the


accuracy of these methods is rather poor because, in contrast to the pupil, which is visible during measurement, the interface between the sclera and iris is often hidden.

For parallel measurement of head and eye position (Eui et al, 2007), the best way is to use a VOG method based on scanning the eye (Ruian et al, 2006) with a mobile set of video cameras and subsequent data post-processing, in the IR (infrared) or visible light spectrum (e.g. a nystagmogram, fixation of the eye on a projected target, etc.). The mobile set is then attached to a head position measurement system based, for example, on gyro-accelerometer sensors. This parallel measurement approach has not been systematically studied; both of the mentioned measurement methods have so far been examined only separately.

## **3. Precise advanced eye, head and shoulders position measurement**

Numerous systems for evaluating the positions of the eye and upper body parts are currently offered on the market, but their wider application is impeded by high cost and by inaccuracy, because these universal systems are not usually designed for studying a particular body part – the head and shoulders. In the following part of the chapter we describe the specialized systems designed at CTU Prague and other labs to measure the eye, head and shoulders posture precisely at the same time.

#### **3.1 Precise head and shoulders posture measurement**

Our new non-invasive head position measurement method was designed for use in neurology to discover relationships between some neurological disorders and postural alignment. The objective was to develop a technique for precise, non-invasive head posture measurement in 3D space. The technique is aimed at determining the differences between the anatomical coordinate system and the physical coordinate system with an accuracy of one to two degrees for inclination and rotation (Harrison & Wojtowicz, 1996). Pictures or recordings of the head, marked on the tragus and the outer eye canthus (see Figure 1), and of the shoulders, marked on the acromions, are taken simultaneously by cameras aligned by a laser beam, magnetometers or inertial systems.

Fig. 2. Geometry used for measuring the head position (top view).

In Eq. (4), *a1x* is the x-axis coordinate and *a1y* is the y-axis coordinate of the tragus in the profile image, and *a2x* is the x-axis coordinate and *a2y* is the y-axis coordinate of the outer eye canthus in the profile image. The vectors in Eq. (4) are

$$u = \left(a_{2x}[px] - a_{1x}[px],\; a_{2y}[px] - a_{1y}[px]\right), \tag{5}$$

$$v = \left(1,\; 0\right). \tag{6}$$

For evaluating shoulder inclination and shoulder rotation, we use a method similar to the one used for evaluating head position. A picture of the acromions on the shoulders is taken by the cameras. A medical doctor marks the acromions for easy location of these anatomical points in the pictures; if the clinical investigation is carried out by an experienced medical doctor, it is not necessary to apply coloured marks to the anatomical parts of the body before an examination with the camera system. The software can also determine the angular displacement of the head to the shoulders for rotation, using the formula

$$\Delta\varphi = \varphi - \eta, \tag{7}$$

where *η* is the angle of shoulder rotation (Figure 3) and *φ* is the angle of head rotation (Figure 2). The angular displacement of the head to the shoulders for inclination is

$$\Delta\sigma = \sigma - \zeta, \tag{8}$$

where *ζ* is the angle of shoulder inclination and *σ* is the angle of head inclination.

Fig. 3. Geometry used for measuring the shoulder position (top view).

In the way described above, based on identifying anatomical points with the use of two cameras (Figure 4), we can avoid influencing patients while we are measuring the inclination (roll), flexion (pitch) and rotation (yaw) of the head and shoulders.
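Assuming, as the text suggests, that the angular displacement of the head relative to the shoulders is simply the difference between the head and shoulder angles, the top-view computation can be sketched as follows. The marker coordinates and function name are hypothetical, chosen only to illustrate the geometry:

```python
import math

def marker_angle(p1, p2):
    """Angle (in degrees) of the line through two marker points as seen in
    the top-view image: the inter-tragus line for the head, the
    inter-acromion line for the shoulders."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

# Hypothetical pixel coordinates in one top-view image.
left_tragus, right_tragus = (210.0, 305.0), (430.0, 295.0)
left_acromion, right_acromion = (150.0, 340.0), (490.0, 345.0)

phi = marker_angle(left_tragus, right_tragus)      # head rotation
eta = marker_angle(left_acromion, right_acromion)  # shoulder rotation

# Relative angular displacement of the head to the shoulders (rotation),
# taken here as the difference of the two angles.
head_to_shoulder_rotation = phi - eta
print(round(head_to_shoulder_rotation, 2))
```

The inclination displacement would be computed the same way from the frontal-view image, substituting the inclination angles.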

The last proposed system is a combination of a camera and a gyroscope-accelerometer system. One camera is placed vertically above the patient's head. The second camera is positioned behind the patient so as not to impede the frontal view of the patient during the examination. The shoulders are marked in 3D space by two IR diodes, which define the positions of the two anatomical points (acromions); the diodes are placed on the patient's shoulders by the physician before the examination. The position of

In our recent method for studying head position only (Kutilek & Hozman, 2009), just two cameras are required to determine the head position, i.e. the anatomical coordinate system within the physical coordinate system. The rotation and inclination of the head are calculated from the difference between the tragus coordinates in the left-profile and right-profile images, or alternatively in the frontal and top-view pictures of the head. The coordinates of the left and right tragus are evaluated by finding the centre of the rounded mark attached to the tragus. After evaluating the tragus coordinates in the captured images, the angles of head rotation and inclination are calculated. The images are captured simultaneously by two calibrated cameras. The adjustment of the camera system defines the position of the head in the instrument coordinate system; the position of the head in the physical coordinate system is then determined by special software from the exactly known adjustment of the camera system.

Generally, the angles of rotation and inclination can be determined by vector *v* and vector *u*, which represent the coordinates of the points evaluated in the image, see Figure 2. The angle (for example rotation *φ*) is calculated as follows (1):

$$\varphi = \operatorname{arctg}\left(\frac{a\_{1y}[px] - b\_{1y}[px]}{a\_{1x}[px] - b\_{1x}[px]}\right) \tag{1}$$

where

$$\vec{u} = \left( a\_{1x}[px],\ a\_{1y}[px] \right) \tag{2}$$

$$\vec{v} = \left( b\_{1x}[px],\ b\_{1y}[px] \right) \tag{3}$$

Here *a1x* is the x-axis coordinate and *a1y* is the y-axis coordinate of the left tragus in the top-view image, and *b1x* is the x-axis coordinate and *b1y* is the y-axis coordinate of the right tragus in the top-view image. The angle of inclination can be determined in a similar way from the frontal-view image. After the angles have been calculated in the instrument coordinate system, the angles in the physical coordinate system are derived by a mathematical transformation.
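As an illustration, equation (1) can be sketched in a few lines of Python. This is not the authors' software: `math.atan2` is used instead of a plain arctangent so that the quadrant is handled automatically, and the pixel coordinates in the usage line are made-up values.

```python
import math

def rotation_angle_deg(a, b):
    """Head rotation per eq. (1): inclination of the line joining the
    left tragus a = (a1x, a1y) and the right tragus b = (b1x, b1y),
    both in pixel coordinates of the top-view image."""
    return math.degrees(math.atan2(a[1] - b[1], a[0] - b[0]))

# Hypothetical marker coordinates: markers at the same height give 0 degrees.
phi = rotation_angle_deg((320, 240), (220, 240))
```

The inclination angle is computed the same way from the two tragus markers in the frontal-view image.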

If we use profile photographs (side shots) and want to evaluate the elevation/depression of the head in the instrument coordinate system and the sagittal plane, this is also a mathematically simple problem. The flexion value is measured relatively, as the inclination of the line connecting the coordinates of the tragus and the exterior eye corner. The coordinates are evaluated by finding the centre of a rounded mark attached to the tragus and to the outer eye canthus of the patient. The angle between the anatomical horizontal and the physical or instrument horizontal (depending on the adjustment of the instrument) is determined as the angle between vector *v* (the horizontal vector, here given by the camera position) and vector *u*, which here represents the coordinates of the points (corresponding to the tragus marker and the exterior eye corner marker) evaluated in the image. The angle is calculated as follows (4):

$$\rho = \arccos\left(\frac{\vec{u} \cdot \vec{v}}{|\vec{u}| \cdot |\vec{v}|}\right) = \arccos\left(\frac{u\_x \cdot v\_x + u\_y \cdot v\_y}{\sqrt{u\_x^2 + u\_y^2} \cdot \sqrt{v\_x^2 + v\_y^2}}\right) \tag{4}$$

where


32 Advanced Topics in Neurological Disorders


$$\vec{u} = \left( a\_{1x}[px] - a\_{2x}[px],\ a\_{1y}[px] - a\_{2y}[px] \right) \tag{5}$$

$$\vec{v} = \left( 1,\ 0 \right) \tag{6}$$

*a1x* is the x-axis coordinate and *a1y* is the y-axis coordinate of the tragus in the profile image, and *a2x* is the x-axis coordinate and *a2y* is the y-axis coordinate of the outer eye canthus in the profile image.
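A minimal sketch of equations (4)–(6) follows: the flexion angle between the tragus-to-canthus vector *u* and the horizontal unit vector *v* = (1, 0). The coordinates below are hypothetical pixel values, not measured data.

```python
import math

def flexion_angle_deg(tragus, canthus):
    """Flexion per eqs. (4)-(6): angle between u = tragus - canthus
    and the image horizontal v = (1, 0), in degrees."""
    ux, uy = tragus[0] - canthus[0], tragus[1] - canthus[1]
    vx, vy = 1.0, 0.0  # eq. (6): horizontal vector given by the camera
    cos_angle = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    return math.degrees(math.acos(cos_angle))

# Hypothetical profile-image markers: tragus directly behind the eye corner.
rho = flexion_angle_deg((100, 10), (200, 10))
```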

Fig. 3. Geometry used for measuring the shoulder position (top view).

For evaluating shoulder inclination and shoulder rotation, we use a method similar to the one used for evaluating head position. A picture of the acromions on the shoulders is taken by the cameras. A medical doctor marks the acromions so that these anatomical points are easy to locate in the pictures. If the clinical investigation is carried out by an experienced medical doctor, it is not necessary to apply coloured marks to the anatomical parts of the body before an examination using the camera system. The software can also determine the angular displacement of the head relative to the shoulders for rotation, using the formula

$$
\kappa = \eta - \varphi \tag{7}
$$

where *η* is the angle of shoulder rotation (Figure 3) and *φ* is the angle of head rotation (Figure 2). The angular displacement of the head relative to the shoulders for inclination is

$$
\lambda = \zeta - \sigma \tag{8}
$$

where *ζ* is the angle of shoulder inclination and *σ* is the angle of head inclination.

In the way described above, based on identifying anatomical points with two cameras (Figure 4), we can avoid influencing the patient while measuring the inclination (roll), flexion (pitch) and rotation (yaw) of the head and shoulders.

The last proposed system is a combination of a camera system and a gyroscope-accelerometer system. One camera is placed vertically above the patient's head. The second camera is positioned behind the patient so as not to impede the frontal view of the patient during the examination. The shoulders are marked in 3D space by two IR diodes: the position of the two anatomical points (acromions) on the shoulders is defined by the two diodes, which are placed on the patient's shoulders by the physician before the examination. The position of the head's anatomical coordinate system is defined by a special instrument, see Figure 5. The physician puts the instrument on the patient's head and defines the position of the anatomical axes of the head by a special ruler on the instrument. The position of the head is defined by the two IR light diodes mounted on the instrument and by the gyroscope-accelerometer system. Rotation and inclination are then determined from the position of the two infrared diodes mounted on the instrument and by the two cameras. Flexion/extension is determined by an accurate gyroscope-accelerometer system made by the company TRIVISIO, which is also part of the special instrument. The proposed special instrument is shown in Figure 5.

Fig. 4. The two-arm stand with fixed two cameras and laser collimators designed for the precise adjustment of the system.

Fig. 5. Combined inertial system with infrared marker for detecting the position by IR camera system.

### **3.2 Measurement of eye and head position**


Despite the fact that an accurate method for measuring the head position and the eye position could contribute to diagnosis of the vestibular system, this issue has not been systematically studied (Hozman et al, 2008).

Horizontal and vertical eye movements can be measured from an image of the eye (Moore et al, 2006) by detecting the edges of the pupil (iris) and fitting them with an ellipse (Li, 2006). The main aim of the analysis of eye movements is to obtain the centre of the pupil or the iris; torsion measurement additionally needs a high-quality iris description. PAL (NTSC) video systems record at 50 Hz (60 Hz) non-interlaced, which is too slow to capture fast eye movements, e.g. for torsional iris description. Eye movements documented in medical practice require sampling rates of approximately 200–250 Hz; these movements exhibit angular velocities of approximately 400–450°/s.

We used a detection method which searches for interface points between the pupil and the iris, or between the iris and the sclera. These points form the basis for fitting a mathematical function (e.g. a circle or an ellipse). The goal of our solution is to relate the detected eye movements to the stimulation scene, which can be shown on an LCD screen or through the special HMD display unit in 2D or 3D space.

We used a new system based on finding the outline of the pupil of the eye. We applied a modified Starburst algorithm (Duchowski et al, 2009; Ruian et al, 2006), which we used in the IR spectrum or in the visible spectrum. The system and algorithm were first published by Iowa University in 2006 (Li, 2006). Thanks to the special 3D HMD projection displays, we used the Starburst algorithm for measurement in the IR spectrum with an appropriate LED diode to illuminate the eye. The method is called active because the eye is scanned in the infrared spectrum. The goal of the eye-movement measurement was to locate the centre of the pupil area. The method of finding the margins of the pupil (IR spectrum) or the iris (visible spectrum) is limited by the number of rays. The starting point shoots rays to generate candidate pupil points, and the candidate pupil points shoot rays back towards the start point to detect further candidate points. This two-stage detection method takes advantage of the elliptical profile of the pupil contour to preferentially detect features on the pupil contour.

Fig. 6. Location of the pupil center with the rays.
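The two-stage ray scheme can be sketched as follows. This is a simplified illustration of the Starburst idea on a synthetic image, not the published implementation: the gradient threshold, ray counts, fan width and test image are all illustrative assumptions.

```python
import numpy as np

def shoot_ray(img, start, angle, grad_thresh=50, max_len=100):
    """Walk along a ray from `start` and return the first pixel where the
    intensity rises by more than `grad_thresh` (a dark-pupil-to-iris edge)."""
    y0, x0 = start
    dy, dx = np.sin(angle), np.cos(angle)
    prev = img[int(y0), int(x0)]
    for r in range(1, max_len):
        y, x = int(y0 + r * dy), int(x0 + r * dx)
        if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]):
            return None
        cur = img[y, x]
        if int(cur) - int(prev) > grad_thresh:
            return (y, x)
        prev = cur
    return None

def starburst_candidates(img, seed, n_rays=18):
    """Two-stage Starburst-style detection: rays from the seed find edge
    candidates; each candidate shoots a small fan of rays back toward the
    seed to find candidates on the opposite side of the contour."""
    stage1 = []
    for a in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        p = shoot_ray(img, seed, a)
        if p is not None:
            stage1.append(p)
    stage2 = list(stage1)
    for (y, x) in stage1:
        back = np.arctan2(seed[0] - y, seed[1] - x)  # direction toward seed
        for a in back + np.linspace(-0.5, 0.5, 5):
            p = shoot_ray(img, (y, x), a)
            if p is not None:
                stage2.append(p)
    return stage2

# Demo on a synthetic image: dark pupil disk (radius 20 px) on a bright field.
img = np.full((100, 100), 200, dtype=np.uint8)
yy, xx = np.ogrid[:100, :100]
img[(yy - 50)**2 + (xx - 50)**2 <= 400] = 20

pts = starburst_candidates(img, seed=(48, 47))
center = np.mean(pts, axis=0)  # rough pupil centre before ellipse fitting
```

In the real algorithm the candidate points are then fitted with an ellipse (see the RANSAC step below in the text) rather than simply averaged.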


The circle (see Figure 6) shows the centre of the pupil after the second iteration (and after subsequent iterations) and includes the detected points. The iteration to find the centre of the pupil was stopped when the detected centre of the new points moved by less than d = 10 pixels. Because of the exponential calculation, the error of the pupil centre is about ±10 pixels over the whole circle bearing, which matters for how well the found points fit the resulting ellipse. Finally, the centre of the pupil is found from the resulting ellipse.

There are several possible methods for fitting the resulting ellipse. We chose the Random Sample Consensus (RANSAC) method (Ruian et al, 2006) to deal with points having large errors (Lee & Park, 2009). RANSAC is an efficient technique for fitting a model in the presence of a large but unknown percentage of outliers in the sample measurements. In our case, all of the detected interior points are probable points corresponding to the outline of the pupil. The RANSAC algorithm was used to estimate the parameters of the mathematical model from a set of observed data which contains outliers. On the basis of the MATLAB documentation we used an optimisation method to refine the ellipse, the Nelder-Mead algorithm.
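The RANSAC loop can be sketched as below. For brevity the model here is a circle rather than an ellipse (the chapter fits an ellipse and refines it with the Nelder-Mead algorithm); the sample size, iteration count and inlier tolerance are illustrative assumptions.

```python
import numpy as np

def fit_circle(pts):
    """Algebraic least-squares circle fit: solve x^2 + y^2 = 2*cx*x +
    2*cy*y + c for centre (cx, cy), with radius r^2 = c + cx^2 + cy^2."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(max(c + cx * cx + cy * cy, 0.0))
    return cx, cy, r

def ransac_circle(pts, n_iter=200, tol=2.0, seed=0):
    """RANSAC: repeatedly fit the model to a minimal random sample,
    count inliers, then refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        cx, cy, r = fit_circle(sample)
        d = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)
        inliers = pts[d < tol]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
    return fit_circle(best_inliers)

# Demo: samples of a circle (centre (50, 40), radius 20) plus random outliers.
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
good = np.column_stack([50 + 20 * np.cos(t), 40 + 20 * np.sin(t)])
outliers = np.random.default_rng(1).uniform(0, 100, (10, 2))
cx, cy, r = ransac_circle(np.vstack([good, outliers]))
```

The same loop works for an ellipse model; only the minimal sample size (five points) and the fitting routine change.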

For eye stimulation we used the commercial HMD system eMagin Z800 3DVisor, a personal display with an integrated head tracker which can measure head position in 3D space. The Z800 3DVisor combines two OLED (organic light-emitting diode) microdisplays with stereovision 3D capabilities. Stereo vision refers to the human ability to see in three dimensions, most often to depth perception (the ability to judge the approximate distance of objects). Stereovision 3D provides this experience by delivering two distinct images simultaneously on two separate screens, one for each eye. The Z800 3DVisor personal display is used to stimulate the eye in 2D or 3D space. The position of the eye and the position of the head can be recorded simultaneously to a laptop by the video camera and the integrated head tracker.

The integrated head tracker of the eMagin Z800 3DVisor HMD uses MEMS (micro-electromechanical system) accelerometers and gyroscopes to detect motion. The head tracker features three gyroscopes, one each for the x-, y- and z-axis, together with corresponding compasses and accelerometers to ensure performance over varying forms of motion. Such equipment had not been used before in medical practice. With contemporary technology it is also possible to use more accurate miniature 3D inertial measurement units (IMU) with accelerometer, magnetometer and gyroscope (for example Xsens motion technologies) and a custom-made Head Mounted Display (HMD). We used the head tracker for the measurement of head position.

For acquisition of head motion we programmed special software based on the Z800 3DVisor SDK 2.2. The software retrieves the position of the head from the built-in head tracker through the USB connection and saves the measured results into a CSV (comma-separated values) file. The result of a measurement can be presented graphically as a graph of the head position. With this setup we are able to measure eye and head movements continuously and simultaneously.
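The logging step might look like the following sketch. The column names and sample values are hypothetical, not the format of the actual SDK-based software.

```python
import csv
import io

# Hypothetical samples: (time [s], yaw, pitch, roll) in degrees, as a
# head tracker might report them over USB.
samples = [(0.00, 1.2, -0.4, 0.1), (0.02, 1.3, -0.5, 0.1)]

# Write the measurements as CSV (an in-memory buffer stands in for a file).
buf = io.StringIO()
w = csv.writer(buf)
w.writerow(["time_s", "yaw_deg", "pitch_deg", "roll_deg"])
w.writerows(samples)

# Read the log back, e.g. for plotting the head-position graph.
buf.seek(0)
rows = list(csv.DictReader(buf))
yaw = [float(r["yaw_deg"]) for r in rows]
```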

Several practical problems had to be solved during the measurements before the next biomedical tests: the weight of the unit, sharp edges on the semi-permeable mirrors, and the minimal space between the Z800 3DVisor personal display and the eyes.


Based on the previous type, we made new projection displays which allow tracking the projection on an LCD monitor or a projection screen in the visible spectrum. The detection algorithm can measure both the eye position and the scene position; the video files of the positions are merged into one data file. The second version of the projection displays has lower weight, does not contain any sharp edges and includes cameras connected over a USB (Universal Serial Bus) interface, so no special recorder is needed. Power is supplied over the USB port. For recording we used the TVideoGrabber software, which can set the capture parameters (30 FPS (frames per second), 640 x 480 pixels, RGB24, AVI data format). We used an external photographic flash for synchronisation between the two cameras. (In the future we will use the TVideoGrabber component with multiple threads, which will start recording from several video sources at the same time.)

The next type of our projection displays is designed for measurement in the IR spectrum. This third type uses IR USB cameras to record eye movements; it must use cameras which support eye-movement scanning in the IR spectrum because the lighting level is low. This type of projection display combines a unique system for measuring eye movements and head position with 2D or 3D stimulation.

The specialized glasses (projection displays) for neurological examination can be used without the LCD monitor thanks to the built-in HMD 3D projection displays eMagin Z800 3DVisor, so they can also be used as a mobile system. We use the experimental systems for monitoring eye movements under different luminous conditions (in the visible or IR spectrum) or with different stimulation sources (e.g. recording eye movements during specific activities as an "eye Holter", long-term eye-movement recording, 2D and 3D stimulation, etc.).

## **4. Interpretation and evaluation of eye, head and shoulders position**

For analysis of the positions we use simple methods based on a combination of the basic methods for evaluating the individual body parts. Below we describe methods for accurate interpretation of the measured data. Given that the systems provide processed data, the assessment is simple for physicians: they only need to observe the conditions of measurement, such as the precise adjustment of the system before the measurement, and to respect the maximum certified accuracy of the systems.

### **4.1 Evaluation of head and shoulders position**

It is a mathematically simple problem to determine the inclination, rotation and flexion/extension of the head from photographs and from the gyro-accelerometer sensor. The angle values are measured and transformed automatically to the physical coordinate system. The measurement process is usually carried out according to a predefined procedure, see Figure 7. The process is based on two main steps of the measurement and computational algorithm: the first part of the algorithm is designed for the precise adjustment of the system; the second part measures the patient's body segments and calculates the angles in the physical coordinate system.

Methods of Measurement and Evaluation of

**4.2 Eye and head movement analysis** 

i.e. shots, see Figure 8.

and simultaneously.

Fig. 8. Example of graph of pupil center movements

position [px]

Eye, Head and Shoulders Position in Neurological Practice 39

The Goal of our new designed methods is to use eye movements' detection together with the stimulation scene. Thanks to the special 3D HMD projection displays, we used Starburst algorithm for measuring in the IR spectrum and appropriate LED diode for illumination of the eye (Charfreitag et al, 2008). The Goal of the eye movements' measurement was to locate the centre of the pupil area (Stampe, 1993). We used a new system based on finding the contour line of the pupil of the eye. Finally, at the end, we can find the centre of the pupil from the resulting ellipse at the camera coordinate system

The second part was to use the headtracker to measure the head position. The first measured values were used as initial, i.e. zero and were used as correction for all subsequent values. The new systems provide direct information for physicians on the current position of the pupil centre represented by pixels or millimetres and patient's head represented by three angles, see Figure 9. There is no further information processing and the physician may use the data to evaluate patient's health. The head position was measured by modified 3D HMD (Z800 3DVisor ) with precision of 1.0 in three planes (Charfreitag et al, 2009). Thus, we can study the three dimensional motion of head defined by three angles – inclination, rotation and flexion/extension. By this method we can also study, analyze and measure eye and head movements continuously

> frame number [FPS]

position in the x-axis position in the y-axis

Fig. 7. Flowchart of clinical measurements using designed camera system.

The described systems provide direct information for physicians on the current position of the patient's head and patient's shoulders represented by the angles. There is no further information processing and the physician may use the data to evaluate patient's health. The designed systems measure head position with precision of 0,5º (Hozman et al, 2008) in three planes (rotation -yaw, flexion-pitch and inclination-roll). Our experimental measurement of the head position was completed with measurement of subjective perception of vertical (SPV). The subject tried to align a needle to vertical position when peering into white sphere. Final angle of the needle was measured. The measured data shows that healthy subject holds his head aligned with physical coordinate system in the range of ±5 degrees for inclination. The set of data was measured on recruited volunteers. The results also predict that there is a correlation between values of inclination and SPV.

#### **4.2 Eye and head movement analysis**

38 Advanced Topics in Neurological Disorders

Begin measurements

First measurement ?

Capture a picture of patient's head and shoulders

Calculate the inclinations, flexion and rotations of head and shoulders

Calculate the position of head to the shoulders

No

Yes

Capture pictures to correct the calculation

> Analyse images

Calculate the correction values

Display measurement results and store data in a database

between values of inclination and SPV.

Fig. 7. Flowchart of clinical measurements using designed camera system.

The described systems provide direct information for physicians on the current position of the patient's head and patient's shoulders represented by the angles. There is no further information processing and the physician may use the data to evaluate patient's health. The designed systems measure head position with precision of 0,5º (Hozman et al, 2008) in three planes (rotation -yaw, flexion-pitch and inclination-roll). Our experimental measurement of the head position was completed with measurement of subjective perception of vertical (SPV). The subject tried to align a needle to vertical position when peering into white sphere. Final angle of the needle was measured. The measured data shows that healthy subject holds his head aligned with physical coordinate system in the range of ±5 degrees for inclination. The set of data was measured on recruited volunteers. The results also predict that there is a correlation The Goal of our new designed methods is to use eye movements' detection together with the stimulation scene. Thanks to the special 3D HMD projection displays, we used Starburst algorithm for measuring in the IR spectrum and appropriate LED diode for illumination of the eye (Charfreitag et al, 2008). The Goal of the eye movements' measurement was to locate the centre of the pupil area (Stampe, 1993). We used a new system based on finding the contour line of the pupil of the eye. Finally, at the end, we can find the centre of the pupil from the resulting ellipse at the camera coordinate system i.e. shots, see Figure 8.

The second part was to use the head tracker to measure head position. The first measured values were taken as the initial, i.e. zero, reference and used as a correction for all subsequent values. The new system provides physicians with direct information on the current position of the pupil centre, expressed in pixels or millimetres, and of the patient's head, represented by three angles; see Figure 9. No further information processing takes place, and the physician may use the data to evaluate the patient's health. The head position was measured by a modified 3D HMD (Z800 3DVisor) with a precision of 1.0° in three planes (Charfreitag et al., 2009). Thus, we can study the three-dimensional motion of the head defined by three angles – inclination, rotation and flexion/extension. With this method we can also study, analyse and measure eye and head movements continuously and simultaneously.
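The zeroing step described above can be sketched in a few lines (an assumed illustration, not the chapter's code): the first head-tracker sample defines the reference posture, and every later sample is reported relative to it. The angle triples below are hypothetical values.

```python
# Zero-referencing head-tracker angles: report every sample relative
# to the first (reference) posture.

def zero_referenced(samples):
    """samples: list of (inclination, flexion, rotation) tuples in degrees.
    Returns the same series expressed relative to the first sample."""
    ref = samples[0]
    return [tuple(a - r for a, r in zip(s, ref)) for s in samples]

readings = [(2.0, -1.0, 0.5), (3.5, -1.0, 1.5), (2.0, 0.0, 0.5)]
print(zero_referenced(readings))
# the first sample maps to (0.0, 0.0, 0.0); the rest are offsets from it
```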

Fig. 8. Example of graph of pupil center movements


Fig. 9. Example of graph of head movements (flexion/extension, rotation and inclination angles [°] over time [ms])

## **5. Conclusion**

In this chapter we have described related work and presented specially designed equipment and measurement methods for very accurate evaluation of eye, head and shoulder position in neurological practice. Possible applications and perspectives for clinical practice are also described.

We have described systems and sets of procedures for evaluating the inclination, flexion and rotation of the head and the inclination and rotation of the shoulders with a resolution and accuracy of 2°, the minimum accuracy required in clinical practice. The described ways of measuring and evaluating eye, head and shoulder positions could also be applied in other areas of medicine and science.

Our designed systems are based on cameras or, alternatively, on gyro-accelerometer (inertial) sensors. The new two- or three-camera equipment designed to measure head and shoulder positions is cheaper and more accurate than sophisticated systems which use accelerometers and magnetometers. A second advantage of our camera system over conventional commercial systems such as the Zebris motion analysis system (zebris Medical GmbH), LUKOtronic AS100/AS200 (Lukotronic Lutz-Kovacs-Electronics OEG) or sonoSens Monitor (sensomotion, Inc.) is that it can measure a patient without mechanical elements influencing the patient's body segments, and that it allows direct detection of the anatomical axes of the patient's head and shoulders, which is not possible with current systems (Hozman et al., 2005). The systems based on two cameras have the cameras placed either on both sides (lateral profiles) or in front of and above the patient. This is a very important advantage for medical doctors, because they can perform various examinations which require open space in front of the face. Our systems based on a combination of infrared cameras and inertial sensors are likewise sufficient, and more accurate and cheaper than commercial systems, for broader use than just analyzing the position of the head and shoulders.
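The geometric core of camera-based posture measurement can be sketched as follows (a hedged illustration under stated assumptions, not the authors' calibrated multi-camera pipeline): given the image coordinates of two anatomical landmarks detected in one camera view (e.g. the two acromia for shoulder inclination), the roll/inclination angle is the angle of the connecting line against the image horizontal. The landmark pixel values below are hypothetical.

```python
import math

def inclination_deg(p_left, p_right):
    """Angle (degrees) of the line p_left→p_right above the image horizontal.
    Points are (x, y) pixel coordinates with y growing downwards."""
    dx = p_right[0] - p_left[0]
    dy = p_left[1] - p_right[1]  # flip sign: image y axis points down
    return math.degrees(math.atan2(dy, dx))

# Hypothetical landmark positions: right shoulder 20 px higher than the left
print(round(inclination_deg((100, 220), (300, 200)), 2))  # → 5.71
```

A full system would first calibrate each camera and map pixel coordinates to the patient's anatomical axes; this sketch shows only the single-view angle computation.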


The mean values of head position measured on 100 healthy controls were: retroflexion 21.7°, inclination to the right 0.2°, and head rotation to the left 1.7°. The rotation measurement has a greater error than the inclination and flexion/extension measurements (Kutilek & Hozman, 2009).

We have also described related systems and designed a system for monitoring eye movements. Our equipment for measuring eye and head movements is based on display units – specialized glasses with the eMagin 3DVisor. We modified these specialized projection displays for neurological examination so that measurements can be performed with a variable set of visual stimuli and active head movements. The solution combines a system for measuring eye movements and head posture in 3D space with 2D or 3D eye stimulation. We came to the conclusion that it is possible to join together these two important and closely related methods for measurement of the human vestibular system.

A result of this study is the recommendation to use video cameras with a higher frame rate (approximately 200 Hz) for the measurement of eye movements, and a head tracker with a lower dynamic error (less than 0.3°/s) for the measurement of head position. The overall accuracy of our designed system could increase significantly, because the accuracy of the method alone is within eighths of a degree over ten measurements; the limiting factor is the dynamic error of the low-cost head tracker, which needs a long time to stabilise after the previous measurement.

The above-described ways of measuring eye, head and shoulder position and motion could also be applied in other areas of engineering, medicine and science. Our systems can be used anywhere to study the posture of a person.

## **6. Acknowledgment**

The work presented here was carried out at the Czech Technical University in Prague, Faculty of Biomedical Engineering within the framework of research program No. MSM 6840770012 "Transdisciplinary Biomedical Engineering Research II" of the Czech Technical University, sponsored by the Ministry of Education, Youth and Sports of the Czech Republic.

## **7. References**


Brandt, T., Cohen, B., Siebold, Ch. (2003). *The Oculomotor and Vestibular Systems: Their Function and Disorders*, Vol. 1004, Ann. N.Y. Acad. Sci.

Brandt, T., Dieterich, M. (1994). Vestibular Syndromes in the Roll Plane: Topographic Diagnosis from Brain Stem to Cortex, *Annals of Neurology*, Vol. 36, pp. 337–347.

Cerny, R., Strohm, K., Hozman, J., Stoklasa, J., Sturm, D. (2006). Head in Space - Noninvasive Measurement of Head Posture, *The 11th Danube Symposium - International Otorhinolaryngological Congress*, Bled, pp. 39-42.

Cerny, R., Hozman, J., Charfreitag, J., Kutílek, P. (2009). Position of the head measured by digital photograph analysis, *World Congress on Medical Physics and Biomedical Engineering*, September 7-12, 2009, Munich, Germany [CD-ROM]. Berlin: Springer Science+Business Media, pp. 562-565. ISBN 978-3-642-03881-5.

Charfreitag, J., Hozman, J., Černý, R. (2008). Specialized glasses - projection displays for neurology investigation, *IFMBE Proceedings*, Berlin: Springer, Vol. 1, pp. 97-101. ISBN 978-3-540-89207-6.

Charfreitag, J., Hozman, J., Cerny, R. (2009). Measurement of eye and head position in neurological practice, *World Congress on Medical Physics and Biomedical Engineering*, September 7-12, 2009, Munich, Germany [CD-ROM]. Berlin: Springer Science+Business Media, pp. 57-60. ISBN 978-3-642-03881-5.

Duchowski, A. T., Medlin, E., Cournia, N. A., Murphy, H. A., Gramopadhye, A. K., Nair, S. N., Vorah, J., Melloy, B. J. (2002). 3D Eye Movement Analysis, *Behavior Research Methods, Instruments, & Computers*, Vol. 34, No. 4, pp. 18.

Eui, C. L., Kang, R. P. (2007). *A robust eye gaze tracking method based on a virtual eyeball model*, Seoul: Electronic Engineering of Yonsei University.

Ferrario, V., Sforza, C., Germann, D., Dalloca, L., Miani, A. (1994). Head Posture and Cephalometric Analyses: An Integrated Photographic/Radiographic Technique, *American Journal of Orthodontics & Dentofacial Orthopedics*, Vol. 106, pp. 257-264.

Ferrario, V., Sforza, C., Tartaglia, G., Barbini, E., Michielon, G. (1995). New Television Technique for Natural Head and Body Posture Analysis, *Cranio*, Vol. 13, pp. 247-255.

Galardi, G., Micera, S., Carpaneto, J., Scolari, S., Gambini, M., Dario, P. (2003). Automated Assessment of Cervical Dystonia, *Movement Disorders*, Vol. 18, No. 11, pp. 1358-1367.

Gräf, M., Droutsas, K., Kaufmann, H. (2001). Surgery for nystagmus related head turn: Kestenbaum procedure and artificial divergence, *Graefes Arch Clin Exp Ophthalmol*, Vol. 239, No. 5, pp. 334–341. doi:10.1007/s004170100270.

Halmagyi, M. G., Curthoys, I. S., Brandt, T., Dieterich, M. (1991). Ocular Tilt Reaction: Clinical Sign of Vestibular Lesion, *Acta Otolaryngologica*, Suppl. 481, pp. 47-50.

Harrison, A., Wojtowicz, G. (1996). Clinical Measurement of Head and Shoulder Posture Variables, *The Journal of Orthopaedic & Sports Physical Therapy (JOSPT)*, Vol. 23, pp. 353-361.

Hozman, J., Sturm, D., Stoklasa, J. (2004). Measurement of Head Position in Neurological Practice, *Biomedical Engineering*, Zürich: Acta Press, pp. 586-589. ISBN 0-88986-379-2.

Hozman, J., Sturm, D., Stoklasa, J., Cerny, R. (2005). Measurement of Postural Head Alignment in Neurological Practice, *The 3rd European Medical and Biological Engineering Conference - EMBEC´05*, Society of Biomedical Engineering and Medical Informatics of the Czech Medical Association JEP, Vol. 11, Prague, pp. 4229-4232.

Hozman, J., Zanchi, V., Cerny, R., Marsalek, P., Szabo, Z. (2007). Precise Advanced Head Posture Measurement, *The 3rd WSEAS International Conference on Remote Sensing (REMOTE´07)*, WSEAS Press, pp. 18-26.

Hozman, J., Kutílek, P., Szabó, Z., Krupička, R., Jiřina, M. (2008). Digital Wireless Craniocorpography with Sidelong Scanning by TV Fisheye Camera, *IFMBE Proceedings*, Berlin: Springer, Vol. 1, pp. 102-105. ISBN 978-3-540-89207-6.

Kutilek, P., Hozman, J. (2009). Non-contact method for measurement of head posture by two cameras and calibration means, *The 8th Czech-Slovak Conference on Trends in Biomedical Engineering*, Bratislava, pp. 51-54. ISBN 978-80-227-3105-8.

Lee, E. C., Park, K. R. (2009). A robust eye gaze tracking method based on a virtual eyeball model, *Machine Vision and Applications*, Vol. 20, No. 5, pp. 319-337.

Lee, J. J., Park, K. R., Kim, J. H. (2003). Gaze detection system under HMD environment for user interface, *Joint International conference ICANN/ICONIP 2003*, Istanbul, pp. 512-515.

Li, D. (2006). *Low-cost eye-tracking for human computer interaction*, Master's thesis, Iowa: Iowa State University.

Meers, S., Ward, K., Piper, I. (2006). Robust and Accurate Head-Pose Tracking Using a Single Camera, *Mechatronics and Machine Vision in Practice*, Berlin: Springer Science+Business Media, pp. 111-122. ISBN 978-3-540-74026-1.

Moore, S., Curthoys, I., Haslwanter, T., Halmagyi, M. (2006). *Measuring Three-Dimensional Eye Position Using Image Processing – The VTM System*, Sydney: Department of Psychology, University of Sydney.

Murphy, K., Preston, Ch., Evans, W. (1991). The Development of Instrumentation for the Dynamic Measurement of Changing Head Posture, *American Journal of Orthodontics and Dentofacial Orthopedics*, Vol. 99, No. 6, pp. 520-526.

Novak, I., Campbell, L., Boyce, M., Fung, V. S. (2010). Botulinum Toxin Assessment, Intervention and Aftercare for Cervical Dystonia and other Causes of Hypertonia of the Neck: International Consensus Statement, *European Journal of Neurology*, Vol. 17, Suppl. 2, pp. 94-108.

Nucci, P., Kushner, J. B., Serafino, M., Orzalesi, N. (2005). A Multi-Disciplinary Study of the Ocular, Orthopedic, and Neurologic Causes of Abnormal Head Postures in Children, *American Journal of Ophthalmology*, Vol. 140, pp. 65-68.

Palmgren, P. J., Andreasson, D., Eriksson, M., Hägglund, A. (2009). Cervicocephalic Kinesthetic Sensibility and Postural Balance in Patients with Nontraumatic Chronic Neck Pain – a Pilot Study, *Chiropractic & Osteopathy*, Vol. 17, No. 6. doi:10.1186/1746-1340-17-6.

Raine, S., Twomey, L. T. (1997). Head and Shoulder Posture Variations in 160 Asymptomatic Women and Men, *Archives of Physical Medicine and Rehabilitation*, Vol. 78, No. Nov., pp. 1215-1223.

Ruian, L., Shijiu, J., Xiaorong, W. (2006). *Single Camera Remote Eye Gaze Tracking Under Natural Head Movements*, Tianjin: College of Physics and Electronic Information Science, Tianjin Normal University.

Stampe, D. M. (1993). Heuristic filtering and reliable calibration methods for video-based pupil tracking systems, *Behavior Research Methods, Instruments and Computers*, Vol. 25, No. 2, pp. 137-142.


Young, J. D. (1988). Head Posture Measurement, *Journal of Pediatric Ophthalmology and Strabismus*, Vol. 25, No. 2, pp. 86-89.

**Mesenchymal Stromal Cells to Treat Brain Injury**

Ciara C. Tate and Casey C. Case *SanBio, Inc., Mountain View, California USA* 

#### **1. Introduction**


Brain injury occurs from either a traumatic (mechanical), ischemic (decreased oxygen; accounts for 83% of stroke cases), or hemorrhagic (ruptured blood vessel; accounts for 17% of stroke cases) insult to the brain. Stroke and traumatic brain injury (TBI) are major contributors worldwide to both deaths and persistent disabilities. Stroke is the third leading cause of death (behind heart disease and cancer) in the United States, with 137,000 Americans dying from stroke each year (Heron *et al.*, 2009). Stroke is the leading cause of serious, long-term disability in the United States. Currently, 795,000 people have a stroke each year and 15-30% of survivors have a permanent disability (Roger *et al.*, 2011). Annually, 1.7 million people sustain a TBI in the United States, resulting in 52,000 deaths and over 124,000 permanent disabilities each year (Faul *et al.*, 2010). Annual direct (e.g., medical) and indirect (e.g., loss of productivity) costs to the United States are \$41 billion and \$60 billion for stroke and TBI, respectively (Finkelstein *et al.*, 2006; Roger *et al.*, 2011).

Though the etiology differs between traumatic and ischemic injury, there are many similarities in their pathology (Bramlett & Dietrich, 2004; Leker & Shohami, 2002). The primary insult initiates a cascade of secondary events such as edema, excitotoxicity, and increases in free radicals, which act to spread the injury to surrounding tissue (for reviews of the pathology, see Greve & Zink, 2009 for TBI and Mitsios *et al*., 2006 for ischemic stroke). Note that ischemia is part of the secondary injury response for TBI (Coles, 2004; Garnett *et al.*, 2001). The brain attempts to repair and regenerate, but depending on such factors as injury severity, age of onset, and prior injuries, these endogenous attempts are often insufficient to restore normal function. A treatment that limits the spread of secondary damage and/or promotes repair and regeneration is needed. Current clinical treatment practices for TBI primarily aim to reduce intracranial pressure in an effort to minimize brain damage caused by swelling. For ischemic stroke, the only FDA-approved treatment is breaking down blood clots with tissue plasminogen activator. However, patients must meet strict criteria for receiving this therapy, including a 4-hour time window and no evidence of the following: bleeding, a severely elevated blood pressure or blood sugar, recent surgery, low platelet count, or end-stage liver or kidney disorders. Numerous pharmacological treatments that seemed promising in animal models have failed in clinical trials (Maas *et al.*, 2010; O'Collins *et al.*, 2006). Patients with brain injury vary widely with respect to demographics, severity of injury, location of injury, and co-morbidity factors, making clinical trials challenging. Most treatments previously tested involved pathways that are both deleterious and beneficial, making the dosage and timing critical to not interfere with

normal homeostasis or reparative mechanisms in the brain. Furthermore, these treatments targeted single mechanisms, which may not be enough in light of the multi-faceted pathology. Therapies that currently seem more promising, such as progesterone administration (Wright *et al.*, 2007) and cell transplantation, address multiple pathological events.

## **2. Mesenchymal stromal cells to treat brain injury**

## **2.1 Mesenchymal stromal cells (MSCs)**

Mesenchymal stem cells are multipotent cells that can differentiate into cells of the mesoderm germ layer. These cells can be isolated from adipose tissue, amniotic fluid, placenta and umbilical cord, though are most commonly and efficiently derived from adult bone marrow. Marrow-derived cells that adhere to tissue-culture plastic *in vitro* are a heterogeneous population of cells that contain mesenchymal stem cells, but the entire population is more correctly defined as mesenchymal stromal cells (Horwitz *et al.*, 2005). As we learn more about these cell populations, the terminology evolves and the acronym MSC is used (and sometimes misused) for mesenchymal stem cell, mesenchymal stromal cell, multipotent stromal cell, and marrow stromal cell. For the purposes of this chapter, we will not distinguish amongst these cell populations and use MSC as a general acronym.

#### **2.2 Using MSCs to treat brain injury**

MSCs are an attractive cell source for transplantation because they are relatively easy to obtain, expand, and manipulate *in vitro*. In addition, adult human MSCs do not have the tumorigenicity risks that pluripotent cells carry. Ample preclinical data demonstrate that MSC transplantation promotes functional recovery following experimental cerebral ischemic or TBI (for review, see Li & Chopp, 2009 or Parr *et al.*, 2007). Autologous MSC therapy has already shown promise for treating clinical stroke (Battistella *et al.*, 2011; Honmou *et al.*, 2011; Lee *et al.*, 2010; Suarez-Monteagudo *et al.*, 2009) and TBI (Cox *et al.*, 2011; Zhang *et al.*, 2008). Collectively, these trials demonstrate that transplanting MSCs either intra-arterially, intravenously, or intracerebrally is safe and no cell-related adverse events were reported. These groups also indicate that some patients receiving MSCs had improved functional outcome; however, these hints at efficacy must be cautiously interpreted because these were primarily safety trials and were not designed to show robust efficacy.

Important considerations for using MSCs in the clinic include timing (acute versus chronic), delivery route (most commonly intravenous, intra-arterial, or intracerebral), and donor source (autologous versus allogeneic). There are advantages and disadvantages for each of these issues, which are outlined in Table 1. According to www.clinicaltrials.gov (searched in August 2011; summarized in Table 2), there are 11 ongoing clinical trials worldwide using MSCs (either primary or derivatives) to treat stroke. Of these 11 studies, 5 are using autologous MSCs and the other 6 are using allogeneic MSCs from either bone marrow, placenta (1 study) or umbilical cord (1 study). Two of the trials are injecting cells directly into the injured brain (either into the injury cavity or the peri-infarct tissue), 1 trial is injecting cells into the carotid artery, and the other 8 are injecting MSCs intravenously. With regard to timing, 2 of the trials are delivering the cells during the acute phase (within 72 hours post-stroke), 7 trials during the sub-acute phase (between 4 days and 6 weeks post-stroke), and 2 studies are delivering cells during the chronic phase (over 6 months post-stroke). As trials more definitively reveal that MSC transplantation is both safe and effective for treating brain injury in humans, issues of delivery timing and route and donor source, as well as dosage and the use of immunosuppression, will need to be more carefully compared.

| **Issue** | **Options** | **Advantages** | **Disadvantages** |
|---|---|---|---|
| **Timing** | Acute phase | supports neuroprotection | volatile environment; strict timing may limit availability |
| | Chronic phase | supports regeneration; endogenous regeneration efforts are stabilized; easier to distinguish between effects of cell therapy and normal recovery; targets larger patient population | |
| **Delivery** | Intravenous or intra-arterial | less invasive; cells home to site of injury; better for repeat dosing | requires high cell numbers; possible systemic effects; cells accumulate in the lungs and spleen; requires blood brain barrier permeability (thus limits time window) |
| | Intracerebral | cells placed at site of injury | more invasive; extent and location of injury is variable |
| **Donor Source** | Autologous | immunocompatible | patients undergo additional procedures |
| | Allogeneic | MSCs are immunoprivileged; off-the-shelf treatment; cells can be manipulated *ex vivo* without treatment delays; more cost-effective | may require immunosuppression; requires storage of cell product |

Table 1. Clinical considerations for using MSCs to treat brain injury



**3. Mechanisms of action underlying beneficial effects**

Transplanting stem cells is attractive because they can potentially differentiate into multiple cell types and replace cells lost to injury or disease. MSCs normally give rise to cells along the mesodermal lineage (including bone, cartilage, and adipose tissue); however, there are reports suggesting that they can transdifferentiate into neural cells in certain *in vitro* (Sanchez-Ramos *et al.*, 2000; Woodbury *et al.*, 2000) and *in vivo* (Kopen *et al.*, 1999; Munoz-Elias *et al.*, 2004) environments. Though some studies show that a small percentage of donor MSCs express neuronal markers in the injured brain, there is little evidence that these cells functionally incorporate into the endogenous neuronal circuitry. In fact, there is a decided lack of evidence that neuronal replacement is the primary mechanism of action for MSC therapy; moreover, there are data demonstrating artifacts associated with MSC-to-neuron transdifferentiation (Barnabe *et al.*, 2009; Lu *et al.*, 2004; Neuhuber *et al.*, 2004; Phinney & Prockop, 2007; Wells, 2002). There is also the possibility that MSCs replace supporting glial cells (astrocytes, oligodendrocytes, or microglia), which outnumber neurons 10:1 in the brain (reviewed in Boucherie & Hermans, 2009). However, ample evidence shows that benefits and functional recovery occur rapidly and persist long after the donor cells are gone, indicating that permanent cell replacement is not required. The most likely governing mechanism is that MSCs provide trophic support to the injured brain, which augments endogenous repair and regeneration pathways. Trophic support, by definition, acts through secreted molecules called trophic factors. MSCs may thus act as mini-pumps, delivering beneficial factors to their microenvironment.

Using cells as pumps is preferable to engineered pumps because cells can deliver a plethora of factors at the site of injury in physiologic concentrations and can respond, with appropriate feedback, to the needs of the injured tissue. Trophic factors can either directly or indirectly (via a mediator cell) promote neuroprotection (enhancing cell survival through repair) or neuroregeneration. MSCs also secrete factors that augment angiogenesis, another important aspect of regeneration after brain injury. An additional likely mechanism of action contributing to the benefit of MSCs is immunosuppression. MSCs can affect immune cells via secreted factors, which would also fall under trophic support; for the purposes of this chapter, however, we treat immunosuppression as a separate category, since targeting immune function promotes recovery indirectly rather than by acting directly on neural or vascular cells. There is considerable overlap between these functions, and the categories are fluid. Figure 1 summarizes the hypothesized mechanisms of action for MSCs in the injured brain, which are mediated by secreted factors and direct cell-cell contacts.

**3.1 Terminology** 

Trophic support classically means providing nutrition, but the definition has been expanded to include promoting cellular growth, survival, differentiation, or migration. Similarly, the terms "trophic factor" and "growth factor" have also become more inclusive. Neurotrophic factors are trophic factors acting specifically on neural cells, i.e., promoting the growth, survival, differentiation, or migration of primarily neurons, but also glial cells (astrocytes, oligodendrocytes, microglia, and Schwann cells). The term neurotrophin is sometimes used synonymously with neurotrophic factor; however, the neurotrophins are a specific family of four structurally related proteins: nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), neurotrophin-3 (NT-3), and neurotrophin-4/5 (NT-4/5). The

