**Meet the editor**

Professor Dr. Fayez Bahmad Jr. is the Editor-in-Chief of *The International Tinnitus Journal*. He completed the ENT Residency Program at the University of Brasilia Hospital (Otolaryngology) and received his PhD at the University of Brasilia Medical School under the supervision of Prof. Carlos A. Oliveira, MD, PhD. Professor Oliveira is well known for establishing one of the most successful research groups in Otology in Brazil and South America. Professor Dr. Fayez Bahmad Jr. was awarded the prestigious Schuknecht Prize at The International Otopathology Society Meeting held in Boston in 2003. He was a Fellow in Otology and Neurotology at the Massachusetts Eye and Ear Infirmary and a Fellow in Human Genetics at the Seidman Laboratory, Department of Genetics, both at Harvard Medical School, where he was engaged in projects under the mentorship of Prof. Saumil N. Merchant, MD, PhD, one of the foremost professors and researchers in otology and otopathology at Harvard Medical School.

### Contents

Chapter 7 **Hearing Impairment in Professional Musicians and Industrial Workers — Profession-Specific Auditory Stimuli Used to Evoke Event-Related Brain Potentials and to Show Different Auditory Perception and Processing**

Edeltraut Emmerich, Marcus Engelmann, Melanie Rohmann and Frank Richter

**Section 4 Hearing Loss and Tinnitus**

Chapter 8 **A Combination of EGb 761 and Soft Laser Therapy in Chronic Tinnitus**

Klára Procházková, Ivan Šejna, Petr Schalek, Jozef Rosina and Aleš Hahn

**Section 5 Hearing Screening**

Chapter 9 **Technological Advances in Universal Neonatal Hearing Screening (UNHS)**

Stavros Hatzopoulos, Henryk Skarzynski and Piotr H Skarzynski

**Section 6 Hearing Loss and Cochlear Implants**

Chapter 10 **Bilateral Cochlear Implants, Minimizing Auditory Rehabilitation**

Miguel A. Hyppolito and Eduardo T. Massuda

## Preface

*Update on Hearing Loss* is directed toward medical students, clinicians, and otolaryngologists. It provides detailed information on the many different forms of hearing loss and their treatment, as well as an overview of what is new and known about their pathophysiology.

Accordingly, this book does not cover all the different theories and management strategies of hearing loss, but it does present up-to-date information for those who deal with hearing loss in their clinical practice, such as otolaryngologists, neurologists, psychiatrists, neurosurgeons, clinical audiologists, dentists, and psychologists.

This book encompasses both the theoretical background of the different forms of hearing loss and detailed knowledge on state-of-the-art treatment, written for clinicians by specialists and researchers. Realizing the complexity of hearing loss has highlighted the importance of interdisciplinary research. Therefore, all the authors contributing to *Update on Hearing Loss* were chosen from many different specialties of medicine, like surgery, psychology, and neuroscience, and came from diverse areas of expertise, such as neurology, neurosurgery, audiology and speech therapy, otolaryngology, psychiatry, clinical and experimental psychology, pharmacology, dentistry, and neuroscience.

Many structures of the body, such as the ear, the auditory nervous system, the somatosensory system, other parts of the brain, and muscles of the head and the neck, are directly or indirectly involved in different forms of hearing loss. Treating and understanding the pathology of hearing loss require better knowledge of otopathology and the involvement of many specialties of medicine, such as surgery, psychology, and neuroscience.

Hearing loss may occur due to genetic defects, presbycusis, viral or bacterial infection, temporal bone trauma, noise exposure, or administration of ototoxic agents. Hearing loss is often accompanied by symptoms such as hyperacusis (lowered tolerance to sound) and distortion of sounds. Affective disorders such as phonophobia (fear of sound) and depression often occur in individuals with severe hearing loss.

Chapter 1 provides the reader with current knowledge on the Cochlear Model for Hearing Loss.

Chapter 2 describes the newest Classification of Hearing Loss.

Chapter 3 is an Update on Etiology and Epidemiology of Hearing Loss.

Chapter 4 discusses the Advances in Genetic Diagnosis and Treatment of Hearing Loss.

Chapter 5 is about Hearing Loss in Infectious and Contagious Diseases.

Chapter 6 presents a critical overview of Hearing Loss and Its Impact on Voice.

Chapter 7 discusses the components of Noise-Induced Hearing Loss.

Chapter 8 offers new alternative treatments of Tinnitus as Therapy with Laser and EGb 761.

Chapter 9 presents the Technological Advances in Universal Neonatal Hearing Screening.

Chapter 10 describes Cochlear Implantation in Hearing-Impaired Patients.

It is a huge challenge to translate the results from basic research into clinical practice, and all the authors have attempted to present the pathophysiological model in a clear way. Still, the principles on which it is based and its mechanisms are complex, and their understanding requires knowledge from various areas of neuroscience; the fact that hearing loss is not a simple disease necessitates the involvement of several disciplines of health care.

The editor would like to thank Ms. Iva Lipović for her support in the preparation of this book.

Special thanks go to the chapter authors.

#### **Fayez Bahmad Jr., MD, PhD**

President of the Brasiliense Institute of Otolaryngology
Professor at the Health Science Faculty, University of Brasilia, Brazil

**Section 1**

## **Basis of Hearing Loss**


#### **Chapter 1**

### **Cochlear Model for Hearing Loss**

#### Miriam Furst

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/61189

#### **Abstract**

In many psychoacoustical tasks, hearing-impaired subjects display abnormal audiograms and poor understanding of speech compared to normal listeners. Existing models that explain the performance of the hearing impaired indicate that possible sources for cochlear hearing loss may be the dysfunction of the outer and inner hair cells. In this study, a model of the auditory system is introduced. It includes two stages: (1) a nonlinear time domain cochlear model with active outer hair cells that are driven by the tectorial membrane motion and (2) a synaptic model that generates the auditory nerve instantaneous rate as a response to the basilar membrane motion and is affected by the inner hair cell transduction efficiency. The model can fit both a normal auditory system and an abnormal auditory system with easily induced pathologies.

In typical psychoacoustical detection experiments, the ability of subjects to perceive a minimum difference in a physical property is measured. We use the model presented here to predict these performances by assuming that the brain behaves as an optimal processor that estimates a particular physical parameter. The performance of the optimal processor is derived by calculating its lower bound. Since neural activity is described as a nonhomogeneous Poisson process whose instantaneous rate was derived, the Cramér–Rao lower bound can be analytically obtained for both rate coding and all-information coding.

We compared the model predictions of normal and abnormal cochleae to human thresholds of pure tones in quiet and in the presence of background noise.

**Keywords:** Cochlear model, outer hair cell, audiogram, hearing impairment, auditory nerve

#### **1. Introduction**

When sound waves enter the ear, they cause the basilar membrane (BM) that is located in the inner ear to vibrate. Since each place on the BM is tuned to a specific characteristic frequency

© 2015 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

(CF), the BM is able to separate the frequency components of sounds. The BM vibrations excite both the outer hair cells (OHC) and the inner hair cells (IHC). The OHCs act as local amplifiers, while the IHCs transduce the sound-induced vibrations into electrical impulses that propagate up to the auditory cortex through the fiber tracts of the auditory pathway, where the neural information is processed in a set of nuclei located in the auditory brainstem.

Damage can occur to the auditory system at any point along the auditory pathway. One of the most common impairments is OHC loss, frequently due to noise exposure. Often, when there is OHC loss, it is followed by IHC loss. Various diseases or old age can also injure different neurons along the auditory pathway.

Hearing impairment is characterized by abnormal audiograms and poor understanding of speech. The most frequent complaint is the inability to understand speech in a noisy environment. In many psychoacoustical tasks, hearing-impaired subjects yield poorer thresholds than normal listeners (see the review by Moore [1]). For example, in monaural experiments, hearing-impaired subjects perform poorly in frequency discrimination tasks and in signal detection with a noisy background.

Models explaining the performance of hearing-impaired people [e.g., 2–9] indicate that the possible sources for cochlear hearing loss are the dysfunction of the outer hair cells and the loss of inner hair cells. The dysfunction of the OHCs reduces the gain of the active mechanism, which then tends to broaden the tuning curve and decrease the nonlinear effects. However, these models do not adequately predict hearing impairment performance [10, 11].

The purpose of this chapter is to introduce a comprehensive, nonlinear time domain cochlear model [6, 12–14], followed by a model of the auditory nerve (AN) response [7, 13, 16, 17] that can be used to predict the hearing abilities of people with a normal cochlea as well as with an abnormal cochlea that suffers from OHC loss, IHC loss, or both.

Quantitative psychoacoustical measures that determine the human ability to detect the smallest difference in a physical property of a stimulus are usually implemented by forced-choice experiments. This difference is referred to as a "just-noticeable difference" (JND). Siebert [18] showed that if one assumes that the brain behaves as an optimal processor, then psychoacoustical JND measurements can be predicted from auditory nerve instantaneous rates. In this chapter, we use this approach to compare the model predictions to human hearing thresholds, both normal and impaired, in both a quiet environment and in the presence of background noise.
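To make the optimal-processor argument concrete, the sketch below computes the Cramér–Rao lower bound for a toy nonhomogeneous Poisson spike train. The rate shape `s(t)` and all parameter values are invented for illustration; they are not the chapter's fitted auditory nerve model.

```python
import numpy as np

# Toy illustration (not the chapter's fitted AN model): the Cramer-Rao lower
# bound on the variance of any unbiased estimator of a level parameter theta,
# when the observation is a nonhomogeneous Poisson spike train with rate
# lambda(t; theta) = theta * s(t).
# Fisher information: I(theta) = integral of (d lambda / d theta)^2 / lambda dt.

def poisson_crlb(theta, s, dt):
    """CRLB for lambda(t) = theta * s(t), observed over len(s) * dt seconds."""
    lam = theta * s                      # instantaneous rate (spikes/s)
    dlam_dtheta = s                      # derivative of the rate w.r.t. theta
    fisher = np.sum(dlam_dtheta ** 2 / lam) * dt
    return 1.0 / fisher                  # lower bound on Var(theta_hat)

dt = 1e-4
t = np.arange(0.0, 0.2, dt)                              # 200 ms observation
s = 100.0 * (1.0 + 0.5 * np.sin(2 * np.pi * 50.0 * t))   # invented rate shape
bound = poisson_crlb(theta=2.0, s=s, dt=dt)
```

For this multiplicative rate, the bound reduces analytically to theta divided by the expected spike count, so a louder or longer stimulus (more expected spikes) tightens the predicted JND.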

#### **2. The human ear model**

The mammalian ear is composed of the outer ear, the middle ear, and the inner ear. The outer ear includes the pinna, the ear canal, and the ear drum. The middle ear is an air-filled cavity behind the ear drum that contains three small ear bones, the ossicles. The inner ear includes a snail-shaped structure, the cochlea (see the schematic description in Figure 1A). Sound is directed by the outer ear through the ear canal to the ear drum. When sound strikes the ear drum, the movement is transferred through the three bones of the middle ear to a flexible tissue called the oval window, finally reaching the upper fluid-filled duct of the cochlea (see Figure 1). The upper cochlear duct is called the scala vestibuli, and the bottom duct is referred to as the scala tympani. The space between the top and bottom ducts is labeled the scala media.


The middle ear's task is to match the impedance of the air to that of the cochlear fluid. Movement of the fluid inside the upper cochlear duct results in a pressure difference between the upper and lower ducts. This pressure difference in turn causes the basilar membrane (the membrane that separates the scala tympani and scala media) to move.

Two types of auditory receptor cells inhabit the scala media: the inner and outer hair cells. The defining feature of these cells is the hair bundle on top of each cell. The hair bundle comprises dozens to hundreds of stereocilia, which are cylindrical actin-filled rods. The stereocilia are immersed in endolymph, a fluid that is rich in potassium and characterized by an endocochlear potential of +80 mV. The stereocilia move with the basilar membrane displacement. Their deflection opens mechanically gated ion channels that allow small, positively charged ions (primarily potassium and calcium) to enter the cell. The influx of positive ions from the endolymph in the scala media depolarizes the cell, resulting in a receptor potential. The roles of the OHCs and IHCs in the function of the cochlea are very different. While the OHCs act as local amplifiers, the IHCs innervate the auditory nerve. The OHCs lie on the basilar membrane, and their upper part is embedded in a gel-like membrane, the tectorial membrane (TM). An increase in the OHC receptor potential causes a decrease in its length [19], which in turn enhances the BM movement. The hair bundles of the IHCs move freely in the scala media. The change in their receptor potential opens voltage-gated calcium channels, triggering neurotransmitter release at the basal end of the cell, which in turn evokes action potentials in the attached nerve fibers.

**Figure 1.** Schematic representation of the cochlea: (A) the snail-shaped structure of the cochlea; (B) schematic description of the Organ of Corti, emphasizing that the BM and the TM are attached by the OHCs.

Modeling the human ear requires a detailed model of the cochlea and the middle and outer ears. A common approach is to model the inner ear as a one-dimensional structure [e.g., 6, 14, 20–23] with the cochlea regarded as an uncoiled structure with two fluid-filled compartments with rigid walls that are separated by an elastic partition, the basilar membrane. The cochlear partition, whose mechanical properties are describable in terms of point-wise mass density, stiffness, and damping, is regarded as a flexible boundary between scala tympani and scala vestibuli. Thus, at every point along the cochlear duct, the pressure difference *P*(*x*, *t*) across the partition drives the partition's velocity. By applying fundamental physical principles, such as the conservation of mass and the dynamics of deformable bodies, the differential equation for *P* is obtained by [e.g. 6]

$$\frac{\partial^2 P(x,t)}{\partial x^2} = \frac{2\rho\beta}{A}\,\frac{\partial^2 \xi\_{\rm bm}(x,t)}{\partial t^2}\,, \tag{1}$$

where *ξ*bm is the BM displacement, *A* represents the cross-sectional area of scala tympani and scala vestibuli, *β* is the BM width, and *ρ* is the density of the fluid in both the scala vestibuli and the scala tympani. The pressure on the BM (*P*bm) is a result of both the difference in fluid pressure and the pressure caused by the OHCs (*P*ohc). The relation between the pressures of BM, TM, and OHC is shown in Figure 1 [13], which can be interpreted as

$$\begin{aligned} P\_{\rm bm}\left(x,t\right) &= P\left(x,t\right) + P\_{\rm ohc}\left(x,t\right), \\ 0 &= P\_{\rm ohc}\left(x,t\right) + P\_{\rm tm}\left(x,t\right). \end{aligned} \tag{2}$$

The mechanical properties of both BM and TM are simulated as second-order oscillators that yield

$$\begin{aligned} P\_{\rm bm}\left(x,t\right) &= M\_{\rm bm}\left(x\right) \cdot \frac{\partial^2 \xi\_{\rm bm}}{\partial t^2}\left(x,t\right) + R\_{\rm bm}\left(x\right) \cdot \frac{\partial \xi\_{\rm bm}}{\partial t}\left(x,t\right) + K\_{\rm bm}\left(x\right) \cdot \xi\_{\rm bm}\left(x,t\right), \\ P\_{\rm tm}\left(x,t\right) &= M\_{\rm tm}\left(x\right) \cdot \frac{\partial^2 \xi\_{\rm tm}}{\partial t^2}\left(x,t\right) + R\_{\rm tm}\left(x\right) \cdot \frac{\partial \xi\_{\rm tm}}{\partial t}\left(x,t\right) + K\_{\rm tm}\left(x\right) \cdot \xi\_{\rm tm}\left(x,t\right), \end{aligned} \tag{3}$$

where *K*bm, *K*tm, *R*bm, *R*tm, *M*bm, and *M*tm are the effective stiffness, damping, and mass per unit area of BM and TM, respectively (see Table 1). The TM displacement is defined as *ξ*tm.

Since the OHCs lie between the two membranes, their displacement is considered as

$$
\xi\_{\rm ohc} = \xi\_{\rm tm} - \xi\_{\rm bm}. \tag{4}
$$

Each OHC is modeled by two sections, the apical and basal parts. The apical part faces the endolymph in the gap between the TM and the reticular lamina (RL), while the basolateral part is embedded in the perilymph next to the supporting cells that are aligned along the BM. When the OHC's stereocilia move due to the relative displacement of the BM and the TM, the conductance of the apical part of the OHC changes, which in turn causes a flow of potassium and calcium ions from the endolymph into the cell. Thus, a voltage drop develops across the basal part of the OHC membrane [24].

An outer hair cell model is described by an equivalent electrical circuit in Figure 2 [6, 25]. The apical part is represented by its variable conductance (*Ga* ≈ *α*·*ξ*ohc) and its constant capacitance (*Ca*), while the basal part is represented by its constant conductance and capacitance, *Gb* and *Cb*, respectively. The electrical potential of the endolymph is *V*sm = 80 mV, and the perilymph resting potential is *ψ*0 = −70 mV. Solving the equivalent electrical circuit by using Kirchhoff's laws [6] yields the differential equation for *ψ*ohc, the OHC's membrane voltage:

$$\frac{d\psi\_{\rm ohc}}{dt} + \omega\_{\rm ohc} \cdot \left(\psi\_{\rm ohc} - \psi\_0\right) = \eta \cdot \xi\_{\rm ohc}\,, \tag{5}$$

where *ω*ohc ≈ *Gb*/*Cb* = 1000 Hz represents the cutoff frequency of the OHC's membrane, and *η* = *α*·*V*sm/(*Cb* + *Ca*) = const. (see Table 1).

**Figure 2.** The equivalent electrical circuit of the outer hair cell.
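Equation (5) says the OHC membrane acts as a first-order low-pass filter on the stereocilia displacement. The sketch below integrates it with forward Euler; the values of *η*, *ψ*0, and the input signal are illustrative stand-ins, not the Table 1 parameters.

```python
import numpy as np

# Minimal sketch of Eq. (5): d(psi)/dt + omega_ohc * (psi - psi0) = eta * xi.
# The OHC membrane low-passes the stereocilia displacement xi_ohc.
# eta, psi0, and the constant input below are illustrative stand-ins.

def ohc_membrane_voltage(xi, dt, omega_ohc=1000.0, eta=1.0, psi0=-0.07):
    """Forward-Euler integration of the OHC membrane equation."""
    psi = np.empty_like(xi)
    v = psi0                             # start at the resting potential
    for i, x in enumerate(xi):
        v += dt * (eta * x - omega_ohc * (v - psi0))
        psi[i] = v
    return psi

dt = 1e-6
xi = np.full(10000, 1e-3)                # constant deflection, toy units
psi = ohc_membrane_voltage(xi, dt)
# for a constant input, psi settles at psi0 + eta * xi / omega_ohc
```

A constant deflection drives the voltage to *ψ*0 + *η*·*ξ*/*ω*ohc with time constant 1/*ω*ohc, which is the low-pass behavior that limits how fast the OHC motor can follow the stimulus.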


An OHC's length changes due to the electrical potential developed on the OHC membrane and is defined as Δ*l*ohc. It is usually described as a sigmoid function [26–28]:

$$
\Delta l\_{\rm ohc} = \alpha\_s \frac{e^{-2\alpha\_l \psi\_{\rm ohc}} - 1}{e^{-2\alpha\_l \psi\_{\rm ohc}} + 1} = \alpha\_s \tanh\left(-\alpha\_l \psi\_{\rm ohc}\right), \tag{6}
$$

where *αl* and *αs* are constants (see Table 1).
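The two forms in Eq. (6) are the same function; the check below verifies the identity numerically with arbitrary test constants (not the Table 1 values):

```python
import numpy as np

# Numerical check of Eq. (6): the exponential form of the OHC length change
# equals alpha_s * tanh(-alpha_l * psi). alpha_l and alpha_s below are
# arbitrary test values, not the fitted constants of Table 1.

def delta_l_exp(psi, alpha_l, alpha_s):
    e = np.exp(-2.0 * alpha_l * psi)
    return alpha_s * (e - 1.0) / (e + 1.0)

def delta_l_tanh(psi, alpha_l, alpha_s):
    return alpha_s * np.tanh(-alpha_l * psi)

psi = np.linspace(-0.1, 0.1, 201)        # membrane-voltage sweep (toy range)
a = delta_l_exp(psi, alpha_l=30.0, alpha_s=1e-6)
b = delta_l_tanh(psi, alpha_l=30.0, alpha_s=1e-6)
```

The tanh form makes the saturation explicit: the length change can never exceed *αs* in magnitude, which is what bounds the OHC's mechanical feedback at high stimulus levels.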

The pressure developed by each OHC (*P*ohc) is obtained from the spring properties of the OHC [6]. Let's define *γ*ohc(*x*) as the OHC effective index. It represents the effective distribution of the OHCs along the cochlear partition. Therefore, the OHC pressure is obtained by

$$P\_{\rm ohc}\left(x,t\right) = \gamma\_{\rm ohc}\left(x\right) \cdot K\_{\rm ohc}\left(x\right) \cdot \left[\xi\_{\rm ohc}\left(x,t\right) - \Delta l\_{\rm ohc}\left(x,t\right)\right],\tag{7}$$

where *K*ohc is the OHC's stiffness (Table 1). A cochlea with no active OHC is obtained by *γ*ohc(*x*)=0, whereas 0.5≤*γ*ohc(*x*)≤0.6 yielded an optimal cochlea that best fits physiological data [13].

The ear model described by Eqs. (1)–(7) is solved by applying initial and boundary conditions. The boundary conditions are

$$\left. \frac{\partial P(x,t)}{\partial x} \right|\_{x=0} = 2\rho C\_{\rm ow} \frac{\partial^2 \xi\_{\rm ow}(t)}{\partial t^2}\,, \qquad P\left(L\_{\rm co}, t\right) = 0, \tag{8}$$

where *L*co = 3.5 cm is the cochlear length, *ξ*ow is the oval window displacement, and *C*ow is the coupling factor of the oval window to the perilymph. In order to obtain *ξ*ow, the middle ear model was applied [29], as expressed by the following differential equation:

$$\frac{d^2\xi\_{\rm ow}(t)}{dt^2} + \gamma\_{\rm ow} \cdot \frac{d\xi\_{\rm ow}(t)}{dt} + \omega\_{\rm ow}^2\, \xi\_{\rm ow}(t) = \frac{1}{\sigma\_{\rm ow}} \Big[ P(0, t) + \Gamma\_{\rm me} P\_{\rm in}(t) \Big],\tag{9}$$

where *σ*ow is the oval window areal density, *γ*ow is the oval window resistance, and *ω*ow is the oval window resonance frequency. The mechanical gain of the ossicles is denoted by Γme (see Table 1). *P*in(*t*) is the input acoustic stimulus.
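Equation (9) is a damped, driven oscillator and can be integrated directly; the sketch below uses semi-implicit Euler with placeholder parameter values (not the Table 1 constants) and ignores the cochlear back-pressure *P*(0, *t*):

```python
import numpy as np

# Sketch of Eq. (9): the oval window as a damped second-order oscillator,
#   xi'' + gamma_ow*xi' + omega_ow^2*xi = (P(0,t) + Gamma_me*P_in(t)) / sigma_ow.
# All parameter values are illustrative placeholders, not Table 1 values.

def oval_window(p_in, p0, dt, gamma_ow=500.0,
                omega_ow=2.0 * np.pi * 1500.0, sigma_ow=0.5, gamma_me=30.0):
    """Semi-implicit Euler integration of the oval window displacement."""
    xi, v = 0.0, 0.0
    out = np.empty_like(p_in)
    for i in range(len(p_in)):
        acc = (p0[i] + gamma_me * p_in[i]) / sigma_ow \
              - gamma_ow * v - omega_ow ** 2 * xi
        v += dt * acc                    # update velocity first (semi-implicit)
        xi += dt * v
        out[i] = xi
    return out

dt = 1e-6
n = 20000                                # 20 ms of simulated time
p_in = np.full(n, 1.0)                   # constant drive, toy units
p0 = np.zeros(n)                         # cochlear back-pressure ignored here
xi = oval_window(p_in, p0, dt)
# a constant drive settles at Gamma_me * P_in / (sigma_ow * omega_ow^2)
```

Semi-implicit Euler is used because it stays stable for oscillatory systems at step sizes where plain forward Euler would slowly gain energy.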

The initial conditions are

$$\xi\_{\rm bm}\left(x,0\right) = \left. \frac{\partial \xi\_{\rm bm}\left(x,t\right)}{\partial t} \right|\_{t=0} = 0; \quad \xi\_{\rm tm}\left(x,0\right) = \left. \frac{\partial \xi\_{\rm tm}\left(x,t\right)}{\partial t} \right|\_{t=0} = 0; \quad \xi\_{\rm ow}\left(0\right) = \left. \frac{d\xi\_{\rm ow}\left(t\right)}{dt} \right|\_{t=0} = 0. \tag{10}$$


**Table 1.** List of model parameters


#### **2.1. Simulation results: The effect of outer hair cell loss**

The above cochlear model was solved in the time domain by implementing a parallel algorithm on a commodity graphics processing unit (GPU) [14]. The output of the model is the BM velocity (*ξ*˙bm(*x*, *t*)) as a response to an acoustic stimulus *P*in(*t*).

Figure 3 represents the basilar membrane velocity relative to the input level at two points along the cochlear partition. The response was obtained by applying the model to a set of simple tones *P*0 sin(2*πft*) with frequencies 100 Hz < *f* < 8 kHz at levels 0 < *P*0 ≤ 120 dB SPL. The gain plotted in Figure 3 was derived as |*ξ*˙bm(*x*)| / *P*0, where *x* = 0.67 cm from the stapes (Figure 3A) and *x* = 1.8 cm from the stapes (Figure 3B). Each solid line was obtained at a different level for a normal cochlea (*γ*ohc(*x*) = 0.5). The broken line represents an abnormal cochlea with 100% OHC loss, which was derived from the model by substituting *γ*ohc(*x*) = 0. For the normal cochlea, the maximum sensitivity at *x* = 0.67 cm from the stapes (Figure 3A) was obtained when the stimulus was at 4 kHz and 0 dB SPL. The sensitivity is reduced as the input level increases, and the maximum sensitivity shifts to a lower frequency (about 1 kHz). These results are in agreement with experimental results [30]. Figure 3B represents a characteristic frequency of 1 kHz that yielded wider responses as a function of frequency for all input levels. However, the gain of the damaged cochlea (broken line in Figure 3) was independent of the input level at both locations. When *γ*ohc(*x*) = 0 is substituted in the cochlear model's equations, the nonlinear terms are zeroed and the model becomes linear.

**Figure 3.** Derivation of the basilar membrane gain (|*ξ*˙ bm(*x*0)| / *P*0) as a function of input frequency at two locations along the cochlear partition: *x* = 0.67 cm from the stapes (A) and *x* = 1.8 cm from the stapes (B). Each solid line represents a different input level and a normal cochlea (*γ*ohc=0.5). The broken line represents a damaged cochlea (*γ*ohc=0). A similar gain was obtained for all input levels.

Figure 5 represents the relative BM velocity obtained by the model when the Hebrew word "SHEN" was introduced. The input word is presented in Figure 4 as a function of time (upper panel) and by its spectrogram (lower panel).

The absolute BM velocity in dB is presented as a color-coded two-dimensional image, whose *x*-axis represents the poststimulus time in milliseconds and whose *y*-axis represents the distance from the stapes in cm. There are four images in Figure 5. The images in the left column represent a relatively low input level (20 dB SPL), while the images in the right column represent an input level of 70 dB SPL. The upper panels represent a damaged cochlea with a 98% OHC loss (*γ*ohc=0.01), while the lower panels represent a normal cochlea (*γ*ohc=0.5). The difference between the normal and the damaged cochleae is clearly demonstrated in Figure 5 at both levels. In the damaged cochlea, the low-level stimulus yielded a BM vibration that most likely will not be sufficient to evoke a neural response. Note that the maximum difference in BM velocity between the normal and the damaged cochlea in response to the low-level stimulus is almost 40 dB, whereas the maximum difference between the two cochleae at the 70 dB input level is only 6 dB. This difference was induced by the nonlinear properties of the OHCs in the normal cochlea.


10 Update On Hearing Loss


**Figure 4.** The Hebrew word "SHEN" pronounced by a female speaker. The sound pressure as a function of time (upper panel) and the corresponding spectrogram (lower panel).

The BM velocity in response to the consonant "sh" is very different in the four images in Figure 5. The maximum response was shifted toward the stapes when the amplitude was increased in the normal cochlea. In response to the high-level stimulus, the maximum BM velocity was obtained closer to the stapes in the damaged cochlea than in the normal one.

#### **3. Model of the inner hair cell–auditory nerve synapse**

The basilar membrane motion is transformed into neural spikes of the auditory nerve by the inner hair cells. The deflection of the hair-cell stereocilia opens mechanically gated ion channels that allow small, positively charged ions (primarily potassium and calcium) to enter the cell [31]. Unlike many other electrically active cells, the hair cell itself does not fire an action potential. Instead, the influx of positive ions from the endolymph in the scala media depolarizes the cell, resulting in a receptor potential. This receptor potential opens voltage-gated calcium channels; calcium ions then enter the cell and trigger the release of neurotransmitters at the basal end of the cell. The neurotransmitters diffuse across the narrow space between the hair cell and a nerve terminal, where they bind to receptors and thus trigger action potentials in the nerve. In this way, the mechanical sound signal is converted into an electrical nerve signal. The IHCs chronically leak Ca²⁺. This leakage causes a tonic release of neurotransmitter to the synapses. It is thought that this tonic release is what allows the hair cells to respond so quickly to mechanical stimuli. The quickness of the hair cell response may also be due to the fact that it can increase the amount of neurotransmitter released in response to a change as small as 100 μV in membrane potential.

**Figure 5.** Relative BM velocity as a function of time along the cochlear partition as a response to the word "SHEN." The upper panels represent a damaged cochlea with outer hair cell loss and the lower panels represent a normal cochlea.

Many models have been developed to explain the IHC's transduction abilities [16, 32, 33]. Some models focused on possible mechanisms of adaptation [17, 34–36]. Others were concerned with the biophysics of hair cells [37, 38] or the mechanoelectric transduction process [39].

One commonly simplified modeling approach to explain the IHC's role in the auditory system posits a nonlinear system that combines AC and DC responses followed by a random generator that creates spike trains [7, 16, 17, 40]. The model presented in this chapter is consistent with these principles.

The BM displacement stimulates the IHC cilia to move; their velocity *ξ*˙ihc is related to the BM velocity (*ξ*˙ bm) by a nonlinear function, e.g.,

$$\dot{\xi}\_{\rm ihc} = \alpha\_1 \tanh\left(\alpha\_2 \cdot \dot{\xi}\_{\rm bm}\right) \approx \alpha\_1 \cdot \left[\alpha\_2 \cdot \dot{\xi}\_{\rm bm} - \frac{\left(\alpha\_2 \cdot \dot{\xi}\_{\rm bm}\right)^3}{3} + \frac{2\left(\alpha\_2 \cdot \dot{\xi}\_{\rm bm}\right)^5}{15} - \dots\right].\tag{11}$$

Since the BM displacement in this model is already nonlinear, as described by the mechanical model above, we ignore the nonlinear terms in Eq. (11) and assume that *α*1 ⋅ *α*2 = 1; therefore, *ξ*˙ihc ≈ *ξ*˙ bm.
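To make the saturating coupling of Eq. (11) concrete, here is a minimal Python sketch; the *α* values are illustrative, chosen only so that *α*1 ⋅ *α*2 = 1, and are not the chapter's parameters:

```python
import math

def ihc_cilia_velocity(xi_dot_bm, alpha1=0.1, alpha2=10.0):
    """Eq. (11): saturating coupling between BM velocity and IHC cilia
    velocity. alpha1 and alpha2 are illustrative, with alpha1 * alpha2 = 1."""
    return alpha1 * math.tanh(alpha2 * xi_dot_bm)

# Small inputs pass almost unchanged (tanh(x) ~ x for small x) ...
print(ihc_cilia_velocity(1e-3))   # ~1e-3
# ... while large inputs saturate at alpha1
print(ihc_cilia_velocity(10.0))   # ~0.1
```

This is exactly why, for small signals and *α*1 ⋅ *α*2 = 1, the approximation *ξ*˙ihc ≈ *ξ*˙ bm holds.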

The mechanoelectrical receptors located in the IHC membrane yield an increase in the electrical potential (*ψ*ihc) of the IHC membrane. A common modeling approach for the IHC's role in the auditory system is based on a nonlinear system that combines AC and DC responses [7, 40]. The DC level represents the firing responses without any synchrony to the input stimuli, and the AC level represents the synchronized firing response (typical at low frequencies). The DC component includes a high-pass filter followed by a 2-ms-long moving average filter; the AC component consists of a low-pass filter. In order to account for physiological observations that demonstrated a reduction in synchronization as the frequency of the stimulus increases [41], we chose a low-pass filter with a cutoff frequency of 1000 Hz and a slope of 30 dB/decade. In practice, *ψ*ihc is obtained by

$$\psi\_{\rm ihc}\left(x,t\right) = e^{\gamma\_{\rm ihc}\left(x\right)} \cdot \left\{ \eta\_{\rm AC} \cdot \dot{\xi}\_{\rm ihc}\left(x,t\right) \* h\_{\rm ihc}\left(t\right) + \eta\_{\rm DC} \cdot \int\_{t-\Delta}^{t} \left[ \dot{\xi}\_{\rm ihc}\left(x,t\right) \* \left(1 - h\_{\rm ihc}\left(t\right)\right) \right]^{2} dt \right\},\tag{12}$$


where *x* represents the location of the IHC along the cochlear partition, *h*ihc(*t*) is the impulse response of the low-pass filter that represents the IHC response, and *η*AC, *η*DC, and Δ are constants (see Table 1). The parameter *γ*ihc(*x*) represents the IHC efficiency index. It was defined as a function of *x* to allow variability in IHC efficiency along the cochlear partition. For a normal cochlea, we chose *γ*ihc(*x*) = 8, which was found to match experimental data. The efficiency of the IHC is reduced with a decrease in *γ*ihc(*x*).
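As a rough illustration of how a relation of this AC-plus-DC form can be evaluated on a sampled signal, the sketch below uses a first-order IIR low-pass filter as a stand-in for *h*ihc and a simple moving average for the Δ-long DC window. All parameter values here (sampling rate, *η* constants, window length) are invented for illustration and are not the chapter's Table 1 values:

```python
import math

FS = 16000                 # sampling rate [Hz] (illustrative)
ETA_AC, ETA_DC = 1.0, 1.0  # illustrative stand-ins for the Table 1 constants
GAMMA_IHC = 8.0            # IHC efficiency index of a normal cochlea
DELTA = int(0.002 * FS)    # 2 ms moving-average window

def lowpass(x, fc=1000.0, fs=FS):
    """First-order IIR stand-in for h_ihc (cutoff ~1 kHz)."""
    a = math.exp(-2.0 * math.pi * fc / fs)
    y, prev = [], 0.0
    for s in x:
        prev = (1.0 - a) * s + a * prev
        y.append(prev)
    return y

def psi_ihc(xi_dot):
    """Sketch of Eq. (12): AC path = low-pass-filtered cilia velocity;
    DC path = squared high-pass residue averaged over the last DELTA samples."""
    ac = lowpass(xi_dot)
    hp = [s - a for s, a in zip(xi_dot, ac)]   # the (1 - h_ihc) branch
    psi = []
    for n in range(len(xi_dot)):
        window = hp[max(0, n - DELTA + 1):n + 1]
        dc = sum(v * v for v in window) / len(window)
        psi.append(math.exp(GAMMA_IHC) * (ETA_AC * ac[n] + ETA_DC * dc))
    return psi

# A 200 Hz tone passes mostly through the AC branch; a high-frequency tone
# would be rejected by the low-pass filter and appear mainly as a DC term.
tone = [math.sin(2 * math.pi * 200 * n / FS) for n in range(800)]
print(len(psi_ihc(tone)))  # 800
```

The split mirrors the physiology described above: synchronized (AC) firing survives only below the filter cutoff, while higher frequencies contribute through the averaged DC branch.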

This IHC receptor potential opens voltage-gated calcium channels; calcium ions then enter the cell and trigger the release of neurotransmitters at the basal end of the cell. The neurotransmitters diffuse across the narrow space between the hair cell and a nerve terminal, where they bind to receptors and thus trigger action potentials in the nerve.

The neural activity in the auditory system is irregular, since a specific neuron might respond with a single spike or several spikes to a given stimulus [42]. The origin of this stochastic activity is poorly understood. It results from both intrinsic noise sources that generate stochastic behavior at the level of the neuronal dynamics and extrinsic sources that arise from network effects and synaptic transmission [43]. Another source of noise that is specific to neurons arises from the finite number of ion channels in a neuronal membrane patch [31, 44].

A number of different ways have emerged to describe the stochastic properties of neural activity. One possible approach treats the train of spikes as a stochastic point process. For example, in their early studies, Alaoglu and Smith [45] and Rodieck et al. [46] suggested that the spontaneous activity of the cochlear nucleus can be described as a homogeneous Poisson process. Further investigations of the auditory system described the neural response as a nonhomogeneous Poisson point process (NHPP) whose instantaneous rate depends on the input stimuli [47, 48].

In the present chapter, we treat the neural activity as an NHPP, and thus only the instantaneous rate (IR) should be extracted. In order to derive the IR, we use the Weber–Fechner law, which describes the relationship between the magnitude of a physical stimulus and its perceived intensity. This kind of relationship can be described by a differential equation:

$$dP = K \frac{dS}{S}$$

where *dP* is the differential change in perception, *dS* is the differential increase in the stimulus, and *S* is the instantaneous stimulus. Integrating the above equation reveals *P* = *K* ⋅ ln*S* + *C*. Let us define *λ*AN(*x*, *t*) as the IR obtained by the auditory fiber attached to location *x* along the cochlear partition, and let us assume that it relates to the perception of the physical parameter. On the other hand, *ψ*ihc(*x*, *t*), the IHC electrical potential, corresponds to the stimulus. Therefore, by applying the Weber–Fechner law, we obtain the relationship *λ*AN(*x*, *t*) = ln(*ψ*ihc(*x*, *t*)) + *C*. However, the AN's IR should satisfy the conditions 0 < *λ*spont ≤ *λ*AN(*x*, *t*) ≤ *λ*sat, where *λ*spont and *λ*sat are the spontaneous and saturation rates of the AN, respectively. Therefore, *λ*AN(*x*, *t*) is obtained by

$$\lambda\_{\rm AN}\left(x,t\right) = \min\left\{\lambda\_{\rm sat},\ \max\left\{\lambda\_{\rm spont},\ A\_{\rm ihc} \cdot \ln\left(1 + u\left(\psi\_{\rm ihc}\left(x,t\right)\right) \cdot \psi\_{\rm ihc}\left(x,t\right)\right)\right\}\right\}\tag{13}$$

where *u* is the step function and *Aihc* is a constant (see Table 1).
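A direct transcription of Eq. (13) in Python, reading the step function *u* as gating the potential so that only positive *ψ*ihc drives the rate; the rate constants below are illustrative stand-ins, not the chapter's Table 1 values:

```python
import math

LAMBDA_SPONT = 0.1   # spontaneous rate [spikes/s] (illustrative)
LAMBDA_SAT = 500.0   # saturation rate [spikes/s] (illustrative)
A_IHC = 100.0        # illustrative stand-in for the constant A_ihc

def rate_an(psi):
    """Eq. (13): clip A_ihc * ln(1 + u(psi) * psi) between the
    spontaneous and saturation rates."""
    drive = A_IHC * math.log1p(psi) if psi > 0 else 0.0
    return min(LAMBDA_SAT, max(LAMBDA_SPONT, drive))

print(rate_an(-1.0))  # 0.1   -> negative potentials leave the spontaneous rate
print(rate_an(0.5))   # ~40.5 -> Weber-Fechner-like logarithmic growth
print(rate_an(1e6))   # 500.0 -> clipped at saturation
```

The logarithm implements the Weber–Fechner relationship derived above, while the min/max clipping enforces 0 < *λ*spont ≤ *λ*AN ≤ *λ*sat.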

In general, the auditory nerve response is divided into three types of fibers according to their spontaneous rates: a high spontaneous rate (HSR) that usually codes low-level stimuli, a medium spontaneous rate (MSR), and a low spontaneous rate (LSR) that generally codes high-level stimuli. In order to include all types of auditory nerves, we substitute in Eq. (13) the relevant constants *λ*spont^(H), *A*H; *λ*spont^(M), *A*M; *λ*spont^(L), *A*L for the HSR, MSR, and LSR, which yield the instantaneous rates *λ*AN^(H)(*x*, *t*), *λ*AN^(M)(*x*, *t*), and *λ*AN^(L)(*x*, *t*), respectively. The different types of ANs are distributed uniformly along the cochlear partition, where the most frequent fibers are those with a low spontaneous rate (about 60%).

The IRs (spikes per second) for the LSR fibers, *λ*AN^(L)(*x*, *t*), as a response to the Hebrew word "SHEN" are exhibited in Figure 6 by color-coded images as a function of time (*x*-axis) and along the cochlear partition (*y*-axis). The basilar membrane velocity as a response to this word was shown in Figure 5 for two different levels. In Figure 6, the response to the high-level stimulus (70 dB SPL) is displayed. Four images are presented in Figure 6, each representing a different type of cochlea. Each cochlea is defined by the two indices *γ*ohc and *γ*ihc, which represent the efficiency of the OHCs and IHCs, respectively. In this example, both indices were constant along the cochlear partition. For a normal cochlea, we chose *γ*ohc=0.5 and *γ*ihc=8; these values exhibit the best fit to experimental data [13].


The upper-left image in Figure 6 represents a normal cochlea (*γ*ohc=0.5; *γ*ihc=8). The upper-right image corresponds to a cochlea with intact OHCs but with 25% IHC loss (*γ*ihc=6). A clear reduction in the instantaneous rate is shown. The maximum instantaneous rate was reduced from 160 spikes/s in the normal cochlea to 100 spikes/s in the damaged one. Moreover, in the damaged cochlea, about 25% more instances (in time and location along the cochlear partition) remained at the spontaneous rate of 0.1 spikes/s relative to the normal cochlea.

The two lower images in Figure 6 represent cochleae with 98% OHC loss (*γ*ohc=0.01). The BM response was changed, as Figure 5 shows. Thus, the reduction in the instantaneous rate corresponds entirely to the decrease in BM velocity when the cochlea has intact IHCs (lower-left image). For a cochlea with both OHC and IHC loss (lower-right image), the instantaneous rate was reduced because of both losses. The response to the high frequencies that correspond to the syllable "SH" almost vanished.

**Figure 6.** Derived instantaneous rates as a response to the Hebrew word "SHEN" at 70 dB SPL. Each panel represents a different type of ear. The upper-left panel represents a normal cochlea. The upper-right panel represents a cochlea with IHC loss. The lower-left panel represents a cochlea with OHC loss and the lower-right panel represents both IHC and OHC loss.

#### **4. Threshold estimation based on the auditory nerve**

The hearing threshold, defined as the lowest acoustic pressure that evokes a sensation, is usually determined by quantitative psychoacoustical experiments in which the human ability to detect the smallest difference in a physical property of the stimulus is obtained. This difference is referred to as a just-noticeable difference (JND). In such experiments, a subject must distinguish between two similar time (*t*)-dependent stimuli: *s*(*t*, *α*) and *s*(*t*, *α* + Δ*α*), where *α* is a given physical property. The JND(*α*) is the minimum Δ*α* a person can perceive. The parameter *α* represents any measurable physical property of the stimulus, such as the frequency or level of a monaural stimulus.

Comparing the behavioral JND with the neural activity is possible if one assumes that the neural system estimates the measured parameters. Siebert [18] obtained such a comparison when the JND of a single tone's frequency and level was compared to the neural activity of the auditory nerve. Siebert's findings were based on the assumption that the auditory nerve (AN) response behaves as an NHPP and that the brain acts as an unbiased optimal estimator of the physical parameters. Thus, the JND is equal to the standard deviation of the estimated parameter and can be derived from lower bounds such as the Cramer–Rao lower bound. Heinz et al. [49] generalized Siebert's results to a larger range of frequencies and levels.

In a psychoacoustical JND experiment, the JND value is obtained when *d*′ = 1, which is expressed by:

$$d' = \frac{E\left[\hat{\alpha} \mid \left(\alpha^\* + \Delta\alpha\right)\right] - E\left[\hat{\alpha} \mid \alpha^\*\right]}{std\left(\hat{\alpha} \mid \alpha^\*\right)} = \frac{\Delta\alpha}{std\left(\hat{\alpha} \mid \alpha^\*\right)}\tag{14}$$

where $E\left[\hat{\alpha} \mid \alpha^\*\right] = \alpha^\*$, $\alpha^\*$ is the true value of *α*, and $\hat{\alpha}$ is the estimated value of *α*. Therefore, *d*′ = 1 yields the relation $\Delta\alpha = std\left(\hat{\alpha} \mid \alpha^\*\right)$, which implies

$$\text{JND}\left(\alpha^\*\right) = std\left(\hat{\alpha} \mid \alpha^\*\right).\tag{15}$$

When the estimation is based on neural activity that behaves as an NHPP, there are two possible ways to analyze the performance. The first is referred to as "rate coding" (RA), which means that the performance is analyzed on the basis of the number of spikes. The second is referred to as "all information coding" (AI), indicating that in addition to the number of spikes in the interval, the timing of the discharge spikes is considered as well.

Let us define *N*(0, *T*) as the random variable that represents the number of spikes in the time interval [0, *T*]. For RA coding, the probability density function (pdf) of getting *n* spikes in a time interval of length *T* is obtained by

$$P\_{\rm RA}\left(N\left(0,T\right)=n\right)=\frac{1}{n!}\left[\int\_0^T \lambda\left(t,\alpha\right)dt\right]^n \exp\left\{-\int\_0^T \lambda\left(t,\alpha\right)dt\right\},\tag{16}$$

where *λ*(*t*, *α*) is the instantaneous rate of the nerve fiber that depends on both the time *t* and the physical parameter *α*. Given the RA pdf (Eq. (16)), the resulting Cramer–Rao lower bound (CRLB) is obtained by [50]

$$\text{CRLB}\_{\text{RA}}\left(\alpha^{\*}\right) = \left\{ \frac{T}{\bar{\lambda}\left(\alpha^{\*}\right)} \left[ \left. \frac{\partial \bar{\lambda}\left(\alpha\right)}{\partial \alpha} \right|\_{\alpha=\alpha^{\*}} \right]^{2} \right\}^{-\frac{1}{2}},\tag{17}$$

where $\bar{\lambda}\left(\alpha\right)= \frac{1}{T} \int\_0^T \lambda\left(t,\alpha\right)dt$ is the average rate.
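For intuition, consider the simplest case *λ*(*t*, *α*) = *α*, a constant rate over [0, *T*]. Then *N*(0, *T*) is Poisson with mean *αT*, the unbiased estimator *α*^ = *N*/*T* has standard deviation √(*α*/*T*), and Eq. (17) gives exactly the same value. A small Monte Carlo sketch (all numbers illustrative):

```python
import math
import random

random.seed(0)

ALPHA, T, TRIALS = 100.0, 1.0, 20000  # true rate, window, Monte Carlo size

def poisson(mean):
    """Knuth-style Poisson sampler (adequate for this small demo)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# alpha_hat = N / T is unbiased, and its standard deviation matches the
# rate-coding bound of Eq. (17): for a constant rate, the derivative of the
# mean rate is 1, so CRLB_RA = (T / ALPHA)**-0.5 = sqrt(ALPHA / T).
estimates = [poisson(ALPHA * T) / T for _ in range(TRIALS)]
mean_est = sum(estimates) / TRIALS
std_est = math.sqrt(sum((e - mean_est) ** 2 for e in estimates) / TRIALS)
crlb_ra = (T / ALPHA) ** -0.5

print(round(mean_est))   # ~100
print(std_est, crlb_ra)  # both ~10
```

In this linear-rate case the estimator actually attains the bound; for a general nonhomogeneous rate the bound is only a lower limit.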


For AI coding, the probability density function of getting *n* successive neural spikes at a set of time instances *t*1, *t*2, …, *tn*, where 0 ≤ *t*1 < *t*2 < … < *tn* ≤ *T*, is obtained by

$$P\_{\rm AI}\left(N\left(0,T\right)=n;t\_1,\dots,t\_n\right) = \frac{1}{n!} \prod\_{k=1}^n \lambda\left(t\_k,\alpha\right) \exp\left\{-\int\_0^T \lambda\left(t,\alpha\right)dt\right\}.\tag{18}$$

The resulting CRLB was derived by Bar-David [51], which yields

$$\text{CRLB}\_{\text{AI}}\left(\alpha^{\*}\right) = \left\{ \int\_0^T \frac{1}{\lambda\left(t,\alpha^{\*}\right)} \left[ \left. \frac{\partial\lambda\left(t,\alpha\right)}{\partial\alpha} \right|\_{\alpha=\alpha^{\*}} \right]^{2} dt \right\}^{-\frac{1}{2}}.\tag{19}$$

In every unbiased system, the following relations hold:

$$std\left(\hat{\alpha} \mid \alpha^\*\right) \ge \text{CRLB}\_{\text{RA}}\left(\alpha^\*\right) \ge \text{CRLB}\_{\text{AI}}\left(\alpha^\*\right).\tag{20}$$

In an optimal unbiased system, the standard deviation of the estimator can achieve the lower bounds. Since $\text{JND}\left(\alpha^\*\right) = std\left(\hat{\alpha} \mid \alpha^\*\right)$ (Eq. 15), $\text{JND}\left(\alpha^\*\right)$ can be estimated by calculating $\text{CRLB}\_{\text{RA}}\left(\alpha^\*\right)$ or $\text{CRLB}\_{\text{AI}}\left(\alpha^\*\right)$. Comparing the estimated thresholds to experimental results can resolve the question of whether the brain estimates the auditory thresholds according to RA or AI coding.

In order to apply the above-mentioned method for determining the auditory threshold, we should consider the responses of all 30,000 AN fibers that innervate each ear. Since the AN fibers are statistically independent [2], the *d*′ theorem can be applied, which yields

$$\left(d'\right)^{2} = \sum\_{m=1}^{M} \left(d\_{m}'\right)^{2},\tag{21}$$

where *M* is the number of nerve fibers and *d*′*m* is the *d*′ (Eq. 14) that was derived for the *m*th fiber. Moreover,

$$std\left(\hat{\alpha} \mid \alpha^{\*}\right) = \frac{1}{\sqrt{\sum\_{m=1}^{M} \left\{ std\_{m}\left(\hat{\alpha} \mid \alpha^{\*}\right)\right\}^{-2}}},\tag{22}$$

where $std\_{m}\left(\hat{\alpha} \mid \alpha^{\*}\right)$ is the standard deviation of the estimator obtained by the *m*th fiber. Since the threshold is obtained when *d*′ = 1, it implies that in an optimal system,

$$\text{JND}\left(\alpha^{\*}\right) = \frac{1}{\sqrt{\sum\_{m=1}^{M} \left\{ \text{CRLB}\_{m}\left(\alpha^{\*}\right) \right\}^{-2}}},\tag{23}$$

where $\text{CRLB}\_{m}\left(\alpha^{\*}\right)$ is the CRLB of the *m*th fiber.
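Eq. (23) is simply inverse-variance pooling of independent fibers. A minimal sketch (the per-fiber CRLB values are made up for illustration):

```python
import math

def pooled_jnd(crlbs):
    """Eq. (23): JND = 1 / sqrt(sum_m CRLB_m^-2) for independent fibers."""
    return 1.0 / math.sqrt(sum(c ** -2 for c in crlbs))

# A single fiber gives its own CRLB back ...
print(pooled_jnd([2.0]))              # 2.0
# ... while 10,000 identical fibers shrink the JND by a factor of 100
print(pooled_jnd([2.0] * 10000))      # 0.02
# A few sharp fibers dominate many poor ones
print(pooled_jnd([0.5, 50.0, 50.0]))  # ~0.5
```

The 1/√*M* improvement for identical fibers is why pooling all 30,000 AN fibers yields thresholds far below any single fiber's bound.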

Let us define the number of fibers attached to each location along the cochlear partition as *M*(*x*). Thus, $\sum\_{x \in \left[0, L\_{\rm CO}\right]} M\left(x\right) = 30{,}000$, where $L\_{\rm CO}$ is the cochlear length. For every location, three IRs were derived, *λ*AN^(H)(*x*, *t*), *λ*AN^(M)(*x*, *t*), and *λ*AN^(L)(*x*, *t*) (Eq. 13), which correspond to the HSR, MSR, and LSR fibers, respectively. They are distributed uniformly along the cochlear partition with corresponding weights *w*L, *w*M, *w*H (see Table 1). Therefore,

$$\text{JND}\left(\alpha^{\*}\right) = \frac{1}{\sqrt{F\_{\text{L}} + F\_{\text{M}} + F\_{\text{H}}}},\tag{24}$$

where

$$\begin{split} F\_{\text{L}} &= \sum\_{x \in \left[0, L\_{\text{CO}}\right]} \sum\_{m=1}^{w\_{\text{L}} \cdot M(x)} \left\{ \text{CRLB}\_{m}^{(\text{L})} \left( \alpha^{\*} \right) \right\}^{-2} \\ F\_{\text{M}} &= \sum\_{x \in \left[0, L\_{\text{CO}}\right]} \sum\_{m=1}^{w\_{\text{M}} \cdot M(x)} \left\{ \text{CRLB}\_{m}^{(\text{M})} \left( \alpha^{\*} \right) \right\}^{-2} \\ F\_{\text{H}} &= \sum\_{x \in \left[0, L\_{\text{CO}}\right]} \sum\_{m=1}^{w\_{\text{H}} \cdot M(x)} \left\{ \text{CRLB}\_{m}^{(\text{H})} \left( \alpha^{\*} \right) \right\}^{-2} \end{split}\tag{25}$$

Replacing $\text{CRLB}\_{m}$ in Eq. (25) with the corresponding $\text{CRLB}\_{\text{RA}}\left(\alpha^{\*}\right)$ or $\text{CRLB}\_{\text{AI}}\left(\alpha^{\*}\right)$, JND(*α*\*) is estimated by either RA or AI coding.

#### **4.1. Simulation results: Rate or all information?**

In order to calculate both CRLBRATE(*α* \*) and CRLBAI(*α* \*), the derivative of the instantaneous rate with respect to *α* is required. We have used the following approximation:

$$
\left. \frac{\partial \lambda(t, \alpha)}{\partial \alpha} \right|\_{\alpha = \alpha^{\*}} \approx \frac{\lambda\left(t, \alpha^{\*} + \Delta\alpha\right) - \lambda\left(t, \alpha^{\*}\right)}{\Delta\alpha}. \tag{26}
$$

Therefore, in deriving JND(*α* \*) for any stimulus *s*(*t*, *α* \*), the IRs for both stimuli *s*(*t*, *α* \*) and *s*(*t*, *α* \* + Δ*α*) should be calculated. Two types of thresholds will be presented: for tones in quiet and in the presence of noise. The quiet threshold was derived by substituting *α* \* = 0, which yielded *λ*(*t*, *α* \*) = *λ*spont. For the thresholds in the presence of noise, *s*(*t*, *α* \*) is equal to the noise, and *s*(*t*, *α* \* + Δ*α*) is equal to the noise + tone with a level of Δ*α*.
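Eq. (26) is a plain forward difference, evaluated here on a hypothetical rate function; the helper name, argument order, and default step size are illustrative choices, not values from the chapter:

```python
def rate_derivative(lam, t, alpha_star, d_alpha=1e-6):
    """Forward-difference approximation of the instantaneous-rate
    derivative with respect to the stimulus parameter (Eq. 26).

    `lam(t, alpha)` stands in for the model's instantaneous rate; any
    callable with that signature works.
    """
    return (lam(t, alpha_star + d_alpha) - lam(t, alpha_star)) / d_alpha
```

For example, with a toy rate `lam(t, a) = a**2` the approximation at `a = 1` returns a value close to the analytic derivative 2.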

We have calculated the amplitude thresholds as a function of frequency while using both types of coding, RA and AI. The derived thresholds are shown in Figure 7 along with normal equal-loudness-level contour at threshold (ISO 226:2003) [52]. The rate coding successfully predicts the ISO 226 standard while the AI coding yielded performances that are better by a few decibels. This difference was not sufficient for deciding what type of coding is used by the brain in order to determine the absolute thresholds. Deriving the thresholds in the presence of noise revealed a more significant difference between the two types of coding.

**Figure 7.** Estimated thresholds as a function of frequency obtained by a normal cochlea according to both rate and AI coding along with normal equal-loudness-level contour at threshold (ISO 226:2003).

In order to present the threshold of tones in the presence of noise, the smallest perceivable difference is presented in terms of the difference limen (DL), which is defined as

$$\text{DL} = 10 \cdot \log\_{10}\left(1 + \frac{\Delta\alpha}{\alpha^{\*}}\right),\tag{27}$$

where *α* \* corresponds to the noise level in Volts and Δ*α* is the derived *JND* of the tone level in Volts. Figure 8 represents the DL of tones as a function of noise level for different frequencies. The noise was Gaussian white noise. The tone thresholds were derived by both types of coding (RA and AI), and they are presented in Figure 8 along with experimental data from Miller [54, 55]. Both types of coding succeeded in predicting the experimental result that the dependence of DL on noise level is independent of the tone's frequency. However, only RA coding yielded similar values of DL as a function of noise level. The AI coding revealed DL values that were lower by an order of magnitude than the experimental result. This result convinced us that the brain uses rate coding in order to estimate tone amplitude.
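Eq. (27) can be evaluated directly; the function name below is illustrative:

```python
import math

def difference_limen_db(noise_level, jnd):
    """Express a derived JND (Delta-alpha, in Volts) at a noise level
    alpha* (also in Volts) as a difference limen in dB, per Eq. (27)."""
    return 10.0 * math.log10(1.0 + jnd / noise_level)
```

As a worked check, a JND equal to the noise level gives DL = 10·log10(2) ≈ 3 dB, and a vanishing JND gives DL = 0 dB.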

**Figure 8.** DL as a function of noise level as obtained by a normal cochlea according to both rate (left panel) and AI (right panel) coding. Each color represents a different frequency. The black broken line was replotted from [55].

#### **4.2. Simulation results: Abnormal ears**

Audiograms of the hearing impaired were estimated by subtracting the threshold defined by the equal loudness at threshold [52] from the threshold of the damaged ear. The estimated audiograms of different types of pathologies are shown in Figure 9. In all the estimated audiograms, we assumed that both IHC and OHC loss were uniform along the cochlear partition, which implies that *γ*ihc(*x*)=const. and *γ*ohc(*x*)=const.


**Figure 9.** Estimated audiograms for different types of pathologies. Panel A represents cochleae with different degrees of OHC loss and intact IHC. Panel B represents cochleae with different degrees of IHC loss and intact OHC. Panel C represents cochleae with both IHC and OHC loss.

Three audiograms are exhibited in panel A of Figure 9. They were obtained with *γ*ihc=8 (the normal value) and three values of *γ*ohc=0, 0.125, 0.25, which represent 100%, 75%, and 50% OHC loss, respectively. With 50% OHC loss, no hearing loss was obtained up to 2 kHz. With 100% OHC loss, the estimated audiogram revealed a maximum hearing loss of about 60 dB at 6 kHz. Panel B of Figure 9 represents cochleae with no OHC loss (*γ*ohc=0.5) but with different degrees of IHC loss, *γ*ihc=5, 6, 7, which represent 37.5%, 25%, and 12.5% reductions in IHC efficiency. Reduction in IHC efficiency caused a maximum hearing loss at 1000–2000 Hz. A combination of IHC and OHC loss is probably a more common pathology; an example of its effect is shown in Figure 9C. It represents cochleae with 75% OHC loss (*γ*ohc=0.125) and different degrees of IHC loss. The maximum hearing loss was obtained at 4 kHz. The estimated audiogram with *γ*ihc=7 resembles a typical mild audiogram, while the one with *γ*ihc=5 resembles a typical severe audiogram.
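The percentages quoted above follow directly from the efficiency indices relative to their normal values (0.5 for *γ*ohc, 8 for *γ*ihc); a hypothetical helper makes the arithmetic explicit:

```python
def percent_loss(gamma, gamma_normal):
    """Percent hair-cell loss implied by an efficiency index, relative
    to its normal value (0.5 for gamma_ohc, 8 for gamma_ihc)."""
    return 100.0 * (1.0 - gamma / gamma_normal)

# Panel A of Figure 9: gamma_ohc = 0, 0.125, 0.25
ohc_losses = [percent_loss(g, 0.5) for g in (0.0, 0.125, 0.25)]  # 100, 75, 50
# Panel B of Figure 9: gamma_ihc = 5, 6, 7
ihc_losses = [percent_loss(g, 8.0) for g in (5.0, 6.0, 7.0)]  # 37.5, 25, 12.5
```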

The effect of background noise on the threshold for tones is demonstrated in Figure 10, where DL is plotted as a function of noise level for different frequencies. As a result of complete OHC loss (*γ*ohc=0), a significant increase in DL was obtained, especially at high frequencies, relative to the normal cochlea. The combination of IHC and OHC loss caused an increase in DL at all frequencies. It seems that IHC loss causes an increase in DL at low frequencies, below 1000 Hz. This result might explain the difficulty that people with mild hearing loss have in understanding speech in a noisy background, since the information in speech sounds is mainly contained in the low-frequency range.

**Figure 10.** DL as a function of noise level as obtained by abnormal cochleae. Left panel represents a cochlea with 100% OHC loss (*γ*ohc=0) and intact IHC. Right panel represents a cochlea with both IHC and OHC loss (*γ*ohc=0.125; *γ*ihc=6). Each color represents a different frequency.

#### **5. Summary**

In this study, a comprehensive model for the auditory system was introduced. It included a detailed, nonlinear time domain cochlear model with active outer hair cells that are driven by the tectorial membrane motion. Outer hair cell loss was indicated by an OHC efficiency index that could change along the cochlear partition. The second part of the model included a synaptic model that generates the auditory nerve's instantaneous rate as a response to basilar membrane motion and is affected by inner hair cell transduction efficiency. Since both inner and outer hair cell loss can be easily integrated in the model, the model is useful for demonstrating those pathologies.

In order to compare normal and abnormal human abilities to the model predictions, a comprehensive technique was introduced. It was based on the assumption that the brain behaves as an optimal processor and its task in JND experiments is to estimate physical parameters. The performance of the optimal processor can be derived by calculating its lower bound. Since the neural activity was described as an NHPP, the Cramer–Rao lower bound was analytically derived for both rate and all information coding.

In this study, we have shown that the amplitude of tones in quiet and in the presence of background noise is most likely coded by the rate only. Pathological audiograms can be predicted by introducing reduced OHC and IHC efficiency indices. Moreover, the presence of noise causes a significant increase in DL. The dependence of DL on frequency varies with the type of hearing loss: in general, OHC loss mostly affects the high frequencies, while the effect of IHC loss is mostly expressed in the low frequencies.

The model presented in this paper can be used as a framework to explore different types of pathologies on the basis of audiograms obtained in quiet and in the presence of background noise.

### **Acknowledgements**

This research was partially supported by the Israel Science Foundation grant no. 563/12. I want to express my great appreciation to my students who participated in this research over the years: Ram Krips, Azi Cohen, Vered Weiss, Noam Elbaum, Oren Cohen, Dan Mekrantz, Oded Barzely, Yaniv Halmut, and Tal Kalp.

### **Author details**

#### Miriam Furst\*


School of Electrical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel

#### **References**


[5] Zilany, M.S.A. and Bruce, I.C. (2006). Modeling auditory-nerve responses for high sound pressure levels in the normal and impaired auditory periphery. J. Acoust. Soc. Am. 120: 1446–1466.

[6] Cohen, A. and Furst, M. (2004). Integration of outer hair cell activity in a one-dimensional cochlear model. J. Acoust. Soc. Am. 115: 2185–2192.

[7] Carney, L.H. (1993). A model for the responses of low-frequency auditory-nerve fibers in cat. J. Acoust. Soc. Am. 93: 401–417.

[8] Kates, J.M. (1991). A time-domain digital cochlear model. IEEE Trans. Signal Process. 39: 2573–2592.

[9] Goldstein, J.L. (1990). Modeling rapid waveform compression on the basilar membrane as multiple-bandpass-nonlinearity filtering. Hear. Res. 49: 39–60.

[10] Hopkins, K. and Moore, B.C.J. (2011). The effects of age and cochlear hearing loss on temporal fine structure sensitivity, frequency selectivity, and speech reception in noise. J. Acoust. Soc. Am. 130: 334–349.

[11] Jepsen, M.L. and Dau, T. (2011). Characterizing auditory processing and perception in individual listeners with sensorineural hearing loss. J. Acoust. Soc. Am. 129: 262–281.

[12] Furst, M. and Halmut, Y. (2006). Prediction for audiograms and otoacoustic emissions. In: Auditory Mechanisms: Processes and Models, edited by A.L. Nuttal, T. Ren, P. Gillespie, K. Grosh, and E. de Boer. World Scientific Publishing, pp. 384–385.

[13] Barzelay, O. and Furst, M. (2011). Time domain one-dimensional cochlear model with integrated tectorial membrane and outer hair cells. In: What Fire Is in Mine Ears: Progress in Auditory Biomechanics, Vol. 84, edited by C.A. Shera and E.S. Olson. American Institute of Physics, pp. 79–84.

[14] Sabo, D., Barzelay, O., Weiss, S., and Furst, M. (2014). Fast evaluation of a time-domain non-linear cochlear model on GPUs. J. Comput. Phys. 265: 97–113.

[15] Dau, T., Püschel, D., and Kohlrausch, A. (1996). A quantitative model of the effective signal processing in the auditory system. I. Model structure. J. Acoust. Soc. Am. 99: 3615–3622.

[16] Sumner, C., Lopez-Poveda, E., O'Mard, L., and Meddis, R. (2002). A revised model of the inner-hair cell and auditory-nerve complex. J. Acoust. Soc. Am. 111: 2178–2189.

[17] Zilany, M.S.A., Bruce, I.C., Nelson, P.C., and Carney, L.H. (2009). A phenomenological model of the synapse between the inner hair cell and auditory nerve: long-term adaptation with power-law dynamics. J. Acoust. Soc. Am. 126: 2390–2412.

[18] Siebert, W.M. (1970). Frequency discrimination in the auditory system: place or periodicity mechanism. Proc. IEEE 58: 723–730.

[19] Brownell, W.E., Bader, C.R., Bertrand, D., and de Ribaupierre, I. (1985). Evoked mechanical responses of isolated cochlear outer hair cells. Science 227: 194–196.

[20] Zwislocki, J.J. (1950). Theory of the acoustical action of the cochlea. J. Acoust. Soc. Am. 22: 778–784.

[37] Shamma, S.A., Chadwick, R.S., Wilbur, W.J., Morrish, K.A., and Rinzel, J. (1986). A biophysical model of the cochlear processing: intensity dependence of pure tone responses. J. Acoust. Soc. Am. 80: 133–145.

[38] Rattay, F., Gebeshuber, I.C., and Gitter, A.H. (1998). The mammalian auditory hair cell: a simple electric circuit model. J. Acoust. Soc. Am. 103: 1558–1565.

[39] Corey, D.P. and Hudspeth, A.J. (1983). Kinetics of the receptor current in the bullfrog saccular hair cells. J. Neurosci. 3: 962–976.

[40] Schoonhoven, R., Keijzer, J., Versnel, H., and Prijs, V.F. (1993). A dual filter model describing single-fiber responses to clicks in the normal and noise-damaged cochlea. J. Acoust. Soc. Am. 95: 2104–2121.

[41] Palmer, A.M. and Russell, I.J. (1986). Phase-locking in the cochlear nerve of the guinea-pig and its relation to the receptor potential of inner hair-cells. Hear. Res. 24: 1–15.

[42] Kiang, N.Y.S., Watanabe, T., Thomas, E.C., and Clark, L.F. (1965). Discharge Patterns of Single Fibers in the Cat's Auditory Nerve. Cambridge, MA: MIT Press.

[43] Manwani, A. and Koch, C. (1999). Detecting and estimating signals in noisy cable structures: I. Neuronal noise sources. Neural Comput. 11: 1797–1829.

[44] Schneidman, E., Freedman, B., and Segev, I. (1998). Channel stochasticity may be critical in determining the reliability and precision of spike timing. Neural Comput. 10: 1679–1703.

[45] Alaoglu, L. and Smith, N.M. Jr. (1938). Statistical theory of a scaling circuit. Phys. Rev. 53: 832–836.

[46] Rodieck, R.W., Kiang, N.Y.-S., and Gerstein, G.L. (1962). Some quantitative methods for the study of spontaneous activity of single neurons. Biophys. J. 2: 351–368.

[47] Gray, P.R. (1967). Conditional probability analyses of the spike activity of single neurons. Biophys. J. 10: 759–777.

[48] Rieke, F., Warland, D., van Steveninck, R.D.R., and Bialek, W. (1997). Spikes: Exploring the Neural Code. Cambridge, MA: MIT Press.

[49] Heinz, M.G., Colburn, H.S., and Carney, L.H. (2001). Evaluating auditory performance limits: I. One-parameter discrimination using a computational model for the auditory nerve. Neural Comput. 13: 2273–2316.

[50] Snyder, D.L. and Miller, M.I. (1991). Random Point Processes in Time and Space. Berlin and Heidelberg: Springer-Verlag.

[51] Bar David, I. (1969). Communication under Poisson regime. IEEE Trans. Inf. Theory 15: 31–37.

[52] ISO 226:2003. Acoustics—Normal equal-loudness-level contours. http://www.iso.org/iso/iso\_catalogue/catalogue\_tc/catalogue\_detail.htm?csnumber=34222

#### **Chapter 2**

### **Classification of Hearing Loss**

Waleed B. Alshuaib, Jasem M. Al-Kandari and Sonia M. Hasan

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/61835

#### **Abstract**

Hearing loss is the partial or total inability to hear sound in one or both ears. People with hearing loss make up a significant 5.3% of the world's population. The audiogram is an important tool used to determine the degree and type of hearing loss. This chapter presents hearing loss classification, which can aid in clinical diagnosis and help in finding appropriate therapeutic management. Hearing loss is classified based on ear anatomy, type of hearing loss, degree of the disease, and configuration of the audiogram. When the hearing loss is fully characterized, appropriate medical intervention can be assigned.

**Keywords:** Hearing loss, Audiometry, Conductive hearing loss, Sensorineural hearing loss

#### **1. Introduction**

Hearing is a very important sensation for human beings. It helps us understand the surrounding environment and can alert us to approaching danger. Hearing is an essential means of communication. Hearing loss is the impairment of the ability to hear sound. For a person with mild hearing loss, the quietest audible sounds are between 25 and 40 decibels (dB), so anybody who suffers from mild hearing loss has difficulty keeping up with normal conversations. People who suffer from profound hearing loss are deaf and can hear nothing at all. Hearing loss can impact learning and development in children, including speech and language. In adults, hearing loss can greatly affect the overall quality of life, since it impacts social interaction and general wellbeing. Consequently, hearing loss can cause many difficulties in various aspects of life. Hearing loss can occur in different types and degrees of severity. In normal hearing, sound vibrations pass from the outer ear through the middle ear to the inner ear. In conductive hearing loss (CHL), vibrations cannot pass from the outer ear to the inner ear. In sensorineural hearing loss (SNHL), there is a dysfunction in the inner ear. In mixed hearing loss, there is a combination of conductive and sensorineural components. At the end of the inner ear (cochlea), thousands of auditory nerve fibers detect the high and low sound frequencies and transmit action potentials to the brain, which interprets the signal as sound. Repeated exposure to loud noise can damage the sound-sensitive hair cells in the inner ear, so it is important to protect hearing from harmful environments.

© 2015 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **2. Hearing loss**

#### **2.1. Defining hearing loss, its prevalence, and incidence**

Hearing loss, the most common form of human sensory deficit, is the partial or total inability to hear sound in one or both ears. It may be a **sudden** or a **progressive** impairment that gradually gets worse over time. Depending on the cause, it can be mild or severe, temporary or permanent. It may be a **bilateral** loss occurring in both ears or **unilateral**. Hearing loss may be **fluctuating**, that is, varying over time—improving at times and getting worse at other times. In other cases, hearing loss is **stable**, not changing at all with time. Hearing loss is caused by many factors, including genetics, age, exposure to noise, illness, chemicals, and physical trauma. Hearing loss may affect all ages, delaying speech and learning in children, and causing social and vocational problems for adults. According to the World Health Organization (WHO), there are 360 million persons in the world with hearing loss (5.3% of the world's population), 32 million of whom are children [1]. The prevalence of hearing loss is increasing in adolescents and young adults and is associated with exposure to loud music. As for the aged, WHO reports that one-third of people above 65 years are living with disabling hearing loss [1]. Age-related hearing loss, **Presbycusis**, compromises the ability to discriminate sounds in environments with background noise. With the aging population expected to grow by 18–50% in the coming years, the number of people with hearing loss will consequently grow [2]. Luckily, through early diagnosis and interventions, the majority of hearing loss cases are treatable. Understanding hearing loss and its classification is thus essential in improving screening methods, preventive approaches, and the management of the disease.
A clear and concise description of the classification system for hearing loss based on the current state of scientific knowledge is important not only for clinical diagnosis and therapeutic management, but also for use in medical research and education. In addition, a clear-cut explanation of the disease can aid patients, who will themselves benefit from a better understanding of their hearing loss.

#### **2.2. Understanding the audiogram**

Hearing is examined by making the subject listen to a number of different pure tone signals through a pair of headphones or earplugs to record air conduction. An audiometer examines hearing ability by testing the threshold of hearing a sound signal at various frequencies (pitch, in cycles per second or Hz). Hearing threshold may be defined as how soft a sound may get before it becomes inaudible. Thresholds are measured in dB; the normal threshold is between 0 and 25 dB for adults and between 0 and 15 dB for children. Threshold is recorded on a graph known as the audiogram. The audiogram presents the sound frequency (ranging from low to high frequency) on the horizontal axis and sound intensity or loudness in dB on the vertical axis. Right ear thresholds are recorded as red circles on the audiogram while the left ear thresholds are recorded as blue Xs. Figure 1 shows a typical audiogram with normal air conduction. A bone conduction test may be performed by bypassing the outer ear and the middle ear (also known as the air conductive pathway) to find the threshold when sound is delivered directly to the cochlea. This is done by placing a bone conductor, which sends tiny vibrations to the inner ear, on the mastoid process. A comparison between results from the air conduction test (that uses tone as stimulus) and the bone conduction test provides a better indication of whether hearing loss is due to conduction deafness or nerve deafness.
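As a rough sketch of how such pure-tone thresholds map to a degree-of-loss label: the "normal" boundary (0–25 dB for adults) follows the text above and the "mild" band (25–40 dB) follows the introduction, while the higher categories are common clinical conventions assumed here, not taken from this chapter:

```python
def classify_adult_threshold(threshold_db):
    """Map an adult pure-tone threshold (dB HL) to a degree-of-loss label.

    Only the 'normal' and 'mild' boundaries come from this chapter; the
    'moderate', 'severe', and 'profound' cut-offs are assumed clinical
    conventions for illustration.
    """
    if threshold_db <= 25:
        return "normal"
    if threshold_db <= 40:
        return "mild"
    if threshold_db <= 70:
        return "moderate"
    if threshold_db <= 90:
        return "severe"
    return "profound"
```

In practice, such a label would be assigned per frequency (or to an average of thresholds at several frequencies), one ear at a time.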

**Figure 1.** A typical audiogram with normal air conduction for both ears. Symbols: X, left ear air conduction; O, right ear air conduction.

#### **2.3. Classifying hearing loss according to:**

#### *2.3.1. Anatomy of the ear*


An examination with attention to the anatomy of the ear is critical for establishing a hearing loss diagnosis. The auditory system is typically divided into three main sections: the outer, middle, and inner ears (Figure 2). **The outer ear** receives sound waves from the environment. The auricle captures sound and directs it into the external auditory canal (EAC), which ends at a thin diaphragm called the tympanum or ear drum. Obstruction of the EAC with ear wax or a foreign body, and inflammation of the canal, the auricle, or both (otitis externa), may produce hearing loss. Atresia, the congenital absence of the external ear canal, and microtia, a congenital deformity where the pinna is underdeveloped, also cause hearing loss. Sound travels through **the middle ear** as vibrations of three connected ossicles (malleus, incus, and stapes). Increases and decreases in sound-induced air pressure push and pull the tympanum, resulting in a mechanical response. The first ossicle (the malleus) is attached to the tympanic membrane, while the last of the ossicles (the stapes) inserts in an opening called the oval window in the bony inner ear, the cochlea. The vibration of the incus drives the stapes deeper into the oval window and retracts it, pushing and pulling cyclically upon the liquid in the inner ear. The vibrating ossicles thus allow for the delivery of sound from the air-filled outer ear to the fluid-filled inner ear. Compromise of the middle ear's anatomy may lead to hearing loss.

For example, bone growth in the ligamentous attachments of the ossicles can immobilize the ossicles and lead to severe deafness in a condition termed otosclerosis. Also of significance in the middle ear, attached to the stapes, is the stapedius muscle. This muscle contracts in response to loud sounds, thereby decreasing sound transmission to the inner ear and protecting it from acoustic insults. The cyclic motion created by the stapes displaces a liquid mass in **the inner ear**, which results in a traveling oscillating wave along the basilar membrane. The basilar membrane is elastic at the apex of the cochlea, where it is most sensitive to low frequencies. On the other hand, the basilar membrane is stiff at the base of the cochlea and responds to high frequencies. Hair cells along the basilar membrane detect the frequency of the stimulus. The traveling wave pushes hair cells, depolarizing them and stimulating the afferent nerve fibers they are connected to, thereby transmitting the sound signal through the auditory (acoustic) nerve to the brain.

**Figure 2.** The structure of the human ear. The external ear, especially the prominent auricle, focuses sound into the external auditory meatus. Alternating increases and decreases in air pressure vibrate the tympanum. These vibrations are conveyed across the air-filled middle ear by three tiny, linked bones: the malleus, the incus, and the stapes. Vibration of the stapes stimulates the cochlea, the hearing organ of the inner ear. (Source [3]: Kandel et al. 2013 Principles of Neural Science. 5th ed.).

#### *2.3.2. Type of hearing loss*


32 Update On Hearing Loss


Functionally, the human ear can be divided into two major divisions: the conductive division, associated with the areas responsible for air conduction (the outer ear and the middle ear), and the sensorineural division, associated with the inner ear. Accordingly, the three main types of hearing loss are classified as conductive, sensorineural, and mixed hearing losses.


#### Conductive Hearing Loss

Conductive hearing loss results when sound is not conducted efficiently through the outer or middle ear. Causes include abnormalities of the ossicles, fluid accumulation, allergies, dysfunction of the eustachian tube that normally drains fluid from the ear to the back of the throat, and benign tumors. Figure 3 shows an audiogram with conductive hearing loss.

**Figure 3.** An audiogram with conductive hearing loss (CHL) for the left ear. The bone conduction is within the normal range (0–25 dB) and the air conduction is in the moderate to moderately severe range (moderate: 41–55 dB, moderately severe: 56–70 dB). Symbols: X, left ear air conduction; >, left ear bone conduction.

#### Sensorineural Hearing Loss

Sensorineural hearing loss (SNHL) is a hearing loss that occurs as a result of damage in the cochlea or beyond, i.e., either along the 8th cranial nerve or in the brain. SNHL can cause complete loss of hearing despite the outer and middle ear functioning normally. Noise exposure is a common cause of damage of inner ear structures. The US Occupational Safety and Health Administration requires ear protection in the work area when an average exposure of 85 dB is reached. Severe SNHL may also occur after sudden exposure to a loud noise at 120–155 dB, for example from explosions, fireworks, gunfire, and music concerts. Other causes of SNHL include malformation of the inner ear, aging, Meniere's disease, drug-induced ototoxicity, and tumors such as acoustic neuroma. SNHL often cannot be reversed. Figure 4 shows an audiogram with SNHL.

**Figure 4.** An audiogram with sensorineural hearing loss (SNHL) at high frequencies for the left ear. Both air conduction and bone conduction of high frequencies are in the mild (26–40 dB) to moderate (41–55 dB) range of hearing loss. Symbols: X, left ear air conduction; >, left ear bone conduction.

#### Mixed Hearing Loss

Mixed hearing loss is a type of hearing loss that has a combination of conductive and sensorineural damage in the same ear. Cases where both an air–bone gap greater than 10 dB and an elevated bone conduction threshold are observed suggest a mixed hearing loss. While the conductive component may be treated, the sensorineural component is more of a challenge. Figure 5 shows an audiogram with mixed hearing loss.

**Figure 5.** An audiogram with mixed hearing loss for the left ear. Both air conduction and bone conduction are in the abnormal range, with the air–bone gap generally greater than 10 dB. Symbols: X, left ear air conduction; >, left ear bone conduction.
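The audiometric rules described in this section can be condensed into a toy per-frequency classifier: a normal range of 0–25 dB HL, elevated air conduction with normal bone conduction suggesting a conductive loss, both thresholds elevated without a significant gap suggesting a sensorineural loss, and an air–bone gap greater than 10 dB with elevated bone conduction suggesting a mixed loss. The function name and the exact cutoffs below are only an illustrative sketch of that logic, not a clinical tool.

```python
NORMAL_MAX_DB = 25  # upper edge of the normal range cited in the text (0-25 dB HL)
GAP_DB = 10         # air-bone gap considered significant in the text

def classify_type(air_db, bone_db):
    """Classify hearing-loss type at a single frequency from the
    air-conduction and bone-conduction thresholds (dB HL)."""
    if air_db <= NORMAL_MAX_DB:
        return "normal"
    if bone_db <= NORMAL_MAX_DB:
        return "conductive"      # air elevated, bone conduction still normal
    if air_db - bone_db > GAP_DB:
        return "mixed"           # bone elevated AND air-bone gap > 10 dB
    return "sensorineural"       # both elevated, no significant gap
```

For example, `classify_type(50, 10)` returns `"conductive"` (the pattern of Figure 3), while `classify_type(45, 40)` returns `"sensorineural"` and `classify_type(60, 35)` returns `"mixed"`.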

#### *2.3.3. Degree of hearing loss*


Hearing loss can be classified according to the severity or degree of the disease. Hearing losses between 26 and 40 dB are considered mild, 41 and 55 dB moderate, 56 and 70 dB moderately severe, 71 and 90 dB severe, and greater than 91 dB profound (Table 1) [5, 6]. Severity of hearing loss is based on thresholds at individual frequencies. Once the type and degree of loss are established, an appropriate intervention may be assigned. This may include hearing aids, aural rehabilitation, cochlear implants, medical intervention, or surgery.

| **Degree of hearing loss** | **Hearing threshold (dB HL)** |
|---|---|
| Normal hearing | −10 to 15 |
| Slight | 16–25 |
| Mild | 26–40 |
| Moderate | 41–55 |
| Moderately severe | 56–70 |
| Severe | 71–90 |
| Profound | >91 |

**Table 1.** Degree of hearing loss based on the hearing threshold. Source [5]: Clark JG: Uses and abuses of hearing loss classification. ASHA. 1981, 23:493–500.

#### *2.3.4. Configuration of hearing loss*

Hearing losses may be categorized according to the audiometric configuration, that is, the shape or pattern of the audiogram across the frequency spectrum [7]. The configuration of an audiogram tells you which sounds are best heard. A hearing loss that is more or less the same at all frequencies is depicted as a straight horizontal line on the audiogram and is thus appropriately called a **flat configuration**. In this configuration, thresholds across frequencies do not vary more than 20 dB from each other. In other words, a person with this type of loss needs the same amount of loudness to hear a sound regardless of the pitch. A person with a **sloping configuration** has little or no hearing loss at low frequencies, severe loss at the mid-frequency range, and profound loss at the higher frequencies. Ski-slope loss is another name for this configuration because the audiogram looks much like a ski slope with the top of the hill on the left and the slope dropping down to the right. Inversely, a **rising configuration** indicates that high-frequency sounds can be heard better than low-frequency sounds. This is a rare type of audiogram; an extreme example would be a person who is unable to hear thunder or explosive noise but can hear whispers across a room. Someone suffering from a **cookie-bite or U-shaped configuration** hearing loss has one or more adjacent thresholds between 500 and 4,000 Hz of 20 dB or more and so is likely to experience difficulty in hearing mid-frequency sounds, while maintaining the ability to hear high- and low-frequency sounds. Usually genetic, this type of hearing loss may progress over time. A **noise-notched configuration** indicates a hearing loss mostly between 3 and 6 kHz, while lower and higher frequencies are not affected. This configuration is observed in hearing loss due to noise exposure, since sensory cells in the basal portion of the cochlea are more prone to noise damage in the 3–6 kHz frequency range than at lower and higher frequencies; the thresholds reach a maximum between 3 and 6 kHz and then return toward the normal level at higher frequencies. A **high-frequency configuration** shows good hearing in the low frequencies and poor hearing in the high frequencies. Figure 6 shows the different configurations of hearing loss.

**Figure 6.** Hearing loss configurations.
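The severity bands of Table 1 amount to a simple threshold lookup. The sketch below encodes those bands; the helper name is illustrative, and a threshold falling in the table's small gap between 90 and 91 dB is treated here as profound.

```python
# Severity bands from Table 1: (upper bound in dB HL, label).
DEGREES = [
    (15, "normal hearing"),
    (25, "slight"),
    (40, "mild"),
    (55, "moderate"),
    (70, "moderately severe"),
    (90, "severe"),
]

def degree_of_loss(threshold_db):
    """Return the Table 1 severity label for a hearing threshold (dB HL)."""
    for upper, label in DEGREES:
        if threshold_db <= upper:
            return label
    return "profound"  # Table 1 lists profound as > 91 dB
```

For instance, `degree_of_loss(45)` returns `"moderate"`, consistent with the moderate band (41–55 dB) used in the audiogram captions above.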

#### **Author details**

Waleed B. Alshuaib1\*, Jasem M. Al-Kandari2 and Sonia M. Hasan3

\*Address all correspondence to: waleeds@hsc.edu.kw

1 Department of Physiology, Faculty of Medicine, Kuwait University, Kuwait

2 Audiology and Speech Clinic, Ahmadi Hospital, Kuwait

3 Department of Physiology, Faculty of Medicine, Kuwait University, Kuwait

#### **References**



#### **Chapter 3**

### **Up to Date on Etiology and Epidemiology of Hearing Loss**

Larissa Vilela Pereira and Fayez Bahmad Jr.

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/61845

#### **Abstract**

Deafness is one of the most common communication disorders in humans. Approximately one out of every thousand infants is born with a significant hearing deficit. The prevalence of hearing loss increases dramatically with age. By age 65 years, one out of three of us will suffer from hearing impairment sufficient to interfere with the understanding of speech. Hearing impairment is a very heterogeneous disorder with a wide range of causes. Worldwide, estimates from the World Health Organization are that hearing loss affects 538 million people. Hearing loss may be classified into three types: sensorineural, involving the inner ear, cochlea, or the auditory nerve; conductive, when the outer or middle ear structures fail to optimally capture, collect, or transmit sound to the cochlea; and mixed loss, which is a combination of conductive and sensorineural hearing loss. In this chapter, we propose to briefly define each cause of hearing loss as follows: (1) outer ear causes (congenital, infection, trauma, tumor, dermatologic, and cerumen), (2) middle ear causes (congenital, eustachian tube dysfunction, infection, tumors, otosclerosis, tympanic membrane perforation, middle ear barotrauma, and vascular), and (3) inner ear causes (congenital or hereditary, presbycusis, infection, Ménière disease, noise exposure, inner ear barotrauma, trauma, tumors, endocrine/systemic/metabolic, autoimmune hearing loss, iatrogenic, ototoxic, and neurogenic).

**Keywords:** Etiology, hearing loss, conductive hearing loss, sensorineural hearing loss

#### **1. Introduction**

Deafness is one of the most common communication disorders in humans. Approximately one out of every thousand infants is born with a significant hearing deficit, and the prevalence of hearing loss increases dramatically with age. By age 65 years, one out of three of us will suffer from hearing impairment sufficient to interfere with the understanding of speech. Hearing impairment is a very heterogeneous disorder with a wide range of causes.[1]

© 2015 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Worldwide, estimates from the World Health Organization are that hearing loss affects 538 million people.[1]

Hearing loss may be classified into three types:

**•** Sensorineural: involving the inner ear, cochlea, or the auditory nerve

**•** Conductive: when the outer or middle ear structures fail to optimally capture, collect, or transmit sound to the cochlea

**•** Mixed loss: a combination of conductive and sensorineural hearing loss


#### **2. Outer ear causes**

The outer ear comprises the auricle and the external auditory canal (EAC), and all hearing loss related to the outer ear is by nature a conductive hearing loss.

#### **2.1. Cerumen**

Probably the most common cause of conductive hearing loss is the complete blockage of the EAC by a cerumen impaction. Some patients are not able to clear it on their own or use Q-tips that push the cerumen down the ear canal. These individuals may need periodic cleaning to enhance their auditory capabilities.

#### **2.2. Infection**

Infections may lead to blockage of the EAC due to the accumulation of debris, edema, or inflammation. Acute otitis externa usually develops as a result of local trauma coupled with contamination by bacteria (or occasionally fungi, as in otomycosis, or viruses, as in herpes zoster oticus) after swimming, showering, or exposure to hot and humid conditions. With complete obstruction, a conductive hearing loss results. Diabetes mellitus and other immunocompromised states can predispose to developing malignant otitis externa.[2]

#### **2.3. Congenital**

The auricle and the EAC are derived from different embryologic tissue, and each may develop without the full maturation of the other sites. The EAC develops from the 8th to the 28th week of gestation, and the auricle itself forms from remnants of the first and second branchial arch during the 12th and 20th weeks. Problems can occur anytime during this developmental phase, and it is possible to have a normal auricle but an atretic canal. The anatomical course of the facial nerve is frequently altered in malformations of the ear and temporal bone, but facial nerve function is rarely affected by the malformation. Conductive hearing losses that result from congenital malformations may range from mild to severe.

Malformation of the auricle is termed anotia when there is complete absence of an external ear and microtia when there is a vestige present. Anotia and microtia may cause mild to moderate conductive hearing loss.[2]

Congenital atresia of the EAC occurs in approximately 1 per 10,000 births and is usually associated with other craniofacial abnormalities such as Treacher–Collins syndrome, Pierre Robin syndrome, or Crouzon disease. The malformation occurs unilaterally 4 times more frequently than it does bilaterally (Figure 1).[3]

The severity of the atresia determines how well the child hears, and some patients with congenital atresia have associated inner-ear abnormalities, but these abnormalities typically do not cause sensorineural hearing loss. Atresia or significant stenosis of the EAC causes moderate to severe conductive hearing loss.[3]

**Figure 1.** Congenital atresia of the EAC

#### **2.4. Tumors**


Malignant cancer of the ear canal is rare, and the most common histologic type is squamous cell carcinoma. Other tumors include basal cell carcinoma, adenoid cystic carcinoma, adenocarcinoma, and melanoma. Initially, cancer of the EAC is usually misdiagnosed as otitis externa because most patients complain of otorrhea, aural fullness, pain, itching, and hearing loss. However, after multiple failed attempts at treatment with ototopical drops and antibiotics, a biopsy of the EAC should be obtained. Treatment of these malignant tumors varies with the specific neoplasm.[4]

Benign bony growths may also occlude the EAC with a resulting conductive hearing loss. The two most common benign growths are exostosis and osteoma.

Exostoses are the most common solid tumors of the EAC. They are periosteal outgrowths that develop in the bony ear canal and occur mostly in men who have had repeated exposure to cold water. The lesions are often multiple and bilateral and form in the suture lines of the EAC bones. Surgical removal is performed in cases of conductive hearing loss or recurrent otitis externa.[5]

Osteoma, in contrast, is a solitary bony growth that is most commonly attached to the tympanosquamous suture line. Similar to exostoses, osteomas are not treated until they become so large that they affect hearing by occlusion or by repeated infections because debris cannot exit the EAC.[6]

Benign polyps may occur as a result of other otologic conditions, such as chronic ear infections or cholesteatoma. Occasionally, benign polyps can grow large enough to occlude the lumen of the external auditory canal.

#### **2.5. Trauma**

Penetrating trauma to the external auditory canal or meatus due to bullet, knife, or fracture may cause mild or profound conductive hearing loss, depending on the degree of EAC occlusion. Ototopical drops prevent otitis externa, and external auditory canal stenting is required initially to ensure that the EAC does not develop significant stenosis. Surgical intervention is reserved for cases of stenosis.

#### **3. Middle ear causes**

As with the outer ear, all hearing loss associated with the middle ear is conductive hearing loss.

#### **3.1. Congenital malformation**

Malformation of the ossicular chain can cause conductive hearing loss. The most common ossicular abnormality observed is missing or malaligned crura of the stapes. However, it is usually an abnormal incus or malleoincudal joint that causes the conductive hearing loss. A computed tomography (CT) scan is virtually always needed in order to make this diagnosis, and in some cases, exploratory tympanotomy may be required.[2]

#### **3.2. Eustachian tube dysfunction**

The eustachian tube serves to provide normal middle ear pressure when opened and to protect the middle ear from reflux and nasopharyngeal bacteria when closed. Abnormal function occurs commonly in the setting of an upper respiratory infection (viral or bacterial), and it can also occur with allergies or tumors. It results in negative middle ear pressure causing reduction of tympanic membrane (TM) excursion and conductive hearing loss.

#### **3.3. Infection**


Acute otitis media (AOM) is a common childhood disorder that also frequently occurs in adults. Approximately 80% to 90% of all children will have developed at least one episode of OM by the time they enter school, and AOM accounts for more than 25% of the prescriptions for oral antibiotics annually.[7]

It is normally associated with pain, fever, and ear fullness as well as decreased hearing. Conductive hearing loss occurs because fluid filling the middle ear space prevents the TM from vibrating adequately, thereby diminishing movement of the ossicular chain.[8]

The middle ear may still be filled with serous or thick, tenacious fluid after the acute infection has been successfully treated. This fluid resolves within four to six weeks in 70% of cases. By an additional 12 weeks, 85% to 90% of all cases resolve on their own. However, in the 10% to 15% in whom the fluid does not clear, it needs to be removed and the middle ear aerated in order to promote resolution of any conductive hearing loss. The fluid is usually cleared by either myringotomy and pressure equalization tube placement, or myringotomy and aspiration.[9]

#### **3.4. Tympanic membrane perforation**

Conductive hearing loss due to TM perforation is common. Clearly, the size, location, and nature of perforation will affect the degree of hearing loss. Small perforations and those located in the anterior–inferior quadrant cause the least amount of conductive hearing loss; near total or posterior–superior quadrant perforations have a much higher chance of causing significant hearing loss.[10]

Tympanic membrane perforations can arise as a consequence of either infection or trauma. Most perforations heal spontaneously. Occasionally, surgical correction is required, and repair of the perforation often corrects the conductive hearing loss.[7]

#### **3.5. Otosclerosis**

Otosclerosis is a primary disease of the temporal bone that leads to stapes ankylosis. Hearing loss is the main symptom; complaints of continuous tinnitus and, eventually, vertigo are also observed. Otosclerosis is considered an autosomal dominant disease with incomplete penetrance, and three related genes have been identified: OTSC1, OTSC2, and OTSC3. Treatment includes surgery, medical treatment, and sound amplification therapy, alone or in combination.

#### **3.6. Cholesteatoma**

Cholesteatoma is a growth of desquamated, stratified, squamous epithelium within the middle ear space. Such collections of desquamated skin cells erode bone slowly through a combination of pressure necrosis and enzymatic activity. Infection accelerates the process of bony destruction. The formation of a cholesteatoma typically occurs after a retraction pocket has formed in the posterior/superior quadrant, often as the result of poor eustachian tube function. It may also occur after tympanic membrane trauma, such as a traumatic, inflammatory, or iatrogenic perforation, with implantation of squamous cells.[11]

Conductive hearing loss can occur as one or all ossicles are destroyed. Left untreated, cholesteatomas may erode the tegmen, the sigmoid sinus, or even the inner ear, resulting in labyrinthine fistula, which causes severe or profound sensorineural hearing loss and vertigo. They can also cause lateral sinus thrombosis, sepsis, brain abscess, facial paralysis, and even death. Treatment is surgical, usually involving a mastoidectomy.[7]

#### **3.7. Neoplasm**

Malignant tumors such as Langerhans cell histiocytosis or squamous cell carcinoma may also occur in the middle ear and can cause conductive hearing loss. However, these entities are relatively rare when compared with cholesteatoma.

#### **3.8. Middle ear barotrauma**

Barotrauma occurs when a patient is exposed to a sudden, large change in ambient pressure, often during diving or flying. Middle ear pressure becomes more positive with respect to ambient pressure during ascent until the eustachian tube is forced open. On descent, ambient pressure exceeds middle ear pressure until swallowing opens the eustachian tube.

Pressure in the middle ear normally equilibrates with ambient pressure via the eustachian tube. However, if upon descent with flying or diving this equalization is prevented by mucosal edema secondary to an upper respiratory infection, pregnancy, or anatomic variations, the negative relative pressure in the middle ear can lead to its filling with serous fluid or blood or to inward rupture of the TM. Symptoms vary from a sensation of pressure to hearing loss and pain, which may suddenly be relieved with rupture of the TM.[12]

Overpressurization of the middle ear can occur during ascent with flying or diving, but TM rupture is rare.

#### **3.9. Vascular**

Glomus tumors are the most common benign neoplasm of the middle ear. They arise from paraganglionic tissue from the promontory of the middle ear or the adventitia of the dome of the jugular bulb and may rarely show malignant potential.[8]

As tympanic tumors grow, they tend to fill the middle ear, with resultant pulsatile tinnitus with or without conductive hearing loss. They also erode bone as they enlarge, especially inferiorly, causing damage to cranial nerves. In addition, tumors may impinge upon the ossicular chain and TM, thereby decreasing the motility of either or both structures.[2]

Treatment may include surgical resection, embolization, and radiation.

#### **4. Inner ear causes**

Disorders of the inner ear normally cause a sensorineural hearing loss. The etiology may be associated with the cochlea, eighth nerve, internal auditory canal, or brain.

#### **4.1. Congenital or hereditary**

Congenital hearing loss will be defined as any hearing loss that occurs at or shortly after birth, whether due to a hereditary or a nonhereditary cause. Nonhereditary etiologies involve an insult to the developing cochlea, including viral infections such as cytomegalovirus, hepatitis, rubella, toxoplasmosis, HIV, and syphilis. Some teratogenic medications may also affect the developing ear of the fetus, including recreational drugs, alcohol, quinine, and retinoic acid.[2]

Sensorineural hearing loss can be inherited in an autosomal dominant or recessive pattern; 90% is autosomal recessive, so affected children often have normal-hearing parents. Sensorineural hearing loss may also be part of a syndrome or occur as a spontaneous mutation. The hearing deficit may be present at birth, be progressive from birth, or present when the child is older or even in early adult life. The most common testable genetic defect is an abnormality of connexin 26.[1]

Congenital malformations of the inner ear also occur; these range from complete atresia to a common cavity of the cochlea. The most common is the Mondini malformation, in which the normal two-and-one-half turns of the cochlea are replaced by one to one-and-one-half turns.

Patients who have congenital anomalies of either the inner or the middle ear may also develop perilymphatic fistulas (PLFs). PLFs alone can cause progressive or fluctuating sensorineural hearing loss.

#### **4.2. Presbycusis**

Presbycusis, or age-related hearing loss, is a common cause of hearing loss worldwide. This disorder is complex and multifactorial, characterized by symmetrical, progressive loss of hearing over many years. It usually affects the high frequencies, although its presentation and clinical course can be variable. Presbycusis has a tremendous impact on the quality of life of millions of older individuals and is increasingly prevalent as the population ages.[8]

Common complaints associated with presbycusis include the inability to hear or understand speech in a crowded or noisy environment, difficulty understanding consonants, and the inability to hear high pitched voices or noises. Tinnitus is often present.[2]

The prevalence of hearing loss increases with age, with up to 80% of functionally significant hearing loss occurring in older adults.

The World Health Organization (WHO) estimates that in 2025, there will be 1.2 billion people over 60 years of age worldwide, with more than 500 million individuals who will suffer significant impairment from presbycusis.

Hearing aids are able to benefit most patients with presbycusis, and cochlear implantation may benefit patients of any age who are not helped by hearing aids.

#### **4.3. Infection**

The most common infection of the inner ear is viral cochleitis in adults and meningitis in young children. Meningitis can access the cochlea by way of CSF-perilymph fluid connection and cause a profound sensorineural hearing loss by destroying the inner ear hair cells. Viral cochleitis usually manifests as a sudden sensorineural hearing loss and vertigo.[2]

Other causes of sudden sensorineural hearing loss include acoustic neuroma, perilymphatic fistula, Ménière disease, vascular insufficiency, multiple sclerosis, and other central etiologies. Although the primary etiology of sudden sensorineural hearing loss is almost always viral or a vascular ischemic event, patients with this presentation need to undergo audiometric evaluation as well as magnetic resonance imaging (MRI) with gadolinium.

#### **4.4. Ménière's disease**

Ménière's disease is characterized by (1) spontaneous episodes of vertigo lasting several minutes to hours, (2) low-pitched tinnitus occurring or worsening during a vertiginous attack, (3) fluctuating low-frequency sensorineural hearing loss, and (4) aural fullness in the affected ear.[8]

The onset of symptoms is typically between the third and sixth decades, with a slight female preponderance. Endolymphatic hydrops is the main histopathologic correlate. Over time and with repeated attacks, the hearing deficit can become permanent and may even eventually involve all frequencies.

#### **4.5. Noise exposure**

Everyday noise exposure, compounded over time, has an impact upon our ability to hear. Constant exposure to loud noise can cause high-frequency sensorineural hearing loss, beginning with a selective loss at 4000 Hz. With continued exposure, the notch widens and affects all high frequencies. Eventually, hearing loss can be seen in the middle and lower frequencies. A short blast of loud noise can also cause severe to profound sensorineural hearing loss, pain, or hyperacusis; this usually involves exposure to noise greater than 120 to 155 dB.[2]

The mechanism by which excessive noise induces hearing loss includes direct mechanical damage of cochlear structures and metabolic overload due to overstimulation. Some potential metabolic effects are excess nitric oxide release that can damage hair cells, generation of oxygen free radicals that become toxic to membranes, and low magnesium concentrations that weaken hair cells by reducing the concentration of intracellular calcium.

Thus, hearing protection in the form of muffs or plugs is highly recommended anytime a person is exposed to loud noise.

#### **4.6. Inner ear barotrauma**

Barotrauma occurs when a patient is exposed to a sudden, large change in ambient pressure, often during diving or flying.

Inner ear barotrauma is a fairly uncommon injury but should be excluded in all cases of middle ear barotrauma. It can occur following the development of a sudden pressure differential between the inner and middle ear, leading to rupture of the round or oval window. The main symptoms are tinnitus, vertigo, and hearing loss. The resulting labyrinthine fistula and leakage of perilymph can result in permanent inner ear damage. The primary treatment of this complication is complete bed rest with head elevation to avoid increases in cerebrospinal fluid pressure. Deteriorating inner ear function generally requires tympanotomy and patching of the round or oval window.[2]

#### **4.7. Trauma**

Blunt trauma can result in sensorineural loss due to concussive forces of the inner ear fluids, which may cause a shearing effect on the organ of Corti. Blunt trauma may also lead to longitudinal or transverse temporal bone fracture.

The longitudinal type is most common (80%). It is usually caused by a blow to the temporal parietal region. Hearing loss is typically conductive and associated with tympanic membrane (TM) perforations and blood in the middle ear space.

A transverse fracture occurs following a blow to the occipital or frontal region (Figure 2). Fractures of this type usually run through the inner ear. If hearing is preserved to some degree, the most common reason for a conductive hearing loss is an ossicular injury, typically due to separation of the incudostapedial joint and/or incus dislocation.[2]

Penetrating trauma typically causes sensorineural or mixed hearing loss. These injuries are usually due to gunshot wounds that upon impact cause significant temporal bone fractures.

#### **4.8. Tumors**

Most tumors of the inner ear are benign, although malignant tumors such as squamous cell carcinoma, sarcomas, adenoid carcinoma, and metastases rarely occur. Benign bony tumors, including fibrous dysplasia and Paget disease, are also rare.[2]

The most common tumor causing sensorineural hearing loss is the acoustic neuroma; eighty percent of tumors arising in the cerebellopontine angle are acoustic neuromas. This benign tumor usually arises from the Schwann cells of the vestibular portion of the eighth cranial nerve. The most common complaint is an asymmetric, progressive sensorineural hearing loss, which typically begins in the high frequencies and progresses to involve lower frequencies. Other symptoms include unilateral tinnitus, disequilibrium, dizziness, or headaches.[8]

**Figure 2.** Transverse temporal bone fracture

#### **4.9. Endocrine disorders**

Various metabolic abnormalities have been known to either cause or be associated with sensorineural hearing loss. Thus, an evaluation of an unexplained sensorineural hearing loss should involve a complete laboratory evaluation to include the following: complete blood count with differential, blood sugar, thyroid function tests, and serologic test for syphilis.[2]

Diabetes has been associated with an approximately twofold increase in the prevalence of low- and mid-frequency hearing impairment in adults; this might relate to the impact of diabetes on the vascular or neural components of the inner ear.[8]

Anemia or a white blood cell dyscrasia may lead to sensorineural hearing loss by an unknown mechanism that may involve decreased oxygenation, microblockage of vessels, or infection.

#### **4.10. Autoimmune hearing loss**

Autoimmune inner ear disease may be limited to the ear, or it may be part of an overall systemic problem. Approximately one-third of patients will have evidence of a systemic autoimmune disorder such as Wegener granulomatosis, Cogan syndrome, rheumatoid arthritis, systemic lupus erythematosus, or polyarteritis nodosa.[2]

Autoimmune hearing loss is usually sensorineural, bilateral, and asymmetric, and is either fluctuating or progressive in nature.

The treatment of choice for patients with autoimmune inner ear disease is high-dose glucocorticoids for up to 4 weeks. This often results in significant recovery of hearing.[2]

Cytotoxic medications such as cyclophosphamide, methotrexate, or azathioprine may be used if corticosteroids fail.

#### **4.11. Ototoxicity**

A great number of medications are known to cause damage to the ear. Anti-inflammatory drugs, antibiotics, loop diuretics, antimalarials, chemotherapeutic agents, and ototopical medications may all cause ototoxicity (Table 1).[8]

The hearing loss caused by antibiotic or chemotherapeutic agents usually begins at high frequencies, and with continued medication use, the hearing loss becomes more pronounced and may even continue to worsen for a time after the drug is discontinued.

Several antibiotics cause ototoxicity. All oral aminoglycosides are ototoxic, and this effect is due to hair cell death from apoptosis. Different types of aminoglycosides show different patterns of ototoxicity. Streptomycin and gentamicin are primarily vestibulotoxic. Neomycin, amikacin, and tobramycin are primarily cochleotoxic.

Ototopical aminoglycoside drops have the potential to cause ototoxicity. However, it is believed that these medications do not have their normal ototoxic effect because the inflamed mucosa within the ear prevents significant drug penetration into the oval and round windows. Other oral antibiotics that can cause ototoxicity include erythromycin and tetracycline.


**Table 1.** Medications associated with ototoxicity

Many chemotherapeutic agents are known to cause hearing loss. These include cisplatin, 5-fluorouracil (5-FU), bleomycin, and nitrogen mustard. The worst ototoxicity occurs with cisplatin, which damages the outer hair cells of the basal turn of the cochlea, causing a bilateral, symmetric, high-frequency hearing loss.

High-dose aspirin (6–8 g/day) or other salicylates can cause a flat, mild-to-moderate sensorineural hearing loss, but this is reversible with discontinuation of the drug.

Antimalarial medications such as quinine and chloroquine may also cause sensorineural hearing loss and tinnitus, but similar to salicylates, these effects are usually reversible. This is also true for high-dose nonsteroidal anti-inflammatory agents. Loop diuretics are an additional cause of temporary hearing loss and tinnitus.[2]

Heavy metals, including lead, mercury, cadmium, and arsenic, can all lead to hearing loss.

#### **4.12. Neurogenic**

Several neurologic disorders may cause sensorineural hearing loss: cerebrovascular accident or transient ischemic attack, Arnold–Chiari malformations (which may stretch the auditory vestibular nerve, thereby causing hearing loss and/or vestibular complaints), and multiple sclerosis (which can initially present as a sudden sensorineural hearing loss and/or vertigo).[2]

### **Author details**

Larissa Vilela Pereira1\* and Fayez Bahmad Jr.2,3

\*Address all correspondence to: larissavilela24@yahoo.com.br

1 Otolaryngology Department, University of Sao Paulo School of Medicine, Sao Paulo, Brazil

2 Brasiliense Institute of Otolaryngology, Brasilia, Brazil

3 Health Science Faculty, University of Brasilia, Brasilia, Brazil

#### **References**


[1] Morton NE. Genetic epidemiology of hearing impairment. Ann N Y Acad Sci 1991;630:16–31.

[2] Cummings CW, Fredrickson JM, Harker LA, Krause CJ, Schuller DE. Otolaryngology Head and Neck Surgery. 2nd ed. 1993, Vol 4.

[3] Declau F, Cremers C, Van de Heyning P. Diagnosis and management strategies in congenital atresia of the external auditory canal. Study Group on Otological Malformations and Hearing Impairment. Br J Audiol 1999;33:313.


### **Advances in Genetic Diagnosis and Treatment of Hearing Loss — A Thirst for Revolution**

Sidheshwar Pandey and Mukeshwar Pandey

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/61218

#### **Abstract**

Despite significant advances in understanding the molecular basis of hearing loss, precise identification of the genetic cause still presents some difficulties, owing to phenotypical variation. Gene discovery efforts for hearing disorders are complicated by extreme heterogeneity. Mutations in some of these genes, such as *GJB2, MYO7A, CDH23, OTOF, SLC26A4,* and *TMC1*, are quite common and responsible for hearing loss. Clinical exome sequencing is a highly complex molecular test that analyzes the exons, or coding regions, of thousands of genes simultaneously, using next-generation sequencing techniques. The development of a biological method for the repair, regeneration, and replacement of hair cells of the damaged cochlea has the potential to restore normal hearing. At present, gene therapy and stem cells are two promising therapeutic applications for hearing disorders. Gene therapy and stem cell treatment still have a long way to go before they will be available for use in humans. Therefore, existing measures must focus on prevention to decrease the frequency of genetic hearing loss. Over time, genetic diagnostic tests will become available most rapidly, followed by targeted gene therapy or various permutations of progenitor cell transplantation, and eventually, preventive interventions for a wider range of hearing-impaired patients.

**Keywords:** Genetic hearing loss, next-generation sequencing, genetic evaluation, gene therapy, stem cell therapy

#### **1. Introduction**

Genetic hearing loss has diverse etiologies, and approximately 1% of all human genes are involved in the hearing process [1]. It is estimated that at least two-thirds of cases of childhood-onset hearing loss have a genetic cause [2], with the remaining one-third caused by environmental factors, e.g., cytomegalovirus infection, meningitis, acquired conductive loss, and the impact of extracorporeal membrane oxygenation [3]. Many cases of later-onset progressive hearing loss are genetic in origin, and genes also play an important role in progressive hearing loss associated with ageing. This chapter will deal with the ability to identify genetic problems or suspected genetic causes in hearing disorders, the different types of gene mutation that cause deafness, genetic evaluation of hearing loss, special aspects of genetic tests such as next-generation sequencing, limitations of genetic testing, the development and evaluation of genetic treatment, management, prevention, and genetic counseling, benefits of genetic research in deafness, and research needs/anticipated changes. It will conclude with the direction of possible future technological development in these areas.

© 2015 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The majority of hearing loss is caused by mutations in the DNA (deoxyribonucleic acid) sequence of genes. There are four "bases" in a strand of DNA: adenine (A), guanine (G), thymine (T), and cytosine (C). The DNA also contains a monosaccharide sugar called deoxyribose and a phosphate group. According to the base-pairing rules (A with T, and C with G), hydrogen bonds bind the bases of the two separate strands to make double-stranded DNA. Humans have approximately 30,000 genes. The DNA sequence of these genes provides the code for producing proteins (which consist of amino acids). Each gene is located within a designated region on a chromosome and is composed of different base pairs. The specific location on the chromosome where the gene is found is known as its locus. For example, autosomal recessive deafness 1A (DFNB1A) is caused by mutation in the *GJB2* gene at chromosomal locus 13q11-q12. This means it is on the long arm of chromosome 13 (q indicates the long arm), somewhere in the range from region 1, band 1 to region 1, band 2 (the digits after the arm give the region and band: 12 means region 1, band 2). At times, changes (INDELs) occur in the DNA sequence of genes. For example, if a short segment of a gene is AGACATCATCTA, the A at position 8 may be replaced by a G (AGACATCGTCTA), or the C at position 4 may then be deleted (AGAATCGTCTA). Such mutations in the DNA sequence affect gene function: a non-functioning protein may be produced, or the protein may be missing altogether. If these mutations occur in a gene carrying important information for normal hearing, the result may be hearing loss or, in extreme cases, deafness.
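The substitution and deletion described above can be sketched in a few lines of Python. This is purely illustrative; the sequence and the 1-based positions are the hypothetical example from the text, not a real *GJB2* fragment.

```python
# Illustrative sketch of a point substitution and a single-base deletion,
# using the text's convention of 1-based positions.

def substitute(seq: str, pos: int, base: str) -> str:
    """Replace the base at 1-based position `pos` with `base`."""
    return seq[:pos - 1] + base + seq[pos:]

def delete(seq: str, pos: int) -> str:
    """Remove the base at 1-based position `pos`."""
    return seq[:pos - 1] + seq[pos:]

gene = "AGACATCATCTA"               # hypothetical short gene segment
mutated = substitute(gene, 8, "G")  # A -> G at position 8
print(mutated)                      # AGACATCGTCTA
shortened = delete(mutated, 4)      # C at position 4 deleted
print(shortened)                    # AGAATCGTCTA
```

Because a deletion shifts every downstream base by one position, it also shifts the protein-coding reading frame; this is why the frameshift mutations mentioned later are typically so disruptive.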

#### **2. Genetic causes of hearing loss**

There are two types of genetic hearing loss. About 30% of people with a genetic hearing loss are classified as syndromic, which involves other presenting abnormalities along with the hearing impairment. Non-syndromic hearing loss occurs when an individual has no problems other than hearing loss. From a genetic standpoint, this accounts for the other 70% of people, the vast majority of genetic hearing loss.

The genetic basis is highly complex. There are many different ways that the DNA sequence of a gene can be changed, resulting in different types of mutation. The types of gene mutation include missense, nonsense, substitution, insertion, deletion, duplication, frameshift, repeat expansion, splice site, and translocation. The chance of developing deafness caused by a mutated gene depends on whether the mutation is dominant or recessive. Allelic mutations in some genes can produce either dominant or recessive hearing loss; syndromic and non-syndromic hearing loss can be caused by mutations in the same gene; and recessive hearing loss may be caused by a combination of two mutations in different genes from the same functional group [4].

#### **2.1. Non-syndromic hearing loss**


The genetic basis is highly complex. There are many different ways that the DNA sequence of a gene can be changed, resulting in different types of mutation. The types of gene mutations include missence, nonsense, substitution, insertion, deletion, duplication, frameshifts, repeat expansion, splice site, and translocation. The chances of developing deafness caused by a

future technological development in these aspects.

cases, deafness.

54 Update On Hearing Loss

hearing loss.

**2. Genetic causes of hearing loss**

The different gene loci for non-syndromic deafness are designated DFN (for DeaFNess). Based on the mode of inheritance, loci are named with A, B, X, and Y for autosomal dominant (DFNA), autosomal recessive (DFNB), X-linked (DFNX), and Y-linked (DFNY) forms, respectively. The order in which the loci have been described is denoted by a number after these letters; for example, DFNB1 is the first identified locus causing autosomal recessive HL [5, 6]. Earlier research estimated that two-thirds of prelingual-onset sensorineural hearing loss (SNHL) in developed countries has a genetic etiology, of which 70% is non-syndromic hearing loss (NSHL). Of NSHL, 80% is autosomal recessive non-syndromic hearing loss (ARNSHL), 20% is autosomal dominant (AD), and the remainder is composed of X-linked and mitochondrial forms [7-9]. NSHL demonstrates extreme genetic heterogeneity, with more than 55 autosomal dominant (deafness, neurosensory, DFNA), 80 autosomal recessive (deafness, neurosensory, DFNB), and 5 X-linked (deafness, neurosensory, DFNX) loci, with 30, 55, and 4 causative genes, respectively, identified to date [10]. A fraction of these genes have been associated with both dominant and recessive HL. Furthermore, mitochondrial mutations can also underlie NSHL.
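As an aside, the DFN naming convention just described is regular enough to be decoded mechanically. The sketch below is illustrative only (the function name and output format are our own, and it covers just the DFNA/DFNB/DFNX/DFNY prefixes mentioned here); it maps a locus name such as DFNB1A to its mode of inheritance, its order of description, and any subtype letter.

```python
import re

# Inheritance modes encoded in DFN locus names, as described in the text.
MODES = {
    "A": "autosomal dominant",
    "B": "autosomal recessive",
    "X": "X-linked",
    "Y": "Y-linked",
}

def parse_dfn_locus(name):
    """Split a deafness locus name like 'DFNB1A' into its components."""
    m = re.fullmatch(r"DFN([ABXY])(\d+)([A-Z]?)", name)
    if m is None:
        raise ValueError(f"not a DFN locus name: {name}")
    mode, order, subtype = m.groups()
    return {
        "inheritance": MODES[mode],
        "order_described": int(order),  # e.g. DFNB1 was the first ARNSHL locus found
        "subtype": subtype or None,     # e.g. the trailing 'A' in DFNB1A
    }

result = parse_dfn_locus("DFNB1A")
```

Thus DFNB1A decodes to autosomal recessive inheritance, locus number 1, subtype A, consistent with the *GJB2* example above.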

#### *2.1.1. Autosomal dominant non-syndromic hearing loss*

Autosomal dominant non-syndromic hearing loss (ADNSHL) is characterized by heterogeneity of genetic and clinical features. ADNSHL is passed directly through generations, and it is often possible to identify an autosomal dominant pattern through simple inspection of the family tree. Autosomal dominant traits usually affect males and females equally. ADNSHL associated with *GJB2* mutations is early-onset, moderate to severe, and (in contrast to autosomal recessive *GJB2*-related deafness) typically progressive. Dominant *GJB2* mutations, however, often have pleiotropic effects. No single gene is frequently mutated in ADNSHL, but *WFS1*, *KCNQ4*, *COCH*, and *GJB2* mutations are somewhat more frequent than those in the other reported genes [2, 11, 12]. Clinical manifestations and loci of known genes causing autosomal dominant non-syndromic hearing loss are summarized in Table 1 [13, 14].



| Locus Name | Gene | Onset/Decade | Audioprofile |
|---|---|---|---|
| **DFNA4** | *MYH14* | Postlingual | Flat/gently downsloping |
| **DFNA5** | *DFNA5* | Postlingual/1st | High frequency progressive |
| **DFNA6/14/38** | *WFS1* | Prelingual | Low frequency progressive |
| **DFNA8/12** | *TECTA* | Prelingual | Mid-frequency loss |
| **DFNA9** | *COCH* | Postlingual/2nd | High frequency progressive |
| **DFNA10** | *EYA4* | Postlingual/3rd, 4th | Flat/gently downsloping |
| **DFNA11** | *MYO7A* | Postlingual/1st | Flat/gently downsloping |
| **DFNA13** | *COL11A2* | Postlingual/2nd | Mid-frequency loss |
| **DFNA15** | *POU4F3* | Postlingual | High frequency progressive |
| **DFNA17** | *MYH9* | Postlingual | High frequency progressive |
| **DFNA20/26** | *ACTG1* | Postlingual | High frequency progressive |
| **DFNA22** | *MYO6* | Postlingual | High frequency progressive |
| **DFNA23** | *SIX1* | Prelingual | Downsloping |
| **DFNA25** | *SLC17A8* | Postlingual/2nd-6th decades | High frequency progressive |
| **DFNA28** | *GRHL2* | Postlingual | Flat/gently downsloping |
| **DFNA36** | *TMC1* | Postlingual | Flat/gently downsloping |
| **DFNA39** | *DSPP* | Postlingual | High frequency progressive |
| **DFNA41** | *P2RX2* | Postlingual | Flat progressive |
| **DFNA44** | *CCDC50* | Postlingual | Low to mid frequencies progressive |
| **DFNA48** | *MYO1A* | Postlingual | Progressive |
| **DFNA50** | *MIR96* | Postlingual/2nd | Flat progressive |
| **DFNA51** | *TJP2 & FAM189A2* | Postlingual/4th | High frequency progressive |

**Table 1.** Clinical manifestations and loci of known genes causing ADNSHL. Adapted from [10, 14].

#### *2.1.2. Autosomal recessive non-syndromic hearing loss*

Autosomal recessive non-syndromic hearing loss at the *DFNB1* locus on chromosome 13q11-12 is characterized as congenital, typically non-progressive, mild to profound hearing impairment. The locus contains two genes, *GJB2* and *GJB6*, which are the most commonly mutated genes. *GJB2* is a small gene with a single coding exon. It encodes connexin 26, a gap junction protein of the beta group with a molecular weight of 26 kd. The most common mutation is a deletion of a single guanine from a string of six *(35delG)*. This mutation accounts for more than two-thirds of identified mutations and results in a frameshift with premature termination of the protein. Profound HL caused by *GJB2* gene mutations is found in 50% of cases; 30% are severe, 20% moderate, and 1-2% mild [15]. *GJB2* mutation prevalence data suggest that the overall prevalence of *GJB2* mutations is similar around the world, although specific mutations differ [16].

Mutations in the *GJB6* gene are the second most common genetic defect in hereditary hearing loss and lead to similarly abnormal expression of the connexin protein Cx30, although *GJB6* mutations are much less common than mutations in *GJB2*. In 1999, research revealed a role for *GJB6*, which lies adjacent to *GJB2* on chromosome 13, when a dominant mutation (T5M) was described [17]. The most common mutation in *GJB6*, however, is a >300-kb deletion that causes non-syndromic SNHL when homozygous, or when present on the opposite allele of a *GJB2* mutation [18]. *GJB6* is very similar to *GJB2*, lies only ∼35 kb away, and is not interrupted by introns [4, 17]. Both genes are expressed in the cochlea, where they can unite to form multi-unit hemichannels in the cell membrane and function as an integral component of potassium regulation in the inner ear. Clinical manifestations and loci of known genes implicated in autosomal recessive non-syndromic hearing loss are summarized in Table 2 [10, 14].



| Locus Name | Gene | Onset | Type |
|---|---|---|---|
| **DFNB39** | *HGF* | Prelingual | Severe to profound; downsloping |
| **DFNB49** | *MARVELD2* | Prelingual | Moderate to profound; stable |
| **DFNB53** | *COL11A2* | Prelingual | Severe to profound; stable |
| **DFNB59** | *DFNB59* | Prelingual | Severe to profound; stable |
| **DFNB61** | *SLC26A5* | Prelingual | Severe to profound; stable |
| **DFNB63** | *LRTOMT* | Prelingual | Severe to profound; stable |
| **DFNB67** | *LHFPL5* | Prelingual | Severe to profound; stable |
| **DFNB73** | *BSND* | Prelingual | Severe to profound; stable |
| **DFNB76** | *SYNE4* | Prelingual | High frequency; progressive |
| **DFNB77** | *LOXHD1* | Postlingual | Moderate to profound; progressive |
| **DFNB79** | *TPRN* | Prelingual | Severe to profound; stable |
| **DFNB84** | *PTPRQ* | Prelingual | Moderate to profound; progressive |

**Table 2.** Clinical manifestations and loci of known genes causing ARNSHL. Adapted from [10, 14].

#### *2.1.3. X-linked non-syndromic hearing loss*

X-linked non-syndromic hearing loss is much rarer, accounting for 1-3% of hereditary deafness [19]. So far, only four genes have been associated with X-linked non-syndromic hearing loss: *PRPS1* on Xq22, which encodes phosphoribosyl pyrophosphate (PRPP) synthetase 1; *POU3F4* on Xq21, encoding a member of a transcription factor family that contains a POU domain; *SMPX* on Xp22, which encodes the small muscle protein; and *COL4A6* on Xq22, encoding the alpha-6 chain of type IV collagen, which has an important role in cochlear development. The *COL4A6* gene is located only ~500 kb away from the DFNX1/*PRPS1* locus [20]. Clinical manifestations and loci of known genes causing X-linked non-syndromic hearing impairment are summarized in Table 3 [10, 14].

| Locus Name | Gene | Onset | Type and Degree | Frequencies |
|---|---|---|---|---|
| **DFNX1 (DFN2)** | *PRPS1* | Postlingual | Progressive sensorineural; severe to profound | All |
| **DFNX2 (DFN3)** | *POU3F4* | Prelingual | Progressive, mixed; variable, but progresses to profound | All |
| **DFNX4 (DFN6)** | *SMPX* | Postlingual | Progressive sensorineural; mild to profound | All |
| **DFNX6** | *COL4A6* | Postlingual | Progressive sensorineural, mixed; mild to severe | All |

**Table 3.** Clinical manifestations and loci of known genes causing X-linked non-syndromic hearing loss. Modified from [10, 14].

#### *2.1.4. Non-syndromic mitochondrial hearing loss*

Mitochondrial DNA (mtDNA) mutations account for at least 5% of cases of postlingual, non-syndromic hearing impairment [21]. MtDNA mutations are classified as either large-scale rearrangements (partial deletions or duplications), which are usually sporadic, or point mutations, which are usually maternally inherited and concern genes responsible for protein synthesis (rRNAs or tRNAs) or genes encoding subunits of the electron transport chain (ETC) [22, 23]. Tang et al. reported that mitochondrial mutations by themselves are not sufficient to produce a deafness phenotype. Modifier factors such as nuclear and mitochondrial genes, or environmental factors such as exposure to aminoglycosides, appear to modulate the phenotypic manifestations [24].

The mutation most commonly associated with maternal inheritance is A1555G in the *12S rRNA* gene, *MTRNR1* [25, 26]. Non-syndromic mitochondrial hearing loss is characterized by moderate-to-profound hearing loss and a pathogenic variant in either *MTRNR1* or *MTTS1*. Pathogenic variants in *MTRNR1* can be associated with predisposition to aminoglycoside ototoxicity and/or late-onset sensorineural hearing loss [27].

The use of streptomycin, and to a lesser extent other aminoglycoside antibiotics, can cause hearing loss in genetically susceptible individuals. These drugs are known to exert their antibacterial effects at the level of the decoding site of the small ribosomal subunit, causing miscoding or premature termination of protein synthesis [28-30]. The hearing loss is primarily high frequency and may be unilateral. Mitochondrial non-syndromic sensorineural hearing loss (SNHL) is also associated with the A7445G, 7472insC, T7510C, and T7511C mutations in the tRNA-Ser (UCN) gene, *MTTS1* [30, 31]. Mitochondrially inherited non-syndromic hearing loss can be caused by mutation in any one of several mitochondrial genes, including *MTRNR1, MTTS1, MTCO1, MTTH, MTND1,* and *MTTI* (Table 4).


**Table 4.** Identified mitochondrial DNA mutations in hearing loss. Modified from [10].
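The shorthand used above for mtDNA point mutations (e.g., A1555G: A replaced by G at position 1555) and insertions (e.g., 7472insC: C inserted at position 7472) can be decoded mechanically. The sketch below is illustrative only and covers just these two forms; formal variant nomenclature (HGVS) is considerably richer.

```python
import re

# Patterns for the two mutation-name forms used in the text:
# substitutions such as A1555G, and insertions such as 7472insC.
SUB = re.compile(r"([ACGT])(\d+)([ACGT])$")
INS = re.compile(r"(\d+)ins([ACGT]+)$")

def parse_mt_mutation(name):
    """Decode a shorthand mtDNA mutation name into its parts."""
    m = SUB.match(name)
    if m:
        ref, pos, alt = m.groups()
        return {"kind": "substitution", "position": int(pos), "ref": ref, "alt": alt}
    m = INS.match(name)
    if m:
        pos, bases = m.groups()
        return {"kind": "insertion", "position": int(pos), "inserted": bases}
    raise ValueError(f"unrecognized mutation name: {name}")

decoded = parse_mt_mutation("A1555G")
```

All of the *MTTS1* mutations listed above (A7445G, 7472insC, T7510C, T7511C) parse with the same two patterns.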

#### **2.2. Syndromic hearing loss**

Syndromic forms of hearing loss are less common than non-syndromic forms. To date, more than 400 genes responsible for syndromic hearing loss have been identified [32]. These include syndromes transmitted in a Mendelian or monogenic fashion, syndromes due to chromosomal anomalies, syndromes due to multi-factorial influences, and syndromes due to a combination of these. Syndromic hearing loss can be conductive, sensorineural, or mixed.

Many of the syndromes associated with SNHL do not usually demonstrate gross inner ear anomalies. However, inner ear malformations are common in numerous syndromes, and in some cases the existence of specific inner ear anomalies may be characteristic of certain syndromes, such as BOR syndrome, Waardenburg syndrome, X-linked deafness with stapes gusher, or CHARGE syndrome. SNHL presenting later in life is often related to inner ear infections or inflammatory conditions, trauma, or tumor [33].

Mutations in the same gene may cause syndromic hearing loss in one individual and non-syndromic hearing loss in another. The most common autosomal dominant form is Waardenburg syndrome; the most common autosomal recessive forms are Pendred syndrome and Usher syndrome. Syndromic hearing loss may be transmitted as an autosomal recessive, autosomal dominant, X-linked, or matrilineal trait. Some of the genetic forms of syndromic hearing loss and their main clinical features are given in Table 5 [34].



| Syndrome | Main Clinical Features | Genetics | Hearing loss |
|---|---|---|---|
| **Waardenburg syndrome** (AD)\* | Type 1: dystopia canthorum, iris heterochromia, brilliant blue eyes, broad nasal root, premature graying of hair, white forelock. Type 2: similar phenotype except dystopia canthorum. Type 3 (Klein-Waardenburg syndrome): upper extremity abnormalities plus Type 1 clinical features. Type 4 (Waardenburg-Shah syndrome): pigmentation abnormalities and Hirschsprung's disease plus Type 2 clinical features | *PAX3* (Types 1 and 3); *MITF*, *SNAI2* (Type 2); *EDN3*, *SOX10*, *EDNRB* (Type 4) | Sensorineural hearing loss and vestibular dysfunction |
| **CHARGE syndrome** (AD) | Coloboma; choanal atresia; characteristic ears; cranial nerve anomalies; cardiovascular malformations | Mutations in *CHD7* are detected in more than 75% of CHARGE cases | Severe-to-profound asymmetrical mixed losses |
| **Crouzon syndrome** (AD) | Synostosis; high forehead, flat occiput, hypertelorism; shallow orbits, which cause proptosis; external strabismus; hypoplastic upper jaw; prognathism | *FGFR2* | Conductive hearing loss |
| **Saethre-Chotzen syndrome** (AD) | Brachycephaly; low frontal hair line; flattened nasofrontal angle; ptosis; facial asymmetry; syndactyly | *TWIST1* | Conductive or mixed |
| **Pfeiffer syndrome** (AD) | Broad face with midface hypoplasia; widely spaced eyes; proptosis; small upper jaw; skull malformation; broad or duplicated thumb or hallux; syndactyly | *FGFR1* & *FGFR2* | Conductive |
| **Townes-Brocks syndrome** (AD) | Anus imperforatus; rectovaginal or rectoperineal fistula; external ear anomalies; thumb malformations | Mutations in *SALL1* | Sensorineural or conductive hearing loss |
| **Miller syndrome** (AR\*\* or AD) | Malar hypoplasia; micrognathia; down-slanting eyes; limb abnormalities | *DHODH* | Conductive hearing loss |
| **Mohr-Tranebjaerg syndrome** (X-linked) | Visual disability; dystonia; fractures; intellectual disability | *TIMM8A* | Progressive hearing loss |
| **Norrie syndrome** (X-linked) | Eye disorder; developmental delays in motor skills; mild to moderate intellectual disability; hearing loss usually begins in the adolescent years | *NDP* | Sensorineural progressive HL |

\*AD: autosomal dominant inheritance; \*\*AR: autosomal recessive inheritance

**Table 5.** Syndromic hearing loss and their clinical features. Modified from [34].

#### **3. Genetic evaluation of hearing loss**

Despite the significant advances in understanding the molecular basis of hearing loss, precise identification of the genetic cause still presents some difficulties due to phenotypical variation, and gene discovery efforts for hearing disorders are complicated by the extreme heterogeneity. For example, Usher syndrome or Jervell and Lange-Nielsen syndrome can be mistaken for non-syndromic hearing loss, and Usher syndrome alone can be caused by mutations in several different genes. We must therefore have a clear understanding of the different types of diagnostic tests available to patients, including karyotyping, RFLP, FISH, microarray, clinical exome sequencing, preimplantation genetic diagnosis, and newborn genetic screening. Establishing a genetic diagnosis of hearing loss is a critical component of the clinical evaluation of hearing impaired persons and their families. If a genetic cause of hearing loss is determined, it is possible to provide families with prognostic information, recurrence risks, and improved habilitation options [9].

Identifying the gene or genetic cause underlying hearing loss is a stepwise process. First we have to rule out non-genetic causes, then syndromic causes, and then look for non-syndromic causes. Mutations in some genes, such as *GJB2, MYO7A, CDH23, OTOF, SLC26A4,* and *TMC1*, are quite commonly responsible for hearing loss. *GJB2* mutations are the most frequent cause of autosomal recessive non-syndromic hearing loss (ARNSHL) and account for about 20% of cases; therefore, routine screening begins with *GJB2* analysis [35]. Newborns diagnosed with severe-to-profound HL (in the absence of other abnormal findings on physical examination) are analyzed for mutations in the *GJB2* gene. For abnormalities such as an enlarged vestibular aqueduct indicated by imaging of the inner ear, the *SLC26A4* gene is analyzed. It is exceptional to find any gene other than *GJB2* and *SLC26A4* that is routinely analyzed in DNA diagnostics. In such cases, a positive result is obtained in less than 20% of deaf children for whom DNA diagnostics is requested [2, 36]. The key challenge lies in determining which gene is responsible in a patient with hearing loss. Sequencing all genes by traditional DNA sequencing technology is labor-intensive and not cost-effective [35, 36]. In such cases, next-generation sequencing offers rapid sequencing of the entire human genome compared to traditional molecular testing that focuses on a single gene at a time. However, Sanger sequencing is still recommended for first-line diagnostics; it is currently the standard for molecular diagnosis of unknown point mutations in known genes. Screening can be cost-effective in individuals with genetically heterogeneous hearing loss phenotypes when a single gene is responsible for a significant percentage of cases.

#### **3.1. Next-generation sequencing**

With the fast development and wide application of next-generation sequencing (NGS) technologies, genomic sequence information is within reach to help decode life's mysteries and improve quality of life. Today, NGS-based tests are rapidly replacing traditional tests, including many single-gene sequencing tests for hearing loss. These tests use disease-targeted exon capture, whole-exome sequencing (WES), or whole-genome sequencing (WGS) strategies. Their main advantage is that they address the problem of genetic heterogeneity, where many different genes result in phenotypes that cannot be easily distinguished clinically [36-39]. NGS also offers sequencing of very large genes, or of genes showing substantial locus heterogeneity, which may be difficult to analyze by comprehensive Sanger sequencing.

NGS systems are typically represented by SOLiD/Ion Torrent PGM from Life Technologies, Genome Analyzer/HiSeq/MiSeq/NextSeq from Illumina, and GS FLX Titanium/GS Junior from Roche. Today, Illumina dominates the genome sequencing market, where instrument versatility, high throughput and accuracy, turnaround speed, faster and simpler sample preparation, and supportive data analysis software make it a driving force and the clear winner as of now. Its technology creates new applications and also serves many existing genetic research and clinical diagnostic markets.

Targeted genomic capture and massively parallel sequencing technologies are revolutionizing genomics by making it possible to sequence complete genomes of model organisms. However, the cost and capacity required are still high, especially considering that the functional signif‐ icance of intronic and intergenic noncoding DNA sequences is still largely unknown. One application that these technologies are well suited for is the re-sequencing of many selected parts of a genome, such as all exons, from a large set of genes. This requires that the targeted parts of the genome are highly enriched in the sample. Recent technological changes, such as genome capture, genome enrichment, and genome partitioning, have successfully been used to enrich large parts of the genome [40-42]. The targeted fragments can subsequently be captured using solid- or liquid-phase hybridization assays [43, 44].

Clinical exome sequencing, or whole-exome sequencing, is a highly complex molecular test that analyzes the exons or coding regions of thousands of genes simultaneously from a small sample of blood, using NGS techniques. Exome sequencing is especially valuable when a patient's symptoms suggest numerous possible genetic causes. The whole-exome sequencing test sequences base by base with the depth of coverage required to achieve an accurate consensus sequence, rather than limiting the testing to a single gene or panel of genes and incurring diagnostic delays and escalating costs. It is possible to identify point mutations, insertions, deletions, inversions, and rearrangements in the exome.

Among NGS applications, whole-exome sequencing is a cost-effective alternative to whole-genome sequencing. The total size of the human exome is ~30 Mb, comprising ~180,000 exons arranged in about 22,000 genes; it constitutes about 2-3% of the entire human genome but contains ~85% of known disease-causing variants. The exome refers to the portion of the human genome that contains the functionally important DNA sequences directing the body to make the proteins essential for it to function properly. Research has revealed that most of the errors in DNA sequences that lead to genetic disorders are located in exons. Consequently, sequencing the human exome is considered an efficient method to discover the genetic causes of hearing disorders. Currently, sequencing whole genomes is still a substantial undertaking and not a routine procedure that can be applied to hundreds of samples. At present, exome sequencing represents an alternative in which approximately 30-70 Mb of sequence encompassing exons and splice sites is targeted, enriched, and sequenced using commercially available sequence capture methods. Several human exome sequence capture kits are now commercially available. These include the Agilent SureSelect Human All Exon Kit, the Illumina Nextera Rapid Capture Exome and Nextera Rapid Capture Expanded Exome kits, the TargetSeq In-solution Target Enrichment Kit from Life Tech/Applied Biosystems, and SeqCap EZ Exome from Roche NimbleGen. Clinical exome sequencing should be considered in the diagnostic assessment of an individual when a genetic disorder is suspected clinically but only limited genetic testing is available, the patient's features are unclear or atypical, multiple genetic conditions are part of the differential diagnosis, or a novel or candidate gene is suspected but has yet to be discovered.

Hundreds of syndromic forms of deafness have been described, and for many of them the underlying genes still await discovery. Since the introduction of the first NGS technology in 2004, more than 1,000 NGS-related manuscripts have been published. Until now, approximately a dozen genes for HL have been successfully identified using NGS [45-47] (Table 6).

| Hearing status | Locus/disorder | Gene | Strategy | References |
|---|---|---|---|---|
| Nonsyndromic, recessive | DFNB79 | *TPRN* | Targeted enrichment of genomic locus | [48] |
| | DFNB82 | *GPSM2* | Whole exome | [49] |
| | DFNB84 | *OTOGL* | Whole exome | [50] |
| Nonsyndromic, dominant | DFNA4 | *CEACAM16* | Whole exome | [51] |
| | DFNA41 | *P2RX2* | Targeted enrichment of genomic locus | [52] |
| Nonsyndromic, X-linked | DFNX4 | *SMPX* | Targeted enrichment of genomic locus | [53] |
| Syndromic | Perrault syndrome | *HSD17B4* | Whole exome | [54] |
| | Perrault syndrome | *HARS2* | Targeted enrichment of genomic locus | [55] |
| | Carnevale, Malpuech, Michels, and oculoskeletal-abdominal syndromes | *MASP1* | Whole exome | [56] |
| | Hereditary sensory and autonomic neuropathy type 1 (HSAN1) with dementia and hearing loss | *DNMT1* | Whole exome | [57] |

**Table 6.** Deafness genes identified using genomic capture and massively parallel sequencing.

#### *3.1.1. Sequencing panels*

Consider a case where there is interest in a large but limited subset of particular genes: not the whole genome, or even the whole exome, but more than just one or two genes. This situation frequently arises in oncology, where characterizing a set of oncogenes across a set of pathways can help stratify cases and select the best therapeutic options. Such designs may cover 30-150 target genes, with high throughput achieved by analyzing multiple specimens within a single NGS run. Generally referred to as "NGS panels," this is a third form of library that, depending on design, may start with extracting genomic DNA from a test sample and then selecting the targets of interest. Selection can be performed by gene-specific PCR, yielding a pool of amplicons (already of the desired length, although in this case with defined endpoints), by hybridization capture, or by selective enrichment of genomic DNA coding only for particular genes. This genetic material is then a very focused subset of the source genome from which to prepare the library for dispersion and sequencing, following either of the paths above as appropriate to the sample type. (Note that for a directly PCR-amplified genomic DNA panel, the size-shearing and adapter-ligation steps may be dispensed with, as these are effectively carried out in the PCR step.)

A particularly clever aspect of NGS panels is that it is possible, either in the direct PCR stage for genomic DNA-based panels or in the adapter ligation step for exome-based panels, to use PCR primers or adapters, respectively, which contain an internal sequence element (commonly referred to as a "barcode") that is distinct to each sample prepared. This then allows multiple panel libraries from different samples to be mixed together prior to the dispersion and actual sequencing steps. By doing this, each individual sequence read will start with a sample-unique "barcode," allowing it to be associated back to the sample of origin. This allows many different unrelated panel sample libraries to be mixed together in one dispersion and sequencing run, thereby taking full advantage of the massively parallel nature of NGS technology and allowing for high throughput with respect to the number of samples per run. This makes panels highly cost-effective and of relatively low labor input on a per-sample basis. Depending on the type of research or clinical question being addressed in an NGS assay, the choice of the best method helps to make results cost-effective and most directly meaningful.
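The barcode bookkeeping described above amounts to a simple prefix lookup. The sketch below is a minimal illustration, not code from any real sequencing toolkit: the barcodes, read sequences, and function name are all invented, and it shows only the core idea of routing pooled reads back to their samples of origin.

```python
# Sketch of NGS barcode demultiplexing: each read begins with a
# sample-specific barcode, so reads from a pooled run can be routed
# back to their sample of origin. Barcodes and reads are hypothetical.
BARCODES = {
    "ACGT": "sample_1",
    "TGCA": "sample_2",
    "GATC": "sample_3",
}
BARCODE_LEN = 4

def demultiplex(reads):
    """Group reads by the sample whose barcode prefixes them;
    reads with an unrecognized barcode go to 'undetermined'."""
    by_sample = {}
    for read in reads:
        sample = BARCODES.get(read[:BARCODE_LEN], "undetermined")
        # Strip the barcode before storing the biological sequence.
        by_sample.setdefault(sample, []).append(read[BARCODE_LEN:])
    return by_sample

pooled = ["ACGTTTGACC", "TGCAGGGATT", "ACGTCCATGA", "AAAACCCGGG"]
print(demultiplex(pooled))
# → {'sample_1': ['TTGACC', 'CCATGA'], 'sample_2': ['GGGATT'],
#    'undetermined': ['CCCGGG']}
```

Real demultiplexers also tolerate sequencing errors in the barcode (e.g., allowing one mismatch), which this sketch omits for clarity.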

Different panels designed to diagnose hearing loss include:

**•** Hearing Loss Panel Tier 1—testing for mutations in *GJB2, GJB6, MTRNR1*, and *MTTS1*, which account for 40% of the genetic causes of hearing loss. Reflex testing to the OtoSeq® Hearing Loss Panel is an option for patients with normal Tier 1 results. This panel contains 23 genes and identifies an estimated 80% of the genetic causes of hearing loss.

**•** The OtoGenome™ Test Panel offered by Partners Healthcare, which simultaneously screens 87 genes known to cause both non-syndromic hearing loss and syndromes that can present as isolated hearing loss, such as Usher, Pendred, Jervell and Lange-Nielsen (JLNS), Branchio-Oto-Renal (BOR), Deafness and Male Infertility (DIS), Perrault, and Waardenburg syndromes.
#### *3.1.2. NGS data analysis*

The large amount of data derived from NGS platforms imposes increasing demands on statistical methods and bioinformatic tools for analysis. Although the NGS platforms rely on different principles and differ in how the array is made, their workflows are conceptually very similar: all of them generate millions or billions of short sequencing reads simultaneously. Several layers of analysis are necessary to convert these raw sequence data into an understanding of functional biology. These include alignment of sequence reads to a reference, base-calling and/or polymorphism detection, de novo assembly from paired or unpaired reads, and structural variant detection (Figure 1). To date, a variety of software tools are available for analyzing NGS data. Although tremendous progress has been achieved over the last several years in the development of computational methods for the analysis of high-throughput sequencing data, many algorithmic and informatics challenges remain. For example, even though a plethora of alignment tools have been adapted or developed for the reconstruction of full human genotypes from short reads, this task remains an extremely challenging problem. Also, when a high-throughput technology is used to sequence an individual (the donor), any genetic difference between the sequenced genome and a reference human genome, typically the genome maintained at NCBI, is called a variant. Although this reference genome was built as a mosaic of several individuals, it is haploid and may not contain a number of genomic segments present in other individuals. By simply mapping reads to the reference genome, it is impossible to identify these segments; thus, de novo assembly procedures should be used instead. Nonetheless, NGS technologies continue to change the landscape of human genetics. The resulting information has both enhanced our knowledge and expanded the impact of the genome on biomedical research, medical diagnostics, and treatment, and has accelerated the pace of gene discovery [47].
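As a toy illustration of the base-calling and variant-detection layers, the sketch below piles up pre-aligned reads against a reference and reports positions where the read consensus disagrees. The reference, reads, and depth threshold are hypothetical; real callers operate on billions of reads with base-quality scores and statistical models rather than a simple majority vote.

```python
from collections import Counter

def call_variants(reference, reads, min_depth=2):
    """Naive variant caller: pile up reads aligned at known offsets,
    then report positions where the consensus base differs from the
    reference. `reads` is a list of (offset, sequence) pairs."""
    pileup = [Counter() for _ in reference]
    for offset, seq in reads:
        for i, base in enumerate(seq):
            pileup[offset + i][base] += 1
    variants = []
    for pos, counts in enumerate(pileup):
        depth = sum(counts.values())
        if depth < min_depth:
            continue  # not enough coverage to make a call
        consensus, _ = counts.most_common(1)[0]
        if consensus != reference[pos]:
            variants.append((pos, reference[pos], consensus))
    return variants

# Hypothetical 12-bp reference and three short reads; the second and
# third reads both carry a G at position 5 where the reference has a T.
ref = "ACGTATACGGTA"
reads = [(0, "ACGTAT"), (3, "TAGACG"), (4, "AGACGG")]
print(call_variants(ref, reads))  # → [(5, 'T', 'G')]
```

As the surrounding text notes, this read-mapping view cannot see sequence absent from the reference; detecting such segments requires de novo assembly instead.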

Advances in Genetic Diagnosis and Treatment of Hearing Loss — A Thirst for Revolution. http://dx.doi.org/10.5772/61218

**Figure 1.** Next-generation sequencing: An approach from sample to analysis.


Clinical diagnostics using NGS technologies may be applicable in cases where clinicians consider a non-syndromic hearing disorder, especially after negative results on tests for mutations in *GJB2* or *GJB6* at the autosomal recessive *DFNB1* locus, according to recently published guidelines [39, 58]. Updated guidelines from the American College of Medical Genetics and Genomics (ACMG) recommend that clinicians consider NGS when testing for genetic causes of hearing loss [56]. Prior to considering genetic testing, clinicians should undertake a comprehensive evaluation of the patient's medical history, including birth history, to help distinguish acquired from inherited causes of hearing loss. They should also perform audiological evaluations to determine the type and degree of hearing loss, as recommended by ACMG. ACMG also recommends genetic testing and counseling that could include single-gene tests, panels, genome/exome sequencing, chromosome analysis, and array-based copy-number analysis if clinical findings suggest syndromic genetic hearing loss. Single-gene testing may be needed if medical and family history suggests non-syndromic hearing loss that is not associated with environmental causes. If none is suggested, the next step could be *GJB2* or *GJB6* testing. If single-gene tests yield no diagnosis, clinicians may consider NGS, which is quickly replacing many single-gene tests for hearing loss and can assess patients whose phenotypes are not easily distinguished clinically [58].
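The tiered workflow summarized above can be sketched as a small decision function. The function name, flags, and return labels are this sketch's own paraphrase of the recommendations, not an official ACMG algorithm:

```python
def recommend_test(syndromic_findings, nonsyndromic_history,
                   environmental_cause, single_gene_negative):
    """Illustrative encoding of the tiered testing workflow
    summarized in the text; the labels are this sketch's own."""
    if environmental_cause:
        # Acquired hearing loss: genetic testing is not the next step.
        return "no genetic testing indicated"
    if syndromic_findings:
        # Syndromic picture: targeted single-gene tests, panels,
        # genome/exome sequencing, chromosome or array-based analysis.
        return "syndrome-directed testing (single gene, panel, CMA, exome)"
    if nonsyndromic_history and not single_gene_negative:
        return "GJB2/GJB6 (DFNB1) single-gene testing"
    # Single-gene tests exhausted without a diagnosis: move to NGS.
    return "NGS panel or exome sequencing"

print(recommend_test(False, True, False, False))  # DFNB1 testing first
print(recommend_test(False, True, False, True))   # then NGS
```

The point of the sketch is the ordering: history and audiology narrow the phenotype first, and broad NGS is reserved for cases that single-gene testing fails to resolve.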

#### **3.2. Cytogenetics**

Cytogenetic tests are a diagnostic tool for a number of clinical syndromes associated with hearing loss; they have proven the causal association between specific chromosomal abnormalities and the clinical features observed in patients. Although cytogenetics is not the first technique to consider when evaluating a child with non-syndromic deafness, this form of testing can be valuable in cases of deafness of unknown etiology, particularly when there are accompanying congenital anomalies or a family history of multiple spontaneous abortions. When all other causes of deafness have been eliminated, cytogenetics can be used to determine whether the hearing loss may be due to a chromosome rearrangement, such as a balanced translocation. The advantage is that, if such a rearrangement were found, it would immediately suggest the location of the deafness gene [59].

Cytogenetic or molecular cytogenetic tests such as karyotyping, fluorescent in situ hybridization (FISH), or chromosomal microarray analysis (CMA) may provide diagnostic information when syndromes characterized by chromosomal aneuploidies, structural rearrangements, or deletions or duplications are suspected. Genetic testing of specific individual genes (*PAX3* for Waardenburg syndrome types I and III), or small panels of genes related to a specific clinical finding (an FGFR-related craniosynostosis panel), may be appropriate, depending on the suspected diagnosis [60].

#### *3.2.1. Prenatal diagnosis*

Prenatal diagnosis of chromosomal aberrations requires cytogenetic analysis of amniotic fetal cells. Amniocentesis is an invasive but well-established, safe, and reliable test during pregnancy that removes a small amount of fluid from the sac around the baby to look for birth defects and chromosomal problems; for chromosomal analysis, it is done from 12 to 15 weeks of gestation. When the amniotic sample is received in the laboratory, it is centrifuged at 750 rpm for 10 minutes. The amniotic fluid is carefully decanted from the cell pellet into a sterile test tube, and the cell pellet is re-suspended in amniotic fluid. A suitable medium supplemented with fetal bovine serum, L-glutamine, and antibiotics is then added, and the cultures are incubated at 37 °C in a 5% CO2 incubator. The cells are harvested 8-10 days after culture, subjected to the routine hypotonic and fixative treatments used for whole blood culture, and the chromosomes are then analyzed [61].

Genetic screening for a specific mitochondrial mutation during pregnancy could offer a strategy for minimizing hearing loss in babies from exposure to avoidable risk factors such as neonatal use of aminoglycoside antibiotics [3]. Genetic counseling should ideally be offered to all pregnant women who have a family history of any condition that might be tested by either amniocentesis or chorionic villus sampling (CVS), and it is important to offer it both before and after prenatal diagnostic testing. Important aspects to cover include the background risk of congenital disease and anomaly, individually increased risks (such as increased maternal age), the options and limitations of prenatal genetic diagnosis, the diseases that can be detected, the risks associated with the relevant tests, and contentious areas and alternatives in relation to prenatal diagnosis [62]. Different techniques used in genetic analysis and their applications are summarized in Table 7.


**Table 7.** Different techniques used in genetic analysis and applications.

Molecular genetic testing can be helpful because an etiology cannot otherwise be established in the majority of individuals with genetic hearing loss. Molecular analysis is essentially noninvasive and may reduce the need for more extensive and expensive testing, which sometimes requires sedation or general anesthesia in infants and children. Molecular analysis can be beneficial for the diagnosis of syndromic hearing loss before additional features emerge (e.g., in Pendred syndrome or Jervell and Lange-Nielsen syndrome), and can distinguish individuals with mitochondrial mutations who are at risk for iatrogenic hearing loss when treated with aminoglycosides [4]. Other benefits of molecular analysis include knowledge of the pattern of inheritance and more accurate genetic counseling. Recently developed high-throughput techniques reduce the burden of sequencing costs. For example, sequencing costs have fallen dramatically, from \$5,292.39/Mb in 2001 to \$0.06/Mb by April 2013 [70]. It is estimated that sequencing costs will fall further still as the per-base cost continues its precipitous drop with advancing techniques.
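The scale of that cost decline is easy to check from the two figures quoted:

```python
# Fold reduction in sequencing cost per megabase, using the figures
# cited in the text ($5,292.39/Mb in 2001 vs $0.06/Mb in April 2013).
cost_2001 = 5292.39
cost_2013 = 0.06
fold = cost_2001 / cost_2013
print(f"{fold:,.0f}-fold cheaper")  # roughly an 88,000-fold reduction
```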

#### **3.3. Limitations and challenges**

Despite the significant advantages of genetic testing, there are also several limitations and challenges, briefly discussed below:

**•** The spectrum of DNA variation in a human genome comprises small base changes (substitutions), insertions and deletions of DNA, large genomic deletions of exons or whole genes, and rearrangements such as inversions and translocations. Traditional Sanger sequencing is restricted to the discovery of substitutions and small insertions and deletions [71].

**•** Although NGS promises a personalized approach to complex diseases, it has limitations. NGS cannot detect large deletions or duplications of DNA or nucleotide repeats that can cause disease. These limitations of NGS technologies may necessitate the use of alternative or complementary genetic testing strategies in some cases.

**•** Not all regions of the genome are efficiently captured and analyzed by current exon-capture or WGS approaches, and large deletions and duplications, in addition to copy-number and structural variations, may not be efficiently detected [72].

**•** It is possible with recent technology to determine whether an asymptomatic newborn has a mutation in the genes known to be implicated in hearing loss, although there is no certainty that all of these genes will be responsible for hearing loss in the future.

**•** Current methods of DNA analysis require 2-5 ml of blood, which would be unacceptable for a newborn screen. However, it is anticipated that, with improved sequencing techniques, sufficient DNA could be extracted from a drop of blood collected for the newborn bloodspot metabolic screen (the Guthrie test) [3].

**•** Genetic testing for deafness is not universally perceived to be advantageous. Deafness is not usually considered negative or limiting, especially by the deaf community. Many deaf individuals consider themselves part of their own linguistic (sign language) and cultural group, with its own values, identity, and traditions; deafness is not perceived as a medical condition or disability. As a result, advances in hearing loss research and genetic testing might be perceived as harmful. Genetic services may be considered; however, some individuals prefer to have deaf children [4, 73].

**•** A positive genetic test can also lead to an increased level of anxiety, and individuals may feel guilty for having potentially passed a gene alteration on to their children.

### **4. Development and evaluation of genetic treatment for hearing loss**

Despite recent developments in medicine, there is still no cure for most types of hearing loss. The development of a biological method for the repair, regeneration, and replacement of hair cells of the damaged cochlea has the potential to restore normal hearing. At present, gene therapy and stem cells are two promising therapeutic applications for hearing disorders.

#### **4.1. Gene therapy**


Gene therapy involves using specific sequences of DNA to treat human disease. It is an experimental form of treatment that is still being developed, but it has a unique application for hearing loss. Two main gene therapy approaches have been considered: replacing a mutated gene that causes disorder with a healthy copy of the gene, or inactivating a mutated gene that is functioning improperly. Gene therapy technology has improved in recent years, making it a promising technique for treating hearing disorders. The gene vector, the route of gene administration, the therapeutic gene, and the target cells are four major elements of gene therapy. With the recent developments in the field, a wide variety of viral and non-viral vectors have emerged that can deliver genetic payloads to target cells in the inner ear. There are three viral vectors commonly utilized for gene therapy (targeted at the inner ear): adenoviral vectors, adeno-associated viral vectors, and lentiviral vectors. Several promising clinical trials have been reported using gene therapy.

The first study of gene therapy for hearing disorders was reported in 1994 by Fujiyoshi and colleagues. They developed myelin basic protein (MBP) transgenic mice by microinjecting an MBP cosmid clone into the pronucleus of fertilized eggs of shiverer mice, replacing the autosomal recessive mutation (deletion) with the MBP transgene. The transgenic mice were found to recover up to 25% of normal MBP levels, and significantly more myelinated axons were present in the transgenic mice compared with control mice [74, 75]. In 1996, other studies reported that foreign genes were successfully transfected into the inner ear using replication-deficient viral vectors [75-77].

The discovery of RNA interference (RNAi)-mediated gene inactivation has introduced a new mechanism for targeted therapy of the inner ear at the molecular level [78]. RNAi is an intracellular two-step process that converts precursor double-stranded RNA molecules into functional small interfering RNAs (siRNAs). Synthetic double-stranded RNAs can be introduced as siRNA mimics and used to trigger RNAi and intentionally reduce the expression of targeted genes for therapeutic applications. A few allele variants of *GJB2* cause autosomal dominant non-syndromic hearing loss as a dominant-negative consequence of expression of the mutant protein. Allele-specific gene suppression by RNAi is a potentially attractive strategy to prevent hearing loss caused by this mechanism [79]. Since inheritance is autosomal dominant, silencing the mutated allele is predicted to preserve hearing. A recent proof-of-principle study validated this prediction: an siRNA was shown to potently suppress expression of the *R75W* allele of human *GJB2* in a murine model [79, 80].
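The allele-specificity requirement can be illustrated with toy sequences. The 21-nt fragments below are invented for illustration (they are not the real *GJB2* sequences around codon 75): in this idealized model, an allele-specific siRNA guide is perfectly complementary to the mutant transcript, so a single mismatch is enough to spare the wild-type allele.

```python
# Toy model of allele-specific RNAi: an siRNA targets a transcript
# only if its guide strand is perfectly complementary to a site in it.
COMPLEMENT = str.maketrans("ACGU", "UGCA")

def targets(sirna_guide, mrna):
    """True if the reverse complement of the guide strand (i.e., the
    sequence the guide base-pairs with) appears in the transcript."""
    site = sirna_guide.translate(COMPLEMENT)[::-1]
    return site in mrna

# Invented 21-nt fragments differing at a single base, standing in
# for a wild-type vs. dominant mutant allele of the same gene.
wt_mrna  = "AGGCUUACCGACUCGAUUACG"
mut_mrna = "AGGCUUACCGUCUCGAUUACG"
guide = mut_mrna.translate(COMPLEMENT)[::-1]  # perfect match to mutant

print(targets(guide, mut_mrna))  # True: silences the mutant allele
print(targets(guide, wt_mrna))   # False: mismatch spares the wild type
```

In practice, a single mismatch reduces but does not always abolish silencing, so guide design and mismatch positioning must be tuned empirically, as in the cited study [79].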

Bacterial artificial chromosome (BAC) mediated transgenesis has proven to be a highly reliable way to obtain accurate transgene expression for in vivo studies of gene expression and function. BAC transgenes direct gene expression at physiological levels with the same developmental timing and expression patterns as endogenous genes in transgenic animal models. Recently, transgene expression through the germline was demonstrated to maintain normal inner ear morphology and stable hearing function in a mouse model of human nonsyndromic deafness DFNB3 caused by a missense mutation in the *Myo15a* gene on mouse chromosome 11. In addition, excess *Myo15a* expression has no physiologically significant protective or deleterious effects on hearing of normal mice, suggesting that the dosage of *Myo15a* may not be problematic for gene therapy [80, 81].

Neurotrophic factors are essential in the development of the inner ear; they can protect inner ear hair cells and spiral ganglion neurons from the damage caused by various pathogenic factors and promote recovery from cochlear injury. To date, more than 20 neurotrophic factors have been shown to have protective effects on inner ear cells [75]. Neurotrophin gene therapy is promising both for protection against exogenous damage and for regeneration after endogenous and exogenous damage. Neurotrophic factors such as BDNF (brain-derived neurotrophic factor), NT-3 (neurotrophin-3), TGF (transforming growth factor), GDNF (glial cell line-derived neurotrophic factor), FGF (fibroblast growth factor), CNTF (ciliary neurotrophic factor), and HGF (hepatocyte growth factor) have been reported to have protective effects, to differing extents, on inner ear hair cells and neurons [82-86].

The discovery of new therapies for hereditary hearing loss will depend on a better understanding of how genes function in supporting the survival and differentiation of existing neurons, in encouraging the growth and differentiation of new neurons and synapses, in guiding nerves to cochlear hair cells to form synaptic connections, and in maintaining the unique inner ear ion balance.

The *Atoh1 (Math1)* gene acts as a "switch" that turns on hair cell growth: when *Atoh1* is artificially switched on in the cells that support hair cells (the "supporting cells"), it instructs them to divide and form new hair cells. *Atoh1 (Math1)* plays an important role in the differentiation of hair cells of the developing inner ear and in restoring auditory function [87-89]. Gene therapy tools may be used to activate *Atoh1*, inducing undamaged cells within the cochlea of an adult human ear to develop into hair cells and rebuilding a damaged ear by replicating the steps that took place during embryonic development, although much work remains before this is possible in the adult human ear. *CGF166* gene therapy has been shown to activate the *Atoh1* biological pathway, and the gene was able to safely restore hearing in animal models. Recently, a clinical trial started to test whether *CGF166* will have the same beneficial effect in humans [90]. Researchers believe that this therapy would not help people with types of inherited deafness in which the structures in the ear needed to support new hair cell growth are missing, or those who have damaged auditory nerves.

There is no ideal gene delivery system for gene therapy so far. Three kinds of vectors (bacterial vector, multiplex gene vector, labeled gene vector) may have great prospects. Long-term human gene therapy will not be feasible until there is substantial improvement in transduction efficiencies into human tissues. The combination of more efficient gene transfers, targeted vector systems, and effective and relatively nontoxic selection systems to maintain gene expression may make the long-term correction of hearing disorders feasible and safe. Some practices of inner ear gene therapy may need to be carried out at the embryonic stage for the treatment of hereditary hearing loss in the future.

#### **4.2. Stem cell therapy**


The recent developments in stem cell technologies are opening novel therapeutic possibilities for the treatment of hearing disorders. Stem cell therapy is a relatively new technique used to treat many forms of human disease, in which exogenous stem cells are used to replace dead or damaged endogenous cell types. In recent years, researchers have undertaken a number of successful animal studies in the area of developing stem cell therapies for hearing loss and have been able to turn stem cells into many of the cell types in the inner ear whose damage and death leads to hearing loss, such as hair cells and auditory nerve cells.

Stem cells are a group of cells in our bodies with the capacity to self-renew and differentiate into various types of cells, and thus to construct tissues and organs. When a stem cell divides, each new cell has the potential either to remain undifferentiated (self-renewal) or to become a specialized type of cell with a specific function (differentiation). Stem cells can be classified into different types based on their source of origin, the time of derivation, and their potential to produce different lineages. The primordial master stem cell is the zygote. The zygote and early blastomeres are totipotent and can generate any and all human cell types in the body, such as brain, liver, blood, or heart cells; they can even give rise to a whole functional organism, including the extraembryonic tissues. Pluripotent stem cells have a slightly more limited potential. They can produce cell types from all three embryonic germ layers (endoderm, mesoderm, and ectoderm), including all the somatic lineages as well as germ cells, but rarely, if ever, can they produce extraembryonic lineages such as those of the placenta; they cannot form an entire functional organism. Lastly, multipotent stem cells, such as hematopoietic stem cells, have a more limited ability, producing cell types usually restricted to a single organ or germ layer: they differentiate into a closely related family of cells. Pluripotent stem cells have the widest range of potential applications. They can generally be classified as embryonic or adult, depending on their developmental stage of derivation.
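
The potency hierarchy described above (totipotent, pluripotent, multipotent) can be sketched as a small lookup table. This is purely an illustrative model of the classification, not code from any real tool; the dictionary and function names are invented for the example.

```python
# Illustrative model of the stem cell potency classification described above.
# Names and data shapes are hypothetical, chosen only for this sketch.

ALL_GERM_LAYERS = {"endoderm", "mesoderm", "ectoderm"}

POTENCY = {
    # potency class: (can it form extraembryonic tissue?, germ layers it can produce)
    "totipotent":  {"extraembryonic": True,  "germ_layers": ALL_GERM_LAYERS},
    "pluripotent": {"extraembryonic": False, "germ_layers": ALL_GERM_LAYERS},
    # multipotent cells are restricted to one organ or germ layer, modeled here as None
    "multipotent": {"extraembryonic": False, "germ_layers": None},
}

def can_produce_all_germ_layers(potency: str) -> bool:
    """True if a cell of this potency class can generate all three embryonic germ layers."""
    return POTENCY[potency]["germ_layers"] == ALL_GERM_LAYERS
```

Under this toy model, both totipotent and pluripotent cells cover all three germ layers, and only the totipotent class additionally forms extraembryonic tissue, mirroring the distinctions drawn in the paragraph above.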

Embryonic and adult stem cells differ primarily in the number of different cell types each can produce. Embryonic stem cells are derived from a four- or five-day-old embryo that is in the blastocyst phase of development and can develop into any cell type of the body. In contrast, adult stem cells reside in many organs of the adult human body and can generate a range of cell types from the originating organ, or even regenerate the entire original organ. Adult stem cells can be found in a great number of organs and tissues, including the brain, bone marrow, peripheral blood, blood vessels, skeletal muscle, skin, teeth, heart, gut, liver, ovarian epithelium, and testis [91]. A relatively recent breakthrough in stem cell research is the discovery that specialized adult (somatic) cells can be 'reprogrammed' into cells that behave like embryonic stem cells, termed induced pluripotent stem cells (iPSCs) [92]. Like human embryonic stem cells, iPSCs are immortal and pluripotent, and they express genes characteristic of all three embryonic germ cell layers (endoderm, ectoderm, and mesoderm) when induced to differentiate.

A number of criteria must be satisfied to achieve functional restoration, including generation of an adequate number of cells to reverse the defects, differentiation of the cells to the correct phenotype, formation of appropriate three-dimensional tissue structures, and production of cells/tissues that are mechanically and structurally compliant with the native tissue without immunological rejection [80, 93]. The generation of neural stem cells and the control of neural differentiation from human embryonic stem cells have opened new doors for the therapy of hearing disorders. Several studies have demonstrated successful delivery of embryonic and adult stem cells to normal and damaged tissues in vivo, and in some cases a therapeutic effect has been observed.

One of the first reports of stem cell delivery to the inner ear was a study by Ito and colleagues (2001) that demonstrated survival and migration of adult rat hippocampus-derived neural stem cells (NSCs) implanted into the rat cochlea. Within 2-4 weeks of grafting, some NSCs survived in the cochlear cavity, and some had adopted the morphologies and positions of hair cells [94]. This study was followed by a report on the potential of NSC transplantation to the damaged mouse cochlea. The majority of transplanted cells integrated into the vestibular sensory epithelia and expressed a specific hair cell marker (myosin VIIa) in vivo, suggesting that transplanted NSCs have the potential to differentiate into and restore inner ear hair cells. However, only a small number of grafted cells were positive for hair cell markers, and there was no evidence of synaptic connections between transplants and host spiral ganglion neurons, so well-established methods for functional recovery are still lacking [95]. The principal differences between human and mouse NSCs seem to be the length of the cell cycle (up to 4 days in humans) and the predilection of human cells to senesce (after ~100 cell divisions) [96]. NSCs may achieve therapeutic efficacy in human clinical applications, although many limitations remain to be overcome.

Several studies have reported on the transplantation of embryonic stem (ES) cells into the inner ear. ES cells have the ability to differentiate into neuronal cell types when transplanted into the spiral ganglion of the cochlea: transplanted ES cells were found to express neural markers [97, 98] and to develop cellular processes similar to axons that extend towards the organ of Corti [99-102]. Some of these stem cell-derived neurons were shown to establish synaptic contacts with sensory hair cells, the peripheral target of spiral ganglion neurons (SGNs), in vitro (Matsumoto et al., 2008) and to survive in animals with selective loss of SGNs [99, 103].

For a cell therapy approach aiming at restoring impaired function, implanted cells need to be able to convey auditory information from the periphery to more centrally located nuclei. Recent studies have shown that dorsal root ganglion cells or ES cells transplanted to the transected auditory nerve migrated along the nerve fibers in the internal auditory meatus and, in some cases, even reached close to the ventral cochlear nucleus in the brainstem [104, 105]. Interestingly, Ito et al. (2001) reported that embryonic brain tissue transplanted to the acutely transected ventral cochlear tract resulted not only in regeneration but also in functional recovery [105, 106]. However, many chemical factors form a barrier between the peripheral and central nervous systems and could impede the ability of the central processes of replacement neurons to make connections in the cochlear nucleus.


A number of studies have shown that adult bone marrow-derived stem cells (MSCs) can also have therapeutic potential in the damaged inner ear. MSCs have shown plasticity, with a capacity to differentiate into a variety of specialized cells. MSCs have been delivered both systemically and by direct injection through the scala tympani into the mouse and gerbil cochlea, respectively [107, 108]. Matsuoka and colleagues investigated the potential of MSCs to adopt properties of SGNs in vivo [108]. The identification of stem cells in the human fetal cochlea [109] contributes to the study of the stem cell biology of the auditory organ in humans, while advances in the identification of stem cells have also been made in rodents [110].

Umbilical cord blood (UCB) is the most recently identified useful source of hematopoietic stem cells (HSCs) for treatment of a wide variety of disorders. UCB has potential applications in hearing disorders. A study provided the first evidence of positive engraftment of intravenously transplanted human umbilical cord blood CD133+ HSCs into the inner ear of NOD-SCID mice rendered deaf with kanamycin and noise in vivo [111]. In another study, the researchers demonstrated that hematopoietic stem cell transplantation (HSCT) may provide improvement in mucopolysaccharidosis-associated sensorineural hearing loss [112]. Recently, an FDA-approved clinical trial involving stem cells derived from UCB has been initiated for treatment of children with sensorineural hearing loss [113].

Mammalian cochlear hair cells do not regenerate spontaneously, and although vestibular hair cells in adult mammals do regenerate, they do so at levels too low to provide any significant functional recovery [114, 115]. The discovery of stem cells has opened the possibility of devising strategies to recruit these cells to repair damaged or lost cochlear hair cells. Stem cells are important tools for hearing disorder research and offer great potential for use in the clinic. Certain types of stem cells, such as neural stem cells, are more capable than others of replacing lost or damaged hair cells, although they have limitations. A great challenge lies in identifying more effective ways of directing stem cells to develop into inner ear hair cells. The field of auditory stem cell research is still in its infancy, although important advances are already taking place. Stem cell therapy for hearing loss is some years away from being clinically feasible.

### **5. Management and prevention of hereditary hearing loss**

Gene therapy and stem cell treatment still have a long way to go before they are available for use in humans. Therefore, existing measures must focus on the prevention of hearing loss to decrease the frequency of genetic hearing loss. There is a need for improved implementation of genetic counseling and for greater awareness in populations that are at high risk of hereditary hearing loss.

Early detection of and intervention for hearing loss is the most important factor in minimizing the impact of hearing loss on a child's development and educational achievement. At a minimum, all children at risk of hereditary hearing loss should be given screening audiometry. Hearing loss can be progressive in a person with autosomal recessive non-syndromic hearing loss caused by mutations in *SLC26A4*; in such cases, audiometric testing may be warranted every year. Additionally, thyroid function should be followed if the diagnosis is consistent with Pendred syndrome [14]. Sequential audiologic examinations are essential to document the stability or progression of the hearing loss and to identify and treat superimposed hearing losses, such as middle ear effusion.

Knowledge of the genetic cause is helpful in determining the kind of damage to the auditory system that caused the deafness. Identifying how the inner ear is damaged may assist in choosing rehabilitation strategies, such as hearing aids or cochlear implants. In children with congenital severe-to-profound autosomal recessive non-syndromic hearing loss who are positive for mutations in *GJB2* and *GJB6* at the DFNB1 locus and who elect to receive cochlear implants, performance outcomes are outstanding [116]. In addition, a recent cochlear implant study reported that children with identified *GJB2* mutations, which cause an isolated insult to the cochlea without damage to the 8th nerve or the central auditory system, benefitted from cochlear implantation in the areas of speech production, speech perception, and language [4].

#### **5.1. Genetic counseling**

Genetic counseling is an important part of the evaluation and management of hearing disorders. The process of genetic counseling is intended to inform patients and their families of the medical, psychological, and familial implications of genetic hearing disorders, as well as the risks, benefits, and limitations of genetic testing. In the United States, genetics professionals recommend "non-directive" counseling: it is meant to be informative and supportive rather than to advise people what to do or whether or not to have children. Genetic information can help predict whether the hearing loss will remain the same or worsen over time. In addition, genetic testing can help determine whether problems besides hearing loss may be present or may develop in the future. It can also help patients and families who may be at risk for conditions that can be passed down in families (inherited conditions). People may have quite different attitudes about deafness in their family. Some hearing parents might be concerned about having another deaf child, while others may believe that the hearing loss would not cause a problem but would want to know if any other associated medical problems might be involved. Likewise, some deaf parents may feel comfortable about their own abilities but would prefer not to have a deaf child, whereas other deaf parents may be more worried about the challenges of raising a hearing child [117]. In such cases, the genetic counselor should be very careful in providing information concerning the nature of the disease, the implications of being a carrier of mutations in genes associated with hearing loss, and the reproductive choices. Genetic counseling services for families with deafness can only be effective and appropriate if the social values of the deaf community are taken into consideration.

#### **6. Conclusions and future perspectives**


Advances in genetic testing are already directly impacting people's lives. The demand for molecular tests is by now increasing with the discovery of the varied molecular defects underlying hearing loss. Genetic testing has now reached a stage where it is becoming increasingly applicable for precise diagnosis of hearing disorders. The development of NGS technology has made DNA sequencing not only rapid and cost-effective, but also highly accurate and reproducible. In the near future, it is expected that there will be more enhance‐ ments in the speed and cost of DNA sequencing. We are already in the modern DNA se‐ quencing era, where aims of third- and fourth-generation DNA sequencing additionally boost the speed of sequencing and reduce costs. Although sequencing the whole genome seems exhaustive, it could be more cost-effective than having to select the genes of interest [118]. Once genome sequencing becomes more cost-effective and fast, it will accelerate the pace of gene discovery for deafness and clinical application of this discovery will be realized. Over the next few years, most molecular genetic testing will be performed on automated instruments and some genetic tests for hearing disorders will be available as at-home kits on a large scale.
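
The cost-effectiveness argument for whole-genome sequencing over a targeted panel can be illustrated with toy arithmetic. The prices and gene counts below are made-up placeholders chosen only to make the comparison concrete; they are not real sequencing costs from the chapter or any vendor.

```python
# Toy arithmetic only: per-gene cost of a hypothetical targeted deafness panel
# versus hypothetical whole-genome sequencing. All numbers are placeholders.

def cost_per_gene(total_cost: float, genes_covered: int) -> float:
    """Total assay cost divided evenly across the genes it covers."""
    return total_cost / genes_covered

panel = cost_per_gene(1500.0, 100)      # e.g. a $1500 panel covering 100 deafness genes
genome = cost_per_gene(3000.0, 20000)   # e.g. a $3000 genome covering ~20,000 genes
# Even at twice the total price, the genome yields a far lower cost per gene.
```

The point of the sketch is only that amortizing one sequencing run over the whole genome can beat repeatedly selecting and assaying individual genes of interest, as the paragraph above suggests.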

One of the roles of genetic testing is to identify presently known genetic causes of hearing loss in newborns who fail hearing screening and in children identified with childhood-onset hearing loss. Furthermore, it increases our knowledge of the genetic causes of hearing loss. The potential for increased usage of aminoglycoside antibiotics also supports the case for a genetic screening program of pregnant women for the m.1555A>G mutation, which could avoid unnecessary cases of hearing loss. Genetic screening will become viable only when a reliable estimate of the future risk of hearing loss can be made at a reasonable cost [3].
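
As a hypothetical sketch of what screening for the m.1555A>G variant might look like computationally: the function name, data shape, and workflow below are invented for illustration and are not a real clinical pipeline; only the variant itself (an A-to-G substitution at mitochondrial position 1555, in the 12S rRNA gene) comes from the text.

```python
# Hypothetical sketch: flag the mitochondrial m.1555A>G substitution from a
# set of variant calls, as a screen for aminoglycoside-associated hearing
# loss risk. Illustrative only, not a clinical tool.

MT_POSITION = 1555  # position of the variant on the mitochondrial genome

def flags_aminoglycoside_risk(variant_calls: dict[int, tuple[str, str]]) -> bool:
    """variant_calls maps mtDNA position -> (reference base, observed base).

    Returns True only for the A>G substitution at position 1555.
    """
    return variant_calls.get(MT_POSITION) == ("A", "G")
```

For example, a call set containing `{1555: ("A", "G")}` would be flagged, while a reference call at that position would not; a real screen would of course also handle heteroplasmy, call quality, and confirmatory testing.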

Molecular diagnostic results should always be interpreted with caution, as our knowledge of the molecular basis of hearing loss is still evolving. Keeping pace with emerging clinical genetic technologies requires specialized genetic training as well as broad genetic literacy for patients and for the clinicians ordering and receiving genetic test results. Genetic information is unique in that it is shared by the patient and the patient's family. The application of genetic tests has appropriately generated substantial debate in the community with regard to the delivery and impact of the information on clinicians, patients, and society in general. The potential for misuse of genetic information is enormous and requires action to protect the privacy of genetic information and to protect individuals from discrimination based on it. The ethical, legal, and social issues surrounding genetic testing for hearing loss need to be addressed, and in the near future more studies of the ethical and social aspects of genetic testing for hearing disorders should be done. It is hoped that the potential for misuse of genetic information in the future will be limited.

Some of the novel rehabilitation options under development to slow down the progression of hearing loss are gene and even mutation specific [80], suggesting that comprehensive genetic testing will be an integral part of the care of deaf and hard-of-hearing patients in the future [9]. Over time, genetic diagnostic tests will become available faster, followed by targeted gene therapy or various permutations of progenitor cell transplantation, and eventually preventive interventions for a wider range of hearing-impaired patients.

#### **Author details**

Sidheshwar Pandey1\* and Mukeshwar Pandey2

\*Address all correspondence to: sidheshwarp@yahoo.co.in

1 DRETCHI, Trinidad and Tobago Association for the Hearing Impaired, Port of Spain, Trinidad and Tobago, West Indies

2 Premas Lifescience, a division of Premas Biotech Pvt Ltd, New Delhi, India

#### **References**


[1] Friedman TB, Griffith AJ. Human nonsyndromic sensorineural deafness. Annu Rev Genomics Hum Genet. 2003;4:341-402.

[2] Hilgert N, Smith RJ, Van Camp G. Forty-six genes causing non-syndromic hearing impairment: Which ones should be analysed in DNA diagnostics? Mutat Res. 2009;681:189-196.

[3] Linden Phillips L, Bitner-Glindzicz M, Lench N, et al. The future role of genetic screening to detect newborns at risk of childhood-onset hearing loss. Int J Audiol. 2013;52:124-133.

[4] Schrijver I. Hereditary non-syndromic sensorineural hearing loss: Transforming silence to sound. The Journal of Molecular Diagnostics: JMD. 2004;6:275-284.

[5] Guilford P, Ben Arab S, Blanchard S, Levilliers J, Weissenbach J, Belkahia A, Petit C. A non-syndrome form of neurosensory, recessive deafness maps to the pericentromeric region of chromosome 13q. Nat Genet. 1994;6:24-8.

[6] Mahdieh N, Rabbani B, Inoue I. Genetics of hearing loss. In: Naz S, editor. Hearing Loss. InTech; 2012. ISBN: 978-953-51-0366-0. DOI: 10.5772/31984.

[7] Smith RJ, Bale JF Jr, White KR. Sensorineural hearing loss in children. Lancet. 2005;365:879-890.

[8] Van Camp G, Willems PJ, Smith RJ. Nonsyndromic hearing impairment: Unparalleled heterogeneity. Am J Hum Genet. 1997;60:758-764.

[9] Shearer AE, Deluca AP, Hildebrand MS, Taylor KR, Gurrola J, Scherer S, et al. Comprehensive genetic testing for hereditary hearing loss using massively parallel sequencing. Proceedings of the National Academy of Sciences of the United States of America. 2010;107:21104-21109.

[10] Van Camp G, Smith RJH. Hereditary Hearing Loss Homepage. 2014. Available from: http://www.hereditaryhearingloss.org [Accessed: 2015-01-27].


[24] Tang X, Yang L, Zhu Y, Liao Z, Wang J, Qian Y, Tao Z, Hu L, Wu G, Lan J, Wang X, Ji J, Wu J, Ji Y, Feng J, Chen J, Li Z, Zhang X, Lu J, Guan M-X. Very low penetrance of hearing loss in seven Han Chinese pedigrees carrying the deafness-associated 12S rRNA A1555G mutation. Gene. 2007;393:11-19.

[25] Prezant TR, Agapian JV, Bohlman MC, et al. Mitochondrial ribosomal RNA mutation associated with both antibiotic-induced and non-syndromic deafness. Nat Genet. 1993;4:289-294.

[26] Estivill X, Govea N, Barcelo A, et al. Familial progressive sensorineural deafness is mainly due to the mtDNA A1555G mutation and is enhanced by treatment with aminoglycosides. Am J Hum Genet. 1998;62:27-35.

[27] Pandya A. Nonsyndromic hearing loss and deafness, mitochondrial. In: Pagon RA, Adam MP, Ardinger HH, et al., editors. GeneReviews®. Seattle (WA): University of Washington, Seattle; 1993-2015 (updated 2014).

[28] Chamber HF, Sande MA. The aminoglycosides. In: Hardman JG, Limbird LE, Molinoff PB, Ruddon RW, Gilman A, editors. The Pharmacological Basis of Therapeutics. 9th edition. New York: McGraw-Hill; 1996. pp. 1103-1221.

[29] Kokotas H, Petersen MB, Willems PJ. Mitochondrial deafness. Clin Genet. 2007;71:379-391.

[30] Scarpelli M, Zappini F, Filosto M, Russignan A, Tonin P, Tomelleri G. Mitochondrial sensorineural hearing loss: A retrospective study and a description of cochlear implantation in a MELAS patient. Genet Res Int. 2012; Article ID 287432, 5 pages. DOI: 10.1155/2012/287432.

[31] Sinnathuray AR, Raut V, Awa A, Magee A, Toner JG. A review of cochlear implantation in mitochondrial sensorineural hearing loss. Otology and Neurotology. 2003;24:418-426.

[32] Toriello HV, Reardon W, Gorlin RJ. Hereditary Hearing Loss and Its Syndromes. 2nd ed. New York: Oxford University Press; 2004.

[33] Huang BY, Zdanski C, Castillo M. Pediatric sensorineural hearing loss, part 2: Syndromic and acquired causes. AJNR Am J Neuroradiol. 2012;33:399-406.

[34] Lunardi S, Forli F, Michelucci A, Liumbruno A, Baldinotti F, Fogli A, Bertini V, Valetto A, Toschi B, Simi P, Boldrini A, Berrettini S, Ghirri P. Genetic hearing loss associated with craniofacial abnormalities. In: Naz S, editor. Hearing Loss. InTech; 2012. ISBN: 978-953-51-0366-0.

[35] Putcha GV, Bejjani BA, Bleoo S, Booker JK, Carey JC, Carson N, et al. A multicenter study of the frequency and distribution of GJB2 and GJB6 mutations in a large North American cohort. Genet Med. 2007;9:413-426.

[36] De Keulenaer S, et al. Molecular diagnostics for congenital hearing loss including 15 deafness genes using a next generation sequencing platform. BMC Med Genomics. 2012;5:17.


[50] Yariz KO, Duman D, Seco CZ, et al. Mutations in OTOGL, encoding the inner ear protein otogelin-like, cause moderate sensorineural hearing loss. Am J Hum Genet. 2012;91:872-882.

[51] Zheng J, Miller KK, Yang T, et al. Carcinoembryonic antigen-related cell adhesion molecule 16 interacts with alpha-tectorin and is mutated in autosomal dominant hearing loss (DFNA4). Proc Natl Acad Sci. 2011;108:4218-4223.

[52] Yan D, Zhu Y, Walsh T, et al. Mutation of the ATP-gated P2X2 receptor leads to progressive hearing loss and increased susceptibility to noise. Proc Natl Acad Sci. 2013;110:2228-2233.

[53] Schraders M, Haas SA, Weegerink NJD, et al. Next-generation sequencing identifies mutations of SMPX, which encodes the small muscle protein, X-linked, as a cause of progressive hearing impairment. Am J Hum Genet. 2011;88:628-634.

[54] Pierce SB, Walsh T, Chisholm KM, et al. Mutations in the DBP-deficiency protein HSD17B4 cause ovarian dysgenesis, hearing loss, and ataxia of Perrault syndrome. Am J Hum Genet. 2010;87:282-288.

[55] Pierce SB, Chisholm KM, Lynch ED, et al. Mutations in mitochondrial histidyl tRNA synthetase HARS2 cause ovarian dysgenesis and sensorineural hearing loss of Perrault syndrome. Proc Natl Acad Sci. 2011;108:6543-6548.

[56] Sirmaci A, Walsh T, Akay H, et al. MASP1 mutations in patients with facial, umbilical, coccygeal, and auditory findings of Carnevale, Malpuech, OSA, and Michels syndromes. Am J Hum Genet. 2010;87:679-686.

[57] Klein CJ, Botuyan MV, Wu Y, et al. Mutations in DNMT1 cause hereditary sensory neuropathy with dementia and hearing loss. Nat Genet. 2011;43:595-600.

[58] Levenson D. New testing guidelines for hearing loss support next-generation sequencing. Am J Med Genet. 2014;164:7-8. DOI: 10.1002/ajmg.a.36643.

[59] Skvorak Giersch AB, Morton CC. Cytogenetics and cochlear expressed sequence tags (ESTs) for identification of genes involved in hearing and deafness. In: Keats BJB, Fay RR, editors. Genetics of Auditory Disorders. Springer Handbook of Auditory Research; 2002. pp. 92-120. ISBN: 978-0-387-21853-3.

[60] Toriello HV, Smith SD. Hereditary Hearing Loss and Its Syndromes. Chapter 5: Syndrome diagnosis and investigation in the hearing impaired patient. Oxford University Press; 2013. p. 89.

[61] Ponnuraj KT. Cytogenetic techniques in diagnosing genetic disorders. In: Ikehara K, editor. Advances in the Study of Genetic Disorders. InTech; 2011. ISBN: 978-953-307-305-7. DOI: 10.5772/17481.

[62] Wieacker P, Steinhard J. The prenatal diagnosis of genetic diseases. Deutsches Ärzteblatt International. 2010;107:857-862. DOI: 10.3238/arztebl.2010.0857.


Engineering News. 2014;92. pp. 12-17. http://cen.acs.org/articles/92/i14/Sound-Sci‐ ence.html.

[91] Rivolta MN. Stem cells in the inner ear: Advancing towards a new therapy for hear‐ ing impairment. In: Martini A, Stephens D, Read AP. Genes, Hearing, and Deafness: From Molecular Biology to Clinical Practice. CRC Press; 2007. p. 279.

[77] Raphael Y, Frisancho JC, Roessler BJ. Adenoviral-mediated gene transfer into guinea

[78] Fire A, Xu S, Montgomery MK, Kostas SA, Driver SE, Mello CC. Potent and specific genetic interference by double-stranded RNA in,*Caenorhabditis elegans*. Nature.

[79] Maeda Y, Fukushima K, Nishizaki K Smith RJ. *In vitro* and *in vivo* suppression of GJB2 expression by RNA interference. Hum Mol Genet. 2005;14:1641-1650.

[80] Hildebrand MS, Newton SS, Gubbels SP, Sheffield AM, Kochhar A, De Silva MG, Dahl HH, Rose SD, Behlke MA, Smith RJ. Advances in Molecular and Cellular Thera‐ pies for Hearing Loss. Molecular Therapy. 2007;16:224-236. DOI: 10.1038/sj.mt.

[81] Kanzaki S, Beyer L, Karolyi IJ, Dolan DF, Fang Q, Probst FJ, et al. Transgene correc‐ tion maintains normal cochlear structure and function in 6-month-old Myo15a mu‐

[82] Staecker H, Kopke R, Malgrange B, et al. NT-3 and/or BDNF therapy prevents loss of auditory neurons following loss of hair cells. Neuroreport. 1996;7:889-894.

[83] Pickles JO, Harter C, Rebillard G. Fibroblast growth factor receptor expression in out‐

[84] Stover T, Yagi M, Raphael Y. Transduction of the contralateral ear after adenovirus-

[85] Kawamoto K, Yagi M, Stover T, et al. Hearing and hair cells are protected by adeno‐

[86] Nakaizumi T, Kawamoto K, Minoda R, Raphael Y. Adenovirus-mediated expression of brain-derived neurotrophic factor protects spiral ganglion neurons from ototoxic

[87] Bermingham NA, Hassan BA, Price SD, Vollrath MA, Ben-Arie N, Eatock RA, et al. Math1: An essential gene for the generation of inner ear hair cells. Sci. 1999;284:

[88] Kawamoto K, Ishimoto S, Minoda R, Brough DE Raphael Y. Math1 gene transfer gen‐ erates new cochlear hair cells in mature guinea pigs *in vivo*. J Neurosci. 2003;23:

[89] Izumikawa M, Minoda R, Kawamoto K, Abrashkin KA, Swiderski DL, Dolan DF, et al. Auditory hair cell replacement and hearing improvement by Atoh1 gene therapy

[90] Jarvis LM. Sound Science. Emerging technology and a wide-open market are spur‐ ring more companies to pursue drugs to treat or prevent hearing loss. Chemical &

viral gene therapy with TGF-β1 and GDNF. Mol. Ther. 2003;7:484-492.

er hair cells of rat cochlea. Neuroreport. 1998;9:4093-4095.

mediated cochlear gene transfer. Gene Ther. 2000;7:377-383.

damage. Audiol. Neurootol. 2004;9:135-143.

in deaf mammals. Nat Med. 2005;11:271-276.

pig cochlear cells in vivo. Neurosci Lett. 1996;207:137-141.

1998;391:806-811.

86 Update On Hearing Loss

6300351.

1837-1841.

4395-4400.

tant mice. Hear Res. 2006;214:37-44.


[116] Bauer PW, Geers AE, Brenner C, Moog JS, Smith RJH. The effect of GJB2 allele var‐ iants on performance after cochlear implantation. Laryngoscope. 2003;113:2135-41.

[103] Matsuoka AJ, Kondo T, Miyamoto RT, Hashino E. Enhanced survival of bone-mar‐ row-derived pluripotent stem cells in an animal model of auditory neuropathy. The

[104] Hu Z, Ulfendahl M, Olivius NP. Central migration of neuronal tissue and embryonic stem cells following transplantation along the adult auditory nerve. Brain Res.

[105] Ulfendahl M. Tissue transplantation into the inner ear. In: Martini A, Stephens D, Read A P. Genes, Hearing, and Deafness: From Molecular Biology to Clinical Prac‐

[106] Ito J, Murata M, Kawaguchi S. Regeneration and recovery of the hearing function of the central auditory pathway by transplants of embryonic brain tissue in adult rats.

[107] Lang H, Ebihara Y, Schmiedt RA, Minamiguchi H, Zhou D, Smythe N, et al. Contri‐ bution of bone marrow hematopoietic stem cells to adult mouse inner ear: mesenchy‐

[108] Matsuoka AJ, Kondo T, Miyamoto RT, Hashino E. *In vivo* and *in vitro* characteriza‐ tion of bone marrow-derived stem cells in the cochlea. Laryngoscope.

[109] Chen W, Cacciabue-Rivolta DI, Moore HD, Rivolta MN. The human fetal cochlea can be a source for auditory progenitors/stem cells isolation. Hear Res. 2007;233:23-29.

[110] Nishimura K, Nakagawa T, Ito J. Potential of Pluripotent Stem Cells for the Replace‐ ment of Inner Ears, Embryonic Stem Cells - Recent Advances in Pluripotent Stem Cell-Based Regenerative Medicine, Prof. Craig Atwood editor. InTech. 2011; ISBN:

[111] Revoltella RP, Papini S, Rosellini A, et al. Cochlear repair by transplantation of hu‐ man cord blood CD133+ cells to nod-scid mice made deaf with kanamycin and noise.

[112] Da Costa V, O'Grady G, Jackson L, Kaylie D, Raynor E. Improvements in sensorineu‐ ral hearing loss after cord blood transplant in patients with mucopolysaccharidosis. Arch Otolaryngol Head Neck Surg. 2012;138:1071-6. DOI: 10.1001/jamaoto.2013.597.

[113] Available from: https://clinicaltrials.gov/show/NCT01343394 [Accessed: 2015-02-01].

[114] Forge A, Li L, Corwin JT, Nevill G. Ultrastructural evidence for hair cell regeneration

[115] Warchol ME, Lambert PR, Goldstein BJ, Forge A, Corwin JT. Regenerative prolifera‐ tion in inner ear sensory epithelia from adult guinea pigs and humans. Sci.

mal cells and fibrocytes. J Comp Neurol. 2006;496: 187-201.

Laryngoscope*.* 2007;117:1629-1635.

tice. CRC Press. 2007. p. 294.

Exp Neurol. 2001;169:30-5.

2006;116:1363-1367.

978-953-307-198-5, DOI: 10.5772/10541.

in the mammalian inner ear. Sci. 1993;259:1616-1619.

Cell Transplant. 2008;17:665-78.

1993;259:1619-1622.

2004;1026:68-73.

88 Update On Hearing Loss


### **Hearing Loss in Infectious and Contagious Diseases**

Luiz Alberto Alves Mota, Paula Cristina Alves Leitão, Paulo Marcelo Freitas de Barros and Ana Maria dos Anjos Carneiro Leão

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/61818

#### **Abstract**

Hearing loss can occur for various reasons, whether of genetic, congenital, or acquired character. Infectious diseases stand out among the causes of this type of deficiency and account for approximately 25% of all cases of profound hearing loss; of these, one-fifth are due to congenital causes. Hearing loss can be classified according to where the hearing system is affected, to whether the loss is unilateral or bilateral, and to its intensity or degree. Regarding the site affected, hearing loss can involve transmission (conduction), perception (sensorineural), or both (mixed). Losses arising from any affection of the outer or middle ear are called transmission or conductive losses. Sensorineural losses occur due to lesions of the hair cells of the organ of Corti in the cochlea (inner ear) and/or of the cochlear nerve. When there is concomitant conductive and sensorineural affection, the loss is classified as mixed. Hearing loss can interfere in the lives of affected individuals: besides impairing communication, it can diminish quality of life when it leads to feelings of sadness and anxiety, or even to social isolation; in children, it can moreover have consequences for development. Thus, appropriate treatment and/or monitoring of infectious diseases is important, so that hearing loss can be prevented or diagnosed early.

**Keywords:** Hearing loss, infectious diseases, etiology

### **1. Introduction**

The hearing process begins when sound waves enter the outer ear and travel along the ear canal to the eardrum, causing it to vibrate. These vibrations are transmitted to the ossicles of the middle ear, which amplify the sound vibration before transmitting it to the inner ear. The inner ear contains the cochlea, a fluid-filled structure that houses the hair cells.[1]

© 2015 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The frequencies and intensities of the sound determine which hair cells will move. This causes electrical impulses to be generated and sent through the auditory pathways to the brain so that it may process the information. These electrical impulses are codes that the brain processes and, on interpreting them, assigns various specific meanings.[1]

Hearing losses can be classified according to the location of the affected portion of the hearing system, to whether the loss is unilateral or bilateral, and to its intensity or degree. With regard to location, hearing loss can involve transmission (conduction), perception (sensorineural), or a mixture of the two (mixed). Losses arising from some affection of the outer or middle ear are called transmission or conductive losses. Sensorineural losses result from lesions of the hair cells of the organ of Corti in the cochlea (inner ear) and/or of the cochlear nerve. When there are concomitant conductive and sensorineural affections, the hearing loss is classified as mixed.[2]

Hearing loss can occur due to a genetic, congenital, or acquired cause.[3] Among the acquired causes, many can be avoided, e.g., infections that occur during pregnancy, meningitis, and even the use of ototoxic medication.[4]

### **2. Epidemiology**

Deafness is a global problem that affects individuals, families, societies, and governments. According to the World Health Organization (WHO), deafness affects between 1 and 4 people per 1,000 individuals, and there has been a considerable increase in poor countries. In 2005, for example, about 278 million people had degrees of hearing loss between moderate and profound, and 80% of them lived in poor and developing countries.[5] A prevalence greater than 1 per 1,000, however, indicates a serious public health problem that needs urgent attention.[6]

Infectious diseases are the leading cause of hearing loss and produce about 25% of profound losses; of these, one-fifth are congenital.[7] The main infections include diseases such as rubella, cytomegalovirus, and measles.[7]

In the newborn, congenital infections are an important cause of hearing loss, which may have implications for the development of the child.[8]

### **3. Pathogenesis**

The mechanisms that lead to the onset of viral hearing loss may begin with infections of the upper airways, which can progress to subsequent involvement of the middle ear and present with conductive hearing loss.[3]

Moreover, viral invasion of the inner ear can occur.[3] The viruses that can damage the inner ear may do so at different stages of the life cycle: during intrauterine life, childhood, adolescence, or adulthood. The pathological changes that predominate in the basal cochlea include degeneration of the organ of Corti, atrophy of the stria vascularis, displacement and distortion of the tectorial membrane, and degeneration of the saccule. The utricle and semicircular canals tend to be preserved.[9]

### **4. Main infectious and contagious diseases related to loss of hearing**

#### **4.1. Infections by virus**


#### *4.1.1. Epidemic parotiditis*

Epidemic parotiditis or mumps is an acute systemic and contagious viral infection. A *Paramyxovirus*, with an RNA genome, is involved.[10]

The most typical clinical manifestations are sialadenitis, epididymo-orchitis, pancreatitis, meningitis, and hearing loss. Sensorineural hearing loss occurs in up to 5/10,000 cases and may appear some days or weeks after the parotiditis.[11]

Deafness is usually sudden and profound, with or without nausea, vomiting, dizziness, and tinnitus.[9] Hearing loss, which is unilateral in 80% of cases, is more pronounced at high sound frequencies, and reduced caloric responses may be present on vestibular testing.[11]

There may be atrophy of the organ of Corti and of the stria vascularis, with minimal effect on the vestibular system. Also observed are endolymphatic hydrops and obliteration of the endolymphatic duct.[11]

#### *4.1.2. Infection by cytomegalovirus*

Cytomegalovirus (CMV), which belongs to the herpesvirus family, is an enveloped virus that has the largest genome among the viruses that infect animal species. In immunocompetent individuals, it is generally responsible for asymptomatic infections.[12]

The highest incidence of the primary infection occurs in two peak periods: the first is in childhood, with early acquisition as a result of perinatal infection, and the second is in adolescence, through sexual transmission or by kissing.[12] It infects up to 70% of children who spend the day in kindergartens, and about 1–2% of infants are infected with CMV.[13]

In the congenital form, clinical manifestations range from the unapparent to the severe and widespread. Cytomegalic inclusion disease develops in about 5% of infected fetuses. The most common manifestations at presentation are petechiae, hepatosplenomegaly, and jaundice. Microcephaly with or without intracranial calcifications, delayed intrauterine growth, and prematurity are observed in 30–50% of cases.[12] Deafness, typically bilateral, occurs in 20–65% of infants with this disease.[13]

In patients with hearing loss, a consistent pattern follows, and the loss can develop over a period of years. Among asymptomatic children, the rate of hearing loss ranges from 7% to 13%; CMV should therefore be considered in patients with nonsyndromic and nongenetic hearing loss.[13]

#### *4.1.3. Rubella*

This is a disease with an acute rash caused by an RNA virus of the genus *Rubivirus*, family Togaviridae, which is highly contagious and mainly affects children.[13]

The clinical state is characterized by a maculopapular and diffuse pinpoint rash, starting on the face, scalp, and neck, later spreading to the trunk and limbs.[13]

The infection acquired after birth usually causes a mild or even subclinical disease. The main symptoms of this form are retroauricular, cervical and suboccipital lymphadenopathy, rash, and fever. Complications are uncommon.[10]

Maternal infection during early pregnancy can lead to infection of the fetus, resulting in congenital rubella.[10] Congenital rubella syndrome (Gregg's syndrome) affects most organ systems, causing cataracts, microphthalmia, heart defects, skin rash, retardation of growth, and hearing loss. In general, hearing loss affects about 50% of individuals with the disease and is normally severe to profound. Auditory manifestations may occur months to years after the initial infection.[11]

#### *4.1.4. Measles*

Measles is an acute viral disease that is potentially serious, transmissible, and extremely contagious. Its etiologic agent is an RNA virus of the genus *Morbillivirus*, family Paramyxoviridae.[13]

Among the clinical manifestations, it is characterized by high fever (above 38.5°C), a widespread maculopapular rash, cough, coryza, conjunctivitis, and Koplik spots (small white spots on the oral mucosa that precede the rash).[13]

It may cause severe degeneration of the organ of Corti, the stria vascularis, cochlear neurons, and vestibular damage. Inflammation, fibrous deposit, and ossification in the basal turn of the cochlea may also be present. Hearing loss tends to be asymmetrical, bilateral, and severe. Vestibular abnormalities are not rare.[11]

#### *4.1.5. Viral meningitis*

Viral meningitis is characterized by a clinical state of neurological changes, which usually develops benignly. Approximately 85% of cases are due to the group of Enteroviruses, among which the poliovirus, echovirus, and coxsackievirus stand out. Other less common groups are arboviruses, herpes simplex virus, and varicella, mumps, and measles viruses.[14]

It occurs most frequently in children over two years old and can lead to sensorineural hearing loss.[3]

#### *4.1.6. Herpes simplex*

Herpes simplex has been considered one of the most common viral contamination agents in humans and is subdivided into two groups: type 1 and type 2.[15]

Infections caused by herpes simplex type 1 usually affect areas such as the lips, mouth, intraoral region, nose, and eyes, while infections caused by herpes simplex type 2 are mainly found in the genital and surrounding areas. Trigger factors include fever, exposure to cold temperatures or ultraviolet rays, skin or mucosal abrasions, emotional stress, and nerve injury. In newborns, the onset of infection can occur at different periods: prenatal (congenital infection), perinatal (infection through the birth canal), or postnatal (infection through contact with infected individuals).[15]

Viruses of the herpes group are regarded as causing sensorineural loss. In pregnancy, they can also cause spontaneous miscarriages, stillbirths, and congenital defects.[15]

#### *4.1.7. Infectious mononucleosis and other viral agents*

Infectious mononucleosis (IM) is caused by the Epstein–Barr virus (EBV) and is characterized by fever, pharyngitis, lymphadenopathy, and atypical lymphocytosis. EBV is a member of the Herpesviridae family.

One of the main viral agents associated with sensorineural hearing loss in adulthood is the IM virus. Other agents that can also often affect this age-group and are related to hearing loss are adenovirus, enterovirus, influenza, and parainfluenza.[3]

#### **4.2. Infections by bacteria**


#### *4.2.1. Bacterial meningitis*

Meningitis is frequently associated with a high mortality rate, and a large portion of survivors may present sequelae of the disease, among which is hearing loss. This disease is held to be among the main causes of postnatally acquired hearing impairment.[16]

The mechanisms elucidated as responsible for hearing damage include direct invasion of the cochlea and labyrinth by the bacteria, toxin-mediated lesion of cranial nerve VIII, blockage of small vessels, and the ototoxic action of the antibiotics used. Regarding the degree of loss, a high percentage of profound hearing loss (66.95%) has been evidenced; however, hearing loss of all degrees (from mild to anacusis) was observed.[16]

In a study of 124 children recruited from 21 hospitals in England and South Wales, aged between 4 weeks and 16 years old, with a recent diagnosis of bacterial meningitis, 92 (74%) had meningococcal and 18 (15%) had pneumococcal meningitis. All cases showed obvious hearing loss in the first assessment. Three children had permanent sensorineural hearing loss. Thirteen children (10.5%) had reversible loss, nine of which were resolved within 48 hours of diagnosis.[17]

The impact on the development of the child after meningitis can be devastating. In the postmeningitis period, a possibility of rehabilitation for patients with severe and profound sensorineural loss is a cochlear implant.[18]

In cases of postmeningitis hearing loss, it is particularly important to perform the implant as early as possible because of the intracochlear ossification that may occur, which can prevent the placement of electrodes in the lumen of the cochlea.[19]

#### *4.2.2. Syphilis*

During the decade 2003–2012, diagnoses of primary syphilis increased 61% in men in England, while in women they decreased by 16%.[20] In the 2004 Sentinela Parturiente (Mother in Labour Sentinel) Study of the Ministry of Health in Brazil, the prevalence of syphilis in pregnant women was 1.6%, about four times higher than that of HIV infection in the same group, with an estimated total of 48,425 pregnant women infected that year. Between 2005 and June 2012, 57,700 cases of syphilis in pregnant women were registered in SINAN (the Brazilian statutory body for notifiable diseases), most of which occurred in the Southeast and Northeast regions.[21]

Syphilis is an infectious disease caused by a bacterium, *Treponema pallidum*, which is predominantly transmitted sexually. If left untreated, the disease can progress to stages that adversely affect the skin and internal organs such as the heart, liver, and central nervous system.[18] Hearing loss can occur because of syphilis, but currently this is rare, occurring most often in the tertiary phase.[3]

Otosyphilis may present as a sudden and fluctuating sensorineural loss with episodic vertigo and progressive unilateral or bilateral loss.[11]

Acquired syphilis may also affect the inner ear, simulating Ménière's disease. Hearing loss can be rapidly progressive, initially with good discrimination; tinnitus and vestibular symptoms disappear as destruction of the labyrinth is completed.[9]

Congenital syphilis is due to the hematogenous spread of *Treponema pallidum* from a pregnant woman who has not been treated, or has been inadequately treated, to her unborn child via the placenta. Transmission can occur at any stage of pregnancy and at any stage of the disease.[22]

Congenital syphilis can cause severe deafness and may affect each ear separately. Manifestation occurs when the child is around two years old or between 8 and 20 years of age.[9]

#### **4.3. Protozoan infections**

#### *4.3.1. Toxoplasmosis*

Toxoplasmosis is caused by infection with the obligate intracellular parasite *Toxoplasma gondii*. Both in its acute and in its chronic form, it is related to the appearance of a clinically evident disease, including lymphadenopathy, encephalitis, myocarditis, and pneumonitis.[10]

In immunocompetent individuals, acute toxoplasmosis is habitually asymptomatic and goes unnoticed in 80–90% of adults and children with acquired infection. In the congenital form, the infection of the placenta determines the hematogenous infection of the fetus. The proportion of fetuses that are infected increases as pregnancy progresses, but the severity of the infection declines.[10]

*Toxoplasma gondii* has been associated with lesions of the auditory pathways, with demonstration of calcium deposits (similar to the calcifications found in the brains of children with congenital toxoplasmosis) in the spiral ligament and the cochlea. A hearing deficit has been reported in about 20% of cases of congenital toxoplasmosis.[23]

### **5. Final remarks**


Hearing loss can interfere with the lives of affected individuals because, in addition to affecting communication, it can diminish quality of life by provoking feelings such as sadness and anxiety, or even by leading to social isolation. In infancy, hearing loss can also have consequences for development.

Thus, proper treatment and/or monitoring of infectious diseases, with the aim of preventing hearing loss or diagnosing it early, is important. With regard to congenital infections, public measures that encourage primary prevention and early identification of these conditions in newborns are needed. Hearing health will therefore depend on epidemiological studies of each location and on close integration between health and education authorities, working together with all other sectors of society.

### **Author details**

Luiz Alberto Alves Mota1\*, Paula Cristina Alves Leitão2, Paulo Marcelo Freitas de Barros3 and Ana Maria dos Anjos Carneiro Leão4

\*Address all correspondence to: luizmota10@hotmail.com

1 Faculty of Medical Sciences, Universidade de Pernambuco, Brazil

2 Faculty of Medical Sciences, Universidade de Pernambuco, Recife/PE, Brazil

3 Universidade Católica de Pernambuco, Brazil

4 Department of Animal Morphology and Physiology, Universidade Federal Rural de Pernambuco, Brazil

### **References**

[1] Instituto Brasileiro do Sono. Available at: http://www.institutobrasileirodosono.com.br/. Accessed on 16 November 2014.

[2] Vieira ABC, Macedo LR, Gonçalves DU. O diagnóstico da perda auditiva na infância. Pediatria (São Paulo). 2007; 29(1): 43–9.

[3] Vieira ABC, Mancini P, Gonçalves DU. Infectious diseases and hearing loss. Rev Med Minas Gerais. 2010; 20(1): 102–6.

[4] Pereira T, Costa KC, Pomilio MCA, Costa SMS, Rodrigues GRI, Sartorato EL. Investigação etiológica da deficiência auditiva em neonatos identificados em um programa de triagem auditiva neonatal universal. Rev CEFAC. 2014; 16(2): 422–9.

[5] World Health Organization. Deafness and hearing impairment. Available at: http://www.who.int/mediacentre/factsheets/fs300/en/index.html. Accessed on 21 January 2010.

[6] Bubbico L, Rosano A, Spagnolo A. Prevalence of prelingual deafness in Italy. Acta Otorhinolaryngol. 2007; 27: 17–21.

[7] Catlin FI. Prevention of hearing impairment from infection and ototoxic drugs. Arch Otolaryngol. 1985; 111(6): 377–84.

[8] Gatto CI, Tochetto TM. Infantile hearing loss: implications and solutions. Rev CEFAC. 2007; 9(1): 110–15.

[9] Miniti A. Otorrinolaringologia: Clínica e Cirúrgica. 2ª edição. São Paulo: Editora Atheneu, 2000.

[10] Harrison. Medicina Interna. 17ª edição. Rio de Janeiro: McGraw-Hill Interamericana do Brasil, 2008.

[11] Lee KJ. Princípios de Otorrinolaringologia: Cirurgia de Cabeça e Pescoço. 9ª edição. São Paulo: McGraw-Hill, 2008.

[12] Mendrone Junior A. Prevalência da infecção pelo citomegalovírus: a importância de estudos locais. Rev Bras Hematol Hemoter. 2010; 32(1): 7–8.

[13] Brasil. Ministério da Saúde. Secretaria de Vigilância em Saúde. Guia de Vigilância em Saúde. Brasília: Ministério da Saúde, 2014. Available at: <www.saude.gov.br/bvs>. Accessed on 13 February 2015.

[14] Secretaria de Estado da Saúde de São Paulo. Divisão de Doenças de Transmissão Respiratória. Meningites virais. Rev Saúde Pública. 2006; 40(4): 748–50.

[15] Schuster LC, Buss C. Herpes and its hearing implications: a literature review. Rev CEFAC. 2009; 11(4): 695–700.

[16] Romero JH, Carvalho MS, Feniman MR. Achados audiológicos em indivíduos pós-meningite. Rev Saúde Pública. 1997; 31(4): 398–401.

[17] Richardson MP, Reid A, Tarlow MJ, Rudd PT. Hearing loss during bacterial meningitis. Arch Dis Child. 1997; 76: 134–8.

[18] Bevilacqua MC, Moret ALM, Costa Filho OA, Nascimento LT, Banhara MR. Cochlear implant in deaf children due to meningitis. Rev Bras Otorhinolaryngol. 2003; 69(6): 760–4.


