Compared with conventional bone conduction hearing aids, this technique has better amplification of higher-frequency ranges, which improves speech intelligibility [14, 15].

Another implanted device is the artificial middle ear, which is used when vibration from the eardrum does not travel smoothly through the ear ossicles even after tympanoplasty (**Figure 9b**). A sound receiver with a built-in transmitting coil is magnetically attached behind the ear, over a receiving coil implanted on the temporal bone, and the signal is transmitted through the skin by the coupling of the two coils. The receiving coil activates a transducer that touches the round window of the cochlea, directly stimulating the basilar membrane. The artificial middle ear was originally developed by a Japanese hearing aid maker (RION Co., LTD.) in 1983, and other makers are currently developing a new stimulation approach using artificial ear ossicles [16].

A third device is the cochlear implant, which converts sound into an electrical current and directly stimulates the auditory nerve in the cochlea through an implanted electrode (**Figure 9c**). The first prototype cochlear implant operation was performed by William House and John Doyle in 1961 [17], making the history of this method older than that of either the BAHA or the artificial middle ear. Subsequently, Clark developed a multichannel-electrode cochlear implant in 1977 and produced the first commercialized multielectrode device in 1978 [18]. In Japan, the cochlear implant is indicated for profound deafness (hearing level > 90 dB). It is reported to be especially useful for children who have not yet acquired language skills, supporting their future linguistic development.

Unlike implanted hearing devices, another future option could utilize stem cells: inner ear stem cells were discovered in 1999 [19], and regenerative medicine has developed considerably since then. This means that surgical approaches might become mainstream hearing-loss treatments. One irony is that, in Japan, implanted hearing devices cost the individual less than hearing aids do because the former are covered within the scope of the health-care system.

**6. Future of hearing aids**

Although implanted hearing devices are a rapidly evolving research field and market, noninvasive hearing aids remain the first-line approach for treating hearing loss. While these hearing aids have largely achieved their aim, three serious problems remain unresolved.

The first is adaptation for profound hearing loss (hearing level > 90 dB). Hearing aids are recommended when the hearing level exceeds 40 dB, but they provide little benefit for profound hearing loss; the exception is the cochlear implant, which works in some instances.

The second problem concerns how hearing aids are worn. Throughout the long history of hearing aids, some part of the device has always been inserted into the ear. The earplug occludes the external auditory canal, and the annoyance this causes is consistently a high-ranking reason for discontinuing use. Although the vent in CIC hearing aids, as well as the open-fitting design, seems to reduce this feeling and the accentuation of the user's own voice, both increase the risk of acoustic feedback. Additionally, earplugs are not an option for patients with atresia of the external auditory canal or microtia, who do not have enough space to insert one.

The third problem concerns adaptation for sensorineural hearing loss. Unlike conductive hearing loss, which disrupts the conduction of sound waves anywhere along the route through the outer ear, tympanic membrane, and middle ear, sensorineural hearing loss originates in the inner ear or along the pathway from the auditory nerve to the auditory cortex. Some types of hearing loss, such as presbycusis, have mixed causes. Patients with conductive hearing loss can achieve nearly 100% accuracy on speech intelligibility tests when the speech is presented at a sufficiently loud volume. However, patients with sensorineural hearing loss, including presbycusis, reach a peak accuracy of only 40–80%, even when the speech is presented at an optimal volume. In this condition, patients can hear the speech but cannot identify the syllables. Because the main function of a hearing aid is to amplify sound, it may offer limited benefit for sensorineural and mixed hearing loss during conversation.

The following subsections describe solutions, or at least promising clues, to these three problems and introduce next-generation hearing aids.

#### **6.1. Bone-conducted ultrasonic hearing aid**

Ultrasonic sound waves are those with frequencies greater than 20 kHz, the audible limit of human hearing. Although airborne ultrasound cannot be perceived, we can hear ultrasound delivered to the mastoid bone of the skull via a transducer [20]. Importantly, bone-conducted ultrasound can be perceived even by people with profound hearing impairment. One study reported that such individuals are able to identify ultrasound amplitude-modulated speech signals as speech [21]. Additionally, Hosoi used magnetoencephalography and positron-emission tomography to show that bone-conducted ultrasound can activate the auditory cortex of people with profound hearing loss [22]. Although the mechanisms underlying this phenomenon remain hypothetical, the best current explanation is that bone-conducted ultrasound stimulates residual inner hair cells at the base of the basilar membrane [23, 24].
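To make the signal chain concrete, the following is a minimal sketch of the amplitude modulation involved. The chapter does not specify device parameters, so the 30 kHz carrier, the 96 kHz sampling rate, and the `am_modulate_ultrasound` helper are illustrative assumptions rather than the actual design of any of the devices described below.

```python
import numpy as np

def am_modulate_ultrasound(speech, fs, carrier_hz=30_000.0, depth=1.0):
    """Amplitude-modulate a speech signal onto an ultrasonic carrier.

    speech     : 1-D float array (audio-band message signal)
    fs         : sampling rate in Hz; must exceed twice the carrier
    carrier_hz : illustrative ultrasonic carrier (> 20 kHz)
    depth      : modulation depth (a parameter some test models could vary)
    """
    if fs <= 2 * carrier_hz:
        raise ValueError("sampling rate too low for the ultrasonic carrier")
    t = np.arange(len(speech)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    # Classic double-sideband AM: (1 + depth * m(t)) * carrier, with the
    # message normalized to [-1, 1] so the envelope stays non-negative.
    m = speech / (np.max(np.abs(speech)) + 1e-12)
    return (1.0 + depth * m) * carrier

# Example: one second of a 200 Hz tone standing in for a speech signal.
fs = 96_000
t = np.arange(fs) / fs
speech = 0.5 * np.sin(2 * np.pi * 200 * t)
bcu_signal = am_modulate_ultrasound(speech, fs)
```

The modulated signal is what the transducer delivers to the mastoid bone; how the auditory system recovers the speech envelope from it is precisely the open mechanistic question noted above.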

An accumulation of research has led to the development of several bone-conducted ultrasonic hearing aids (**Figure 10**). The HD-GU was a test model developed by Nara Medical University in Japan; when connected to a computer, the parameters of its various digital signal processors (e.g., noise reduction and nonlinear gain) could be controlled on a monitor via software. AIST-BCUHA-003 and AIST-BCUHA-005 were developed by the National Institute of Advanced Industrial Science and Technology (AIST) in Japan [25]. AIST-BCUHA-003 could control the amplitude and carrier frequency (i.e., the ultrasound frequency), while AIST-BCUHA-005 contained digital signal processors and could also control the degree of modulation. Another device is HiSonic, a commercial product that controls sound amplitude and is used both as a hearing aid and as a therapy for suppressing tinnitus [26, 27]. Using these models, Shimokura observed a marked improvement in speech intelligibility in a woman with profound hearing loss who was advised to try bone-conducted ultrasonic hearing aids [28]. The results showed significant improvement from the outset of therapy, and her perceived-speech intelligibility reached 60%, as measured by correctly answered questions in a closed-set word-intelligibility test with three options.

**Figure 10.** Bone-conducted ultrasonic hearing aids: (a) HD-GU, (b) AIST-BCUHA-003, (c) AIST-BCUHA-005, and (d) HiSonic.

For patients with profound hearing loss, the bone-conducted ultrasonic hearing aid might be a good option before risking cochlear implant surgery.

#### **6.2. Cartilage conduction hearing aid**

The uncomfortable feeling that results from an occluded ear is a major hurdle for hearing aid users and a perennial top reason for discontinuing hearing-aid use. One study tried to reduce this feeling with active noise control [29]. Unlike that digital approach, the cartilage conduction hearing aid reduces the "full ear" feeling by analog means. In 2004, Hosoi found that a specific type of transducer could create clearly audible sound when gently placed on the aural cartilage [30, 31]. Aural cartilage forms the outer ear and surrounds the exterior half of the external auditory canal. Transducer-induced cartilage vibration generates sound directly in the external auditory canal, as shown in **Figure 11b** [32–35]; here the cartilage and the transducer play the roles of a loudspeaker's diaphragm and voice coil, respectively. When the transducer is ring shaped, it can amplify sound without occluding the ear (**Figure 11a**). Sound pressure levels measured in the canal show that the ring-shaped transducer produces an average gain of 35 dB at frequencies below 1 kHz [32]. Although the cartilage conduction hearing aid does not work for those with profound hearing loss, it can help those with moderate hearing loss. One advantage of cartilage-conducted sound is that it remains in the canal regardless of the amount of ventilation, and the reduced sound leakage lowers the risk of acoustic feedback [36].

**Figure 11.** (a) Cartilage conduction hearing aid with a ring-shaped transducer. (b) Sound-transmission pathway in the ear. (c) Cartilage conduction hearing aid with a small transducer. (d) Sound-transmission pathway for people with atresia of the external auditory canal.

The cartilage conduction hearing aid can also be used by patients with atresia of the external auditory canal [37, 38]. Patients whose canal is occluded by fibrotic tissue cannot use a conventional hearing aid because they lack a sound pathway (**Figure 11c** and **d**), so they are usually advised to use a bone conduction hearing aid or a BAHA. However, the bone conduction transducer must be pressed tightly against the head, which becomes insufferable over long periods of use, and the BAHA requires surgery. In contrast, the cartilage conduction transducer only needs to rest softly on the end of the canal, because the excitation force required for the light cartilage is much smaller than that for the heavy skull bone. Despite this, hearing levels for bone and cartilage conduction are almost equivalent at frequencies below 2 kHz [38].

#### **6.3. Autocorrelation analysis of speech signals**

The third problem is speech intelligibility for people with sensorineural hearing loss. As mentioned earlier, patients with sensorineural hearing loss can perceive sound but cannot always recognize speech. For example, all Japanese medical institutions use the same list of monosyllabic test signals, spoken by a professional female speaker, for intelligibility testing, and studies using this list in patients with sensorineural hearing loss have identified the less discernible consonants among Japanese monosyllables [39–41]. According to [41], 90% of patients identified the monosyllable /i/ correctly, while only 10% could identify /de/.

To explain this difference, studies have investigated several physical parameters. For example, voice onset time (VOT) is the length of time, in milliseconds, between the release of a stop consonant and the onset of voicing [42]. Another example is the speech intelligibility index (SII), a measure of the proportion of a speech sound that remains discernible under different listening conditions, such as the filtering associated with hearing decline or reverberation [43]. Loudness level (in phon) can be calculated from the averaged spectrum of a signal, and the masking effects of neighboring auditory filters can be included in the calculation [44].
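As a toy illustration of the SII's core idea only (not the standardized ANSI S3.5 procedure), the index can be pictured as a band-importance-weighted sum of audibility. The band list, importance weights, and `sii_like` function below are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical octave-band importance weights (they sum to 1; the real SII
# uses tabulated band-importance functions from ANSI S3.5).
BAND_HZ = [250, 500, 1000, 2000, 4000, 8000]
IMPORTANCE = [0.08, 0.14, 0.22, 0.26, 0.20, 0.10]

def sii_like(speech_db, floor_db, importance=IMPORTANCE):
    """Band-importance-weighted audibility, in the spirit of the SII.

    speech_db : per-band speech levels in dB
    floor_db  : per-band masking floor (noise level or hearing threshold,
                which the SII treats as an equivalent internal noise)
    Each band's audibility is its speech-to-floor margin, mapped from a
    30 dB window onto [0, 1].
    """
    snr = np.asarray(speech_db, float) - np.asarray(floor_db, float)
    audibility = np.clip((snr + 15.0) / 30.0, 0.0, 1.0)
    return float(np.dot(importance, audibility))

# A high-frequency hearing decline raises the floor in the upper bands,
# which lowers the index even though overall loudness barely changes.
speech = [55, 58, 60, 57, 52, 45]
normal_thresholds = [30, 28, 25, 25, 28, 32]
declined_thresholds = [30, 28, 30, 45, 60, 70]
print(sii_like(speech, normal_thresholds))    # close to 1.0
print(sii_like(speech, declined_thresholds))  # noticeably lower (about 0.7)
```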

All of these physical parameters (VOT, SII, and loudness level) relate to peripheral mechanisms of perception in the auditory pathway. However, the causes of sensorineural hearing loss lie in the inner ear or the auditory nerve, so the processing that takes place after the peripheral functions are complete must also be considered when trying to explain speech intelligibility. A major candidate for imitating this subsequent processing is the autocorrelation function (ACF), an established method for temporally analyzing auditory nerve processes [45]. Neural representations resembling the ACF of an acoustic stimulus have been detected in the distributions of all-order interspike intervals in the auditory nerve [46, 47]. Mathematically, the normalized ACF and the ACF can be represented by

$$
\phi(\tau) = \frac{\Phi(\tau)}{\Phi(0)} \tag{1}
$$

where

$$
\Phi(\tau) = \frac{1}{2T} \int_{-T}^{T} p'(t)\, p'(t+\tau)\, dt,
$$

2*T* is the integral interval, τ is the time delay, and *p*′(*t*) is the signal after it has passed through an A-weighting filter. **Figure 12** shows an example calculated for the monosyllable /sa/. In this case, the ACF was computed over an integral interval (2*T* = 80 ms) that moves across the duration of the monosyllable (a running ACF). In the fricative consonant part of the sound (before 0.3 s), the normalized ACF decays suddenly to 0, while in the extended vowel portion (after 0.3 s), it decays gradually as a function of the delay time. To evaluate the slope of this decay, an effective duration (τ<sub>e</sub>) has been proposed [48], defined as the delay at which the envelope of the normalized ACF falls below 0.1. When a noise-like consonant component (e.g., /s/ or /d/) dominates the monosyllable, τ<sub>e</sub> becomes shorter because it expresses the amount of noise (i.e., τ<sub>e</sub> = 0 for white noise and τ<sub>e</sub> = ∞ for a pure tone). In effect, τ<sub>e</sub> represents the SN ratio (S: speech; N: noise or reverberation) of the monosyllable itself.

**Figure 12.** Example of an autocorrelation function calculated for the Japanese monosyllable /sa/ (F0: fundamental frequency of the voice).
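A minimal numpy sketch of this analysis might look as follows, under stated assumptions: the 80 ms running frame matches the chapter, but the A-weighting pre-filter is omitted for brevity, the peak-envelope crossing used for τ<sub>e</sub> is a coarse stand-in for the regression-based extrapolation often used in practice, and all function names are illustrative.

```python
import numpy as np

def normalized_acf(frame):
    """phi(tau) = Phi(tau) / Phi(0) for one frame (tau >= 0 only)."""
    frame = frame - np.mean(frame)
    full = np.correlate(frame, frame, mode="full")
    acf = full[len(frame) - 1:]
    return acf / (acf[0] + 1e-12)

def effective_duration_ms(phi, fs, threshold=0.1):
    """Delay (ms) at which the decaying peak envelope of |phi| falls below
    `threshold`; returns the frame length if it never does (the pure-tone
    case, where tau_e would be infinite in theory)."""
    envelope = np.maximum.accumulate(np.abs(phi)[::-1])[::-1]  # suffix max
    below = np.where(envelope < threshold)[0]
    if len(below) == 0:
        return len(phi) / fs * 1000.0
    return below[0] / fs * 1000.0

def running_tau_e(signal, fs, frame_ms=80.0, hop_ms=10.0):
    """tau_e track of the running ACF (2T = 80 ms, hopped every 10 ms)."""
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    return np.array([
        effective_duration_ms(normalized_acf(signal[s:s + frame]), fs)
        for s in range(0, len(signal) - frame, hop)
    ])

# Sanity check: white noise decays at once, a pure tone barely decays.
fs = 16_000
noise = np.random.randn(fs)
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
print(np.median(running_tau_e(noise, fs)))  # tau_e near 0 ms
print(np.median(running_tau_e(tone, fs)))   # large, limited by frame length
```

The median of such a τ<sub>e</sub> track over a monosyllable corresponds to the (τ<sub>e</sub>)<sub>med</sub> statistic used in **Figure 13**.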

**Figure 13** shows the relationships between these physical parameters and the percentage of monosyllable articulation for patients with sensorineural hearing loss listening at an optimal volume [49]. Each symbol is the average for one consonant, and (τ<sub>e</sub>)<sub>med</sub> indicates the median of the time-varying τ<sub>e</sub> of the running ACF. Among the four physical measures examined, only τ<sub>e</sub> was correlated with speech intelligibility. Effective duration is a measure of temporal pattern persistence, that is, the duration over which a waveform maintains a stable pattern. These data have led to the hypothesis that poor speech recognition is related to degraded perception of temporal fluctuation patterns. Digital signal processing that prolongs the effective duration (e.g., by shortening the consonant length or smoothing the voicing frequency) may therefore improve speech intelligibility for those with sensorineural hearing loss.

**Figure 13.** Relationship between the percent of articulation and four physical parameters: (a) VOT, (b) SII, (c) loudness, and (d) (τ<sub>e</sub>)<sub>med</sub> (R: correlation coefficient). The different symbols indicate different consonants. Reproduced from **Figure 5** in [49].

