preamplifier and amplifier signal; an analog-to-digital converter (ADC); and digital signal processing (DSP) based on a central processing unit that provides ultra-fast calculation, such as filtering, averaging, FFT, WVT, CA, or HT, in order to obtain the otoacoustic emissions response. Additionally, the equipment must be accompanied by a printing and displaying module.

Because the subject being tested is not required to respond, this is an ideal test method for neonates and infants, or for those who cannot be evaluated using conventional techniques (Buz & Bower, 2009). OAE are valuable in testing for ototoxicity and in detecting blockage in the outer ear canal, the presence of middle-ear fluid, and damage to the cochlear outer hair cells.

Fig. 10. Block Diagram of the measuring system used to detect OAE

### **2.2.4 Stem cell evoked potentials**

The Vestibular Evoked Myogenic Potential (VEMP) is a test that is frequently performed on patients experiencing dizziness or balance problems. It evaluates additional portions of the inner ear, providing a more complete evaluation of the vestibular system, which controls balance. Electrodes are placed on the patient's head and neck, and a loud sound is delivered through inserted earphones. This test is very useful for screening infants and children under 5 years of age for hearing loss.

The Auditory Brainstem Response (ABR) is electrical potential activity in the brain that occurs in response to a sound. The test provides information on the cochlea and the brain pathways for hearing. Three small disk electrodes are pasted onto the head and neck, and brain-wave activity is recorded while the patient listens to a clicking sound. Soft headphones are placed into the patient's ears, and quiet clicking sounds are played through them.

Depending on the amount of time elapsing between the "click" stimulus and the auditory evoked response, potentials are classified as early (0-10 msec), middle (11-50 msec), or late (51-500 msec). Early potentials reflect electrical activity at the cochlea, eighth cranial nerve, and brainstem levels, while later potentials reflect cortical activity. In order to separate evoked potentials from background noise, a system computer, as shown in Figure 11 (Nicolet™ EMG/PE), analyzes how well the ears respond to the sound by averaging the auditory evoked responses to at least 1,000 to 2,000 clicks. Early evoked responses may be analyzed to estimate the magnitude of hearing loss and to differentiate among cochlear, eighth-nerve, and brainstem lesions.

Fig. 11. Evoked Potential/EMG measuring system
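The time-locked averaging step can be sketched numerically. The following is a minimal illustration on synthetic data; the waveform shape, sampling rate, epoch count, and noise level are illustrative assumptions, not values from the equipment described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical early evoked response: a damped oscillation (in microvolts)
# buried in ongoing EEG noise. All waveform parameters are illustrative.
fs = 10_000                            # sampling rate, Hz (assumed)
t = np.arange(0, 0.010, 1 / fs)        # one 10 ms post-click epoch
template = 0.5 * np.sin(2 * np.pi * 900 * t) * np.exp(-t / 0.003)

n_clicks = 2000                        # average 1,000-2,000 click responses
epochs = template + rng.normal(0.0, 2.0, size=(n_clicks, t.size))

# Time-locked averaging: the response adds coherently, the noise does not,
# so residual noise shrinks roughly as 1/sqrt(n_clicks).
average = epochs.mean(axis=0)
residual = np.abs(average - template).max()
print(f"peak residual noise after averaging: {residual:.3f} uV")
```

The square-root law is why a response far smaller than the background EEG becomes visible only after a thousand or more stimulus repetitions.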

For purposes of neonatal screening, only limited auditory evoked potentials or limited evoked otoacoustic emissions are considered medically necessary. Neonates who fail this screening test are then referred for comprehensive auditory evoked response testing or comprehensive otoacoustic emissions. Comprehensive auditory evoked response testing and comprehensive otoacoustic emissions are considered experimental and investigational for neonatal screening because there is a lack of evidence of the value of comprehensive testing over limited auditory evoked potentials or limited otoacoustic emissions for this indication.

Technology for Hearing Evaluation 17

### **2.2.5 Videonystagmography**

This technique is used to evaluate the function of the vestibular system, the inner-ear portion that may be the cause of balance or dizziness problems. The instrument records eye movements, most notably the involuntary eye movements called nystagmus, by means of infrared goggles. There are three evaluations: 1) following a light as it moves in different ways; 2) lying flat on the examination table while moving the head left or right, and 3) stimulating the vestibular system with warm and cool air or water.
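The recorded nystagmus is commonly summarized by its slow-phase velocity. As a rough sketch of that computation on a synthetic trace (sampling rate, beat rate, and velocity are assumptions for the example, not clinical values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic horizontal eye-position trace (degrees). Nystagmus looks like a
# sawtooth: a slow drift in one direction with quick resetting beats.
fs = 250                    # goggle sampling rate, Hz (assumed)
spv_true = 12.0             # slow-phase velocity, deg/s (assumed)
beat_period = 0.5           # one beat every 0.5 s (assumed)
t = np.arange(0, 10, 1 / fs)
position = spv_true * (t % beat_period) + rng.normal(0, 0.02, t.size)

velocity = np.gradient(position, 1 / fs)      # deg/s
slow = np.abs(velocity) < 3 * spv_true        # crude fast-phase rejection
spv_est = velocity[slow].mean()
print(f"estimated slow-phase velocity: {spv_est:.1f} deg/s")
```

Clinical analysis software uses more careful saccade detection, but the idea is the same: differentiate the eye-position trace, discard the fast resetting phases, and average the remainder.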

#### **2.3 Hearing assistance technology**

Hearing loss can be categorized according to which part of the auditory system is damaged, the degree or severity of the impairment, and the configuration or pattern of injury across tones. There are three basic types of hearing loss: conductive hearing loss, sensorineural hearing loss, and mixed hearing loss. Each of these should be approached with assistive devices, such as hearing aids and cochlear implants, so that individuals can best adapt to managing conversations and take charge of their communication.

#### **2.3.1 Hearing aids**

Tremendous advances in amplification technology have occurred since the days when ear trumpets and animal horns were used to help transmit sounds into the ear. A hearing aid is an electroacoustic device that typically fits in or behind the wearer's ear. It is designed to amplify and modulate sound in order to direct the flow of sound into the ear canal, thus enhancing sound quality (Killion, 1997). Hearing aids differ in design, size, ease of handling, volume control, amount of amplification, and the availability of special features such as digitized processing. Their basic functional parts include a microphone to pick up sound and an associated preamplifier, an automatic gain control circuit, a set of active filters, a mixer and power amplifier to make the sound louder, and an output transducer or receiver (a miniature loudspeaker that can be made in integrated form with a field-effect transistor preamplifier) to deliver the amplified sound into the ear. All electronic circuitry is packaged in a housing and runs on a battery. The use of multiple channels in this design provides different compression characteristics for different frequency ranges. Typically, the crossover frequencies of the channels and the compression characteristics can be adjusted with potentiometers or digital control.
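The multichannel-compression idea can be sketched in a few lines. This is a minimal two-channel illustration; the 1.5 kHz crossover, the thresholds, the ratio, and the FFT brick-wall band split are all simplifying assumptions (a real aid uses analog or IIR/FIR filter banks and time-varying gain):

```python
import numpy as np

def compress(band: np.ndarray, threshold: float, ratio: float) -> np.ndarray:
    """Static compression: reduce a band's level once its RMS exceeds
    `threshold` (linear amplitude units), with the given compression ratio."""
    rms = np.sqrt(np.mean(band ** 2))
    if rms <= threshold:
        return band
    gain = (threshold + (rms - threshold) / ratio) / rms
    return band * gain

fs = 16_000
t = np.arange(0, 0.5, 1 / fs)
# Input: a loud low-frequency tone plus a soft high-frequency tone.
signal = 0.8 * np.sin(2 * np.pi * 300 * t) + 0.05 * np.sin(2 * np.pi * 3000 * t)

# Two channels with a 1.5 kHz crossover (FFT brick-wall split for simplicity).
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
low = np.fft.irfft(np.where(freqs < 1500, spectrum, 0), signal.size)
high = np.fft.irfft(np.where(freqs >= 1500, spectrum, 0), signal.size)

# Per-channel compression: the loud low band is turned down, while the soft
# high band passes unchanged (and could then be amplified further).
out = compress(low, threshold=0.1, ratio=4.0) + compress(high, threshold=0.1, ratio=4.0)
```

Because each channel is compressed independently, a loud low-frequency component no longer forces the gain down for soft high-frequency speech cues, which is the point of the multichannel design.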

Conventional analog hearing aids are designed for a particular frequency range and utilize a fixed or dedicated directional microphone. Although some adjustments are necessary, the aid essentially amplifies all sounds (speech and noise) in the same manner. The directional microphone mode amplifies sounds from in front more than sounds from other directions (Berger, 1984).


Analog programmable hearing aids have a microchip that is programmed for different listening environments. Program settings depend on the individual's hearing-loss profile, understanding of speech, and range of tolerance for louder sounds (Walden & Walden, 2004). Examples of environments include quiet conversation in the home, noisy situations such as a restaurant, or large areas such as a theater. Even with the improvements that analog programmable aids offer, 25.3% of analog hearing aid users reported having a hard time listening in the presence of high background noise, and approximately 1% of users reported difficulty in using the telephone.

In 1996, the digital signal processing (DSP) chip was introduced into digital programmable hearing aids (Phillips et al., 2007). These hearing aids use digitized sound-processing algorithms to convert sound waves into digital signals. Key benefits include improved programmability, greater precision in fitting, management of loudness discomfort, control of acoustic feedback, and noise reduction. A processor chip in the aid analyzes the signals to determine whether the sound is noise or speech, then makes modifications to provide a clear, amplified, distortion-free signal (Clopton & Spelman, 2000).

Digital hearing aids are usually self-adjusting. The digital processing allows for more flexibility in programming the aid. Thus, the sound transmitted matches the patient's specific hearing-loss pattern. This digital technology is more expensive than that of the conventional analog, but it offers many advantages: these generally have a longer life span and may provide better hearing in different listening situations. Some aids can store several programs, i.e., when the listening environment changes, it is possible to change the hearing aid settings. This is usually done by pushing a button on the hearing aid or by using a remote control to switch channels. The aid can be reprogrammed by the Audiologist if the user's hearing or hearing needs change.

Of all of the advances in hearing aid technology in the last several years, perhaps the greatest has been the performance of directional microphones. The use of DSP in hearing aids has opened the door to the many different types of algorithms used in directional microphones. Digital technology offers many options, including automatic, automatic adaptive, multiband automatic adaptive, and, most recently, asymmetric directionality (Kerckhoff, 2008). Each of these options possesses benefits, but some also have limitations and may not prove to be as beneficial to the patient as advertised by hearing aid manufacturers.

Directional microphones were developed in an attempt to improve SNR performance. These microphones can employ different types of polar patterns, some of which have multiple nulls. The fixed directional microphone contains two sound ports and operates by acoustically delaying the signal entering the back microphone port and subtracting it from the signal entering the front port. This creates a null at an azimuth corresponding to the location where the microphone is least sensitive, which can be plotted graphically on a polar pattern (Chung, 2004). These patterns are predetermined; thus, the location of sound attenuation always remains the same. Therefore, if the interfering sound is located directly behind the patient, this design acts to attenuate the input level to the hearing aid at the 180° null. If, however, the offending sound arrives from behind but not directly at 180°, the microphone will be less effective in improving SNR.
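The delay-and-subtract arrangement can be checked in a few lines. This sketch assumes an ideal plane-wave source, an illustrative 12 mm port spacing, and an internal delay chosen equal to the inter-port travel time, which yields the classic cardioid pattern with its fixed null directly behind:

```python
import numpy as np

c = 343.0                 # speed of sound, m/s
d = 0.012                 # port spacing, m (illustrative ~12 mm)
T = d / c                 # internal delay = inter-port travel time -> cardioid
f = 1000.0                # test frequency, Hz

theta = np.deg2rad(np.arange(0, 360, 5))      # source azimuth, 0 deg = front
tau = (d / c) * np.cos(theta)                 # external inter-port delay
# Back-port signal is delayed by T and subtracted from the front port:
response = np.abs(1 - np.exp(-1j * 2 * np.pi * f * (T + tau)))

null_angle = np.rad2deg(theta[np.argmin(response)])
print(f"null at {null_angle:.0f} degrees")    # prints: null at 180 degrees
```

Choosing a different internal delay moves the null to other rear azimuths (hypercardioid, supercardioid), which is exactly the degree of freedom adaptive directional systems exploit.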

Several studies have reported the effectiveness of fixed directional microphones in improving the SNR for the hearing aid user by at least 5 dB (Bilsen et al., 1993). Gravel, Fausel, and Liskow (Gravel et al., 1999) found that children listening with dual microphones achieved a mean improvement of 4.7 dB in SNR when compared with the omnidirectional condition.

Automatic directional microphones were subsequently developed so that patients would not have to bother with manually changing the hearing aid program or setting to the directional microphone mode. Automatic directional microphones utilize an algorithm in which the microphones switch automatically between omnidirectional and directional modes. Input level, signal location, and SNR are factors that contribute to determining when the microphones switch (Preves & Banerjee, 2008).

The automatic microphone feature works well for patients who do not want to be concerned with manual switching between omnidirectional and directional modes. However, automatic switching can be problematic when the microphone switches but the patient does not want it to, or when the switching takes place too rapidly and amplifies unwanted sounds such as a cough or a dog barking (Preves & Banerjee, 2008). The other limitation of automatic directional microphones is that the null is fixed when the hearing aid is in directional mode. Depending on the location of the noise source and the azimuth of the null in the microphone, there is the possibility that the noise source may not be maximally attenuated.

Although directional microphones have been shown to be successful in the laboratory, there is no guarantee that this success will be achieved in real-life situations for all hearing aid users, due to the difficulty that some persons have in manipulating the hearing aid's controls.

There are four hearing aid styles or configurations: the in-the-canal (ITC) style; the in-the-ear (ITE) style, which is very easy to operate even if the user has poor dexterity; the behind-the-ear (BTE) style, which is extremely flexible for all hearing loss types, and the completely-in-the-canal (CIC) style, as depicted in Table 1 (Miller, 2006).

Table 1. Styles of Hearing Aids (ITC, ITE, BTE, and CIC; device and as-worn views)

There are many manufacturers of hearing aids, such as Viennatone™, Hansaton™, Bernafon™, Oticon™, Siemens™, Sonic™, Unitron™, and Phonak™. According to the Food and Drug Administration (FDA), the manufacture and sale of hearing aids must meet the following requirements:

1. Dispensers must obtain a written statement from the patient, signed by a licensed physician;
2. A patient aged 18 years or older can sign a waiver for a medical examination, but dispensers must avoid encouraging the patient to waive the medical evaluation requirement;
3. Dispensers must advise patients who appear to have a hearing problem to consult a physician promptly, and
4. FDA regulations also require that an instruction brochure be provided with the hearing aid that illustrates and describes its operation, use, and care.

The FDA Web site that provides standards for hearing aids is at http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfStandards/Detail.CFM?STANDARD\_IDENTIFICATION\_NO=14730

Recent developments in access to newer forms of wireless transmission, and improvements in coupling this technology with hearing aids, not only enhance patients' abilities to use telephones and other external devices, but also improve SNR performance for better speech recognition through noise-reduction algorithms.
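As a concrete illustration of the noise-reduction side, here is a minimal sketch of classic spectral subtraction on synthetic data. The frame size, noise level, and the 440 Hz tone standing in for speech are assumptions for the example; the algorithms deployed in hearing aids are considerably more refined:

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
speech = np.sin(2 * np.pi * 440 * t) * (t < 0.5)   # stand-in "speech" burst
noisy = speech + rng.normal(0, 0.3, t.size)        # steady background noise

frame = 256
# Estimate the noise magnitude spectrum from a speech-free stretch.
quiet = noisy[t >= 0.5][: frame * 4].reshape(-1, frame)
noise_mag = np.abs(np.fft.rfft(quiet, axis=1)).mean(axis=0)

# Frame by frame: subtract the noise magnitude, keep the noisy phase.
out = np.zeros_like(noisy)
for i in range(0, noisy.size - frame + 1, frame):
    spec = np.fft.rfft(noisy[i : i + frame])
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)    # floor at zero
    out[i : i + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
```

The subtraction suppresses the steady noise floor while largely preserving the tonal "speech" component; the well-known cost is "musical noise" from the residual spectral peaks, which later algorithms were designed to tame.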

#### **2.3.2 Cochlear implants**

A cochlear implant is a prosthetic inner-ear replacement that provides direct electrical stimulation to the inner ear's auditory nerve, allowing for perception of the sensation of sound. These devices are used for patients with severe-to-profound sensorineural hearing loss who cannot be helped with hearing aids, and can benefit patients with bone-conduction thresholds as poor as 65 dB HL. In such patients, damage to the cochlear hair cells prevents sound from reaching the auditory nerve. With a cochlear implant, the damaged hair cells are bypassed and the auditory nerve is electronically stimulated directly (Spitzer, 2010).

Part of the cochlear implant is surgically implanted into the mastoid bone behind the target ear with a titanium screw (osseointegrated material), and a tiny electrode array is inserted into the cochlea at set intervals depending on the number of channels or number of frequency bands to excite (Medical Advisory Secretariat, 2002). The other part of the device is external and includes a microphone, a speech processor, and a transmitter coil.

The signal from the microphone is sent to the speech processor, which comes in two designs. It may be either a BTE model such as the Nucleus Freedom™, which looks like a hearing aid, or a body-worn device (BWD) that is attached to the belt, for example, the Cordelle II (European Assistive Technology Information Network, 2010), manufactured by Cochlear Deutschland GmbH & Co. KG, as shown in Table 2.
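The processing a speech processor performs — band-splitting the sound, extracting per-band envelopes, and mapping them to electrodes along the cochlea — can be caricatured as follows. This is a toy sketch in the spirit of the CIS strategy described by Loizou; the channel count, corner frequencies, and smoothing window are illustrative assumptions, and a real processor maps the envelopes to interleaved biphasic current pulses:

```python
import numpy as np

def ci_processor(signal: np.ndarray, fs: int, n_channels: int = 8,
                 f_lo: float = 200.0, f_hi: float = 7000.0) -> np.ndarray:
    """Toy CIS-style front end: band-split, rectify, and smooth.

    Returns an (n_channels, n_samples) array of per-electrode envelopes.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced channels
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    win = int(fs * 0.004)                              # ~4 ms smoothing window
    kernel = np.ones(win) / win
    envelopes = np.empty((n_channels, signal.size))
    for ch, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        band = np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), spectrum, 0),
                            signal.size)
        envelopes[ch] = np.convolve(np.abs(band), kernel, mode="same")
    return envelopes

fs = 16_000
t = np.arange(0, 0.1, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)   # a 1 kHz tone should excite mainly the
envs = ci_processor(tone, fs)         # channel whose band contains 1 kHz
print("most active electrode:", int(np.argmax(envs.mean(axis=1))))
# prints: most active electrode: 3
```

The tonotopic idea is visible even in this toy: each input frequency drives the electrode whose analysis band contains it, mimicking the place coding of the healthy cochlea.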

The microphone looks like a BTE hearing aid. It picks up sounds—just as a hearing aid microphone does—and sends these to the speech processor. The speech processor is a computer that analyzes and digitizes the sound signals and sends them to a transmitter worn on the head just behind the ear. The transmitter sends the coded signals to a receiver implanted immediately under the skin. The internal or implanted parts include the receiver and the electrodes. The receiver, located just under the skin behind the ear, takes the coded electrical signals from the transmitter and delivers them to the array of electrodes that have been surgically inserted into the cochlea. The electrodes stimulate the fibers of the auditory nerve, and sound sensations are perceived. Figure 12 depicts the series of stages of the speech processor of a typical cochlear implant and the associated processing waveforms at each stage (Miller, 2006, as cited in Loizou, 1998).

Table 2. Styles of Cochlear Implants.

Fig. 12. Block Diagram of a typical cochlear implant and processing waveforms

HATS are used with hearing aids or cochlear implants to make hearing easier—and thereby reduce stress and fatigue. HATS must be directed toward resolving any one of the following situations: distance, noise, or reverberation, each of which can create listening problems (Medical Services Advisory Committee, 2010).

#### **2.4.1 FM systems**

FM systems operate on special frequencies. A receiver worn around the neck transmits sound to the hearing aid. The sound comes from a transmitter microphone used by a speaker, although in many public places the transmitter is built into the general sound system.

Because of their flexibility, mobility, and sturdiness, these systems are among the most commonly used HATS. Studies have shown that FM systems give the best results when implementation is carried out early in the amplification-fitting or cochlear-implant process. Infrared wireless headsets are also available for television listening and interfacing. In addition, there are sound-field systems, which assist listening for all of the children in a classroom: the teacher speaks into a microphone transmitter, and his/her voice is projected through speakers mounted around the classroom.

#### **2.4.2 High-frequency hearing loss**

Newer devices, such as the BAHA™ system manufactured by Entific Medical (Medical Advisory Secretariat, 2002), have been developed for patients diagnosed with unilateral profound sensorineural hearing loss, also referred to as single-sided deafness. Other devices have been designed for patients exhibiting severe high-frequency hearing loss, and comprise frequency-transposition and self-learning features on hearing aids and cochlear implants that allow actual measurements to be integrated. Finally, an infrared wireless headset can be used with a television for listening at a higher volume than others sitting in the same room, and a Bluetooth interface allows persons to hear telephone conversations more easily, amplifying any devices that employ this technology.

### **3. Conclusion**

The present section provided a brief guide on equipment for the diagnosis of deafness and on hearing assistive technology.

Although audiology equipment for evoked potentials and otoacoustic emissions provides highly relevant information on hearing damage, future technological developments should be directed toward improving the hearing test. Research will continue to study algorithms for more accurate, physically realistic modeling of the cochlea, which should assist in the process of diagnosing local inner-ear problems.

Audiometers, tympanometers, and other electronic equipment for hearing diagnosis must be designed taking into account specific data formats, communication protocols, and interoperability standards, such as HL7 (Health Level Seven), so that data can be sent from audiology equipment to the electronic medical record and then shared and used for research and clinical purposes.
