**1. Introduction**

The subjective hearing test, although fundamental in the study of hearing loss, depends on the active collaboration of the patient; it is therefore very difficult to carry out in young children and impossible in babies. Current methods of objective hearing screening, known as "Electrical Response Audiometry," are performed by means of acoustic stimulation of the ear with insert earphones. This method does not exactly reproduce the natural stimulation of the ear by the sounds in our environment, which are usually transmitted through the air. With the new method we propose, the stimulus is delivered naturally through the air by a loudspeaker, so the results obtained more closely resemble natural hearing conditions [1].

#### **1.1 Generalities**

Electrical Response Audiometry quantifies and qualifies the activity of the auditory central nervous system, in the brainstem, in response to sound stimulation, without the need for the active participation of the subject and in a harmless manner. This response is called the "Auditory Brainstem Response (ABR)" and is registered as voltage fluctuations generated by the nervous system in response to an appropriate acoustic stimulus. To record this response, the electrical activity coming exclusively from the auditory system must be extracted from the electroencephalographic tracing [2]. Acquiring and recording this potential requires the auditory nerve response to be synchronized and significant. Synchronizing the electrical activity requires very brief stimuli, which is why clicks or filtered clicks are used. This mechanical stimulus is converted in the organ of Corti into an electrical impulse that travels along the acoustic pathway to the auditory cerebral cortex [3, 4].

A better-registered response is now obtained thanks to modifications in stimulation parameters and response processing, together with advances in software and hardware that facilitate and simplify the recording. The reduction in hardware size has allowed less bulky equipment, facilitating mobility, with the ability to be easily transported to the operating room and neonatology wards [1].

#### **1.2 The sound**

In acoustics, sound (from the Latin *sonitus*) is a longitudinal wave created by the vibration of objects from a sound source (any object capable of disturbing the first particle of the medium) and propagating through a medium. The medium is understood as a set of interlocked and ordered particles interacting with each other. The sound wave propagates by the interaction of the particles of the medium (mechanical waves), so it is not transmitted through a vacuum, unlike electromagnetic waves [5].

Literally, sensation is defined as "the impression that a living being receives when one of its receptor organs is stimulated." Therefore, we call the sensation produced in the organ of hearing by the vibratory movement of bodies (sound), transmitted by an elastic medium such as air, "hearing" [6].

The speed of propagation in air depends on temperature, humidity, and atmospheric pressure [7], being 331.5 m/s at 0°C and 50% humidity at sea level [8]. Under these conditions, the speed of sound increases at a rate of 0.61 m/s for each degree of temperature. Therefore, in our environment, with a temperature of 22°C and a humidity of 50% at sea level, the speed of sound is 344.92 m/s [9].
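As an illustration, the linear rule quoted above can be applied directly. The short sketch below uses only the two figures given in the text (331.5 m/s at 0°C and +0.61 m/s per degree) and ignores humidity and pressure corrections:

```python
# Speed of sound in air as a linear function of temperature, using the
# figures quoted above: 331.5 m/s at 0 degrees C, +0.61 m/s per degree C.
# Humidity and pressure corrections are ignored in this simple sketch.
def speed_of_sound(temp_c: float) -> float:
    return 331.5 + 0.61 * temp_c

print(speed_of_sound(0))   # 331.5 m/s at 0 degrees C
print(speed_of_sound(22))  # about 344.9 m/s at 22 degrees C
```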

#### *1.2.1 Sound intensity assessment*

The decibel (dB) is the measure of intensity used for the human ear. The dB scale has certain characteristics: it is logarithmic and non-linear; it is relative, so that 0 (zero) does not mean the absence of sound (sensation); and it is expressed with different reference levels.

The *intensity level* is determined by a reference. Zero dB indicates that the power intensity is equal to the reference [10].

The *sound pressure level* (SPL) indicates that the reference is a standard sound pressure. The *hearing level* (HL) applies when the reference is the normal hearing threshold. It is a scale created to adapt dB SPL to dB HL, because the human ear does not perceive different frequencies with the same intensity. In this way, intensities and frequencies are adapted in an audiogram by weighting the intensity to obtain a linear graph that is easy to read visually. This scale accounts for differences across frequencies, so that 0 dB HL corresponds to the values for the different frequencies in **Table 1**.
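The pressure-to-dB SPL relation can be made concrete with a short sketch. It uses the standard reference pressure of 20 µPa and the usual definition of dB SPL (20·log₁₀ of the pressure ratio); neither figure is specific to this chapter:

```python
import math

# Standard reference pressure for dB SPL: 20 micropascals, roughly the
# human hearing threshold at 1 kHz.
P_REF = 20e-6  # Pa

def spl_db(pressure_pa: float) -> float:
    # dB SPL is 20 * log10 of the ratio of pressures (a logarithmic,
    # relative scale: 0 dB means "equal to the reference", not silence).
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # 0 dB SPL: equal to the reference, not absence of sound
print(spl_db(0.2))    # 80 dB SPL: each tenfold pressure increase adds 20 dB
```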

#### **1.3 The human ear as a receiver of sound waves**

Based on the principle of resonance, we hear sounds because the propagation of the wave in the air causes a displacement of the tympanic membrane. This displacement results in mechanical transmission and amplification through the middle ear mechanisms and the displacement of the stapes footplate. The stapes activates the basilar membrane, which presents different elastic properties along its length, being stiffer near the base and more elastic as it approaches the apex. Consequently, each segment of the basilar membrane resonates at different frequencies, with high frequencies near the oval window and low frequencies at the opposite end. The organ of Corti sits on top of the basilar membrane, reproducing its movements and thus the movement of the stereocilia, resulting in electrical impulses that stimulate nerve fibers for central auditory processing. The combined action of the basilar membrane and the organ of Corti creates a spectral analysis, temporal identification, and intensity assessment of the received sound wave; transmitted through the acoustic pathway to the auditory areas of the cerebral cortex [11], this in turn creates patterns of frequency, intensity, and time, a fundamental process for decoding the communicative content of sound waves [5].

The human ear is an extraordinary receiver capable of receiving waves of very low intensity and can withstand, without being damaged, sounds a billion times more intense than its threshold of perception [1].

#### **1.4 Electrophysiological basis of auditory examination**

The auditory evoked potentials correspond to the recording, from surface electrodes, of the electrical activity of the acoustic pathway at the moment of an adapted sound stimulus. To study this signal, it must therefore be isolated from noise, that is, from unwanted electrical activity such as the electroencephalogram (EEG), electrocardiogram (ECG), and electromyogram (EMG), and the signal-to-noise ratio must be improved [12]. The electrical synchronization of the auditory nerve fibers requires very short stimuli, since under continuous noise the unitary activity of the cochlear root is not synchronous [13].
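The improvement of the signal-to-noise ratio mentioned above is typically achieved by averaging many stimulus-locked sweeps. The sketch below uses entirely synthetic data with hypothetical amplitudes; it only illustrates the principle that uncorrelated noise shrinks roughly as the square root of the number of sweeps while the time-locked response is preserved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stimulus-locked epochs: a small evoked wave buried in much
# larger background noise (EEG, ECG, EMG in a real recording).
n_epochs, n_samples = 2000, 200
t = np.linspace(0, 10e-3, n_samples)        # 10 ms analysis window
evoked = 0.5 * np.sin(2 * np.pi * 500 * t)  # hypothetical 0.5 uV response
epochs = evoked + rng.normal(0, 10, (n_epochs, n_samples))  # 10 uV noise

# Averaging N stimulus-locked sweeps attenuates uncorrelated noise by
# roughly sqrt(N), while the time-locked response survives unchanged.
average = epochs.mean(axis=0)

residual_noise = (average - evoked).std()
print(residual_noise)  # roughly 10 / sqrt(2000), i.e. about 0.22 uV
```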


#### **Table 1.** *Correlation between dB SPL and dB HL.*

From the generation of the stimulus to the activation of the cerebral cortex, approximately 300 ms elapse, a period we call "latency" [14]. However, each level of the acoustic pathway will generate a response with a different latency, which is why auditory evoked potentials will be classified according to the time segment in which we study this latency [13]. Thus:


**Figure 1.** *Auditory brainstem response (ABR) recording.*

#### *Precocious Auditory Evoked Potential Recording with Free-Field Stimulus DOI: http://dx.doi.org/10.5772/intechopen.102569*

The origin of these different waves is not clearly defined considering the complexity of the auditory pathway and the number of synaptic steps involved in its functioning [15]. However, the location of generation of each of the responses that give rise to each of the waves has been widely agreed since the 1996 studies by Melcher et al. in the cat [16]. These are as follows [5, 17–21]:


#### *1.4.1 Characteristics of auditory brainstem response (ABR)*

*Presence of response*: The described curve must be obtained, with the presence of the five fundamental waves or, at least, of the three most frequent (I, III, and V) [21].

*Latency:* Each wave has a defined latency under normal conditions, corresponding to the time elapsed between the production of the stimulus and the appearance of the wave. Waves III and V are the most stable, and wave I appears only at medium and high intensities [12]. Wave V is the last to disappear as intensity decreases, the psychoacoustic threshold being taken as the lowest intensity at which it is still observed. This threshold corresponds to frequencies between 2000 and 5000 Hz with the use of filtered clicks [22]. The interlatencies correspond to the intervals between waves, the most important being the I-III interlatency, the III-V interlatency and, above all, the I-V interlatency [14].
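As a minimal illustration of how interlatencies are derived from absolute latencies, the sketch below uses illustrative adult-like figures only; they are not normative values from this chapter, and real norms depend on equipment, intensity, and age:

```python
# Illustrative absolute wave latencies in ms (hypothetical adult-like
# values at high stimulus intensity; not normative data).
latency_ms = {"I": 1.6, "III": 3.8, "V": 5.6}

# Interlatencies are simple differences between wave latencies.
interlatencies = {
    "I-III": round(latency_ms["III"] - latency_ms["I"], 1),
    "III-V": round(latency_ms["V"] - latency_ms["III"], 1),
    "I-V": round(latency_ms["V"] - latency_ms["I"], 1),
}
print(interlatencies)  # {'I-III': 2.2, 'III-V': 1.8, 'I-V': 4.0}
```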

Auditory evoked potentials can be recorded from birth. From the first studies, an increased latency of wave V and a different morphology of the response curve at birth were observed. The I-III and III-V interlatencies are also increased, but to a lesser extent than I-V [23]. These changes recover progressively with age, with amplitudes equaling those of adults at 3 months and latencies at 1 year [24].

Some authors have described the latencies of neonates [17, 20], expressed in ms (**Table 2**).

Increased wave I latency is interpreted as incomplete maturation of the basal cochlear zone and/or of transmission by the hair cells and auditory nerve fibers. An increase in the interlatency intervals, and especially I-V, is attributed to incomplete myelination of axons and increased synaptogenesis [10].

*Amplitude:* The height of each wave expresses its amplitude, measured in µV, although these values are very unstable.

#### **Table 2.**

*Normal values in pediatrics. Auditory brainstem response with a 70 dB stimulus, delivered by insert earphones.*

In ABRs, a transient potential is elicited in response to a click and returns to its initial resting state, because each stimulus is followed by a sufficiently long interval before the next. If, instead, the stimulation rate is fast enough that the response to one stimulus has not been extinguished before the next stimulus is emitted, we obtain a succession of overlapping responses. The sum of these potentials results in a sinusoidal response with exactly the same frequency as the modulation frequency of the stimulus. These are called Auditory Steady-State Responses (ASSR). Unlike transient potentials, this response is maintained over time, as is the stimulus that evokes it [25]. Therefore, a repetitive sound at rates between 3 and 300 Hz evokes a steady-state response: quasi-sinusoidal periodic responses whose amplitude and phase are maintained over time [26].
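The overlap-and-sum mechanism described above can be sketched numerically. All values below (stimulation rate, shape and duration of the transient) are hypothetical; the point is only that summing transient responses which overlap yields a response periodic at the stimulation rate:

```python
import numpy as np

fs = 9_000           # sampling rate (Hz), chosen divisible by the rate
rate = 90            # hypothetical stimulation rate in the 70-110 Hz band
period = fs // rate  # 100 samples between successive stimuli (~11 ms)

# Hypothetical transient response to a single stimulus: a damped
# oscillation lasting ~33 ms, longer than the inter-stimulus interval,
# so successive responses overlap.
tt = np.arange(300) / fs
transient = np.exp(-tt / 0.008) * np.sin(2 * np.pi * 300 * tt)

# Steady state: fold the overlapping responses into one stimulus period,
# then repeat. The summed response is periodic at the stimulation rate.
one_period = np.zeros(period)
for j, v in enumerate(transient):
    one_period[j % period] += v
signal = np.tile(one_period, rate)  # one second of steady-state response

# Its spectrum therefore has energy only at multiples of the rate.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
dominant = freqs[1:][np.argmax(spectrum[1:])]
print(dominant % rate)  # 0.0: the dominant component is a harmonic of 90 Hz
```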

With a fast stimulation rate in the range of 70–110 Hz, the overlapping transient responses have a shorter latency and are generated in the brainstem, similar to those of the ABR [27]. This is why they are not affected by sleep or sedation, making them optimal for the study of auditory function in infants and young children [7]; this is the range used for the stimulus in our examination.

To shorten examination times without appreciable loss of diagnostic accuracy [25], we use multifrequency stimulation of ASSRs, which allows simultaneous stimulation at several frequencies, and even binaurally. It requires that each tone be modulated at an identifying frequency different from those of the rest of the tones, so that it can be identified later in the frequency analysis of the response [8]. We can separate, in each ear, the response for each frequency by evaluating the spectral component for each stimulus. In this way, we simultaneously stimulate four frequencies (500, 1000, 2000, and 4000 Hz) and both ears (ASSR-MF) [28].
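A multifrequency stimulus of this kind can be sketched as follows. The carrier-to-modulation pairing below is hypothetical (real equipment uses its own rates), but it shows the principle of tagging each audiometric carrier with its own modulation rate in the 70–110 Hz band:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs  # 1 s of stimulus

# Hypothetical pairing for one ear: each audiometric carrier gets its own
# amplitude-modulation rate, so the responses can later be separated in
# the spectrum of the recorded EEG.
pairs = {500: 77, 1000: 85, 2000: 93, 4000: 101}  # carrier Hz: AM rate Hz

stimulus = np.zeros_like(t)
for carrier, am in pairs.items():
    envelope = 0.5 * (1 + np.sin(2 * np.pi * am * t))  # 100% AM depth
    stimulus += envelope * np.sin(2 * np.pi * carrier * t)

# In the response, the spectral component at each modulation rate indexes
# the cochlear region tuned to the corresponding carrier frequency.
```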

To establish hearing thresholds in infants and young children, we use ABR and ASSR-MF recordings together, with either insert earphone or bone conduction stimulation. However, we are not aware of normality criteria for ABR and ASSR using a free-field sound stimulus, that is, a loudspeaker close to the patient, a stimulus more similar to natural hearing stimulation.

The aim of this study is to determine criteria for normality in ABR and ASSR recordings with free-field stimulus and to be able to apply these neurophysiological tests in patients where they cannot be performed conventionally.
