Auditory Evoked Potential: Definition, Types, and Significance in Hearing Assessment
Hypoacusis, or hearing loss, is the most widespread sensory disability globally and often impedes speech development. Early, effective hearing screening tests based on electroencephalography (EEG) have become a key approach to tackling this issue.
EEG-based hearing threshold level determination is particularly suitable for individuals who cannot respond verbally or behaviorally to sound stimulation. A dedicated intelligent hearing level assessment system is therefore essential for determining hearing threshold levels in newborns, infants, and people with multiple disabilities. Advances in electronics and digital signal processing now make it possible to implement such a system using EEG as a reliable measurement technology.
Auditory evoked potential (AEP) is a type of EEG signal recorded from the scalp in response to an acoustic stimulus. The AEP response reflects an individual's auditory ability level. An intelligent hearing perception level system enables the examination and determination of the functional integrity of the auditory system.
This article reviews recent physiological experiments that examined the effectiveness of auditory evoked potentials in determining the hearing threshold level. Let's delve deeper into the fundamentals of event-related potentials and evoked potentials, and explore how AEP signals are used to estimate hearing threshold levels. Electroencephalography is a non-invasive technique widely used in diagnosing many neurological diseases and problems associated with brain dynamics. The EEG signal directly reflects the electrical activity of the brain and contains useful information about the brain state.
Figure: Auditory Brainstem Response (ABR) Waveform
Event-Related Potentials (ERPs)
ERPs are task-oriented potentials: spatio-temporal patterns of brain signals occurring in response to an event or task, time-locked to the applied stimulus. The transition of the brain from a disordered to an ordered state, in synchronization with certain tasks or events, gives rise to the event-related potential. The ERP reflects activity originating within the brain and is phase-locked to stimulus onset. It provides a powerful tool for objective assessment of cognitive status and for clinical studies of brain functions such as attention, memory, and language.
Evoked Potentials (EPs)
Evoked potentials commonly occur in response to a physical stimulus. Physical stimuli are patterns of energy received by the senses; the corresponding sensory receptors convert this energy into nerve impulses sent to the brain, where they are interpreted in the cerebral cortex as sensations. In auditory testing, these sensations are evoked by delivering stimuli such as tone bursts or clicks.
Evoked potentials comprise both exogenous components, driven by the physical properties of the stimulus, and endogenous components, driven by cognitive processing in the brain. By sensory modality, they generally fall into two categories:
- Visual Evoked Potential (VEP)
- Auditory Evoked Potential (AEP)
Visual Evoked Potential (VEP)
VEP is an electrical signal recorded from the brain while a visual stimulus is presented to the subject in a time-locked manner. The VEP can be used as a diagnostic tool in patients with visual impairment, helping to detect conditions such as glaucoma, diabetic retinopathy, multiple sclerosis, ocular hypertension, loss of peripheral (side) vision, macular degeneration, and color blindness.
Auditory Evoked Potential (AEP)
AEP is an electrical signal elicited from the brain while an auditory stimulus is presented in a time-locked manner. An AEP is characterized by reproducible positive and negative peaks, their latencies and amplitudes, and their behavioral correlates. AEPs are much smaller in amplitude than the background EEG. AEP signals can be classified as either transient or steady-state.
- Transient AEP: recorded while the auditory stimulus is presented at a slow rate, so that the response to one stimulus does not overlap the response to the next.
- Steady-State AEP: recorded while the auditory stimulus is presented at a fast rate, so that the individual responses overlap and merge into a steady-state response.
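The transient case rests on stimulus-locked (coherent) averaging: the phase-locked response adds up across sweeps while background EEG averages toward zero. A minimal sketch on synthetic data, where the sampling rate, waveform shape, and noise level are all illustrative assumptions rather than clinical values:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 20_000                              # sampling rate in Hz (assumed)
t = np.arange(0, 0.012, 1 / fs)          # 0-12 ms post-stimulus window
# Toy "evoked" waveform: a damped oscillation standing in for the AEP.
aep = 0.5 * np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.004)

n_sweeps = 2000
# Each sweep = the evoked response buried in much larger background EEG noise.
sweeps = aep + rng.normal(0.0, 2.0, size=(n_sweeps, t.size))
average = sweeps.mean(axis=0)            # coherent (synchronous) averaging

# Residual noise in the average shrinks roughly as 1/sqrt(n_sweeps).
residual = np.std(average - aep)
single_sweep_noise = np.std(sweeps[0] - aep)
```

Averaging N sweeps improves the signal-to-noise ratio by roughly the square root of N, which is why clinical AEP recordings typically average responses to hundreds or thousands of stimulus presentations.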
Several components comprise AEPs, each providing unique information about the auditory pathway:
- Auditory Brainstem Response (ABR): ABR comprises the early portion (0-12 milliseconds) of the AEP. It is composed of several waves and peaks, known as Jewett waves, normally labeled with Roman numerals I-VII. Waves I, III, and V are generally considered clinically significant. ABRs are used for diagnosing and localizing pathologies affecting brainstem pathways.
- Middle Latency Auditory Evoked Potential (MLAEP): MLAEP comprises the 8-50 millisecond portion of the AEP. Middle latency components are elicited even when the subject is not directing attention toward the auditory stimulus; directing the subject's attention to the stimulus has been reported to enhance the N1 (90 millisecond) and P2 (170 millisecond) components of the evoked response.
- Mismatch Negativity (MMN): MMN comprises the 200-400 millisecond portion of the AEP. MMN is elicited when a stream of identical pure-tone sounds (the standard stimulus) is interrupted by a 'deviant' sound. The MMN is one of the ontogenetically earliest perception-related responses recorded over the scalp: an MMN peaking at 200-400 milliseconds has been elicited by tone-burst frequency changes and phonemic vowel changes in newborns. The MMN phenomenon is valuable for investigating subjects whose feedback responses are unavailable or unreliable, such as infants. As a component of the brain event-related potential, the MMN helps in understanding the brain processes that form the biological substrate of central auditory perception and various forms of auditory memory.
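The latency windows above can be encoded as a small lookup for labeling a detected peak by its post-stimulus latency. A sketch using the approximate windows quoted in this section (the ABR and MLAEP windows overlap, so a latency may match more than one component):

```python
# Approximate latency windows in milliseconds, taken from the descriptions above.
AEP_WINDOWS = {
    "ABR": (0, 12),
    "MLAEP": (8, 50),
    "MMN": (200, 400),
}

def label_component(latency_ms):
    """Return the AEP component(s) whose latency window contains latency_ms."""
    return [name for name, (lo, hi) in AEP_WINDOWS.items() if lo <= latency_ms <= hi]
```

For example, a peak at 10 ms falls in both the ABR and MLAEP windows, while a peak at 300 ms matches only the MMN window.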
Hearing Threshold Level Estimation from Auditory Evoked Potential Signal
In earlier studies, normal-hearing subjects were presented with acoustic stimuli at various levels, and their corresponding auditory responses were recorded. The auditory evoked potential response reflects an individual's hearing perception level.
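This procedure amounts to a descending-intensity search: present stimuli at progressively lower levels and take the lowest level that still evokes a detectable response as the threshold. A sketch of that search logic, where `detect_response` is a hypothetical callable (for example, one wrapping an ABR wave V detector) introduced here for illustration:

```python
def estimate_threshold(intensities_db, detect_response):
    """Return the lowest stimulus intensity (dB) that still yields a
    detectable evoked response, or None if no level evokes one.

    intensities_db: candidate stimulus levels in dB.
    detect_response: callable taking a level and returning True/False.
    """
    threshold = None
    for level in sorted(intensities_db, reverse=True):  # loudest first
        if detect_response(level):
            threshold = level   # response still present at this level
        else:
            break               # response lost: previous level is the threshold
    return threshold
```

Real protocols refine this with bracketing (stepping back up in smaller increments), but the descending sweep captures the core idea.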
Key Findings and Techniques
Several studies have contributed to the understanding and application of AEPs in hearing assessment:
- Picton et al. examined the effects of attention on auditory evoked potentials in humans. When standard auditory stimuli were attended to, there was a significant increase in the N1 (90 ms) and P2 (170 ms) components, and a further peak was evoked near 450 ms for the perceived signal. The components detected under each stimulus condition were used to index a subject's hearing perception level.
- Jewett and Williston first reported a remarkably distinct series of waves (I-VI) following a click stimulus. This response waveform was consistent and detectable across subjects and later became known as the auditory brainstem response (ABR).
- Barrie W. Jervis et al. showed that the AEP arises partly from phase reordering of the ongoing EEG and additionally contains added energy in each harmonic component; this was established using angular statistics techniques. A practical significance of the ABR waveform is that it requires no attention or feedback response from the subject under test.
- Delgado et al. proposed a fully automated system for ABR response identification and waveform recognition. The analysis was divided into (i) peak identification and labeling and (ii) ABR interpretation. Subjects whose threshold level exceeded 20 dB HL were flagged as having a form of hearing loss, and subjects whose peak latency exceeded 4.40 ms were flagged as having hearing pathologies.
- Arnaud Jacquin et al. combined signal-adaptive denoising based on complex wavelets with the Fsp signal-quality measure to denoise Brainstem Auditory Evoked Response (BAER) signals. The technique increases the detection rate of AEP response waves, supporting real-time determination of a patient's hearing loss.
- Robertson et al. evaluated alternative techniques for estimating the spectrum of the ABR signal. The mid- and high-frequency components of the ABR waves are most important for determining latencies and estimating thresholds. Wave V is of primary importance in identifying a person's hearing level because it remains clearly identifiable even at the lowest stimulus intensities.
- Strauss et al. proposed fast detection of wave V in the ABR using single-sweep analysis with a hybrid supervised system, reporting 100% sensitivity and 90% specificity in identifying wave V for normal-hearing subjects.
- Rushaidin et al. used the peak instantaneous energy of wave V of the ABR as a feature and discriminated between normal-hearing and hearing-impaired subjects based on derived threshold values.
- Robert Boston et al. proposed an expert decision-support system for interpreting the brainstem auditory evoked potential response. The prototype system consists of 36 rules: thirteen to establish the presence of a neural response and ten to identify a peak as wave V.
- D. Alpsan et al. employed a feedforward neural network to classify brainstem auditory evoked potential recordings into "response" and "no response" classes, reporting a maximum classification accuracy of 75.6%.
- R. Sanchez et al. extracted individual and combined feature sets from brainstem auditory evoked potentials (BAEPs) and fed them as input vectors to linear discriminant functions and artificial neural networks, reporting maximum classification accuracies of 97.2% and 98.85%, respectively.
- Edwige Vannier et al. proposed a time-domain brainstem auditory evoked potential detection method based on supervised pattern recognition, matching the normal BAEP pattern by cross-correlation with a template. The BAEP pattern was detected with 90% accuracy, and the threshold level was determined with a mean error of 5 dB.
- Nurettin et al. proposed automated recognition of the ABR waveform to detect a person's hearing threshold. Amplitude values, discrete cosine transform coefficients, and discrete wavelet transform coefficients were extracted from the ABR waveform and classified using a support vector machine, and a classification accuracy of 97.7% was reported.
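As a sketch of the kind of feature vector the last study describes — raw amplitudes plus transform coefficients — the following combines amplitude values, leading DCT coefficients, and one level of a Haar wavelet decomposition. The Haar filter and the coefficient counts are assumptions for illustration; the study's exact wavelet family and classifier settings are not reproduced here.

```python
import numpy as np
from scipy.fft import dct

def abr_features(waveform, n_dct=16):
    """Build a feature vector from an ABR sweep: raw amplitudes,
    the first n_dct DCT-II coefficients, and one level of a Haar
    discrete wavelet transform (approximation + detail bands)."""
    x = np.asarray(waveform, dtype=float)
    dct_coeffs = dct(x, norm="ortho")[:n_dct]     # energy-compacting transform
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # Haar low-pass (approximation)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # Haar high-pass (detail)
    return np.concatenate([x, dct_coeffs, approx, detail])
```

The resulting vector could then be fed to any downstream classifier, such as the support vector machine used in the study.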
The table below summarizes the key components and characteristics of different AEPs:
| AEP Type | Latency (ms) | Components | Clinical Significance |
|---|---|---|---|
| ABR | 0-12 | Waves I-VII (Jewett waves) | Diagnosis and localization of brainstem pathologies |
| MLAEP | 8-50 | Middle latency components (N1, P2) | Assessment of auditory attention |
| MMN | 200-400 | Mismatch negativity | Evaluation of auditory discrimination and memory |