Understanding Spatial Hearing Loss: Symptoms, Causes, and Treatment
There is increasing global recognition of the negative impact of hearing loss and its association with many chronic health conditions. Hearing loss is the third leading cause of disability globally, with approximately 466 million people living with disabling hearing loss around the world. Hearing loss is associated with increased mortality and has widespread effects on overall health, including links to cognitive decline, cardiovascular disease, depression, sleep disorders and impaired socialization.
The societal impact is also high, with substantial annual economic costs driven largely by higher unemployment rates and lost work productivity. These estimates, however, do not account for the proportion of adults with undiagnosed or unaddressed hearing loss, such as those with unilateral hearing impairment.
Disabling hearing loss has long been associated with bilateral age-related changes. However, disability and handicap in those with unilateral hearing loss have been shown to match or exceed those reported by individuals with bilateral hearing loss, despite the presence of one normally functioning ear. Accumulating evidence points to the loss of spatial perception as the driving factor in functional disability for those with unilateral hearing loss.
Profound unilateral sensorineural hearing loss, often termed single-sided deafness (SSD), refers to clinically unaidable hearing, defined by severe-to-profound hearing thresholds together with poor word recognition ability. Acquired unilateral hearing loss occurs in 12-27 per 1,000,000 persons annually.
The etiology of SSD is broad, encompassing pathologies such as cochleovestibular abnormalities, temporal bone trauma, Meniere’s disease, vestibular schwannoma, vascular ischemia, autoimmune disorders and infections, although it is commonly idiopathic. The loss is often sudden in onset, leaving the patient extremely debilitated. Despite this, the misperception that one normal hearing ear is sufficient for daily communication persists.
Long overlooked are the deficits and disability associated with SSD, which have substantial but variable impacts on those affected. Unlike other paired systems, such as vision, where the impact of unilateral impairment is readily acknowledged, hearing is subject to an invisibility factor: the disability itself is less overt and consequently underappreciated. As such, the impact of SSD is often underestimated.
Increased effort is required to compensate for unilateral hearing loss in complex listening environments. Over time, such additional stressors result in auditory fatigue and reduced performance at work. The increased hearing handicap in this group is most strongly linked to deficits in spatial perception.
The Importance of Spatial Perception
Spatial perception is multisensory and multifaceted. The auditory system plays a particularly important role, helping to map where we are in space by continuously sensing auditory events. The constructs of auditory space are complex, depending on our interaction with signals that are dynamically changing in terms of frequency spectrum, level and time. The more complex the listening environment, the more these signals interact and overlap.
Therefore, hearing is not just about sound detection or awareness, but also about managing these complex interactions to give meaning to those signals. Our auditory system is remarkably adept at managing the acoustic information presented to our ears in dynamically changing environments. Human listeners are able to rapidly process this information to identify and orient to acoustic stimuli, selectively attending to signals of importance while suppressing competing signals.
Spatial hearing is dependent on the processing of monaural and binaural hearing cues.
Monaural and Binaural Hearing Cues
While monaural spectral-shape cues provide important information regarding elevation and contribute to our ability to determine the distance of a sound source, binaural hearing cues play a much larger role in spatial hearing abilities. The integration of acoustic information from both ears is essential for spatial hearing, and serves to provide critical information for speech processing, localization, the segregation of auditory streams and the perception of fused sounds.
Binaural hearing gives rise to a wide array of auditory phenomena through the integration and processing of differences in arrival time and intensity between the signals at the two ears. The interaural time difference (ITD) is the difference in arrival time of a stimulus at the two ears; the auditory system relies on it most heavily for low-frequency signals below 1000 Hz. Sounds presented directly in front of a listener have an ITD of 0 μs, which increases as the signal moves laterally in the horizontal plane; the largest ITD, approximately 600 μs, occurs for signals presented at ±90° azimuth.
Because the auditory system is exquisitely sensitive to this cue, when a sound is presented to a listener, the ear closest to the signal of interest detects it before the ear farthest from the signal. Likewise, the interaural level difference (ILD), the difference in the intensity of a stimulus reaching the two ears, arises because the ear closer to a stimulus receives a more intense signal than the contralateral ear. As the signal deviates from 0° azimuth in either direction, the ILD increases.
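The relationship between azimuth and ITD described above can be approximated with Woodworth's classic spherical-head model. The Python sketch below assumes an average head radius and speed of sound (values not taken from this article), so its output is a rough estimate rather than a clinical measurement:

```python
import math

# Woodworth's spherical-head approximation of the interaural time
# difference (ITD). Head radius and speed of sound are assumed average
# values; real heads vary.
HEAD_RADIUS = 0.0875    # m, assumed average adult head radius
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def itd_seconds(azimuth_deg: float) -> float:
    """ITD for a distant source at the given azimuth (0 = front, 90 = side)."""
    theta = math.radians(azimuth_deg)
    # Path difference around a sphere: a * (theta + sin(theta))
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:>2} deg azimuth: ITD = {itd_seconds(az) * 1e6:.0f} microseconds")
```

With these assumptions the model gives 0 μs at 0° azimuth, rising monotonically to roughly 650 μs at 90°, in line with the ~600 μs maximum reported for real listeners.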
As with ITDs, ILDs are a frequency-dependent cue, with level differences increasing as a function of frequency, reaching values of 20 dB of attenuation or more. This is due to the acoustic shadow created by the head. The head acts as a physical barrier to sounds, resulting in an attenuation of the signal in the ear not directed at the source.
Sounds arriving at the ears from different locations in space allow listeners to take advantage of spatially separated sounds to improve the signal-to-noise ratio (SNR). For example, the ear closest to the signal benefits from the shadow created by the head to block noise that would otherwise mask the signal. Furthermore, the integration of the input from the two ears results in a summation of the signals leading to perceived enhancement.
In addition to boosting the target signal, binaural hearing reduces competing noise. This is known as the squelch effect, in which the binaural processing mechanism takes advantage of amplitude and phase differences between the inputs arriving at the two ears to suppress competing noise. Collectively, these binaural processes provide listeners with a 4-10 dB benefit when processing speech in complex environments.
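To put a 4-10 dB benefit in perspective, decibels can be converted to power ratios using the standard definition of the decibel (this conversion is textbook acoustics, not a result from this article):

```python
# Convert a decibel gain into a signal-to-noise power ratio.
# ratio = 10 ** (dB / 10) is the standard decibel-to-power conversion.
def db_to_power_ratio(db: float) -> float:
    """Return the power ratio corresponding to a gain of `db` decibels."""
    return 10 ** (db / 10)

for benefit_db in (4, 10):
    ratio = db_to_power_ratio(benefit_db)
    print(f"A {benefit_db} dB binaural benefit is about a {ratio:.1f}x "
          "improvement in signal-to-noise power ratio")
```

In other words, the cited binaural advantage corresponds to roughly a 2.5x to 10x improvement in the effective signal-to-noise power ratio.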

Figure 1. Binaural Hearing Cues
Impact of SSD on Hearing Cues
These essential interaural hearing cues are unavailable to those with SSD. The reduced ability to discriminate ILDs and ITDs results in primary deficits in speech understanding in noise and localization. As monaural listeners, they are left to rely on the normal hearing ear to process all of the incoming acoustic information, thereby losing the ability to segregate spatially separated streams of sound or take advantage of spatially separated signals in complex listening environments.
This is further complicated by reduced access to high frequency speech cues as a result of the acoustic head-shadow. For monaural listeners, access to sound at the normal hearing ear is both frequency- and direction-dependent. As can be observed in Figure 1C, low frequency information can be well detected even when directed at the deafened ear.
Low frequency signals are characterized by long wavelengths, which allow them to easily wrap around the head and stimulate the normal hearing ear. Conversely, high-frequency signals have short wavelengths that are blocked by the head rather than bending around it, making them virtually undetectable when presented at the side of the deafened ear.
The auditory system’s ability to adapt to the lack of binaural cues, preserving spatial hearing abilities as much as possible, is remarkable. This is achieved by reweighting the available cues, exploiting the location-dependent monaural cues. However, in a normal everyday acoustic scenario, sounds are constantly changing in level, location and frequency, making accurate spatial hearing impossible on the basis of monaural cues alone.
Losses in gain for deaf-side listening increase progressively as a function of frequency, reducing access to the high-frequency phonemes that underpin speech intelligibility and discrimination. The result is a disruption in speech perception, which is further amplified in the presence of competing noise. Specifically, the parts of speech that allow listeners to distinguish one word from another are inconsistently available to monaural listeners.
This can be observed in Figure 2A,B, which illustrates the acoustic interactions between the filter of the head and pinnae and differing speech stimuli. The spectrograms represent the frequency components over time at each corresponding position in horizontal space. The waveforms help visualize the overall loudness effect on the signal (i.e., changes in amplitude) caused by the sound's interaction with the head and pinnae.
Here, noticeable changes are observed in the spectrum of the words for direct signals (+90°, deaf side) compared to those in the acoustic head-shadow (−90°, hearing side). At +90°, the acoustic constructs of the “s” and “h” phonemes are fundamentally different. This acoustic information not only makes the two words intelligible but also unique and, therefore, distinguishable.
The head-shadow filter is evident in Figure 2A,B at −45° and −90°, where there is a decrease in the overall level and attenuation of the high frequency components of the speech, resulting in an indistinguishable speech construct at the hearing side. The frequency information on the hearing side clearly shows that each word conserves the main speech formants but has lost the unique informational marks that make it distinct from the others.
Since this is the only information available on the functional side, the brain cannot correctly differentiate the presented words, leading to difficulties in contextual interpretation. So, while the monaural listener may detect or hear the signal, the ambiguity of the speech sounds diminishes their ability to extract meaning from it.

Figure 2. Comparison of the head-shadow and pinnae filtering for the words (A) “Hat” and (B) “Sat” when presented at the deaf side.
Normal hearing listeners are highly accurate in their ability to localize sound in space, whereas this ability is largely disrupted in monaural listeners due to the lack of available ITD and ILD cues. This hinders the ability of a monaural listener to accurately and rapidly orient to signals of interest or importance. A number of functional deficits arise from the inability to localize, including safety concerns while navigating the surrounding world, the inability to locate a talker, target confusion in multi-talker situations, and an overall sense of uncertainty in complex listening environments.
However, monaural listeners can adapt, using the spectral pinna cues of the normal hearing ear to mitigate some of the spatial deficits that arise from the loss of binaural hearing. Figure 1B displays the amplification of high frequency cues that occurs when signals arrive at the better hearing ear. The pinna provides a gain of approximately 15 dB between 2 and 4 kHz. In addition to contributing to speech discrimination, high frequency spectral cues are important for localizing sound in elevation and for determining whether a signal is in front of or behind a listener.
Figure 3A illustrates the high accuracy target identification ability of a normal hearing listener, and how the loss of ITDs and ILDs results in a primary deficit for the location of signals in azimuth. Figure 3B indicates how spectral cues contribute to good localization accuracy for signals presented toward the hearing side, but spatial hearing ability is perturbed on the deaf side.
Some monaural listeners learn to use spectral cues, along with the loudness differences created by the acoustic head-shadow, to infer where an object is in space. While this does not restore high accuracy or normal hearing performance, it does allow some lateralization of the signal when loudness cues are reliable.

Figure 3. An illustration of spatial hearing abilities for (A) normal hearing and (B) single-sided deaf listeners.
Types of Hearing Loss
Hearing loss affects people of all ages and can be caused by many different factors. The three basic categories of hearing loss are sensorineural hearing loss, conductive hearing loss and mixed hearing loss.
- Sensorineural Hearing Loss: This type of hearing loss occurs when the inner ear or the hearing nerve itself becomes damaged. Sensorineural loss is the most common type of hearing loss. It can result from aging, exposure to loud noise, injury, disease, certain drugs or an inherited condition. Sudden sensorineural hearing loss can develop within minutes or over the course of a few days; if this occurs, it is imperative to see an otologist (a doctor specializing in diseases of the ear) immediately.
- Conductive Hearing Loss: This type of hearing loss occurs in the outer or middle ear where sound waves are not able to carry all the way through to the inner ear. In some people, conductive hearing loss may be reversed through medical or surgical intervention.
- Mixed Hearing Loss: Sometimes people can have a combination of both sensorineural and conductive hearing loss.
Hearing Testing and Solutions
Hearing testing is critical for determining exactly what type of hearing loss you have and will help identify the hearing care solution that is right for you. People over age 50 may experience gradual hearing loss over the years due to age-related changes in the ear or auditory nerve; the medical term for age-related hearing loss is presbycusis. Many adults have not had their hearing tested since grade school, so it is a good idea to have your hearing checked at least once as part of your annual physical.
There are many kinds of over-the-counter hearing aids on the market, ranging from inexpensive hand-held amplifiers to self-fit devices that can be calibrated to your amplification needs with a smartphone app.