
Introduction to Hearing Psychology

In this chapter, we will review basic information about sound and how the human auditory system performs the process called hearing. We will describe some fundamental auditory functions that humans perform in their everyday lives, as well as some environmental variables that may complicate the hearing task. Also, we will discuss the types of hearing loss or disorder that can occur and their causes.

Introduction to Sound

Hearing allows one to identify and recognize objects in the world based on the sound they produce, and hearing makes communication using sound possible. Sound is derived from objects that vibrate producing pressure variations in a sound-transmitting medium, such as air. A pressure wave is propagated outward from the vibrating source. When the pressure wave encounters another object, the vibration can be imparted to that object and the pressure wave will propagate in the medium of the object. The sound wave may also be reflected from the object or it may diffract around the object. Thus, a sound wave propagating outward from a vibrating object can reach the eardrum of a listener causing the eardrum to vibrate and initiate the process of hearing.

Sound waves can be mathematically described in two ways, that is, in two domains. In the time domain, sound is described as a sequence of pressure changes (oscillations) that occur over time. In other words, the time-domain description of a sound wave specifies how the sound pressure increases and decreases over time. In the frequency domain, the spectrum defines sound in terms of the tonal components that make up the sound. A tonal sound has a time-domain description in which sound pressure changes as a regular (sinusoidal) function of time. If one knows the tonal components of sound as defined in the frequency domain, one can calculate the time-domain description of the sound. Using the same analytic tools, the frequency-domain representation of a sound can also be calculated from the time-domain description. Thus, the time and frequency domain descriptions of sound are two different ways of measuring the same thing (i.e., the time and frequency domains are functional equivalents). In short, one can describe sound as temporal fluctuations in pressure, or in terms of the frequency components that compose the sound.
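
The equivalence of the two domains can be illustrated with a short sketch. Here a waveform is built from two tonal components, transformed to the frequency domain with the FFT, and then recovered exactly with the inverse FFT. The sampling rate, duration, and component frequencies are arbitrary values chosen for the example.

```python
import numpy as np

fs = 8000                          # assumed sampling rate in Hz
t = np.arange(0, 0.1, 1 / fs)      # 100 ms of time samples

# Time-domain description: pressure oscillations over time
# (two tonal components at 440 Hz and 880 Hz).
wave = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# Frequency-domain description: the amplitude of each tonal component.
spectrum = np.fft.rfft(wave)
freqs = np.fft.rfftfreq(len(wave), 1 / fs)
print(freqs[np.argmax(np.abs(spectrum))])   # 440.0, the strongest component

# The inverse FFT reconstructs the time-domain waveform from the spectrum,
# showing the two descriptions carry the same information.
recovered = np.fft.irfft(spectrum, n=len(wave))
print(np.allclose(wave, recovered))         # True
```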

Largely because tonal (sinusoidal) sounds are the basis of the frequency-domain description of sound, a great deal of the study of hearing has dealt with tonal sounds. However, everyday sounds are complex sounds, made up of many tonal frequency components. A common complex sound used to study hearing is noise. Noise contains all possible frequency components, and its amplitude varies randomly over time. A noise is said to be “white noise” if it contains all frequency components, each at the same average sound level.
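
A minimal sketch of the white-noise definition: averaging the power spectrum over many independent noise samples shows roughly equal average level at every frequency, even though each individual sample varies randomly over time. The sample length and trial count here are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1024, 2000
power = np.zeros(n // 2 + 1)
for _ in range(trials):
    noise = rng.standard_normal(n)            # amplitude varies randomly over time
    power += np.abs(np.fft.rfft(noise)) ** 2  # power at each frequency component
power /= trials

# Ignoring the DC and Nyquist bins, the average power is nearly constant
# across frequency: the spectrum of white noise is flat on average.
inner = power[1:-1]
print(inner.max() / inner.min())              # a ratio near 1
```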

A sound waveform has three basic physical attributes: frequency, amplitude, and temporal variation. Frequency refers to the number of times per second that the vibratory pattern (in the time domain) oscillates. Amplitude refers to sound pressure. There are many aspects to the temporal variation of sound, such as sound duration. Sound pressure is proportional to sound intensity (in units of power or energy), so sound magnitude can be measured in units of pressure, power, and energy. The common measure of sound level is the decibel (dB), based on the logarithm of the ratio of two sound intensities or two sound pressures. Frequency is measured in hertz (Hz), or cycles per second. Measures of time are expressed in various temporal units or can be translated into phase, measured in angular degrees. Below are some definitions of terms and measures used to describe sound.

  • Sound pressure (p): sound pressure is equal to the force (F) produced by the vibrating object divided by the area (Ar) over which that force is applied: p = F/Ar. Pressure is often reported in dekapascals (daPa), a multiple of the pascal, the Système International unit of pressure. One daPa = 100 dynes per cm2, and one atmosphere = 10132.5 daPa.
  • Sound intensity (I): sound intensity is a measure of power flow per unit area. Sound intensity equals sound pressure squared divided by the product of the density (ρ0) of the sound-transmitting medium (e.g., air) and the speed of sound (c): I = p²/(ρ0c). Energy is a measure of the ability to do work and is equal to power times the duration of the sound, or E = PT, where P is power and T is time (duration) in seconds.
  • Decibel (dB): dB = 10·log10(I/Iref) or 20·log10(p/pref), where I is sound intensity, p is sound pressure, Iref and pref are reference values of intensity and pressure, and log10 is the logarithm to the base 10. When pref is 20 micropascals, the decibel measure is expressed as dB SPL (sound pressure level).
  • Hertz (Hz): hertz is the measure of vibratory frequency in which “n” cycles per second of periodic oscillation is “n” Hz.
  • Phase (angular degrees): one cycle of a periodic change in sound pressure can be expressed in terms of completing the 360 degrees of a circle. Thus, half a cycle is 180 degrees, and so on. Time (t) within a cycle can therefore be expressed in terms of phase (θ, in degrees): θ = 360°·f·t, where f = frequency in Hz and t = time in seconds.
  • Tone (a simple sound): a tone is a sound whose amplitude changes as a sinusoidal function of time: A·sin(2πft + θ), where sin is the trigonometric sine function, A = peak amplitude, f = frequency in Hz, t = time in seconds, and θ = starting phase in degrees.
  • Complex sound: any sound that contains more than one frequency component.
  • Spectrum: the description of the frequency components of sound; amplitude spectrum describes the amplitude of each frequency component; phase spectrum describes the phase of each frequency component.
  • Noise: a complex sound that contains all frequency components, and whose instantaneous amplitude varies randomly.
  • White noise: a noise in which all of the frequency components have the same average level.
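
The definitions above can be sketched directly in code: a dB SPL function built from the 20-micropascal reference pressure, and a tone built from its amplitude, frequency, and starting phase. The function names and the example pressures are illustrative choices, not standard API names.

```python
import numpy as np

P_REF = 20e-6                      # reference pressure: 20 micropascals

def db_spl(p):
    """Sound pressure level in dB SPL: 20*log10(p / pref)."""
    return 20 * np.log10(p / P_REF)

def tone(amplitude, freq_hz, phase_deg, t):
    """A tone: A*sin(2*pi*f*t + theta), with theta given in degrees."""
    return amplitude * np.sin(2 * np.pi * freq_hz * t + np.deg2rad(phase_deg))

print(db_spl(20e-6))   # 0.0  -> 0 dB SPL at the reference pressure
print(db_spl(0.2))     # 80.0 -> a pressure 10,000 times the reference

# A 100 Hz tone starting at 90 degrees phase (i.e., a cosine): its first
# sample sits at the peak amplitude.
t = np.arange(0, 0.01, 1 / 8000)   # assumed 8 kHz sampling rate
x = tone(1.0, 100, 90, t)
print(x[0])                        # 1.0
```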

The term “noise” can refer to any sound that may be unwanted or may interfere with the detection of a target or signal sound. In some contexts, a speech sound may be the signal or target sound, and another speech sound or a mixture of other speech sounds may be presented as a “noise” to interfere with the auditory processing of the target speech sound. Often a mixture of speech sounds is referred to as “speech babble.”

The Auditory System

The ear is a very efficient transducer (i.e., a device that changes energy from one form to another), changing sound pressure in the air into a neural-electrical signal that is translated by the brain as speech, music, noise, etc. The external ear, middle ear, inner ear, brainstem, and brain each have a specific role in this transformation process.

Anatomy of the Human Ear

The external ear includes the pinna, which helps capture sound in the environment. The external ear canal channels sound to the tympanic membrane (eardrum), which separates the external and middle ear. The tympanic membrane and the three middle ear bones, or ossicles (malleus, incus, and stapes), assist in the transfer of sound pressure in air into the fluid- and tissue-filled inner ear. When pressure is transferred from air to a denser medium, such as the inner ear environment, most of the pressure is reflected away; that is, the inner ear presents an impedance to the transfer of sound pressure into its fluid and tissue. The successful transfer of pressure is referred to as admittance, while impedance is the restriction of that transfer. The term “acoustic immittance” is used to describe the transfer process within the middle ear: the word “immittance” combines the words impedance and admittance (im + mittance). As a result of this impedance, there would be as much as a 35 dB loss in the transmission of sound pressure to the inner ear. The outer ear, tympanic membrane, and ossicles act together when a sound is present to focus the sound pressure into the inner ear, so that most of that 35 dB impedance loss is overcome. Thus, the fluids and tissues of the inner ear vibrate in response to sound in a very efficient manner.

Sound waves are normally transmitted through the ossicular chain of the middle ear to the stapes footplate. The footplate rocks in the oval window of the inner ear, setting the fluids of the inner ear in motion, with the parameters of that motion being dependent on the intensity, frequency, and temporal properties of the signal. The inner ear contains both the vestibular system (underlying the sense of balance and equilibrium) and the cochlea (underlying the sense of hearing). The cochlea has three separate fluid compartments; two contain perilymph (scala tympani and scala vestibuli), similar to the body's extracellular fluid, and the other, scala media, contains endolymph, which is similar to intracellular fluids.

The scala media contains the sensorineural hair cells that are stimulated by changes in fluid and tissue vibration. There are two types of hair cells: inner and outer. Inner hair cells are the auditory biotransducers translating sound vibration into neural discharges. The shearing (a type of bending) of the hairs (stereocilia) of the inner hair cells caused by these vibrations induces a neural-electrical potential that activates a neural response in auditory nerve fibers of the eighth cranial nerve that neurally connect the hair cells to the brainstem. The outer hair cells serve a different purpose. When their stereocilia are sheared, the size of the outer hair cells changes due to a biomechanical alteration. The rapid change in outer hair cell size (especially its length) alters the biomechanical coupling within the cochlea.

The structures of the cochlea vibrate in response to sound with a particular vibratory pattern. This vibratory pattern (the traveling wave) allows the inner hair cells and their connections to the auditory nerve to send signals to the brainstem and brain about the sound's vibration and its frequency content. That is, the traveling wave motion of cochlear vibration helps sort out the frequency content of any sound, so that information about the frequency components of sound is coded in the neural responses being sent to the brainstem and brain.

The fact that different frequencies of sound are coded by different auditory nerve fibers is referred to as the place theory of frequency processing, and the auditory nerve is said to be “tonotopically” organized in that each nerve fiber carries information to the brainstem and brain about a narrow range of frequencies. In addition, the temporal pattern of neural discharges in auditory nerve fibers follows the temporal pattern of oscillations of the incoming sound, as long as those oscillations are slower than about 5000 Hz.

In general, the more intense the sound is, the greater the number of neural discharges that are being sent by the auditory nerve to the brainstem and brain. Thus, the cochlea sends neural information to the brainstem and brain via the auditory nerve about the three physical properties of sound: frequency, temporal variation, and level. The biomechanical response of the cochlea is very sensitive to sound, is highly frequency selective, and behaves in a nonlinear manner. A great deal of this sensitivity, frequency selectivity, and nonlinearity is a function of the motility of the outer hair cells.

There are two major consequences of the nonlinear function of the cochlea: (1) neural output is a compressive function of sound level. This means that, at low sound levels, there is a one-to-one relationship between increases in sound level and increases in neural output; however, at higher sound levels, the rate at which the neural output increases with increases in sound level is lower. (2) The cochlea and auditory nerve produce distortion products. For instance, if the sound input contains two frequencies, f1 and f2, distortion products at frequencies equal to 2f1, 2f2, f2-f1, and 2f1-f2 may be produced by the nonlinear function of the cochlea. The distortion product 2f1-f2 (the cubic-difference tone) may be especially strong and this cubic-difference distortion product is used in several measures of auditory function.
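
The appearance of the cubic-difference tone can be illustrated with a simple numerical sketch. Passing two tones f1 and f2 through a cubic nonlinearity (a stand-in for cochlear mechanics here, not a physiological model; the sampling rate, frequencies, and 0.1 coefficient are arbitrary choices) creates energy at 2f1 − f2 even though that frequency is absent from the input.

```python
import numpy as np

fs = 8000                                # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)            # 1 second of signal
f1, f2 = 1000, 1200                      # the two input frequencies in Hz
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A simple cubic nonlinearity standing in for the cochlea's nonlinear response.
y = x + 0.1 * x**3

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The cubic-difference tone 2*f1 - f2 = 800 Hz appears in the output even
# though no 800 Hz component was present in the input.
cdt = spectrum[freqs == 2 * f1 - f2][0]
print(cdt > 100 * np.median(spectrum))   # True: a clear peak at 800 Hz
```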

At about 60 dB SPL the bones of the skull begin to vibrate, bypassing the middle ear system. This direct vibration of the skull can cause the cochlea to vibrate and, thus, the hair cells to shear, starting the process of hearing. This is a very inefficient route, in that exciting the auditory nervous system this way represents at least a 60 dB hearing loss relative to normal air conduction.

There are many neural centers in the brainstem and in the brain that process the information provided by the auditory nerve. The primary centers in the auditory brainstem, in order of their anatomical location from the cochlea to the cortex, are: cochlear nucleus, superior olivary complex, lateral lemniscus, inferior colliculus, and medial geniculate. The outer, middle, and inner ears along with the auditory nerve make up the peripheral auditory system, and the brainstem and brain constitute the central auditory nervous system. Together the peripheral and central auditory systems are responsible for hearing and auditory perception.

Auditory Perception

In the workplace, hearing may allow a worker to:

  1. Communicate using human speech (e.g., communicate with a supervisor who is giving oral instructions);
  2. Process information-bearing sounds (e.g., respond to an auditory warning);
  3. Locate the spatial position of a sound source (e.g., locate the position of a car based on the sound it produces).

There is a wealth of basic knowledge about how the auditory system allows for communication based on sound, informative sound processing, and sound localization. Listeners can detect the presence of a sound; discriminate changes in frequency, level, and time; recognize different speech sounds; localize the source of a sound; and identify and recognize different sound sources.

The auditory system must often accomplish these workplace tasks when there are many sources producing sound at about the same time, so that the sound from one source may interfere with the ability to “hear” the sound from another source. The interfering sound may make it difficult to detect another sound, to discriminate among different sounds, or to identify a particular sound. A hearing loss may make it difficult to perform one or all of these tasks even in the absence of interfering sounds but especially in the presence of interfering sounds.

Sound Detection

The healthy, young auditory system can detect tones in quiet with frequencies ranging from approximately 20 to 20000 Hz.

How Human Ears Help Perceive and Distinguish Sounds

Hearing is the process by which the ear transforms sound vibrations in the external environment into nerve impulses that are conveyed to the brain, where they are interpreted as sounds. Sounds are produced when vibrating objects, such as the plucked string of a guitar, produce pressure pulses of vibrating air molecules, better known as sound waves. The ear can distinguish different subjective aspects of a sound, such as its loudness and pitch, by detecting and analyzing different physical characteristics of the waves.

Pitch is the perception of the frequency of sound waves, i.e., the number of wavelengths that pass a fixed point in a unit of time. Frequency is usually measured in cycles per second, or hertz. The human ear is most sensitive to, and most easily detects, frequencies of 1,000 to 4,000 hertz, but at least for normal young ears the entire audible range of sounds extends from about 20 to 20,000 hertz. Sound waves of still higher frequency are referred to as ultrasonic; although inaudible to humans, they can be heard by some other mammals.

Loudness is the perception of the intensity of sound, i.e., the pressure exerted by sound waves on the tympanic membrane. The greater their amplitude or strength, the greater the pressure or intensity, and consequently the loudness, of the sound. The intensity of sound is measured and reported in decibels (dB), a unit that expresses the relative magnitude of a sound on a logarithmic scale. Stated another way, the decibel is a unit for comparing the intensity of any given sound with a standard sound that is just perceptible to the normal human ear at a frequency in the range to which the ear is most sensitive. On the decibel scale, the range of human hearing extends from 0 dB, a level that is all but inaudible, to about 130 dB, the level at which sound becomes painful.

In order for a sound to be transmitted to the central nervous system, the energy of the sound undergoes three transformations. First, the air vibrations are converted to vibrations of the tympanic membrane and ossicles of the middle ear. These in turn become vibrations in the fluid within the cochlea. Finally, the fluid vibrations set up traveling waves along the basilar membrane that stimulate the hair cells of the organ of Corti. These cells convert the sound vibrations to nerve impulses in the fibers of the cochlear nerve, which transmits them to the brainstem, from which they are relayed, after extensive processing, to the primary auditory area of the cerebral cortex, the ultimate center of the brain for hearing. Only when the nerve impulses reach this area does the listener become aware of the sound.