Hearing Loss

I wrote this article for BAJ (British Audio Journal) in 2007 following a lecture to the Audio Engineering Society UK Section by Professor Brian Moore FRS of Cambridge University. (I was vice-chair of the section at the time.) It explains hearing loss and how to avoid it – a topic which should be of concern to every sensible headphone user.

I should add that one of the most important measures you can take if you've been exposed, deliberately or accidentally, to high sound pressure levels is allowing your ears ample time to recover afterwards in a quiet environment. 


Sir Fred Hoyle, arguably Britain’s most famous astronomer, recounts in his autobiography how, as a child, he was slapped round one ear by a teacher and later suffered deafness in the same ear. I have a friend who has had to live with the bane of tinnitus (‘ringing in the ears’) since somebody thought it a laugh to wind the volume right up when he was listening to headphones. A military weapons instructor at my school was barely able to hear a telephone ringing even when he stood next to it because of a working life spent firing, or being close to others firing, Lee-Enfield rifles and Bren guns.

These may be relatively unusual cases but even if you are careful to look after your hearing – which isn’t always easy in the audio industry – you are still likely to suffer from presbycusis, the medical term for age-related hearing loss. We’re all familiar with the idea that as we get older our ability to hear extreme high frequencies fades, but the practical outcomes are more insidious than simply not being able to perceive the upper half-octave or so of the audible spectrum. We also suffer degradation of our ability to pick out and attend to particular sounds within a cacophony of others – the so-called cocktail party effect – which makes it increasingly difficult, for instance, to understand speech within reverberant environments. Some of you reading this will already have noticed these unwelcome signs of the passing years.

Brian Moore FRS, Professor of Auditory Perception in the University of Cambridge, spoke on the topic of hearing loss in the last lecture of this year’s Audio Engineering Society UK Section lecture programme, on 10 July, under the title “Hearing loss: will we all get it, what are its consequences, and what can the audio industry do to help?” (The AES, as befits its scientific status, conforms to the academic year, so the lecture season begins in October and ends in July.)

Prof Moore began by outlining the gross anatomy of the human ear and distinguishing between two different classes of hearing loss: conductive, which results from a failure of sound vibrations to reach the cochlea due to, for instance, a build-up of ear wax in the ear canal or of fluid in the middle ear; and sensorineural, which is most commonly due to cochlear malfunction.

Hearing loss is quantified by measuring audiograms, which are plots of the sensitivity of an individual ear at selected frequencies relative to the mean sensitivity for the ears of young people with no known hearing disorder. Such measurements show that over the age range 18 to 80 years, hearing loss averaged across 0.5, 1, 2 and 4kHz is 25dB or more for 16 per cent of the population. In the age range 61 to 70 years this figure increases to 37 per cent, and in the range 71 to 80 years to 60 per cent. The loss in sensitivity is normally greatest at high frequencies, but increased loss in the range 4 to 6kHz is often observed where hearing damage has occurred as a result of prolonged exposure to industrial noise or shorter-term exposure to high-level impulsive sounds (such as gunshots).
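To make the arithmetic behind that figure concrete, here is a minimal sketch in Python of the four-frequency average used in such surveys. The audiogram values are invented purely for illustration; real thresholds are measured per ear.

```python
# Minimal sketch of a four-frequency average hearing loss. The
# audiogram values below are invented for illustration; real
# thresholds are measured per ear in dB HL (dB relative to the
# hearing of young, otologically normal listeners).

# Hearing threshold (dB HL) at each audiometric test frequency (Hz).
audiogram = {250: 10, 500: 15, 1000: 20, 2000: 30, 4000: 45, 8000: 60}

# The survey figure quoted above averages the loss at 0.5, 1, 2 and 4kHz.
pta_frequencies = [500, 1000, 2000, 4000]
pta = sum(audiogram[f] for f in pta_frequencies) / len(pta_frequencies)

print(f"Four-frequency average loss: {pta:.1f} dB HL")  # 27.5 dB HL
print("Meets the 25dB criterion" if pta >= 25 else "Below the 25dB criterion")
```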

Although it is very hard to find elderly people with no hearing loss in the developed world, evidence from remote tribes suggests that hearing loss is not an inevitable result of ageing. Many factors contribute to hearing loss, of which noise exposure is only one. Others include: infections, ototoxic drugs (such as antibiotics with ‘mycin’ in their name), solvent exposure and smoking. There is also a genetic factor, people with dark hair and eyes being slightly less susceptible than those with fair hair and blue eyes.

Prof Moore then described the principal functional elements of the cochlea: the basilar membrane and the inner and outer hair cells. The outer hair cells incorporate a positive feedback mechanism that amplifies the basilar membrane’s mechanical response to incoming sound and sharpens the tuning of the auditory filter which provides frequency discrimination. The inner hair cells act as transducers, generating action potentials in the auditory nerve. Photomicrographs of mouse hair cells showed how exposure to high levels of noise damages the outer hair cells. Loud impulsive sounds can also damage the inner hair cells. In mammals (unlike birds) hair cells cannot re-grow, so the damage is permanent. (Asked in the Q&A session at the end of the lecture whether medical treatment to re-grow hair cells is feasible, Prof Moore responded that it may be but that it is unlikely to become viable for perhaps 30 years.)

Outer hair cell damage has various consequences: low-level sounds need to be more intense to be perceived; the tuning of the auditory filter becomes broader, thereby reducing frequency selectivity; and the ear’s input-output transfer characteristic becomes steeper. The perceptual consequences include an inability to hear low-level sounds and a degradation of the listener’s ability to separate out a particular sound source in the presence of many others (the ‘cocktail party effect’). The steeper transfer characteristic also causes loudness recruitment: weak sounds are not heard at all, yet loudness grows abnormally rapidly with level above threshold, so intense sounds are heard with normal loudness.
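The link between a steeper transfer characteristic and recruitment can be shown with a toy calculation. The following Python sketch uses invented thresholds and slopes purely to illustrate the shape of the effect; it is not a model of any real ear.

```python
# Schematic sketch of loudness recruitment. Thresholds and slopes
# are invented to show the shape of the effect, not to model a real ear.

def output_level(input_db, threshold_db, slope):
    """Simplified input-output function: dB of 'internal' response."""
    if input_db <= threshold_db:
        return 0.0  # below threshold, nothing is heard
    return (input_db - threshold_db) * slope

for spl in (30, 50, 70, 90, 110):
    normal = output_level(spl, threshold_db=10, slope=0.3)    # compressive
    impaired = output_level(spl, threshold_db=50, slope=0.5)  # steeper
    print(f"{spl:3d} dB SPL -> normal {normal:4.1f}, impaired {impaired:4.1f}")

# The impaired ear hears nothing below 50dB SPL, but by 110dB SPL its
# response has caught up with the normal ear: that is recruitment.
```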

Inner hair cell damage, which often occurs over a localised area of the cochlea, results in one or more ‘dead regions’ where no sound is heard in a particular frequency band. Dead regions at high frequencies result in poor speech intelligibility even in quiet surroundings.

Prof Moore then went on to describe various features of digital hearing aids and the signal processing that is applied to counteract the different effects of hearing loss. Simulations were played to illustrate the hearing impairment and the improvement produced by appropriate signal processing. These simulations are used both in the development of signal processing and to help those who live or work with the hearing impaired to understand the nature of their disability. They are verified using subjects whose hearing impairment is unilateral – confined to one ear – so that the simulation, heard through the normal ear, can be compared with what the impaired ear perceives.

Digital hearing aids are a substantial advance over earlier analogue types because of the range and flexibility of the signal processing that they can apply, but they also have problems: the signal processing introduces time delays (typically 2-9ms) that are long enough to disrupt spatial discrimination unless matched aids are fitted to each ear; strong noise reduction can introduce disturbing artefacts such as ‘musical noise’; feedback reduction schemes also introduce artefacts; and the aids are more complex to fit because of the many adjustments they incorporate. NHS clinics typically do not have sufficient time to optimise all of an aid’s settings for each individual, so many settings are left at their factory defaults.
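To see why delays of a few milliseconds matter, compare them with the interaural time differences the brain uses to localise sound. Here is a back-of-envelope Python sketch, assuming a nominal head width; none of these figures come from the lecture itself.

```python
# Back-of-envelope sketch of why milliseconds of delay matter for
# spatial hearing. The physical figures are standard approximations;
# the head width is a nominal assumption, not a measurement.

speed_of_sound = 343.0  # m/s in air at about 20 degrees C
head_width = 0.18       # m, nominal ear-to-ear distance

# Largest natural interaural time difference (source directly to one side):
max_itd_ms = head_width / speed_of_sound * 1000
print(f"Maximum natural ITD: about {max_itd_ms:.2f} ms")  # ~0.52 ms

# An unmatched delay anywhere in the quoted 2-9ms range swamps that cue:
for mismatch_ms in (2, 5, 9):
    ratio = mismatch_ms / max_itd_ms
    print(f"{mismatch_ms} ms mismatch is about {ratio:.0f}x the largest ITD")
```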

There are also more generalised hearing aid problems including wind noise, the occlusion effect (which makes the user’s voice sound loud and boomy) and irritation of the skin of the ear canal. Likely future developments include: an increased trend to open fittings, although this will require improved feedback reduction; processing to reduce the occlusion effect; the use of improved directional microphones; wider frequency range; communication between bilateral aids to enhance directional selectivity; automatic selection of programs for different listening situations; and implantable hearing aids which drive one of the middle ear bones directly.

Prof Moore concluded by calling on the audio industry to reduce hearing loss by avoiding exposure to high SPLs in control rooms and at concerts, where necessary by promoting the use of ‘musicians’ earplugs’ which provide constant attenuation across the entire audible frequency range. For those who already suffer from hearing loss, the industry can contribute by reducing background noise levels, avoiding large fluctuations in sound level and by increasing the availability of loop systems. Audio engineers can also contribute to the design of better hearing aids – but the challenge is a difficult one.

Many who attended this lecture went home questioning their current replay level practices and asking whether hearing loss is not a subject the audio industry at large should take more seriously. Few like Big Brother approaches to limiting our exposure to high-level sound, so it falls to the individual to understand the problem, appreciate the potential consequences and take the necessary steps to protect their and others’ hearing into old age.