
An Overview of the Cocktail Party Effect in Psychology

Parul Solanki
Ever wondered how you are able to walk into a noisy room and immediately start talking to someone while paying attention to that one particular sound? In psychology, this is known as the cocktail party effect.

Can You Hear Me?

In tests carried out at Queen's University, Ontario, involving 23 couples married for more than eighteen years, it was found that even above the hubbub of a crowded room, people who have been together long enough can easily pick out their partner's voice.
You walk into a party, and immediately, your senses are assaulted with a number of sights and sounds.
Loud music, clinking glasses, people laughing and talking. Suddenly, you spot an old friend among the crowd of revelers, and you walk towards him and start talking. Now, even amidst the loud sounds and cacophony of voices, you are easily able to converse with your friend.
Have you ever wondered how this sound editing process occurs in the brain? With all these sounds in the room, how are we attuned to choosing the sound that we actually want to hear?
When you are able to focus on a particular sound while filtering out the din of other sounds, it is known as the cocktail party effect.
This is related to selective hearing and happens not just at cocktail parties but in virtually any environment―a sporting event, a classroom, or a crowded restaurant―even when the speaker's voice seems drowned out by a jabbering crowd.
This listening ability can be attributed to a number of factors, from characteristics of the auditory system and the human speech production system to language and perceptual processing. Although it might seem like a simple task, focused auditory attention is an exceptional ability.
In this story, we look at the research into this complex process of filtering out unwanted stimuli while focusing attention on a single stimulus.

Colin Cherry and the Dichotic Listening Test

During the early 1950s, air traffic controllers faced the problem of receiving too many messages from pilots over a single loudspeaker. Confused by the intermingling voices, they had a hard time doing their job.
Some of the earliest research on selective hearing was done by Colin Cherry in 1953. As an engineer, he wanted to build a machine that could filter out other sounds while focusing on a particular signal.
He conducted dichotic listening tests, in which two different messages were played to participants at the same time under a variety of conditions. This helped him discern how easily we can filter out what we want to hear. He named it "the cocktail party problem".
This series of recordings also involved "shadowing" where the subject repeated words after hearing them from a recording.
The first set of experiments involved playing two different messages recorded in the same voice through both ears of a pair of headphones. The participants were asked to shadow one of the messages.
To separate one message from the other, the participants could hear the clip over and over again. When the two messages were presented in the same voice to both ears at once, the participants had a difficult time separating them and understanding their meaning.
In the next set of experiments, one message was fed to one ear, to be shadowed, while a second message was presented to the other ear. It was seen that very little information was gleaned from the second message, and participants rarely noticed its content, even when it was spoken in a foreign language.
It was also observed that only changes in gross physical features of the unattended message were noticeable: if the sex of the voice changed, the participants would notice.
This is why, in a noisy room, a person lumps all the noise together as unattended auditory information that needs very little processing, and moves on to what he or she considers important.

Donald Broadbent and the Filter Model of Selective Attention

While Cherry's experiments may have paved the way for research in selective hearing and attention, it was Broadbent who first put forth the detailed theory of attention, known as the filter model, in 1958.
Since most people are able to listen to and recall information that they actively attended to, and filter out all the unwanted information, he concluded that the brain must have a filter mechanism to block out anything that is not attended to. According to the key assumptions in his theory:
✧ When any information enters the brain through the sensory organs, such as the ears, the information is stored in a buffer memory, which is also known as sensory memory.
✧ To process further information, the brain's filter mechanism discards the unwanted information, or only keeps it for a limited time in the buffer memory, and stores only the attended information.
✧ This selected attention is passed on to the short-term memory or the working memory, which then communicates with the long-term memory.
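The pipeline described above can be sketched in code. This is a toy illustration only, with invented class and channel names, assuming a fixed-size buffer and a filter that passes a single attended channel:

```python
from collections import deque

class FilterModel:
    """Toy sketch of Broadbent's filter model (names are illustrative)."""

    def __init__(self, attended_channel, buffer_size=8):
        self.attended_channel = attended_channel
        # All input lands first in a limited-capacity sensory (buffer) memory.
        self.sensory_buffer = deque(maxlen=buffer_size)
        self.short_term_memory = []

    def hear(self, channel, message):
        self.sensory_buffer.append((channel, message))

    def attend(self):
        # The filter passes only the attended channel on to short-term
        # memory; unattended items are simply discarded.
        while self.sensory_buffer:
            channel, message = self.sensory_buffer.popleft()
            if channel == self.attended_channel:
                self.short_term_memory.append(message)

listener = FilterModel(attended_channel="left_ear")
listener.hear("left_ear", "meet me at noon")
listener.hear("right_ear", "ignored chatter")
listener.attend()
print(listener.short_term_memory)  # ['meet me at noon']
```

Note how the unattended message never reaches short-term memory at all, which is precisely the all-or-nothing behavior that Treisman's later model revised.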

Anne Treisman and the Attenuation Model

Later, Anne Treisman developed the attenuation model, suggesting that when information passes through the filter, the unattended message, instead of being completely blocked, is merely attenuated, or weakened.
If certain words, such as the person's own name, are spoken in the unattended stream, they may still grab the person's attention. This is known as the threshold mechanism.
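The attenuation and threshold idea can be sketched as follows. The numbers and the salient word are invented for illustration; the point is only that a weakened signal can still exceed a low-enough recognition threshold:

```python
# Toy sketch of Treisman's attenuation model. Unattended input is weakened,
# not blocked; words with a low recognition threshold (e.g. one's own name)
# can still break through. All values below are assumed for illustration.

ATTENUATION = 0.3           # unattended signals arrive at 30% strength
DEFAULT_THRESHOLD = 0.5     # a typical word needs this much signal to register
LOW_THRESHOLDS = {"anne": 0.2}  # personally salient words fire more easily

def perceived(words, attended):
    strength = 1.0 if attended else ATTENUATION
    noticed = []
    for word in words:
        threshold = LOW_THRESHOLDS.get(word.lower(), DEFAULT_THRESHOLD)
        if strength >= threshold:
            noticed.append(word)
    return noticed

print(perceived(["meet", "me", "later"], attended=True))        # all register
print(perceived(["boring", "chatter", "Anne"], attended=False)) # only 'Anne'
```

Unlike the strict filter in Broadbent's model, the attenuated stream here still carries enough signal for the low-threshold word to be noticed.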
Selective attention is extremely common even in infants, who turn towards the sound of their parent's voice while ignoring others. How this works is of keen interest to companies that make consumer electronics, because of the tremendous future market for devices with voice-activated interfaces.
Although humans can walk into a noisy room and hold a private conversation with relative ease, machine emulation of this basic human ability has proved very difficult.
It is only through comprehensive research that the cocktail party effect can be understood and applied to filtering out sounds and improving voice-activated electronic interfaces so that they properly detect verbal commands.
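A small numerical example hints at why the problem is hard for machines. A single microphone records only the sum of the voices, which cannot be uniquely un-mixed; with two microphones and a known mixing, the system becomes solvable. The sample values and mixing coefficients below are made up for illustration:

```python
# Why machines struggle with the cocktail party problem: one channel holds
# only the SUM of voices. With two differently-mixed channels, the 2x2
# mixing matrix can be inverted to recover both speakers.

voice_a = [1.0, 0.0, 2.0]   # samples from speaker A (invented data)
voice_b = [0.0, 3.0, 1.0]   # samples from speaker B (invented data)

# Two microphones pick up different weighted blends of the two voices.
mix1 = [0.8 * a + 0.2 * b for a, b in zip(voice_a, voice_b)]
mix2 = [0.3 * a + 0.7 * b for a, b in zip(voice_a, voice_b)]

# Invert the 2x2 mixing matrix [[0.8, 0.2], [0.3, 0.7]].
det = 0.8 * 0.7 - 0.2 * 0.3   # = 0.5
recovered_a = [(0.7 * m1 - 0.2 * m2) / det for m1, m2 in zip(mix1, mix2)]
recovered_b = [(-0.3 * m1 + 0.8 * m2) / det for m1, m2 in zip(mix1, mix2)]

print([round(x, 6) for x in recovered_a])  # matches voice_a
print([round(x, 6) for x in recovered_b])  # matches voice_b
```

In real rooms the mixing is unknown, echoing, and time-varying, which is why blind source separation remains an active research area, while human listeners solve the same problem effortlessly.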