
Why you can’t listen to two people speak at once: Our brains have a ‘bottleneck’ for speech

If you struggle to keep track of more than one conversation at a dinner party, you’re not alone.

Scientists now suspect it is physically impossible to pay attention to more than one person speaking at once.

Researchers from the University of Maryland came to this conclusion after scanning 28 people’s brains in a ‘cocktail party’ scenario as they listened to two speakers. Their brains kicked in as normal to listen to the first speaker, but appeared not even to recognise the second person’s speech as words.

We struggle to listen to more than one conversation at once at dinner parties because of a ‘bottleneck’ in our perception of speech (stock image)

The findings provide an explanation for why it may be so hard to concentrate on a family member’s conversation when the television is on, or to follow what someone is saying in a crowded room.

It appears the brain requires considerable effort to pick up sound, then run through lists of possible words to work out what is being said. This process begins in the auditory cortex, which handles sounds, but scans of the study participants showed the brain could not turn these sound patterns into words for more than one speaker at a time.

Dr Christian Brodbeck, lead author of the study from the Institute for Systems Research at the University of Maryland, said: ‘We think that during speech perception, our brain considers the match between the incoming speech signal and many different words at the same time.

‘Put in a different way, the words compete for being recognised. It could be that this mechanism involves mental resources that can only process one speech signal at a time, making it impossible to attend to more than one speaker simultaneously.’

The researchers asked study participants to listen to two separate chapters from A Child’s History of England by Charles Dickens, one read by a man and the other by a woman.

The magnetic activity of their brains was scanned during the ‘cocktail party’ experiment, in which the two audiobooks were played simultaneously.

For the first speaker, listeners showed activity in the auditory cortex, which picks up sounds, before brain activity moved almost an inch lower towards the language regions to decipher words.

Scientists said the new findings may reveal a ‘bottleneck’ in our brains’ speech perception (stock image)

But for the second speaker, who people were asked to try to ignore, that shift did not happen. The researchers believe people were only able to receive sound patterns of that person’s speech and could not process them as words.

The study, published in the journal Current Biology, found that in ordinary circumstances people can recognise sounds as words at lightning speed, in around a tenth of a second. But it concludes that word processing is ‘restricted’ in the brain, based on readings from an MEG (magnetoencephalography) scanner, which looks like a whole-head salon hairdryer but uses magnetic sensors to detect brainwaves.

Dr Brodbeck said there may be a ‘bottleneck’ in the brain’s speech perception, adding: ‘It could be that this mechanism involves mental resources that have limitations on how many different options can be tried simultaneously when people are speaking at a rate of about three words per second. This leads people to selectively process speech sounds in noisy environments.’
