During everyday interactions, people routinely speak at rates of 120 to 200 words per minute. For a listener to understand speech at these rates – and not lose track of the conversation – the brain must comprehend the meaning of each of these words very rapidly.
“That we can do this so easily is an amazing feat of the human brain -- especially given that the meaning of words can vary greatly depending on the context,” says Edmund Lalor, associate professor of biomedical engineering and neuroscience at the University of Rochester and Trinity College Dublin. “For example, ‘I saw a bat flying overhead last night’ versus ‘the baseball player hit a home run with his favorite bat.’”
Now, researchers in Lalor’s lab have identified a brain signal that indicates whether a person is indeed comprehending what others are saying – and have shown they can track the signal using relatively inexpensive EEG (electroencephalography) readings taken on a person’s scalp.
This could have a number of “potentially significant” applications, Lalor says. They include:
• testing language development in infants;
• determining the level of brain function in patients in a reduced state of consciousness, such as a coma;
• confirming that a person in a particularly critical job, such as an air traffic controller or a soldier, has understood the instructions they received;
• testing for the onset of dementia in older people based on their ability to follow a conversation.
The research, described in a paper published in Current Biology (http://www.cell.com/current-biology/fulltext/S0960-9822(18)30146-5), applied machine learning to audiobooks that human subjects listened to. “One can train a computer by giving it a lot of examples and by asking it to recognize which pairs of words appear together a lot and which don’t,” Lalor explains. “By doing this, the computer begins to ‘understand’ that words that appear together regularly, like ‘cake’ and ‘pie,’ must mean something similar. And, in fact, the computer ends up with a set of numerical measures capturing how similar any word is to any other.”
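What Lalor describes is, in essence, a word-embedding model. As a rough illustration only, here is a minimal sketch in Python using the gensim library; the toy corpus and parameter values are assumptions for demonstration, not the study's actual training setup:

    # A minimal sketch of the idea described above: train word vectors on a
    # corpus of sentences, then ask how similar two words are. The toy corpus
    # and parameters are illustrative assumptions, not the study's setup.
    from gensim.models import Word2Vec

    corpus = [
        ["she", "baked", "a", "cake", "for", "dessert"],
        ["he", "baked", "a", "pie", "for", "dessert"],
        ["a", "bat", "flew", "overhead", "last", "night"],
        ["the", "player", "swung", "the", "bat", "at", "the", "ball"],
    ]

    # vector_size sets the dimensionality of each word's numerical representation
    model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=200)

    # Cosine similarity approaches 1 for words used in similar contexts
    print(model.wv.similarity("cake", "pie"))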
The researchers then correlated the numerical measures with brainwave signals that were recorded as participants listened to the corresponding sections of the audiobooks. They were able to identify a brain response that reflected how similar or different a given word was from the words that preceded it in the story.
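To give a flavor of that step, here is a hedged sketch of a word-by-word “semantic dissimilarity” measure (each word compared with the average of the words that preceded it) correlated against a per-word EEG response. The random data, the 50-dimensional vectors, and the simple pairing of one EEG value per word are illustrative assumptions; the actual analysis relates continuous EEG recordings to the timing of words in the speech.

    import numpy as np

    def semantic_dissimilarity(word_vecs):
        # For each word: 1 minus its cosine similarity to the average of the
        # preceding words' vectors (higher = more surprising in context).
        # The exact formulation here is an illustrative assumption.
        out = np.zeros(len(word_vecs))
        for i in range(1, len(word_vecs)):
            context = word_vecs[:i].mean(axis=0)
            cos = word_vecs[i] @ context / (
                np.linalg.norm(word_vecs[i]) * np.linalg.norm(context))
            out[i] = 1.0 - cos
        return out

    # Hypothetical data: 200 words, 50-dim vectors, one EEG value per word
    rng = np.random.default_rng(0)
    vecs = rng.standard_normal((200, 50))
    eeg_per_word = rng.standard_normal(200)

    d = semantic_dissimilarity(vecs)
    r = np.corrcoef(d[1:], eeg_per_word[1:])[0, 1]
    print(f"correlation between dissimilarity and per-word EEG: {r:.3f}")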
This was verified in one experiment, for example, in which subjects listened to Hemingway’s The Old Man and the Sea. “We could see brain signals telling us the people could understand what they were hearing,” Lalor said. “When we had the same people come back and hear the same audiobook played backwards, the signal disappears entirely.”
In another experiment, participants listened to a speech by Barack Obama that was “buried in a fair amount of background noise, so you can make out only a couple words here and there,” Lalor said. When participants then watched a video of the speech, and could use facial cues to better understand what Obama was saying, the signal “intensifies dramatically.”
In the paper, Lalor’s team notes that more work remains to be done to understand the full range of computations our brains perform when we comprehend speech. They have begun exploring other ways that brains might compute meaning, how those computations differ from what computers do, and how best to apply this new approach.
Lalor joined the University of Rochester in 2016, after serving five years as an assistant professor at Trinity College in Dublin, Ireland. He is still affiliated with Trinity, and three of his graduate students there – lead author Michael Broderick, Giovanni Di Liberto, and Michael Crosse, now a postdoc at Albert Einstein College of Medicine – contributed to this study, as did Andrew Anderson, a postdoctoral fellow in Lalor’s lab in Rochester.
Also part of the research team, Nate Zuk, a postdoctoral researcher, and Aisling O'Sullivan, a second-year graduate student, appear in this video, fitting Abdo Sharaf '20 with EEG electrodes for testing and data collection.
Subscribe to the University of Rochester on YouTube: https://www.youtube.com/channel/UCZRLVZGCUZWYUEj2XQlFPyQ
Follow the University of Rochester on Twitter: https://twitter.com/UofR
Be sure to like the University of Rochester on our Facebook page: https://www.facebook.com/University.of.Rochester/
Help us caption & translate this video!
https://amara.org/v/et7J/