3 Engineers Walk into a Cocktail Party
It’s a typical scene: you’re at a party, it’s crowded, and there are people talking all around you at various volumes and tones. Your friend is speaking directly to you, and even with all of the surrounding chatter, you can still hear and understand what they’re saying, almost as if the background noise doesn’t exist. This phenomenon is called the “cocktail party problem,” which humans solve through selective listening. How do our brains do this? And what happens when someone’s brain can’t cut out the clutter? These are the questions that a few biomedical engineer-neuroscientist hybrids at the University of Rochester are trying to answer.
For Ross Maddox, Ph.D., an assistant professor in the Departments of Biomedical Engineering and Neuroscience with a lab in the University of Rochester Center for Advanced Brain Imaging & Neurophysiology (UR CABIN), research begins long before anyone is old enough to “party.” All newborn babies in the U.S. are screened with a hearing test, and if potential issues are found, the infant is referred for an auditory brainstem response (ABR) exam. Several pitches need to be evaluated in each ear, and the test can only be done while the infant is sleeping and still. Needless to say, it’s time-consuming. A recent study by Maddox’s lab, published in Trends in Hearing, developed a new method for measuring ABR in infants that tests all frequencies in both ears at once. Rough estimates from the study suggest the test may be two to four times faster than current methods, meaning it could be possible to get a much better diagnosis in the same amount of time, or to confirm normal hearing in a much shorter one. Further testing is required before it becomes an industry standard, but Maddox has big aspirations: “My main goal is for my grandkids’ hearing to be tested with my technique, in whatever hospital they’re born in. I want it to be the technique that is used everywhere.”
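For readers curious how a single recording can yield separate per-frequency responses, a rough sketch helps. If each frequency band gets its own independently, randomly timed train of tone pips, the response to each band can be recovered by cross-correlating the EEG with that band’s train, even though everything plays at once. The Python below is a toy illustration of that general principle, with made-up pip rates, kernels, and noise; it is not the published protocol.

```python
import numpy as np
from scipy.signal import correlate, fftconvolve

fs = 10_000                      # sampling rate (Hz); illustrative
dur = 60.0                       # seconds of simulated recording
n = int(fs * dur)
pip_rate = 40                    # average pips per second per band; assumed
freqs = [500, 1000, 2000, 4000]  # example test frequencies

rng = np.random.default_rng(0)

# One independent, randomly timed pip train per frequency band. Because
# the trains are statistically independent, each band's response can be
# pulled back out even though all of them play at the same time.
trains = np.zeros((len(freqs), n))
for i in range(len(freqs)):
    times = np.cumsum(rng.exponential(1 / pip_rate, size=int(pip_rate * dur)))
    idx = (times * fs).astype(int)
    trains[i, idx[idx < n]] = 1.0

# Toy "EEG": each train convolved with a damped-sine kernel, plus noise.
t_k = np.arange(int(0.01 * fs))
kernel = np.sin(2 * np.pi * 700 * t_k / fs) * np.exp(-t_k / (0.002 * fs))
eeg = sum(fftconvolve(tr, kernel)[:n] for tr in trains) + rng.normal(0, 5, n)

# Recover each band's ABR-like waveform by cross-correlating the EEG with
# that band's pip train (equivalent to averaging snippets around each pip).
n_lags = int(0.02 * fs)          # inspect 0-20 ms after each pip
for f, tr in zip(freqs, trains):
    resp = correlate(eeg, tr, mode="full", method="fft")[n - 1 : n - 1 + n_lags]
    resp /= tr.sum()             # normalize by pip count
    print(f"{f} Hz band: peak at {resp.argmax() / fs * 1e3:.2f} ms")
```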
The work of Maddox and his colleagues resides at the intersection of neuroscience and engineering, an emerging collaboration that holds the potential to unlock the complexity of human cognition, understand neurological diseases, and develop new technologies that help researchers and clinicians study, diagnose, and treat these disorders.
Another way that Maddox’s lab merges neuroscience and engineering is in its work developing ways of processing speech for studying the brain. The team has engineered new tools for analyzing brainwaves, trying to find a good match between the speech coming in and the brainwaves coming out that correspond to the earliest parts of the auditory system. “What I really like about being split between BME and Neuroscience is that I feel like I have free rein to research in a broad realm,” said Maddox.
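One common way to get that kind of match is regularized deconvolution: estimate the impulse response that, when convolved with a feature of the speech, best reproduces the EEG. The sketch below applies that generic technique to simulated data; the half-wave-rectified waveform used as the feature, the regularization value, and the toy signals are assumptions for illustration, not the lab’s actual pipeline.

```python
import numpy as np

def deconvolve_response(stimulus, eeg, fs, reg=1e-2, t_max=0.3):
    """Estimate the impulse response linking a continuous stimulus
    feature to the EEG it evokes, via regularized frequency-domain
    deconvolution. A generic textbook formulation, not the Maddox
    lab's exact pipeline; `reg` and the feature are illustrative."""
    n = len(eeg)
    S = np.fft.rfft(stimulus, n)
    R = np.fft.rfft(eeg, n)
    # Wiener-style regularized division: conj(S) * R / (|S|^2 + reg)
    H = np.conj(S) * R / (np.abs(S) ** 2 + reg * np.mean(np.abs(S) ** 2))
    h = np.fft.irfft(H, n)
    return h[: int(t_max * fs)]  # keep the first t_max seconds of lags

# Toy demonstration: "EEG" built from a known 30 ms kernel plus noise.
fs = 1000
rng = np.random.default_rng(1)
speech_feature = np.maximum(rng.normal(size=fs * 30), 0)    # rectified "audio"
true_kernel = np.exp(-((np.arange(100) - 30) ** 2) / 50.0)  # peaks at 30 ms
eeg = np.convolve(speech_feature, true_kernel)[: fs * 30]
eeg = eeg + rng.normal(0, 1.0, fs * 30)

h = deconvolve_response(speech_feature, eeg, fs)
print("recovered peak at lag (ms):", h.argmax() / fs * 1e3)
```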
A music lover who entered undergrad as a sound engineering major hoping to someday run a recording studio, Maddox eventually found himself in grad school, interested in how the brain processes sound. Though he ultimately traded music for science academically, a recent grant from the National Science Foundation is taking him back to his roots. Along with fellow BME professor Anne Luebke, Ph.D., and Elizabeth Marvin, Ph.D., of the Eastman School of Music, Maddox is investigating whether formal musical training is associated with enhanced neural processing and perception of sounds, including speech in noisy backgrounds. With its pool of auditory neuroscientists spanning several departments and schools, Rochester was perfectly primed to be one of the six universities running the study.
Working with professors in electrical engineering and computer science, Maddox is developing assistive technology that is sort of a ‘visual hearing aid.’ Their focus is to accurately represent the speech sounds visually to improve understanding. “Being able to see someone talking is really helpful for understanding, which is especially true for people who are hard of hearing or use hearing aids or cochlear implants. But, there’s a lot of situations like talking on the phone or listening to a podcast where we don’t have a face to see,” Maddox explains. “We’re working on a project that takes in speech and renders a talking face saying that speech in real time.”
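The article doesn’t describe how such a system is built, but the overall shape of a real-time pipeline is straightforward to sketch: audio arrives in short frames, each frame is mapped to facial-animation parameters, and the face is redrawn at video rate. Every name and mapping below is a hypothetical placeholder, not the project’s actual code.

```python
import numpy as np

FS = 16_000          # audio sampling rate (Hz); assumed
FRAME = FS // 25     # one video frame's worth of audio (25 fps)

def speech_to_visemes(chunk: np.ndarray) -> float:
    """Placeholder acoustic model: map an audio frame to a single
    mouth-opening value. A real system would predict a full set of
    facial-animation parameters from learned speech features."""
    return float(np.sqrt(np.mean(chunk ** 2)))  # loudness as a crude proxy

def render_face(jaw_opening: float) -> None:
    """Placeholder renderer: a real system would pose and draw a face."""
    print(f"frame: jaw opening = {jaw_opening:.3f}")

# Process audio frame by frame, so the face is animated with low
# latency while the speech is still arriving.
rng = np.random.default_rng(2)
audio = rng.normal(0, 0.1, FS * 2)  # two seconds of stand-in "speech"
for start in range(0, len(audio) - FRAME + 1, FRAME):
    render_face(speech_to_visemes(audio[start : start + FRAME]))
```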
With too many additional collaborators across the University to list, Maddox stresses that this is the most collaborative university he’s ever been affiliated with: “It’s fun to work with people who are way smarter than you at other things. It’s a really exciting place to do science.”
Maddox is only one of several faculty members across the University merging engineering with neuroscience. Just down the hall in the CABIN sits fellow engineer Ed Lalor, Ph.D., an associate professor in the Departments of BME and Neuroscience. Lalor took a non-traditional path to academia, working first as an electrical engineer and then as an elementary school teacher before accepting a job analyzing EEG data in kids with attention problems. His lab is interested in how humans process the world around them. Focused on understanding how human brains operate in scenarios that are more naturalistic and closer to the real world than those traditionally studied, the lab examines how people perceive and pay attention to objects, and how they integrate information from different senses to navigate the world.
“We try to look at human brains and how they process naturalistic stimuli – music, speech, language, video – and how we actually deal with that kind of stuff,” he explains. “That’s where engineering comes in—there’s lots of data to make sense of, and we come in with slightly more sophisticated signal processing techniques to analyze and interpret those data.”
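A widely used tool for this kind of analysis is the temporal response function (TRF): a linear filter, fit by regularized regression, that maps a stimulus feature such as the speech envelope onto the EEG it evokes. The sketch below demonstrates the generic technique on simulated data; the parameters and toy signals are illustrative assumptions, not any lab’s exact code.

```python
import numpy as np

def estimate_trf(stim, eeg, fs, t_max=0.4, ridge=1.0):
    """Fit a temporal response function (TRF): the linear filter that
    best maps a stimulus feature (e.g., the speech envelope) onto the
    EEG. Ridge-regularized least squares; a minimal sketch of the
    general technique."""
    lags = np.arange(int(t_max * fs))
    n = len(eeg)
    # Design matrix: one column per time-lagged copy of the stimulus.
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stim[: n - lag]
    # Closed-form ridge solution: w = (X'X + ridge*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w

# Toy check: "EEG" generated by convolving an envelope with a known TRF.
fs = 128
rng = np.random.default_rng(3)
envelope = np.abs(rng.normal(size=fs * 60))
true_trf = np.sin(2 * np.pi * 4 * np.arange(fs // 4) / fs)  # peak ~62.5 ms
eeg = np.convolve(envelope, true_trf)[: fs * 60] + rng.normal(0, 1.0, fs * 60)

times, w = estimate_trf(envelope, eeg, fs)
print("estimated TRF peak latency (ms):", times[np.argmax(w)] * 1e3)
```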
Because Lalor’s work can be applied to multiple cohorts, he has several collaborators across the University. He’s currently working with Feng Vankee Lin, Ph.D., R.N., in the School of Nursing on language processing in older adults as a potential way to detect Alzheimer’s disease early. The idea is that if people are even slightly sluggish in processing the meaning of words in context, that sluggishness could be a sensitive early marker of decline. Lalor’s lab is also looking to work with the CABIN’s top-floor resident, the Cognitive Neurophysiology Lab, led by John Foxe, Ph.D., and Ed Freedman, Ph.D., hoping to ramp up work on how perception might go awry in children with autism. Additionally, the lab is beginning a new collaboration with David Dodell-Feder, Ph.D., an assistant professor in the Psychology department, on speech production.
“Ross and Ed have developed exciting multidisciplinary research labs spanning neuro-engineering and neuroscience. Such collaborations between the Department of Biomedical Engineering and the Del Monte Institute for Neuroscience offer opportunities to build a truly unique research and innovation environment at the UR to lead new breakthroughs in technologies to understand the brain and treat neurological diseases,” said Diane Dalecki, Ph.D., chair of the UR Department of Biomedical Engineering. As for what the future holds for these ever-changing fields at the University? “We’re all anticipating fantastic new discoveries at the intersection of neuroscience and engineering by this collaborative, multidisciplinary team,” said Dalecki.