VISE researchers receive $3.1M grant for customizable cochlear implant programming
By Michelle Bukowski
A team of Vanderbilt University and Vanderbilt University Medical Center researchers has received a $3.1 million NIH grant to develop advanced patient-specific cochlear implant stimulation models for customized implant programming.
Traditional cochlear implant programming is done by expert audiologists using a guess-and-check approach based on subjective patient feedback regarding sound quality as well as changes in speech recognition rates. This research aims to develop computational models for simulating how the cochlear implant activates the auditory nerves for individual patients.
These models will ultimately enable next-generation programming strategies that use computational simulations of implant performance to find settings that substantially improve sound quality for cochlear implant users compared to the traditional approach.
Jack Noble, assistant professor of electrical engineering and computer science, leads the team and is the principal investigator. Co-investigators are Rene H. Gifford, professor of hearing and speech and director of the Division of Audiology’s Cochlear Implant Program; Robert Labadie, MD, professor of otolaryngology; and Benoit Dawant, professor of electrical engineering. All are Vanderbilt Institute for Surgery and Engineering affiliates.
The five-year grant, Noble said, “is to develop new, more advanced patient-custom programming strategies using novel methods for comprehensive patient-specific modeling of neural stimulation with cochlear implants.”
Cochlear implants are small electronic devices with an external portion that sits behind the ear and a second portion surgically placed under the skin. The device uses an array of implanted electrodes to stimulate auditory nerves and induce hearing sensation.
More than half a million people worldwide have received cochlear implants, and the devices are considered the standard of care for severe sensory-based hearing loss. While results with cochlear implants have generally been remarkably successful, a significant number of recipients continue to have poor speech understanding.
“Even among the most successful cases, restoration to normal auditory fidelity is rare,” Noble said. “It is estimated that less than 10 percent of those who could benefit from this technology pursue implantation, in large part due to the high degree of uncertainty in outcomes.”
Due to patient-specific differences in how the auditory nerves are activated by the implant, the default programming settings that are typically used with cochlear implants often result in sub-optimal stimulation of the auditory nerves. With sub-optimal settings, the patient experiences low sound quality and difficulty understanding speech.
Sub-optimal stimulation of the auditory nerves is a major contributor to the variability in cochlear implant outcomes, but so far, approaches for estimating how the electrodes stimulate the nerves on a patient-specific basis have not been reliable enough to help audiologists consistently improve outcomes through programming adjustments. As a result, audiologists typically leave the vast majority of programming settings at their default values.
The hypothesis of this project is that more effective, customized programming strategies can be found using new patient-specific computational models of cochlear implant stimulation. These models enable computer simulation of the quality of auditory nerve stimulation achieved with specific programming settings, so the performance of different programming options can be evaluated in simulation and used to identify customized settings predicted to produce high-quality stimulation of the auditory nerves for each individual patient.
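The general idea of evaluating candidate programming settings against a patient-specific model can be sketched as a simulation-guided search. The following is a minimal, purely illustrative example: the toy "model" (ideal stimulation levels per electrode), the scoring function, and the exhaustive search are assumptions for the sketch, not the team's actual methods.

```python
# Hypothetical sketch of simulation-guided programming search.
# The patient model, scoring function, and candidate settings are
# illustrative assumptions, not the actual research methods.
from itertools import product

def simulate_stimulation_quality(levels, patient_model):
    """Toy stand-in for a patient-specific stimulation model: scores
    how closely the per-electrode levels match the patient's assumed
    ideal levels (0 is best; more negative is worse)."""
    return -sum((level - ideal) ** 2
                for level, ideal in zip(levels, patient_model["ideal_levels"]))

def search_settings(patient_model, candidate_levels):
    """Evaluate every combination of candidate levels in simulation
    and return the best-scoring configuration."""
    n_electrodes = len(patient_model["ideal_levels"])
    best_levels, best_score = None, float("-inf")
    for levels in product(candidate_levels, repeat=n_electrodes):
        score = simulate_stimulation_quality(levels, patient_model)
        if score > best_score:
            best_levels, best_score = list(levels), score
    return best_levels

# Example: a hypothetical 3-electrode patient model.
patient = {"ideal_levels": [2, 1, 3]}
best = search_settings(patient, candidate_levels=[1, 2, 3])
```

In practice, the settings space is far larger and the model would be derived from patient imaging and electrophysiology rather than a lookup of ideal levels, but the loop structure (simulate, score, select) is the same.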
The ultimate goal of this project is to optimize implant performance by designing customized stimulation strategies for each patient using these individualized computational models.
The grant R01DC014037 is funded by the National Institute on Deafness and Other Communication Disorders (NIDCD).
Contact: Brenda Ellis, (615) 343-6314