Dialogue Summer 2017

Using Reality to Improve Hearing

A New Sound Room at the Center for Audiology, Speech, Language, and Learning Is Changing the Way We Treat Hearing Disabilities

For anyone suffering from hearing loss, sitting in a crowded restaurant surrounded by boisterous conversation can be a wholly unpleasant experience. Amid ambient noise and multidirectional sounds, hearing aids often prove ineffectual, leaving the wearer feeling frustrated and isolated.

Clinicians have typically tried to calibrate a hearing aid based on their own experience, often with mixed results. Even if extensively researched, such a calculation is still just an educated guess and might not take into account the acoustics of a particular space or the movement of people around the listener, says Sumitrajit Dhar, chair of the Roxelyn and Richard Pepper Department of Communication Sciences and Disorders.

The Center for Audiology, Speech, Language, and Learning—the School of Communication’s innovative clinic—plans to change that with the Virtual Sound Room, or ViSoR. Equipped with 16 microphones, 33 speakers, 4 subwoofers, and a large video screen, the state-of-the-art space is designed to improve the hearing outcomes of the center’s patients. “The immersive environment… allows real-time, ecologically valid interactions,” says a paper that Dhar and colleagues presented last year at the 22nd International Congress on Acoustics in Buenos Aires. “Designed as a regular room, as opposed to a speaker sphere, the room allows the creation of everyday listening environments such as a living room, a restaurant, or a concert hall. The motivation behind the creation of the environment was to enable real-world adjustments to amplification and assistive hearing devices and to evaluate complex auditory capabilities such as tracking-panning audio and localization in a three-dimensional space.”

The 242-square-foot room can hold up to ten people, though only about two or three typically use it at one time. It was designed as a freestanding pod with double-stud and multilayer drywall construction to provide isolation from surrounding noise. Other sound-attenuation features include remote fans, lined ductwork, and low air velocities to reduce any external influences on the environment.

“This mixture of diffusion, absorption, and reflection makes the fictionalized environment more realistic than what is otherwise possible in anechoic rooms, which require a large number of loudspeaker channels,” Dhar writes. “The diffusive/reflective zones in this room help to enable the creation of phantom sources between loudspeakers, contributing to the realism.”

ViSoR’s active acoustic system, Constellation by Meyer Sound, includes speakers in the walls, near the floor, and on the ceiling for maximum realism. The room’s microphones pick up its ambient acoustics to help measure reverberation time—conventionally, the time required for a sound to decay by 60 decibels after its source stops.
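For readers curious about what a reverberation-time measurement involves, the sketch below illustrates the standard Schroeder backward-integration method for estimating RT60 (the 60-decibel decay time) from a recorded room impulse response. It is a generic, simplified illustration, not the software that drives ViSoR.

```python
# Illustrative sketch only: estimate reverberation time (RT60) from a room
# impulse response using Schroeder backward integration. A generic acoustics
# technique, not the actual measurement code used in ViSoR.
import numpy as np

def estimate_rt60(impulse_response, sample_rate):
    """Estimate RT60 by fitting the -5 dB to -25 dB decay (T20)
    and extrapolating to a full 60 dB of decay."""
    # Schroeder integration: reversed cumulative sum of the squared pressure.
    energy = np.cumsum(impulse_response[::-1] ** 2)[::-1]
    decay_db = 10.0 * np.log10(energy / energy[0])

    # Locate the points where the decay curve crosses -5 dB and -25 dB.
    t = np.arange(len(decay_db)) / sample_rate
    start = np.argmax(decay_db <= -5.0)
    end = np.argmax(decay_db <= -25.0)

    # Linear fit of the decay (in dB per second) over that window.
    slope, _ = np.polyfit(t[start:end], decay_db[start:end], 1)
    return -60.0 / slope  # seconds required for a 60 dB decay

# Example with a synthetic, exponentially decaying "impulse response".
fs = 48_000
t = np.arange(fs) / fs
fake_ir = np.random.randn(fs) * np.exp(-t / 0.1)
print(f"Estimated RT60: {estimate_rt60(fake_ir, fs):.2f} s")
```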

Dhar says that because users can move around the room, ViSoR is uniquely capable of helping them hear better through their devices. The space uses SpaceMap-integrated multichannel panning so that directional sound sources from a variety of locations shape the experience realistically. This widens the sweet spot, so listeners don’t have to stay fixed in a single critical listening position.
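The article does not describe how SpaceMap distributes a source across ViSoR’s 33 loudspeakers, and the sketch below makes no claim to. It only illustrates the textbook equal-power pan law between a pair of speakers—the basic idea behind moving a phantom source smoothly from one loudspeaker toward another.

```python
# Illustrative sketch of the equal-power pan law between two loudspeakers.
# A textbook simplification, not Meyer Sound's SpaceMap algorithm.
import math

def equal_power_pan(position):
    """Return (left_gain, right_gain) for a pan position in [0, 1],
    where 0 is hard left and 1 is hard right. The squared gains always
    sum to 1, so perceived loudness stays constant as the source moves."""
    angle = position * math.pi / 2.0
    return math.cos(angle), math.sin(angle)

# A source "moving" from left to right keeps constant total power.
for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
    left, right = equal_power_pan(pos)
    print(f"pan={pos:.2f}  L={left:.3f}  R={right:.3f}  "
          f"power={left**2 + right**2:.3f}")
```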

The room can also help hearing-aid users who complain that their own voice sounds hollow; they can test adjustments to their hearing aids in real time and gauge the improvement. The computers and software that control ViSoR’s sound presentation are completely customizable, and several faculty and graduate students underwent extensive training in programming the room’s unique acoustic environments when it first came online. Students in the new master’s program in sound arts and industries are currently taking a course in ViSoR, designing soundscapes for scientific and artistic purposes.

“There’s great potential both to help patients and to further audiology research,” says Dhar. “We’re only just getting started with ViSoR’s many possibilities.”

Virtual Training in the SimLab


Clinical lecturer Leigh Cohen demonstrating an oral mechanism exam on the SimLab’s infant mannequin

The groundbreaking Lambert Family Simulation Lab in the Roxelyn and Richard Pepper Department of Communication Sciences and Disorders helps audiology and speech-language pathology students try their hands at diagnosis and treatment—without actually seeing human patients.

“The simulation lab offers our students cutting-edge experiences in clinical development,” says Stacy Kaplan, director of the department’s master’s program in speech, language, and learning. “The simulation units and materials provide our students with opportunities for hands-on practice of both foundational and complex patient care concepts before they even enter their externships. It has become an invaluable space for our students to develop the skills needed to provide the highest level of care.”

Made possible by a generous gift from Bill and Sheila Lambert, the SimLab is one of the few labs of its kind in an audiology or speech-language pathology program. Using the lab’s computerized mannequins (a life-sized man and an infant), students can practice diagnosing and treating conditions they’ll encounter when they begin their clinical training with real patients. Kaplan says that the mannequins—controlled by iPads and computers—can be programmed to simulate any number of problems, from a sudden drop in blood pressure to chronic hearing issues.

“The students recognize the important role the simulation lab plays in their development, and they feel incredibly well prepared for their clinical work,” says Kaplan. “We’re so grateful to the Lambert family for their donation. This really sets our program apart, giving students rare and exceptional opportunities to learn.”

- Cara Lockwood