5 RESULTS

Data-driven optimization of parameterized head related impulse responses for the implementation in a real-time virtual acoustic rendering framework

Fenja Schwark1, Stephan D. Ewert1, Marc René Schädler1, Volker Hohmann1,2, Giso Grimm1,2
1Medizinische Physik, Universität Oldenburg, and Cluster of Excellence “Hearing4all”, Oldenburg, Germany
2HörTech gGmbH, Oldenburg, Germany

In real-time virtual acoustic rendering, the head-related directional properties of the receiver, i.e., the listener, are often modeled by convolving the signal with measured head-related impulse responses (HRIRs). …
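The HRIR convolution mentioned in this abstract can be illustrated with a minimal sketch (assuming numpy; the function name and the toy two-tap HRIRs below are hypothetical illustrations, not the authors' rendering framework):

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Render a mono signal to two ear channels by convolving it
    with a measured left/right HRIR pair (hypothetical sketch)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy example: a unit impulse through a contrived HRIR pair.
signal = np.array([1.0, 0.0, 0.0])
hrir_l = np.array([0.9, 0.1])   # near ear: louder, arrives first
hrir_r = np.array([0.0, 0.5])   # far ear: quieter, one sample later
out = render_binaural(signal, hrir_l, hrir_r)
# out[0] is the left-ear signal, out[1] the right-ear signal.
```

A real-time implementation would typically use partitioned block convolution and interpolate between HRIRs as the listener's head moves, rather than a single offline `np.convolve` call.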

Bringing ecological validity to the technical evaluation of hearing aids

Cosima A. Ermert1, Lu Xia2, Brian Man Kai Loong2, Janina Fels1, Sébastien Santurette2,3
1Institute for Hearing Technology and Acoustics, RWTH Aachen University, Aachen, Germany
2Centre for Applied Audiology Research, Oticon A/S, Smørum, Denmark
3Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark

Focusing on target signals in noisy environments is a well-known challenge for hearing-impaired listeners. …

Increasing the ecological validity of speech intelligibility measures using conversational speech and comprehension-targeted questions

Martha M. Shiell1, Sergi Rotger-Griful1, Martin Skoglund1,2, Johannes Zaar1,3, Gitte Keidser1
1Eriksholm Research Centre, Oticon A/S, DK-3070 Snekkersten, Denmark
2Department of Electrical Engineering, Linköping University, Linköping, Sweden
3Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark

Although hearing aids can be effective at restoring sensory input, some users still struggle to benefit from their devices in …

Modelling changes in the process of audiovisual integration

Samuel Smith1,2, Christian J. Sumner1, Thom Baguley1, Paula C. Stacey1
1NTU Psychology, Nottingham Trent University, Nottingham, U.K.
2Hearing Sciences, University of Nottingham, Nottingham, U.K.

The comprehension of speech, whether for normal hearing or aided, is often supplemented by watching a talker’s facial movements. How do auditory and visual cues combine multiple sources of information to provide a …