Fenja Schwark1, Stephan D. Ewert1, Marc René Schädler1, Volker Hohmann1,2, Giso Grimm1,2
1Medizinische Physik, Universität Oldenburg, and Cluster of Excellence “Hearing4all”, Oldenburg, Germany
2HörTech gGmbH, Oldenburg, Germany

In real-time virtual acoustic rendering, the head-related directional properties of the receiver, i.e., the listener, are often modeled by convolving the signal with measured head-related impulse responses (HRIRs). However, the computational cost of HRIR convolution is rather high, even when implemented in the spectral domain, and, depending on the spatial resolution of the HRIR catalogue used, interpolation is needed to simulate all source directions. To reduce the computational cost of low-delay real-time virtual acoustic rendering, this study uses a parameterized digital filter model with delay lines to approximate the direction-dependent features of the head. A data-driven optimization method for the filter parameters is introduced that matches the direction-dependent features of modeled and measured HRIRs using a spectral distance metric. Using an objective binaural speech intelligibility model, it was shown that the intelligibility estimate for the optimized model approaches that for the measured HRIRs. This suggests that the parameterized HRIR model may be sufficient to enable plausible spatial perception in virtual acoustic scenes. For virtual acoustic scenes with a small number of objects, the parameterized HRIR model reduces the computational cost by about two orders of magnitude. Future work will include subjective tests comparing the model against measured HRIRs.
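The abstract does not specify the filter structure, the parameter mapping, or the exact distance metric. As an illustration only, a minimal spherical-head-inspired sketch of such a parameterized HRIR model might combine a fractional-delay line for the interaural time difference, a one-pole low-pass for head shadow, and an RMS log-magnitude spectral distance as a candidate matching metric; all parameter values and mappings below are assumptions, not the paper's method:

```python
import numpy as np

def model_hrir(azimuth_deg, fs=44100, n_taps=64,
               head_radius=0.0875, c=343.0):
    """Toy parameterized HRIR for one ear: a fractional delay line
    for the ITD plus a one-pole low-pass for head shadow.
    All parameter values and mappings are illustrative assumptions."""
    theta = np.deg2rad(azimuth_deg)
    # Woodworth-style interaural time difference, expressed in samples
    delay = fs * (head_radius / c) * (theta + np.sin(theta))
    # Fractional delay implemented as a windowed-sinc FIR
    n = np.arange(n_taps)
    h_delay = np.sinc(n - delay) * np.hanning(n_taps)
    # One-pole low-pass whose cutoff falls as the source moves to the far side
    fc = 200.0 + 5000.0 * (1.0 + np.cos(theta)) / 2.0  # assumed mapping
    b = 1.0 - np.exp(-2.0 * np.pi * fc / fs)
    h_shadow = b * (1.0 - b) ** n
    return np.convolve(h_delay, h_shadow)[:n_taps]

def log_spectral_distance(h_model, h_meas, n_fft=256, eps=1e-12):
    """RMS log-magnitude spectral distance in dB between two impulse
    responses -- one plausible form of a spectral distance metric."""
    H1 = np.abs(np.fft.rfft(h_model, n_fft)) + eps
    H2 = np.abs(np.fft.rfft(h_meas, n_fft)) + eps
    d = 20.0 * np.log10(H1 / H2)
    return float(np.sqrt(np.mean(d ** 2)))
```

In a data-driven optimization of the kind described, the free parameters (here, the delay and cutoff mappings) would be tuned to minimize such a distance between modeled and measured HRIRs over all measured directions. Because the model runs as a short delay line plus a low-order recursive filter per source, its per-sample cost is far below that of a full HRIR convolution, consistent with the reported cost reduction for scenes with few objects.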

Acknowledgments: Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 352015383 – SFB 1330 B1 and C5.