Mengchao Zhang1, Jacques Grange1, John Culling1
1School of Psychology, Cardiff University, Cardiff, United Kingdom

Cochlear synaptopathy is a selective loss of auditory nerve fibers with low spontaneous rates (SR) following noise exposure or aging, and is thought to contribute to hidden hearing deficits, particularly in processing suprathreshold temporal envelopes (TEs). However, evidence for cochlear synaptopathy in humans remains unclear because of the difficulty of documenting noise-exposure history and of selecting sensitive measures. The present study uses a computational model to simulate and examine the impact of cochlear synaptopathy on TE perception. Auditory nerve fibers of different SR classes were selectively deactivated in a physiologically inspired auditory model, and the model's neural signals were then decoded into sound waves for perceptual evaluation. Simulated synaptopathy (deactivating low-SR fibers) was compared with a normal condition, a more severe form of cochlear synaptopathy (deactivating low- and medium-SR fibers), and a loss of high-SR fibers. TE perception was evaluated through amplitude modulation detection, speech recognition in modulated noise, and recognition of unvoiced speech in modulated noise. Overall, cochlear synaptopathy impaired TE perception, whereas deactivating high-SR fibers showed no significant difference from the normal condition. The severity of the impact of synaptopathy depended on task parameters. The modulation detection threshold difference between the normal and synaptopathy conditions decreased from about 13 dB at 16 Hz to about 9 dB at 64 Hz. For the speech tasks, loss of low-SR fibers alone elevated the speech recognition threshold relative to the normal condition by about 1 dB for natural speech, but by about 4.6 dB for unvoiced speech. In summary, the simulation supports the theoretical role of low-SR fibers in coding suprathreshold TEs and shows that sensitive TE measures of cochlear synaptopathy require careful task selection.
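As a minimal illustration of the selective-deactivation idea described above, the sketch below sums toy rate responses from low-, medium-, and high-SR fiber classes and removes classes to mimic synaptopathy. It is not the study's physiologically inspired model or its decoding stage; the fiber counts, rate-level functions, and parameter values are assumptions chosen only to show how deactivating low-SR fibers can reduce the envelope modulation carried by the summed population response at suprathreshold levels.

```python
# Illustrative toy model only; all parameters below are assumed, not taken from the study.
import numpy as np

FS = 16000  # sample rate in Hz (assumed)

def toy_fiber_rate(envelope, spont, sat, half):
    """Crude rate response: spontaneous rate plus a saturating driven component.
    A small 'half' value makes the fiber saturate at low envelope levels."""
    return spont + sat * envelope / (envelope + half)

def simulate_population(envelope, active_classes):
    """Sum rate responses across SR classes, omitting deactivated classes
    (e.g. low-SR fibers) to mimic cochlear synaptopathy."""
    # (spontaneous rate, saturation, half-saturation point, fibers per class) -- illustrative values:
    # high-SR fibers saturate quickly, low-SR fibers keep a wide dynamic range.
    classes = {
        "low":    (0.1,  40.0, 0.50, 4),
        "medium": (5.0,  80.0, 0.20, 4),
        "high":   (60.0, 150.0, 0.05, 12),
    }
    total = np.zeros_like(envelope)
    for name, (spont, sat, half, n_fibers) in classes.items():
        if name in active_classes:
            total += n_fibers * toy_fiber_rate(envelope, spont, sat, half)
    return total

# Sinusoidally amplitude-modulated envelope at 16 Hz, one of the rates used in the detection task.
t = np.arange(0, 0.5, 1.0 / FS)
env = 0.5 * (1.0 + 0.8 * np.sin(2 * np.pi * 16 * t))

conditions = {
    "normal":           {"low", "medium", "high"},
    "low-SR loss":      {"medium", "high"},          # simulated synaptopathy
    "low+med-SR loss":  {"high"},                    # more severe synaptopathy
}

# Compare how much envelope modulation survives in the summed population response.
for label, active in conditions.items():
    resp = simulate_population(env, active)
    depth = (resp.max() - resp.min()) / (resp.max() + resp.min())
    print(f"{label:>16}: modulation depth of summed response = {depth:.3f}")
```

Under these assumed parameters the modulation depth of the summed response shrinks as low-SR and then medium-SR fibers are removed, which is the qualitative effect the abstract attributes to synaptopathy; the actual study evaluated this perceptually after decoding the neural signals back into sound waves.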

Acknowledgements: This study was supported by an EPSRC grant (Grant No. EP/R010722/1).