About

Invited Speakers

Coming soon

2021 Programme

The full programme book is available for download.

  • 23 Aug
  • 24 Aug
  • 25 Aug
  • 26 Aug
  • 27 Aug
  • Podium: Neural bases
  • Parallel: Listening effort
  • Parallel: Cognitive factors
  • Parallel: Neural imaging I
  • Parallel: Hearing aid fitting
13:00 - 13:15 Welcome and introduction. By Torsten Dau, Technical University of Denmark
13:15 - 13:45 Cochlear synaptopathy in noise-induced and age-related hearing loss. By M. Charles Liberman, Mass Eye and Ear

M. Charles Liberman1
1Eaton-Peabody Laboratories, Mass. Eye and Ear, Boston, USA

Most hearing impairment in adults arises from damage to the sensory cells and/or nerve fibers of the inner ear.  This talk will summarize recent research on animal models and human autopsy material showing that, in both noise-induced and age-related hearing loss, the synaptic terminals of cochlear nerve fibers degenerate first, leaving their peripheral targets, the inner hair cells, partially disconnected from the brain. This primary neural degeneration has little effect on hearing thresholds (the audiogram) but affects discrimination of complex sounds like speech.  Because the cell bodies and central projections of the cochlear neurons survive long after loss of their synaptic connections, there is a therapeutic window for repair, as has been shown in animal models using both local delivery, and virally mediated overexpression, of neurotrophins.  The endogenous capacity for repair is low in mice but high in guinea pigs, and the differences may provide further clues for therapeutic approaches.

Acknowledgements: Supported by grants from the NIH (R01 DC00188 and P50 DC015857)

13:45 - 14:05 An encoding-decoding method for studying perception with hearing loss. By Jiayue Liu, Duke University

Jiayue Liu1, Josh Stohl2, Enrique Lopez-Poveda3, Tobias Overath1
1Department of Psychology & Neuroscience, Duke University, Durham, US
2North American Research Laboratory, MED-EL Corporation, Durham, US
3Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain

‘Hidden hearing loss’ has inspired a wealth of research, including into which morphological changes in the auditory periphery might cause this phenomenon, and how. For example, the stochastic undersampling model (Lopez-Poveda & Barrios, 2013) suggests that auditory deafferentation can introduce internal noise in subsequent auditory processing stages. In this model, auditory fibers (AFs) are modelled as samplers, which sample the input sound at individual stochastic rates, and the loss of AFs is mimicked by reducing the number of samplers. However, the parameters used in this model do not capture the full complexity of physiological response characteristics, thus leaving unclear the quantity of information conveyed by the AFs. In our study, half-wave rectification, refractoriness, and three types of AFs are added to the original model to explicitly model AF (type) loss within a more realistic physiological setting. In addition, an artificial-neural-network-based stimulus reconstruction is used to decode the modelled AF responses back to an audio signal (Akbari et al., 2019; Morise et al., 2016). We conducted a pure-tone-in-noise (PTiN) detection task and a modified version of the HINT (Nilsson et al., 1994) via MTurk. The behavioral stimuli were degraded using our model, with three levels of AF loss (0%, 90%, 95%). Preliminary results indicate that the PTiN threshold increases significantly with a decrease in the number of fibers, at a rate that aligns well with predictions from Oxenham (2016). For the HINT, the results only showed a significant threshold shift between the 90% and 95% AF-loss conditions. In conclusion, our model combines detailed physiological response properties with the stochastic undersampling model and thereby enables more realistic artificial lesions of the peripheral auditory pathway (e.g., selective frequency loss or fiber-type loss), and can thus benefit the study of auditory pathology for improving hearing restoration devices.
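To make the sampling idea above concrete, here is a minimal Python sketch of the original stochastic undersampling concept (not the extended model described in the abstract); fiber counts, rates, and signals are arbitrary illustrative choices:

```python
import numpy as np

def stochastic_undersample(x, fs, n_fibers, mean_rate=200.0, seed=None):
    """Toy version of the stochastic undersampling idea: each auditory
    fiber samples the waveform at random instants; deafferentation is
    mimicked by lowering n_fibers, which makes the internal
    representation sparser and noisier."""
    rng = np.random.default_rng(seed)
    n = len(x)
    y = np.zeros(n)
    counts = np.zeros(n)
    for _ in range(n_fibers):
        n_samples = rng.poisson(mean_rate * n / fs)   # samples per fiber
        idx = rng.integers(0, n, size=n_samples)      # random sampling instants
        np.add.at(y, idx, x[idx])
        np.add.at(counts, idx, 1)
    sampled = counts > 0
    y[sampled] /= counts[sampled]
    last = 0.0
    for i in range(n):                                # zero-order hold between samples
        if sampled[i]:
            last = y[i]
        else:
            y[i] = last
    return y

# A 1-kHz tone 'heard' through 500 vs. 25 surviving fibers
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)
for n_fibers in (500, 25):
    y = stochastic_undersample(tone, fs, n_fibers, seed=0)
    print(n_fibers, "fibers: r =", round(np.corrcoef(tone, y)[0, 1], 3))
```

With fewer surviving fibers, the resampled waveform correlates less with the input, which is the model's account of deafferentation-induced internal noise.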

14:05 - 14:20 Break
14:20 - 14:50 Maladaptive central auditory plasticity: A critical arbiter linking cochlear neural degeneration with suprathreshold hearing disorders. By Daniel B. Polley, Mass Eye and Ear

Daniel B. Polley1,2 
1Eaton-Peabody Laboratories, Mass. Eye and Ear, Boston, USA
2Department of Otolaryngology, Harvard Medical School, Boston, USA

Hearing disorders are typically studied and treated from the perspective of wanting to make inaudible sounds audible. Yet three of the most common and debilitating adult hearing complaints reflect just the opposite problem: not what persons cannot hear, but what they cannot stop hearing. Older adults or persons with a history of noise exposure often struggle to suppress the awareness of background noise sources when listening to a target speaker, they are often assaulted by the irrepressible perception of phantom sounds (tinnitus), and they can experience moderate intensity sounds as loud, distressing, or even painful (hyperacusis). Although age, noise exposure, and hearing status are risk factors for these perceptual disorders, their connection is indirect at best, prompting much speculation about the intervening neural processes that may be more closely related. Work from our lab and others shows that an underlying root cause for each of these disorders may be found in a dialog gone wrong between cochlear primary afferent neurons and neurons in sound processing centers of the brain. Our work in animal models has shown that cochlear neural degeneration (CND) triggers a compensatory plasticity at higher stages of the central auditory pathway that often overshoots the mark, rendering neurons hyperactive, hypersensitive, hyper-synchronized, and internally ‘noisy’. Using in situ mRNA profiling, optogenetics, single unit electrophysiology, and calcium imaging in behaving animals, I will show how CND triggers excess central gain in the auditory cortex and how this central pathophysiology directly underlies poor hearing in noise. I will also describe our ongoing efforts to develop physiological biomarkers for these maladaptive central plasticity processes in human subjects, as well as interventions that improve multi-talker speech intelligibility in older adults with sensorineural hearing loss by targeting noisy processing in the brain rather than focusing on the signal transmitted from the ear.

Acknowledgements: The National Institute on Deafness and Other Communication Disorders grants R01-DC009836 and P50-DC015857

14:50 - 15:55 Parallel session

Find more details by clicking on the ‘Parallel’ tabs above.

15:55 - 16:10 Break
16:10 - 16:30 A physiologically-based model for pitch based on fluctuations in auditory-nerve responses: Effects of sensorineural hearing loss. By Laurel H. Carney, University of Rochester

Laurel H. Carney1
1Depts. of Biomedical Engineering and Neuroscience, University of Rochester, Rochester, New York, USA

The goal of this project is to explore representations of pitch using physiological models for auditory-nerve (AN) and midbrain (inferior colliculus, IC) neurons. An established model for ‘central pitch’ (Goldstein, 1973, JASA) requires a robust neural response profile that corresponds to the spectrum of a harmonic tone complex. Representations based on auditory-nerve excitation patterns (rate vs. place profiles) change with sound level and are not robust in background noise. However, f0-related fluctuations in AN responses are robust both across levels and in noise. These peripheral fluctuations ultimately influence responses of IC neurons, for which a key property is amplitude-modulation tuning. Because IC neurons are sensitive to slow fluctuations of their inputs, the fluctuation profiles set up in the periphery map into rate-profiles across IC neurons (Carney, 2018, JARO). Thus, the population responses of IC neurons provide the input required by central pitch models. Here, the representation of periodicity pitch by model midbrain neurons will be tested for several stimuli. Estimates of pitch discrimination thresholds and pitch strength can be made based on model responses. Importantly, the fluctuation profiles in AN responses depend upon inner-ear nonlinearities that are affected by sensorineural hearing loss (SNHL). Specifically, the ‘flattening’ of fluctuations near harmonics in tone complexes, or near spectral peaks of complex sounds, is reduced when cochlear amplification and/or inner-hair-cell sensitivity is reduced. Thus, SNHL reduces the contrast in fluctuation amplitudes across AN frequency channels, and diminishes this mechanism for coding features of complex sounds. Effects of SNHL on pitch discrimination and pitch strength can thus be studied in this physiological-modeling framework. This effort builds on the pitch-discrimination modeling work in Bianchi et al. (2018, JARO), but takes advantage of the central pitch model to map physiological-model responses into decision variables for pitch-related tasks.

Acknowledgements: This work is supported by NIH-NIDCD-R01-010813.
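As a rough illustration of what a ‘fluctuation profile’ is, here is a sketch under simplifying assumptions (not Carney's AN/IC model): the toy ‘rate functions’ below are sinusoidally modulated stand-ins for model auditory-nerve outputs, and the readout is the magnitude of the f0-rate fluctuation per channel:

```python
import numpy as np

def fluctuation_profile(rate_functions, fs, f0):
    """Magnitude of f0-related fluctuation in each model AN channel.
    rate_functions: array (n_channels, n_samples) of instantaneous
    firing rates; returns one value per channel (the 'profile' that,
    per Carney (2018), maps onto IC rate-place codes)."""
    n = rate_functions.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / fs)
    k = np.argmin(np.abs(freqs - f0))           # spectral bin nearest to f0
    spec = np.abs(np.fft.rfft(rate_functions, axis=1)) / n
    return 2 * spec[:, k]                       # fluctuation amplitude at f0

# Toy stand-in for AN rates: channels with different modulation depths at f0
fs, f0, dur = 10000, 100.0, 0.5
t = np.arange(int(fs * dur)) / fs
depths = np.linspace(0.1, 0.9, 5)               # 'flattening' = low depth
rates = np.stack([100 * (1 + d * np.sin(2 * np.pi * f0 * t)) for d in depths])
print(fluctuation_profile(rates, fs, f0))       # grows with modulation depth
```

In the full model, SNHL flattens these fluctuations near harmonics, reducing the across-channel contrast that the IC rate-place representation relies on.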

16:30 - 17:00 Diagnostic tools for differing sites of lesion. By Karen P. Steel, King’s College London

Neil J. Ingham1, Clarisse H. Panganiban1, Navid Banafshe1, Christopher J. Plack2, Karen P. Steel1
1Wolfson Centre for Age-Related Diseases, King’s College London, London SE1 1UL, UK
2Manchester Centre for Audiology and Deafness, The University of Manchester, Manchester M13 9PL, UK

Any clinical trial for hearing treatments would benefit from improved stratification of the participants according to the pathophysiology underlying their hearing loss, but this need becomes more acute as molecular or small molecule approaches are developed. Three major categories of cochlear pathology in age-related hearing loss were proposed by Schuknecht & Gacek (1993): sensory (hair cell dysfunction), metabolic (stria vascularis dysfunction) and neural (auditory neuron defects). Non-invasive methods for distinguishing hair cell from synaptic/neural defects have been proposed, although there are concerns regarding sensitivity and specificity. However, methods for identifying a strial defect are not well-established beyond audiogram shapes (Dubno et al. 2013), yet these are important to detect because they will require different treatment approaches, and there is little point in treating a hair cell or a synaptic defect if the primary pathology is a dysfunctional stria. In the mouse we have the advantage of a better understanding of the underlying pathology compared with humans. We are using a set of mouse mutants with known initial sites of lesion to search for diagnostic tools based on objective electrophysiological measures. The mutants have a primary strial dysfunction and reduced endocochlear potential (S1pr2stdf), a primary inner hair cell defect (Klhl18lowf), a primary outer hair cell defect (Slc26a5tm1(EGFP/cre/ERT2)Wtsi) and a primary synaptic abnormality with swelling of synaptic boutons under inner hair cells (Wbp2tm1a). So far, we have used features of ABRs and DPOAEs to distinguish between inner and outer hair cell defects but not strial dysfunction (Ingham et al. 2020). We continue investigating ABR waveform features, frequency tuning, forward masking responses, increasing click repetition rates, tone-in-noise responses and inter-trial coherence tests as non-invasive objective measures that have the potential to be translated into a human diagnostic test.

Schuknecht HF & Gacek MR (1993). Ann. Otol. Rhinol. Laryngol. 102:1-16.
Dubno JR et al. (2013). JARO 14:687-701.
Ingham NJ et al. (2020). ARO Abstracts 43:92.

Acknowledgements: Supported by RNID.

14:50 - 15:00 Traces of pinnae-perking related to increased listening effort. By Ronny Hannemann, WS Audiology

Andreas Schroeer1, Ronny Hannemann2, Farah Corona-Strauss1, Daniel Strauss1
1Systems Neuroscience and Neurotechnology Unit, Faculty of Medicine, Saarland University & School of Engineering, htw saar, Germany
2Audiological Research Unit, WS Audiology – Sivantos GmbH, Erlangen, Germany

Recently, we demonstrated that human brains retain a circuitry for orienting the pinnae during goal-directed attention to sustained speech, which is reflected in sustained electrical activity of four different muscles within the vestigial auriculo-motor system.

During voluntary effortful orienting, we observed in particular an upward movement (“perking”) of the pinna that was not apparent during reflexive automatic orienting.

In the current exploratory study, we examined whether the observed auriculo-motor activity patterns depend on the amount of listening effort needed to follow a sustained speech signal in a complex acoustic background. We asked n = 13 subjects to attend to one audiobook narrated by a female speaker, embedded in the context of other audiobooks (varying in pitch and number) played simultaneously.

In line with our hypothesis, only the activity pattern of the muscle responsible for the “perking” during voluntary direction of attention showed differences between easy and difficult conditions, whereas the other auricular muscles did not show differential patterns.

Although further research is needed, the results complement our knowledge about the function of the human vestigial circuitry for orienting the pinnae.

15:00 - 15:10 Baseline pupil size affects the temporal dynamics of the task-evoked pupillary response in a speech-in-noise listening task. By Helia Relaño-Iborra, Technical University of Denmark

Helia Relaño-Iborra1,2, Dorothea Wendt2,3, Mihaela-Beatrice Neagu2, Abigail Anne Kressner2,4, Torsten Dau2, Per Bækgaard1
1Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark
2Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
3Eriksholm Research Center, Oticon, Snekkersten, Denmark
4Copenhagen Hearing and Balance Center, Rigshospitalet, Copenhagen, Denmark.

Pupillometry data are commonly reported relative to a baseline value recorded in a controlled pre-task condition. Baseline correction aims to factor out the tonic response from the task-evoked (phasic) pupil response. However, a clear understanding of the factors that influence the baseline pupil size as well as the effect of baseline correction on the phasic responses is still lacking. In this study, we investigated the influence of the experimental design and the listeners’ expectation of the task difficulty on the baseline values as well as the relationship between baseline pupil size and the temporal dynamics of the pupil response after baseline correction. Data from 27 normal-hearing listeners from Wendt et al. (2018) [Hear. Res., 369, 67–78] were analyzed. The stimuli consisted of Danish HINT sentences presented in two different noise maskers at several signal-to-noise ratios (SNR). Blocks of 25 trials were used for each SNR condition, with the block order randomized across listeners. Each trial included 3 seconds of noise alone, followed by the sentence in noise. The baseline was defined as the mean pupil size during the last second of the noise-alone segment of each trial. A mixed-effects model applied to the baseline values revealed strong significant effects of block order, trial order and SNR as well as a significant effect of the interaction between SNR and block order. Additionally, we found a significant effect of the baseline on the slope, delay and curvature of the pupillary response, but not on the mean pupil size nor the peak pupil dilation after baseline correction. The results suggest that baseline correction might be adequate when reporting pupillometry results in terms of peak pupil dilation or mean pupil dilation, but not when a more complex characterization of the temporal dynamics of the response is required.
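For readers unfamiliar with the procedure, here is a minimal Python sketch of the subtractive baseline correction described above (sampling rate and pupil values are hypothetical, not the study's data):

```python
import numpy as np

def baseline_correct(trial, fs, noise_dur=3.0, baseline_win=1.0):
    """Subtractive baseline correction as used in the study: the
    baseline is the mean pupil size over the last second of the 3-s
    noise-alone segment that precedes each sentence."""
    b0 = int((noise_dur - baseline_win) * fs)
    b1 = int(noise_dur * fs)
    baseline = np.nanmean(trial[b0:b1])
    return trial - baseline, baseline

# Hypothetical 7-s trial (3 s noise alone + sentence in noise), 60-Hz tracker
fs = 60
rng = np.random.default_rng(0)
trial = 4.0 + 0.05 * rng.standard_normal(7 * fs)   # pupil size in mm (toy data)
corrected, baseline = baseline_correct(trial, fs)
print(f"baseline = {baseline:.3f} mm, corrected mean = {corrected.mean():.3f} mm")
```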

15:10 - 15:20 Pupillary responses and working memory capacity as predictors of subsequent memory recall in an auditory free recall test. By Andreea Micula, Oticon A/S

Andreea Micula1,2, Jerker Rönnberg2, Patrycja Książek3,4, Reena Murmu Nielsen1, Dorothea Wendt4,5, Lorenz Fiedler4, Elaine Hoi Ning Ng1,2
1Oticon A/S, Smørum, Denmark
2Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
3Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
4Eriksholm Research Centre, Snekkersten, Denmark
5Hearing Systems Group, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark

In the current study, an auditory free recall test was combined with pupillometry to investigate whether task-evoked pupillary responses measured during encoding can predict which items will be subsequently recalled. In addition, the effect of individual working memory capacity on subsequent memory recall was investigated. Participants with mild to moderately severe symmetrical sensorineural hearing loss (n = 21) were included. The Sentence-final Word Identification and Recall (SWIR) test was administered in a speech-babble noise. The task involves listening to lists of seven sentences, repeating the last word immediately after each sentence and recalling as many of the repeated words as possible at the end of the list. Pupillometry was recorded while the participants listened to the sentences and encoded the target words. The task-evoked peak pupil dilation (PPD) was measured. The Reading Span (RS) test was used as a measure of individual working memory capacity. The PPD and RS test score were found to be significant predictors of subsequent memory recall. Larger PPD and higher RS test scores were associated with higher likelihood of subsequent memory recall. The interaction between PPD and RS test score was not significant. The magnitude of the PPD presumably reflects the intensity of attentional processing devoted to words during encoding, which affects the likelihood of subsequent memory recall. Furthermore, individuals with higher working memory capacity are able to allocate more attentional resources during encoding, which results in a higher probability of subsequent memory recall.

15:20 - 15:30 Investigating the dynamic range of the pupil response in relation to changes in the signal-to-noise ratio during a speech-in-noise task. By Mihaela-Beatrice Neagu, Technical University of Denmark

Mihaela-Beatrice Neagu1, Abigail Anne Kressner1,4, Torsten Dau1, Per Bækgaard3, Helia Relaño Iborra3, Dorothea Wendt1,2
1Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
2Eriksholm Research Center, Oticon, Snekkersten, Denmark
3Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark
4Copenhagen Hearing and Balance Center, Rigshospitalet, Copenhagen, Denmark

The reliability of pupillometry as an indicator of listening effort (LE) has previously been shown to be higher than that of other physiological and subjective measures (e.g., the NASA Task Load Index, NASA TLX) of LE. Previous studies have examined pupil dilation during a speech-in-noise test as a function of signal-to-noise ratio (SNR) on a group level, reporting an increase in pupil size with decreasing SNR (indicating an increase in LE) until very challenging SNRs are reached, after which the pupil size decreases (indicating disengagement). However, most of these studies analyzed the effects on a group level rather than individually, and none have actually reported the dynamic range of the pupil response with changing SNR. The present study examined the change in the pupil response for a given change in SNR (ΔSNR) at the individual listener level. Specifically, the pupil dilation of 31 normal-hearing listeners was recorded while performing a speech-in-noise test at SNRs ranging from -12 dB to 4 dB in two separate visits, and 11 of the listeners were also tested in a third visit. The dynamic range of different pupil features (peak pupil dilation, PPD, and mean pupil dilation, MPD) is subsequently analyzed as a function of ΔSNR using logistic regressions and compared to the dynamic range of a subjective measure (NASA TLX) as a reference. Additionally, this study examines the reliability of the observed dynamic range across visits. Overall, the results of this study aim to provide insights into changes in individuals’ LE by assessing the dynamic range of the pupil response for a given ΔSNR as compared to a more subjective measure of LE. Understanding these aspects is important for the development of pupillometry towards a standardized tool for assessing individual LE for rehabilitation purposes.

15:30 - 15:40 Eye-movement patterns of hearing-impaired listeners measure comprehension of a multitalker conversation. By Martha M. Shiell, Eriksholm Research Centre

Martha M. Shiell1, Teresa Cabella1, Gitte Keidser1, Diederick C. Niehorster2, Marcus Nyström2, Martin Skoglund1,3, Simon With1, Johannes Zaar1,4, Sergi Rotger-Griful1
1Eriksholm Research Centre, Oticon A/S, DK-3070 Snekkersten, Denmark
2Humanities Lab, Lund University, Lund, Sweden
3Department of Electrical Engineering, Linköping University, Linköping, Sweden
4Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark

The ability to understand speech in complex listening environments reflects an interaction of cognitive and sensory capacities that is difficult to capture with behavioural tests. The study of natural listening behaviours may lead to the development of new metrics that better reflect real-life communication abilities. To this end, we investigated the relationship between speech comprehension and eye movements among hearing-impaired people in a challenging listening situation. While previous research has investigated the effect of background noise on listeners’ gaze patterns with single talkers, the effect of noise in multitalker conversations remains unknown. Recently, we presented data exploring this question at the 180th Meeting of the Acoustical Society of America. In the current presentation, we will share an update on the results of this experiment, with recently tested participants added to the analysis. In our experiment, participants viewed video recordings of two life-sized talkers engaged in an unscripted dialogue. Hearing loss ranged from moderate to severe. We used multiple-choice questions to measure participants’ comprehension of the conversation in multitalker babble noise at three different signal-to-noise ratios. All participants made saccades between the two talkers at a rate exceeding that of the talkers’ conversational turns. This measure tended to correlate positively with participants’ comprehension scores, but the effect was significant in only one signal-to-noise ratio condition. A post-hoc investigation suggests that the intertalker saccade rate is driven by an interaction of hearing ability and conversational turn-taking events, which will be discussed further.

Acknowledgements: This work was financially supported by the Swedish Research Council (Vetenskapsrådet), grant VR 2017-06092, ‘Mekanismer och behandling vid åldersrelaterad hörselnedsättning’ (Mechanisms and treatment in age-related hearing loss).

15:40 - 15:55 Panel discussion
14:50 - 15:00 Revisiting auditory profiling: Can cognitive factors improve the prediction of aided speech-in-noise outcome? By Mengfan Wu, University of Southern Denmark

Mengfan Wu1,2, Stine Christiansen1,2, Michal Feręczkowski1,2, Tobias Neher1,2
1Institute of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
2Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital, Odense, Denmark; University of Southern Denmark, Odense, Denmark

Hearing aids (HA) are the main rehabilitation treatment for age-related hearing loss. However, HA users often obtain limited benefit from their devices, particularly in noisy environments, and many HA candidates therefore do not use them at all. A possible reason is that current HA fittings are audiogram-based, that is, they neglect supra-threshold factors. In an earlier study, an auditory profiling method was proposed as a basis for a more personalized HA fitting approach. This method classifies HA users into four profiles that differ in terms of hearing sensitivity and supra-threshold hearing abilities. Previously, HA users belonging to these profiles showed significant differences in speech recognition in noise but not in subjective assessments of speech-in-noise (SIN) outcome. In addition, large individual differences were observed within some profiles. The purpose of the current study was to investigate whether cognitive factors can help explain these differences and improve aided outcome prediction. Thirty-nine older HA users completed three sets of tests of auditory abilities, cognitive abilities, and SIN perception. Principal component analyses were applied to extract the dominant sources of variance, both within individual tests producing a large number of variables and across the three sets of tests. Multiple linear regression analyses performed on the extracted components showed that auditory factors were related to aided speech recognition but not to subjective SIN assessments. Cognitive factors were unrelated to aided outcome. Overall, these findings do not support the idea of adding cognitive assessment to the profiling of HA candidates.

15:00 - 15:10 The influence of auditory processing skills on working memory in older adults with age-appropriate hearing. By Katrien Vermeire, Long Island University Campus Brooklyn

Katrien Vermeire1, David M. Landsberger2
1Department of Communication Sciences and Disorders, Long Island University Campus Brooklyn, New York, New York, USA
2Department of Otolaryngology, New York University School of Medicine, New York, New York, USA

Several studies have suggested that hearing loss in the older population is independently associated with poorer cognitive functioning. However, both cross-sectional and prospective studies have reported conflicting results. An explanation might be that hearing impairment is typically assessed via the audiogram, which is a poor predictor of cognitive functioning. Pure-tone audiometry measures pure-tone detection, not the ability to use sounds in a meaningful way. Therefore, an auditory measure that assesses the ability to use, rather than merely detect, auditory information might be more appropriate for assessing hearing ability. Suprathreshold psychoacoustic tasks, such as measures of temporal resolution, have been proposed as better measures of hearing quality.

The aim of this study was to investigate the contributions of age, hearing ability as measured by standard audiometry, and temporal acuity to cognitive functioning in older adults. Sixteen older adults (between 60 and 80 years of age) with age-appropriate hearing participated in this study. Auditory processing was investigated using the Gaps-in-Noise (GIN) test, which focuses on temporal processing ability and is relatively independent of audibility. Cognitive functioning was measured using working memory tests: verbal working memory was tested with a Reading Span Test, and a Corsi Block-Tapping Test was used to assess the visuo-spatial modality of working memory.

Temporal processing ability (as measured by the GIN) was correlated with performance on the tests of verbal and visuo-spatial working memory. However, neither age nor audiometric hearing thresholds correlated with the working memory metrics. Furthermore, multiple linear regressions found no additional benefit of including age or thresholds in the model. The data suggest that the ability to process auditory information may be more closely linked to cognitive function than the ability to detect auditory signals, which is what standard evaluations of hearing loss typically measure.

15:10 - 15:20 Streamlining experiment design in cognitive hearing science using OpenSesame. By Eleonora Sulas, Oticon Medical

Eleonora Sulas1, Pierre-Yves Hasan2, Yue Zhang1, François Patou2
1Oticon Medical, Vallauris, France
2Oticon Medical, Smørum, Denmark

Research in auditory science increasingly relies on concepts and testing paradigms from behavioural psychology and cognitive neuroscience. The evolution of auditory research towards Cognitive Hearing Science (CHS) aims to address the gaps in basic understanding of fundamental interactions between peripheral and central auditory processes and the need for evaluating new hearing interventions against meaningful cognitive and psychobehavioural outcomes.

Experimental paradigms for CHS may therefore call for the use of hybrid cognitive and psychobehavioural tests, such as those relating the attentional system, working memory capacity, and executive functioning to the auditory modality. Experimentalists may also seek to relate these tests to objective measures acquired through modalities such as EEG, gaze tracking, or pupillometry. Building such complex custom CHS experiments can rapidly become time-consuming and error-prone. Platform-based experimental design can help streamline the implementation of CHS experimental paradigms, promote the standardization of experiment design practices, and ensure reliability and control over timing. We introduce a set of features built on the open-source Python-based OpenSesame platform that allows the rapid implementation of custom behavioural and cognitive hearing science tests, including complex multichannel audio stimuli and adaptive procedures, while interfacing with various synchronous inputs/outputs (I/Os), e.g., the Pupil Labs eye-tracking glasses. Our integration includes advanced audio playback capabilities with multiple loudspeakers, an adaptive procedure, and compatibility with standard I/Os and their synchronization through an implementation of the Lab Streaming Layer protocol. We exemplify the capabilities of this extended OpenSesame platform with an implementation of a three-alternative forced-choice amplitude modulation detection test and discuss the reliability and performance of the newly introduced plugins.
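The abstract does not spell out the adaptive rule used; a common choice for a three-alternative forced-choice (3-AFC) task is a 2-down-1-up staircase, sketched below in plain Python with a simulated listener standing in for an actual OpenSesame trial (threshold and step size are arbitrary assumptions):

```python
import random

def three_afc_trial(mod_depth_db, threshold_db=-20.0):
    """Simulated listener: detects the AM interval if the modulation
    depth (in dB, 20*log10(m)) is above an assumed internal threshold;
    otherwise guesses among the 3 intervals. Stands in for a real
    trial with audio playback."""
    if mod_depth_db >= threshold_db:
        return True
    return random.random() < 1 / 3

def staircase(start_db=-6.0, step_db=2.0, n_reversals=8):
    """2-down-1-up adaptive track (converges near 70.7% correct)."""
    level, correct_in_row, direction = start_db, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if three_afc_trial(level):
            correct_in_row += 1
            if correct_in_row == 2:          # two correct -> make it harder
                correct_in_row = 0
                if direction == +1:
                    reversals.append(level)  # direction flipped: a reversal
                direction = -1
                level -= step_db
        else:
            correct_in_row = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step_db                 # one wrong -> make it easier
    return sum(reversals[-6:]) / 6           # threshold from last reversals

print(f"estimated AM detection threshold: {staircase():.1f} dB")
```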

15:20 - 15:30 Auditory processing and working memory in older and young adults with normal hearing. By Vaishnavi Ramadas, Sri Ramachandra Institute of Higher Education and Research

Vaishnavi Ramadas1, Ramya V1, Ajith Kumar U2, Sathianathan R3
1Department of Speech, Language and Hearing Sciences, Sri Ramachandra Institute of Higher Education and Research, Chennai, India
2Department of Audiology, All India Institute of Speech and Hearing, Mysuru, India
3Department of Psychiatry, Sri Ramachandra Institute of Higher Education and Research, Chennai, India

One of the most common communication concerns in older adults is understanding speech in the presence of noise. Studies report that older adults have poorer working memory and poorer perception of the temporal cues used for speech perception in quiet and in noise compared with young adults, which may result in a lower quality of life in the older population. Understanding speech in background noise is influenced by working memory. The objective of this study was to compare the binaural integration, temporal processing, auditory separation/closure and working memory skills of older adults and young adults with normal hearing and cognition. These auditory processes were assessed using the dichotic digit test, duration pattern and pitch pattern tests, the Gaps-In-Noise test, the Tamil Matrix Sentence Test in quiet and noise, and temporal fine structure sensitivity using the TFS1 software. Working memory was assessed using forward and backward span tests, operation span, N-back and running span tests. A cross-sectional study design was employed. Twenty-five native Tamil-speaking young adults (18-25 years) and 25 older adults (56-79 years) with hearing thresholds below 25 dB HL and MoCA scores greater than 26 were recruited. The results of this paper will shed light on the differences in auditory and cognitive skills between older and young adults. Insight into the auditory and cognitive profiles of these two groups will aid audiologists in understanding the underlying deficits leading to difficulty in speech understanding in older adults and in targeting appropriate auditory/cognitive processes during rehabilitation.

15:30 - 15:55 Panel discussion
14:50 - 15:00 Investigating peripheral contributions to the frequency following response using electrocochleography. By Miguel Temboury Gutierrez, Technical University of Denmark

Jonatan Märcher-Rørsted1, Miguel Temboury Gutierrez1, Gerard Encina-Llamas1, Jens Hjortkjær1,2, Torsten Dau1
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
2Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, DK-2650 Hvidovre, Denmark

Sound perception, in particular for speech and music, relies on accurate neural encoding of fast sound fluctuations. Neurons in the healthy auditory periphery and brainstem are able to phase-lock to these temporal fluctuations. Despite this being a fundamental aspect of the healthy hearing system, the limits of phase locking in humans remain unclear. Neuronal mass activity can be recorded as electrical activity by electrodes placed on the scalp (i.e., electroencephalography, EEG). Phase-locked EEG responses to the carrier or the envelope of an auditory stimulus constitute the frequency following response (FFR). FFRs are reduced with increasing age. The reduction of FFR amplitude with age, particularly in listeners with clinically normal thresholds, has previously been attributed to a decline in temporal processing due to desynchronization in the brainstem. However, motivated by outcomes from a recent modeling study, neural degeneration in the cochlea could also account for such FFR reduction. If this were supported experimentally, FFRs could be used as a biomarker of peripheral neural degeneration in humans. In the present study, FFRs in young and older clinically normal-hearing (NH) adults were recorded simultaneously using two electrode montages: a traditional vertical EEG montage, mainly sensitive to central sources, and electrocochleography using tympanic membrane electrodes and ear canal electrodes to capture mainly peripheral sources. FFRs were recorded using tone bursts at two frequencies (516 and 1086 Hz) and two durations (10 and 250 ms, respectively). Auditory brainstem responses (ABRs) with the same electrode montages were also recorded. First results indicate that it may be possible to disentangle peripheral from central sources in both the FFRs and the ABRs using this recording technique. If this is further validated, the technique may clarify the peripheral vs. central contributions to the reduction of FFR amplitudes in older participants.
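As an illustration of how FFR strength at the stimulus frequency can be quantified from epoched recordings, here is a simplified Python sketch with synthetic data (the montage signal-to-noise ratios are invented for the example, not results from the study):

```python
import numpy as np

def ffr_magnitude(epochs, fs, f_stim):
    """FFR strength at the stimulus frequency: average the epochs
    (time-locked to tone-burst onset), then read the spectral
    magnitude at f_stim from the averaged response."""
    avg = epochs.mean(axis=0)                    # averaging cancels non-phase-locked noise
    freqs = np.fft.rfftfreq(len(avg), 1 / fs)
    spec = np.abs(np.fft.rfft(avg)) / len(avg)
    return spec[np.argmin(np.abs(freqs - f_stim))]

# Toy comparison of two montages for the 516-Hz tone burst
fs, f_stim, n_ep, n_smp = 8192, 516.0, 2000, 2048
rng = np.random.default_rng(0)
t = np.arange(n_smp) / fs
signal = 0.05 * np.sin(2 * np.pi * f_stim * t)   # phase-locked component
for name, gain in [("vertex EEG", 0.5), ("tympanic-membrane", 1.5)]:
    epochs = signal * gain + rng.standard_normal((n_ep, n_smp))
    print(name, ffr_magnitude(epochs, fs, f_stim))
```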

15:00 - 15:10 The effects of aging on the cortical representation of continuous speech. By I. M. Dushyanthi Karunathilake, University of Maryland

I. M. Dushyanthi Karunathilake1, Jason L. Dunlap2, Janani Perera2, Alessandro Presacco3, Lien Decruy4, Samira Anderson2, Stefanie E. Kuchinsky5, Jonathan Z. Simon1,4,6
1Department of Electrical and Computer Engineering, University of Maryland, College Park, USA
2Department of Hearing and Speech Sciences, University of Maryland, College Park, USA
3Department of Electrical Engineering, Universidad Nacional Autónoma de México, Mexico City, Mexico
4Institute for Systems Research, University of Maryland, College Park, USA
5Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, USA
5Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, USA
6Department of Biology, University of Maryland, College Park, USA 

The ability to selectively attend to speech in a noisy environment is crucial for everyday interactions, but this skill becomes more challenging with aging. Yet, the effects of aging on the neural mechanisms underlying selective attention and speech-in-noise perception are not well understood. In this magnetoencephalography (MEG) study, we investigate how continuous speech is represented in the cortex in 18 younger and 17 older adults while they attend to one speaker and ignore another, at different signal-to-noise ratios. The low-frequency (1-10 Hz) neural responses that track the low-frequency speech envelope, both attended and unattended, were investigated using temporal response functions (TRFs) and envelope reconstruction. The TRFs showed three prominent peaks (M50TRF, M100TRF and M200TRF), representing distinct auditory processing stages. Compared to younger adults, older adults exhibited enhanced speech envelope tracking and TRF peak amplitudes, possibly due to several mechanisms. Envelope tracking decreased with task difficulty, and aging further affected this reduction. Integration window analysis revealed that the overrepresentation starts as early as ~50-100 ms and that longer integration time windows were needed for older adults to achieve maximal reconstruction accuracy. With regard to the TRFs, M50TRF was relatively early and M200TRF was delayed in older adults, suggesting that speech perception is altered from the earliest processing stages. Further, only in older adults did increasing task difficulty enlarge the peak amplitude of M100TRF and reduce that of M200TRF, while latencies were delayed in both groups. This latter effect was even larger with aging, suggesting that additional cortical processing is engaged in difficult listening situations. Interestingly, M200TRF amplitudes correlated negatively with latencies in older adults, suggesting that other neural mechanisms contribute to M200TRF modulation. In sum, these results reveal age-related temporal processing deficits and late cortical processing (200 ms) that potentially compensates for age-related impairments in speech perception.

Acknowledgements: This work was supported by the National Institutes of Health grants P01-AG055365 and R01-DC014085. The views expressed in this abstract are those of the author and do not reflect the official policy of the Department of Army/ Navy/ Air Force, Department of Defense, or U.S. Government.
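For orientation, here is a minimal sketch of ridge-regularized TRF estimation, the kind of linearized envelope-tracking model referred to above (toy synthetic data; the lag range and regularization strength are arbitrary choices, not the study's settings):

```python
import numpy as np

def estimate_trf(envelope, response, fs, tmin=-0.05, tmax=0.4, ridge=1.0):
    """Ridge-regularized temporal response function: the linear kernel
    mapping the speech envelope to the neural response over lags
    tmin..tmax. Peaks of this kernel correspond to components like the
    M50/M100/M200 discussed in the abstract."""
    lags = np.arange(int(tmin * fs), int(tmax * fs))
    # Design matrix: one time-shifted copy of the envelope per lag
    X = np.stack([np.roll(envelope, lag) for lag in lags], axis=1)
    XtX = X.T @ X + ridge * np.eye(len(lags))
    trf = np.linalg.solve(XtX, X.T @ response)
    return lags / fs, trf

# Toy data: the 'neural' response is a delayed copy of the envelope + noise
fs, n = 100, 6000
rng = np.random.default_rng(1)
env = np.convolve(rng.standard_normal(n), np.ones(10) / 10, mode="same")
meg = np.roll(env, int(0.1 * fs)) + 0.5 * rng.standard_normal(n)
times, trf = estimate_trf(env, meg, fs)
print("peak lag (s):", times[np.argmax(trf)])   # should recover ~0.1 s
```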

15:10 - 15:20 Speech-evoked cortical responses in normal-hearing listeners and experienced hearing-aid users. By Tobias Neher, University of Southern Denmark

Vivi Tran1, Louise Plougheld1, Pushkar Deshpande1,2, Tobias Neher1,2
1Department of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
2Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital, Odense, Denmark; University of Southern Denmark

Electrophysiological measurements can be used for investigating how speech sounds are processed in the brain. This can provide insights into the cortical processes underlying speech perception, which may improve hearing rehabilitation. The aim of the current study was to investigate how speech sounds are processed by normal-hearing listeners and experienced hearing-aid users. Fifteen young normal-hearing listeners and eight experienced hearing-aid users participated. The hearing-aid users were bilaterally fitted with Widex Evoke Fusion 440 devices according to NAL-NL2 target gains. N100, P300, N400 and Late Positive Complex (LPC) responses were elicited using the digit stimuli from the DANTALE-I material. All measurements were carried out in the presence of speech-shaped noise at 67 dB SPL. The N100 and P300 responses were elicited using an active oddball paradigm. The N400 and LPC responses were elicited using an ‘arithmetic’ paradigm based on congruent and incongruent digit sequences. While the N100, P300 and LPC latencies were comparable with literature data, the N400 response occurred approx. 100 ms earlier than in studies based on ‘linguistic’ paradigms. No group differences in terms of amplitudes or latencies of the different EEG components were found. Follow-up studies with larger and more homogeneous groups of hearing-aid users will facilitate a better understanding of how speech sounds are processed in the brains of such listeners. Performing such measurements before and after hearing-aid intervention will provide prognostic indicators with respect to treatment outcome.

15:20 - 15:30 Test-retest reliability of subcortical responses to continuous speech in older hearing-impaired adults. By Florine L. Bachmann, Technical University of Denmark

Florine L. Bachmann1, Jens Hjortkjær1,2
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
2Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital, Denmark

The application of EEG-based linearized encoding models for measuring neural responses to continuous speech has recently been extended from cortical to subcortical auditory processing. The subcortical response to continuous speech approximates the auditory brainstem response (ABR) to clicks and holds the potential to complement current clinical hearing assessments. This approach also enables the simultaneous assessment of subcortical and cortical responses, which could further shed light on the effects of age and hearing loss on the auditory system. However, for both clinical and research applications, it is crucial to investigate the reliability of such speech-derived subcortical responses in a clinical population. We therefore measured subcortical responses to continuous speech six times in a sample of 11 (5 female) older (68.45 ± 7.19 years) experienced bilateral hearing aid users with symmetric mild to moderate hearing loss. On different days, participants were presented with either identical or different speech materials to investigate consistency. Conventional click ABRs and frequency-following responses (FFRs) were also assessed for comparison with traditional methods. We discuss the test-retest reliability of the latency and amplitude of the wave-V component of the speech-ABR and its relation to conventional click ABRs. We further discuss potential clinical applications, such as the evaluation of noise-reducing algorithms in hearing assistive devices or device fitting.
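The abstract does not name a specific reliability statistic; one common choice for test-retest designs like this is the intraclass correlation, sketched here in Python on synthetic wave-V latencies (all numbers invented for illustration):

```python
import numpy as np

def icc_oneway(measurements):
    """One-way random-effects intraclass correlation, ICC(1,1): a
    common test-retest reliability index. measurements is an array
    (n_subjects, n_sessions), e.g. wave-V latencies from six
    speech-ABR recordings per participant."""
    m = np.asarray(measurements, float)
    n, k = m.shape
    grand = m.mean()
    ms_between = k * ((m.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((m - m.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Toy wave-V latencies (ms) for 11 subjects x 6 sessions
rng = np.random.default_rng(2)
trait = 7.0 + 0.5 * rng.standard_normal((11, 1))        # stable subject latency
sessions = trait + 0.1 * rng.standard_normal((11, 6))   # session-to-session noise
print(f"ICC(1,1) = {icc_oneway(sessions):.2f}")
```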

15:30 - 15:55 Panel discussion
14:50 - 15:00 Clinical implementation of the Better hEAring Rehabilitation (BEAR) new strategies to improve hearing aid fitting process. By Oscar M Cañete, Technical University of Denmark

Oscar M Cañete1, Amalie T Stubberup2, Lotte S E Petersen3, Raul H Sanchez-Lopez1, Jens Bo Nielsen1, Katja Lund4, Rodrigo Ordoñez4, Jesper H Schmidt2, Dan D Hougaard3, Rikke Schnack-Petersen2, Michael Gaihede3, Dorte Hammershøi4, Gérard Loquet3,4
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
2Department of Oto-rhino-laryngology, Odense University Hospital, Odense, Denmark
3Department of Otolaryngology, Head and Neck Surgery and Audiology, Aalborg University Hospital, Aalborg, Denmark
4Department of Electronic Systems, Signals and Information Processing, Aalborg University, Aalborg, Denmark

Hearing aids are the most common treatment for hearing loss. Currently, hearing aid fitting depends heavily on the hearing care professional’s experience and the patient’s feedback. The goal of the BEAR project is to improve this process by developing new clinical methods for diagnostics and hearing aid fitting. The purpose of the present study was to implement a new individualized approach clinically in order to validate the BEAR strategy. To that end, 79 adults (mean age of 69.1 years) with symmetric sensorineural hearing loss but no previous experience with hearing aids were recruited from two Danish hospitals (Aalborg and Odense). The participants underwent four visits scheduled as follows: 1) hearing examination including basic hearing tests; 2) auditory profiling through speech perception, binaural processing, loudness perception and spectro-temporal resolution; 3) hearing aid fitting with individualized compensatory strategies, real-ear measures and aided performance tests; 4) follow-up after two months with retest of the aided performance, real-ear measures and hearing aid adjustments when needed. Between the last two visits, participants were encouraged to complete an on-line tool for registering their daily-life experiences. Results show that, beyond the collection of several predictors, challenges arose when applying such methods in a clinical environment. Technical difficulties appeared, for example, due to the specific IT systems deployed in each hospital. Training was necessary to run the tests, and video tutorials and on-site instructions were provided. As a result, no missing data were observed for the profiling and aided measures, while for the real-ear measurements there was significant inter-examiner variability. This translated into problems reaching the targets when fitting hearing aids with the BEAR strategy. As a whole, however, the clinical implementation of the BEAR strategy is feasible, provided that factors such as personnel training and technical assistance are in place to guarantee smooth running.

Acknowledgments: This work was supported by Innovation Fund Denmark Grand Solutions 5164-00011B (BEAR project).

15:00 - 15:10 Development and evaluation of a method to determine the loudness perception of hearing aid users for natural signals. By Theresa Jansen, HörTech gGmbH

Theresa Jansen1, Laura Hartog1, Melanie Krueger1, Dirk Oetting1
1HörTech gGmbH, Oldenburg, Germany

Complaints about the loudness settings of hearing aids after the first fit are frequent. The audiologist attempts to resolve them during fine-tuning with the help of the client’s problem description. In order to increase satisfaction, a normal loudness perception for binaural broadband signals should be pursued already during the first fit.
In this study, a method for measuring and assessing individual aided loudness perception with natural signals was developed and evaluated. This procedure should detect deviations from normal loudness perception and indicate in which frequency ranges (low, medium, high) and level ranges (50, 65, 80 dB) a correction of the amplification is necessary. The procedure is designed for use during the initial fitting of hearing aids, before clients test the hearing aids in real-life situations.
For this purpose, 60 natural signals with different spectral characteristics were selected. In addition to signals with their main energy in the low, medium, or high frequencies, signals with speech-like spectra were also selected.
These signals were presented via loudspeakers at different levels and rated by the subjects on a scale from “not heard” to “extremely loud”.
First, the measurement was carried out with 33 normal-hearing listeners in order to define the reference range of loudness ratings. The evaluation study with hearing-impaired subjects was conducted with two groups: the first group showed particularly high loudness summation and the second particularly low loudness summation, with both groups having similar audiometric hearing losses. Loudness assessments were performed unaided and with hearing aids fitted according to NAL-NL2. The results show lower-than-normal loudness perception in listeners with low loudness summation and higher-than-normal loudness perception in listeners with high loudness summation when fitted with NAL-NL2.

15:10 - 15:20 Comparison of preferred number of channels of hearing aids across different age groups. By Nivedha Rao, All India Institute of Speech and Hearing

Nivedha Rao1, Vikas Mysore Dwarakanath1, Prawin Kumar1
1All India Institute of Speech and Hearing, Mysuru, India

The use of multichannel compression hearing aids can optimize audibility across different frequencies. Several reports in the literature have explored the effect of increasing the number of hearing-aid channels on speech perception, with mixed opinions on whether more or fewer channels improve or degrade it. Hence, the present study aimed to examine the effect of the number of channels on hearing-aid prescription across different age groups. A total of 1606 individuals aged 0.3 to 97 years (mean age 45.8 years) were included; all underwent a complete audiological evaluation and were given trials with various hearing aids. The hearing aids were prescribed based on trial performance and subjective preference. The data were divided by age into children (0-18 years), young adults (19-35 years), middle-aged adults (36-60 years), and older adults (over 61 years). The retrospective analysis revealed that children with hearing loss performed better with higher-channel hearing aids, varying from 8-9 up to 16 channels. Young adults preferred hearing aids with 6-9 channels, while middle-aged and older adults preferred 4-6 channels. As age advanced, the preferred number of channels decreased, suggesting that individuals were comfortable with speech processing through fewer channels as they aged and found this beneficial.

15:20 - 15:30 Modeling daily hearing instrument use from EVOTION hearing instruments with audiometric profiles from the BEAR project. By Niels H. Pontoppidan, Eriksholm Research Centre

Niels H. Pontoppidan1, Jeppe H. Christensen1, Raul Sanchez-Lopez2,7, Athanasios Bibas3, Louisa Murdin4, Apostolos Economou5, Doris-Eva Bamiou6
1Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
2Interacoustics Research Unit, Interacoustics A/S, Lyngby, Denmark
3University of Athens, Athens, Greece
4Guys and St. Thomas NHS Trust, London, United Kingdom
5Athens Medical Group, Athens, Greece
6University College London, London, United Kingdom
7Hearing Systems Section, Dept. Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark

Data logging from recent hearing instruments enables deeper analysis of usage based on detailed information about time, scenes, and hearing loss. This study examines logging data from 5000 days and 200 participants, where the daily sound environment is characterized by the average sound pressure level (SPL), the average signal-to-noise ratio (SNR), and the duration of usage logged that day. The participants’ hearing deficits are described by their audiograms, which were clustered into four audiometric groups (a-d) based on the aggregate low-frequency (HTLF) and high-frequency (HTHF) hearing thresholds, as a simple approximation of the auditory profiles of the BEAR project. The analysis aims to explore the impact of the sound environment and people’s hearing deficits on hearing-aid usage, to better understand the real-life situations in which hearing instruments are used. It employs a linear mixed-effects model with random effects for degree of hearing loss and for the average daily usage per subject, and fixed effects for average SPL and SNR. The main analysis shows that individuals with a high degree of both HTLF and HTHF loss (group c) use their hearing instruments less than average, while the groups with a high degree of only HTLF (group d) or only HTHF (group b) loss use their hearing instruments more than average. The analysis also indicates that a louder daily sound environment leads to longer daily usage, with a large spread in individual usage. Finally, the analysis confirms prior findings, as the results indicate that previous experience with hearing instruments also leads to longer duration of use. The outcomes of this analysis provide a more detailed understanding of how hearing-instrument usage is influenced by type of hearing loss, previous use of hearing instruments, and the daily sound environment.

Acknowledgements: This work received funding from Innovation Fund Denmark through the BEAR project and from the European Union’s Horizon 2020 Research and Innovation programme under grant agreement 727521.
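A minimal sketch of the kind of linear mixed-effects model described above, using statsmodels with synthetic data (the column names and effect sizes are hypothetical, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the logging data: one row per subject-day,
# with daily average SPL, SNR, and an audiometric group label (a-d).
rng = np.random.default_rng(0)
n_subj, n_days = 200, 25
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_days),
    "group": np.repeat(rng.choice(list("abcd"), n_subj), n_days),
    "day_spl": rng.normal(65, 5, n_subj * n_days),
    "day_snr": rng.normal(5, 3, n_subj * n_days),
})
df["usage_hours"] = 8 + 0.05 * (df["day_spl"] - 65) + rng.normal(0, 1, len(df))

# Fixed effects for SPL, SNR and audiometric group; a random intercept
# per subject absorbs each individual's average daily usage.
model = smf.mixedlm("usage_hours ~ day_spl + day_snr + C(group)",
                    data=df, groups=df["subject"])
print(model.fit().params)
```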

15:30 - 15:40 Probing first-fit experiences in adult new hearing aid users. By Rodrigo Ordoñez, Aalborg University

Rodrigo Ordoñez1, Katja Lund1, Jens Bo Nielsen2, Palle Rye1, Oscar M. Cañete2, Amalie T. Stubberup3, Lotte S. E. Petersen4, Jesper H. Schmidt3, Dan D. Hougaard4, Michael Gaihede4, Gérard Loquet3,4, Dorte Hammershøi1
1Department of Electronic Systems, Aalborg University, Aalborg, Denmark
2Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
3Department of Oto-rhino-laryngology, Odense University Hospital, Odense, Denmark
4Department of Otolaryngology, Head and Neck Surgery and Audiology, Aalborg University Hospital, Aalborg, Denmark

In order to investigate hearing-aid user experience, patients participating in a clinical trial from the Better hEAring Rehabilitation project are given the opportunity to register their experiences with their new hearing aids during a two-month period between the initial fitting and a follow-up session. Experiences are registered using an on-line system that allows users to report whether or not they have experienced a series of predefined sentences (user atoms) related to everyday use of hearing aids. These user atoms are presented individually, each requiring a response from the patient. As part of the clinical trial, patients are also administered aided-performance tests in a controlled sound environment. These tests are carried out prior to reporting experiences in the on-line system, both after the initial fitting and at the two-month follow-up session. The present contribution presents the initial data collected with the on-line system with respect to uptake of the system, frequency of use, response rate, probabilities of positive experiences and progression over time. Furthermore, a comparison of the results from the aided-performance tests and the experiences registered in the on-line system is presented for a cohort of 53 patients who participated in a pilot study of the clinical trial. The aim of the data analysis is to investigate how hearing-aid patients use the system and to compare in-clinic aided-performance measures with out-of-clinic self-reported experiences of use.

Acknowledgements: The work has been done as a part of the Better Hearing Rehabilitation Project funded by Innovation Fund Denmark and Partners (including Force Technology, Oticon, GN Hearing, and Widex Sivantos Audiology). Funding and collaboration are sincerely appreciated. The project number is 5164-00011B.

15:40 - 15:55 Panel discussion
  • Podium: Developing system
  • Parallel: Pediatrics
  • Parallel: Hearing aid signal processing
  • Parallel: Speech perception
  • Parallel: Psychoacoustics & Perception I
13:00 - 13:05 Introduction
13:05 - 13:35 Predicting 9-year language ability from preschool speech recognition in noise in children using cochlear implants. By Teresa Y. C. Ching, National Acoustic Laboratories

Teresa Y. C. Ching1,2, Linda Cupples2, Mark Seeto1, Vicky Zhang1,2, Carmen Kung1,2
1National Acoustic Laboratories, Sydney, Australia
2Department of Linguistics, Macquarie University, Sydney, Australia

The presence of congenital permanent childhood hearing loss (PCHL) reduces auditory access to spectral and temporal cues in the speech signal, thereby influencing the development of auditory processing and language abilities in children. Despite early detection and intervention, weaknesses in listening in noise and in language development in children with PCHL have been documented. Even though concurrent relationships between these abilities have been examined, there has been little research on how the ability to recognize speech in noise develops from preschool to school age, or on the direction of the relationship between speech recognition and language abilities. Increased knowledge about the development of these abilities and their potentially predictive relationship has important implications for the theoretical understanding of the mechanisms that underlie development after treatment of sensory deprivation (in this case, providing cochlear implants to children with profound PCHL), as well as clinical implications for guiding the management of children to improve outcomes. In this paper, the influence of speech recognition at age 5 years on language ability at age 9 years will be examined using cross-lagged correlation analyses of data from a group of children with profound hearing loss who were followed as part of the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study.

Acknowledgements: The project described was partly supported by Award Number R01DC008080 from the National Institute on Deafness and Other Communication Disorders. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute on Deafness and Other Communication Disorders or the National Institutes of Health. The project was also supported by the Commonwealth of Australia through the Office of Hearing Services and the HEARing Cooperative Research Centre.

13:35 - 13:55Does exposure to noise in military service affect the progression of hearing loss with increasing age? By Brian C. J. MooreUniversity of Cambridge

Brian C. J. Moore1, David A. Lowe2
1Cambridge Hearing Group, Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
2ENT Department, James Cook University Hospital, Marton Rd, Middlesbrough, Cleveland, TS4 3BW, UK

It is commonly believed that the effects of exposure to noise cease once the exposure itself has ceased. If this is the case, exposure to noise relatively early in life, for example during military service, should not affect the subsequent progression of hearing loss. However, recent data from studies using animals suggest that noise exposure can accelerate the subsequent progression of hearing loss. In this paper I review data from published studies on the effects of noise exposure on the progression of hearing loss once noise exposure has ceased, particularly for the case of noise exposure during military service. I also present some new longitudinal data obtained from military personnel. The results are consistent with the idea that noise exposure during military service accelerates the progression of hearing loss at frequencies where the hearing loss is absent or mild at the end of military service, but has no effect on or slows the progression of hearing loss at frequencies where the hearing loss exceeds about 50 dB. Acceleration appears to occur over a wide frequency range, including 1 kHz. However, there is a need for further longitudinal studies using a larger number of subjects. Longitudinal studies are also needed to establish whether exposure to other types of sounds, for example at rock concerts, affects the subsequent progression of hearing loss.

13:55 - 14:10Break
14:10 - 14:40Development of voice perception from childhood to adulthood in normal hearing and with cochlear implants By Deniz BaşkentUniversity of Groningen

Deniz Başkent1,2
1Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
2University of Groningen, Graduate School of Medical Sciences, Research School of Behavioral and Cognitive Neurosciences, Groningen, Netherlands

Perception of a speaker’s voice is important not only for talker identification, but also for assessing their emotional state and for better understanding their speech, by adapting to their speaking style or segregating their speech from background interfering speakers. Our focus on voice perception is further motivated by our interest in users of cochlear implants (CIs), whose speech communication relies on electrically transmitted speech signals that are inherently degraded in their spectro-temporal details. Previous research with post-lingually deafened and implanted adult CI users has shown that this typical CI group seems to have difficulties in tasks related to voice perception. Therefore, to be able to improve on this problem, we have to understand the specific perceptual mechanisms related to voice perception. For this purpose, we have been collecting data on the development of voice perception from childhood to adulthood. More specifically, in our PICKA (Perception of Indexical Cues in Kids and Adults) project we have been investigating voice cue perception, vocal gender categorization, voice emotion perception, and speech-on-speech perception. In these experiments, we focus on vocal pitch (F0, related to glottal pulse rate) and vocal tract length (VTL, related to formants), as these voice cues can largely affect the perceived gender of a speaker and can be manipulated, separately or together, in recordings from a single speaker using speech synthesis tools (e.g., STRAIGHT). Further, we have been testing various groups as different learning models, such as early-deafened children and adults implanted early or later in life, who have to learn voice cues directly via their implant. This systematic approach has revealed many interesting observations. Namely, mechanisms related to voice cue use seem to develop slowly in childhood over many years, and the effectiveness of voice cue use seems to differ greatly across child and adult users of CIs.

Acknowledgements: This presentation is partially based on the PhD thesis work of Leanne Nagels, in collaboration with Petra Hendriks, Etienne Gaudrain, Debi Vickers, Christina Fuller, and Rolien Free. We are thankful for funding from the Center for Language Cognition Groningen and Mandema (Univ. Groningen, NL), VICI Grant 918-17-603 (ZonMw, NWO, NL), and Senior Fellowship Grant S002537/1 (MRC, UK).

14:40 - 15:45Parallel sessions

Find more details by clicking on the ‘Parallel’ tabs above.

15:45 - 16:00Break
16:00 - 16:20Effects of hearing loss on interaural time difference sensitivity at low and high frequencies By Virginia BestBoston University

Virginia Best1, H. Steven Colburn1, Lucas S. Baltzell1
1Department of Speech, Language and Hearing Sciences, Boston University, Boston, USA

While many studies have reported a loss of sensitivity to interaural time differences (ITDs) carried in the fine structure of low-frequency signals for listeners with hearing impairment (HI), relatively few data are available on the perception of ITDs carried in the envelope of high-frequency signals in this population. The few studies that exist found stronger effects of hearing loss at high frequencies than at low frequencies, but small subject numbers and several confounding effects prevented strong conclusions from being drawn. In the present study, we revisited this question while addressing some of the issues identified in previous studies. First, we focused on “rustle” stimuli that contain strong envelope fluctuations at high frequencies and thus have the potential to provide salient envelope ITDs. Second, we carefully equated sensation level across listeners and tested two different levels per listener, to better characterize effects of level. Third, we included young listeners in the HI group to tease apart effects of hearing loss and age. ITD discrimination thresholds were measured for 15 HI listeners and 10 listeners with normal hearing (NH). The stimuli were octave-band-wide rustle stimuli centered at 500 Hz or 4 kHz, which were presented at 20 dB or 40 dB sensation level. Broadband rustle stimuli and 500-Hz pure-tone stimuli were also tested. Overall, hearing loss had a detrimental effect on ITD discrimination. For the majority of HI listeners, the ITD deficit relative to the NH group was equivalent at low and high frequencies. For a handful of HI listeners, the deficit was strongly frequency-dependent. The results provide new data to inform binaural models that incorporate effects of hearing impairment.

Acknowledgements: This work was supported by NIH-NIDCD award DC015760.

16:20 - 16:50Auditory cortex plasticity supports social learning By Dan H. SanesNew York University

Dan H. Sanes1,2,3, Nihaad Paraouty1
1Center for Neural Science, New York University, New York, USA
2Department of Psychology, New York University, New York, USA
3Department of Biology, New York University, New York, USA

The acquisition of new skills, including aural communication, can be facilitated when a naïve observer is exposed to a conspecific performing a well-defined behavior (i.e., social learning). Although the neural bases for auditory social learning remain uncertain, one plausible hypothesis is that social experience induces long-term changes to auditory cortex response properties, thereby facilitating the subsequent acquisition of an auditory skill. To explore this idea, we developed a social learning paradigm in which naïve Observer gerbils are exposed to a Demonstrator gerbil that is performing an amplitude modulation (AM) rate discrimination task across an opaque divider. Thus, Observers have access only to auditory cues (i.e., the AM sounds, Demonstrator vocalizations, movement-associated sounds). When exposed to a Demonstrator for five successive days, Observer gerbils subsequently acquire the AM task more rapidly than controls (Paraouty et al., 2020). We first asked whether auditory cortex activity is necessary for social learning. Auditory cortex was bilaterally inactivated in Observers during each of the five daily exposures to the Demonstrator. These Observers did not benefit from social exposure, suggesting a necessary role for auditory cortex. To determine whether neural plasticity was induced by the social experience, we recorded wirelessly from the Observer’s auditory cortex during each of the five daily exposure sessions with the Demonstrator. Auditory cortex neurons displayed a significant improvement in AM discrimination across the five days of social exposure. Furthermore, the magnitude of neural improvement correlated with an animal’s subsequent rate of task acquisition. Together, these findings suggest that auditory cortex plasticity plays a pivotal role in social learning.

14:40 - 14:50Extended high frequency hearing impairment in children: Association with cochlear function in lower frequencies and speech-in-noise recognition By Srikanta MishraUniversity of Texas Rio Grande Valley

Udit Saxena1, Hansapani Rodrigo2, Srikanta Mishra3
1Department of Audiology & Speech Language Pathology, GMES Medical College & Hospital, Ahmedabad, India
2School of Mathematical and Statistical Sciences, The University of Texas Rio Grande Valley, Edinburg, USA
3Department of Communication Sciences & Disorders, University of Texas Rio Grande Valley, Edinburg, USA

Humans can hear up to 20 kHz. Compared to adults, children have excellent hearing in the extended high frequencies (EHFs). Emerging reports suggest that EHFs contribute to speech-in-noise recognition in children who have normal hearing in the standard frequencies (0.25 through 8 kHz) and EHFs (10-16 kHz). However, the effect of EHF hearing impairment in children remains unclear. This case-control study aimed to answer how EHF hearing loss in children is related to cochlear function in lower frequencies and to speech-in-noise perception. We measured hearing thresholds in the standard frequencies and EHFs, distortion product otoacoustic emissions, and digit-triplets-in-noise recognition in children (n=542; 4-19 years) with clinically normal audiograms. Thirty-eight children had some degree of impairment (>15 dB HL) for at least one EHF. Children with EHF impairment had relatively higher thresholds in the standard frequencies even though they had clinically normal audiograms. Otoacoustic emissions in the 2-5 kHz region predicted EHF hearing status and were lower for EHF-impaired children than for children with no EHF impairment. EHF impairment had a small but statistically significant effect on the speech recognition threshold when age effects were adjusted for using a linear mixed-effects model. There was no effect of otitis media history, although a history of pressure equalization tube surgery was an exclusion criterion. These findings suggest that despite a normal audiogram, EHF hearing impairment is common in children and is associated with pre-clinical cochlear deficits and poor speech-in-noise recognition. The results will be discussed in detail.
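
As a rough illustration of the age-adjusted analysis described above, a linear mixed-effects model of this kind can be fitted as sketched below; the data, column names, effect sizes, and random-effects structure are all hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_children = 100  # hypothetical sample, two test runs per child

df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_children), 2),
    "age": np.repeat(rng.uniform(4, 19, n_children), 2),
    "ehf_impaired": np.repeat(rng.integers(0, 2, n_children), 2),
})
# Synthetic digit-triplet SRT (dB SNR): worse (higher) with EHF impairment.
df["srt"] = (-8 - 0.1 * df["age"] + 0.8 * df["ehf_impaired"]
             + rng.normal(0, 1, len(df)))

# Fixed effects of EHF status and age; random intercept per child.
model = smf.mixedlm("srt ~ ehf_impaired + age", df, groups=df["subject"])
print(model.fit().summary())
```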

14:50 - 15:00Evaluation of hearing aid benefit in children with ANSD and SNHL By Apoorva Prathibha K. S.All India Institute of Speech and Hearing

Apoorva Prathibha K. S.1, K. V. Nisha1, Ajith U. Kumar1
1Department of Audiology, All India Institute of Speech and Hearing (AIISH), Naimisham Campus, Manasagangothri, Mysuru, 570006, India

The variability in rehabilitative outcomes of children with ANSD has intrigued audiologists over the past two decades. The current study aimed to compare hearing aid outcomes in children with ANSD and SNHL. Retrospective data from the medical reports of 24 participants aged 1-4.9 years were used for the study. The participants were divided into two groups based on the pathophysiology of the deficit: ANSD (n = 12, mean age: 2.43±1.21y SD) and SNHL (n = 12, mean age: 2.43±1.21y SD). Group equivalency in hearing-aid functional benefit (aided thresholds within the speech banana) and the availability of data at three evaluations (baseline, follow-up 1, and follow-up 2) were preliminary inclusion criteria for both groups; the diagnosis of the former group was based on the recommendations of Starr et al. (2000), and the latter group comprised SNHL children who were age-matched to the former group. Hearing aid benefit was quantified as the difference between the unaided and aided thresholds of the better ear, obtained at the three evaluations for four frequencies (500, 1000, 2000 and 4000 Hz). A two-way repeated-measures ANOVA (4 frequencies × 3 evaluations) revealed no significant main effect of frequency [F(3,66)=0.12, p>0.05] or follow-up [F(2,44)=0.61, p>0.05], but a significant effect of group [F(1,66)=0.12, p=0.001] and significant interaction effects [group×follow-up: F(2,44)=8.31, p=0.001, ηp2=0.03; group×frequency: F(3,66)=5.17, p=0.003, ηp2=0.19; group×frequency×follow-up: F(6,132)=4.11, p=0.001, ηp2=0.16]. Post-hoc independent t-tests showed that the hearing aid benefit in the SNHL group was significantly higher (p<0.05) than in the ANSD group at 500, 1000 and 2000 Hz at baseline, and at 500 Hz at follow-up 1. This disparity in hearing aid benefit (despite group equivalency in age, degree of loss, and functional hearing-aid outcomes) can be traced to the neural asynchrony in ANSD, leading to variable outcomes.
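
The benefit measure and the within-group part of the ANOVA described above can be sketched as follows on synthetic, balanced data; the group factor in the study makes the full design mixed, which would require a mixed ANOVA or a mixed model instead.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in range(12):  # hypothetical group of 12 children
    for freq in (500, 1000, 2000, 4000):
        for visit in ("baseline", "followup1", "followup2"):
            unaided = rng.normal(85, 5)  # hypothetical threshold, dB HL
            aided = rng.normal(45, 5)
            rows.append({"subject": subj, "freq": freq, "visit": visit,
                         "benefit": unaided - aided})  # hearing-aid benefit
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA (frequency x evaluation) within one group.
print(AnovaRM(df, depvar="benefit", subject="subject",
              within=["freq", "visit"]).fit())
```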

15:00 - 15:10Further development of a Danish test material for assessing speech recognition in noise in school-age children By Shno KoiekUniversity of Southern Denmark

Signe Hjorth Fogh1,2, Shno Koiek1,2, Tobias Neher1,2
1Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
2Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital, Odense, Denmark; University of Southern Denmark, Odense, Denmark

Recently, a Danish sentence material for assessing speech recognition in noise in school-age children (the ‘børneDAT’ material) was developed. To allow this material to be used clinically, age-specific normative data are required. The aim of the current study was to collect such data for participants aged 6-7, 7-8, 8-9, 9-10, 10-11, 11-12, 12-13 and 20-30 years. Another aim was to assess the test-retest reliability of the collected data. Seventy-four children and 12 adults participated. Speech recognition thresholds (SRTs) were measured twice, at two different visits, in each of four conditions. In the first two conditions, diotic stationary speech-shaped noise was used. The target speech was presented diotically (N0S0) or interaurally out-of-phase (N0S180). In the other two conditions, two running speech maskers were used. The target speech was presented from 0° and the two speech maskers from either 0° (co-located) or ±90° (spatially separated). In general, the SRTs decreased with increasing age, with the groups younger than 10-11 years obtaining higher SRTs than the adults. The SRTs were lowest in the N0S180 and spatially separated conditions (means: −6.7 and −6.6 dB SNR, respectively) and highest in the co-located condition (mean: −1.8 dB SNR). The within-subject standard deviation of the measurements was smallest in the N0S0 condition (1.0 dB SNR) and largest in the spatially separated condition (1.3 dB SNR). Training effects were on the order of 1-1.5 dB. In conclusion, these results demonstrate the applicability of the børneDAT material for assessing speech recognition in noise in children as young as 6 years.

15:10 - 15:20Do school-age children with a history of recurrent otitis media show binaural speech-in-noise processing deficits? By Shno KoiekUniversity of Southern Denmark

Shno Koiek1,2, Jens Bo Nielsen3, Harvey Dillon4,5, Christian Brandt1,2, Tobias Neher1,2
1Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
2Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital, Odense, Denmark; University of Southern Denmark, Odense, Denmark
3Hearing Systems, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
4Department of Linguistics, Macquarie University, Sydney, Australia
5Manchester Centre for Audiology and Deafness, The University of Manchester, Manchester, United Kingdom.

Several studies have reported negative long-term effects of early-childhood otitis media (OM) on the ability to exploit binaural information for segregating competing speech signals. Whether corresponding effects also manifest themselves at lower levels of auditory processing has not been investigated systematically. The aim of the current study was to investigate the long-term effects of OM on speech recognition in stationary noise or competing speech, with or without binaural differences between the target and the maskers. Another aim was to investigate the effects of the individual otologic history on these results. Children aged 6-13 years either with (N = 42) or without a history of OM (controls, N = 20) participated. For the measurements in stationary noise, the target speech was presented from 0° and the masker from 90°. Speech reception thresholds (SRTs) were then measured with the stimuli presented either binaurally or monaurally to the ear opposite to the noise. For the measurements with competing speech, binaural SRTs were measured with the target speech presented from 0° and two speech maskers presented from either 0° or ±90°. For each set of measurements, binaural advantage scores were also calculated. To investigate the influence of the individual otologic history, an index of overall OM severity (OMS) was derived from the children’s otologic records based on OM onset age, overall OM duration, and time since the last OM episode. The OM children showed poorer monaural and binaural SRTs in stationary noise as well as poorer binaural SRTs with speech maskers from ±90° compared to the controls. The binaural advantage scores were similar between the two groups. Consequently, the hearing deficits in noise in OM children appear to arise from lower levels of the auditory system. After controlling for age, the OMS index was not a significant predictor of the hearing-in-noise abilities of the OM children.

15:20 - 15:45Panel discussion
14:40 - 14:50Audiovisual speech separation with deep neural networks and independent feature optimization By Jens HjortkjærTechnical University of Denmark

Chuan Wen1, Nicolai Pedersen1, Jens Hjortkjær1,2
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
2Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital – Amager and Hvidovre, Copenhagen, Denmark

Acoustic separation of constituent speech streams from multi-talker mixtures is a challenging task for man and machine. Recent studies have shown that visual information from the speaker’s face can aid automatic speech separation systems. Audio-visual speech separation systems based on deep neural networks have achieved high speaker-independent performance with only a single microphone channel. However, large-scale datasets are required to obtain satisfactory results, which introduces heavy computational overhead. To address this problem, we propose an audio-visual deep-learning-based speech separation framework that decouples the audio-visual fusion process from the separation model. Audio-visual features are first optimized independently using Correlational Neural Networks (CorrNet). Visual features extracted from the fusion stage are subsequently applied to the separation network, constituting ‘visual hints’ at the target speech. For two-talker mixtures, our audio-visual separation framework achieves an average scale-invariant source-to-distortion ratio (SI-SDR) improvement of 8.09 dB. This performance rivals current state-of-the-art separation systems that rely on substantially larger networks and more training data.
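
The SI-SDR improvement quoted above is a standard figure of merit; a minimal sketch of its computation on synthetic signals is given below (the abstract does not specify the authors' evaluation pipeline, so only the metric itself is shown).

```python
import numpy as np

def si_sdr(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Scale-invariant source-to-distortion ratio in dB."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Project the estimate onto the reference to obtain the scaled target.
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    noise = estimate - target
    return 10.0 * np.log10(np.sum(target**2) / np.sum(noise**2))

# SI-SDR improvement: separated output vs. unprocessed mixture, both
# scored against the clean target (all signals hypothetical).
rng = np.random.default_rng(0)
clean = rng.normal(size=16000)
mixture = clean + rng.normal(size=16000)          # interfering talker as noise
separated = clean + 0.1 * rng.normal(size=16000)  # stand-in for a model output
print(f"SI-SDRi = {si_sdr(separated, clean) - si_sdr(mixture, clean):.2f} dB")
```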

14:50 - 15:00Objective evaluation of dynamic range compression in adverse acoustic conditions using a data-driven distance metric By Niels OverbyTechnical University of Denmark

Niels Overby1, Torsten Dau1, Tobias May1
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark

Dynamic range compression aims to restore audibility for hearing-impaired listeners and is one of the most essential building blocks in modern hearing aids. However, the choice of suitable compression parameters, such as the time constants associated with the level estimation stage, depends on the acoustic conditions, and the perceptual benefit of different parameter configurations is still controversial. Listening tests can provide an accurate assessment of the perceptual effects of compression in a limited set of acoustic conditions, but they are time-consuming and can therefore not be used to optimize the various compression parameters across experimental conditions. While several studies have attempted to link the perceptual outcomes of dynamic range compression to a set of objective metrics, there is no agreement on how to objectively quantify the effects of compression. In the current study, a data-driven distance metric was developed based on objective metrics to analyze different compression systems. This analysis included slow-acting, fast-acting, and ‘scene-aware’ compression that adaptively switched between fast- and slow-acting compression depending on the target source activity. In addition, a reference system termed ‘source-independent compression’ was considered that had access to the individual speech and noise signals. A comprehensive list of objective metrics was considered to evaluate the effect of the different compression systems in a wide variety of acoustic conditions, including both interfering noise and room reverberation. Sparse principal component analysis (PCA) was then applied to derive a compact set of interpretable features that explained the effects of compression as linear combinations of sparsely selected objective metrics. The Euclidean distance, within the reduced dimensionality representation, was used to compare the similarity between the compression systems. This newly developed distance metric allows a systematic analysis and optimization of the parameters of dynamic range compression systems by minimizing the Euclidean distance with respect to the source-independent compression system.
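
A minimal sketch of the distance construction described above, assuming a matrix of objective-metric scores; all names, dimensions, and parameter values here are hypothetical stand-ins for the study's actual metrics and conditions.

```python
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Rows: processed conditions (compression system x acoustic scene);
# columns: objective metrics. Synthetic stand-in data.
metrics = rng.normal(size=(40, 12))
z = StandardScaler().fit_transform(metrics)  # put metrics on a common scale

# Sparse PCA yields components that load on only a few metrics each,
# keeping the reduced features interpretable.
spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
features = spca.fit_transform(z)

# Euclidean distance of each condition to a chosen reference row
# (here arbitrarily row 0, standing in for source-independent compression).
dist = np.linalg.norm(features - features[0], axis=1)
print(dist.round(2))
```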

15:00 - 15:10Technical evaluation of impulse noise reduction in hearing aids By Hendrik HusstedtGerman Institute of Hearing Aids

Hendrik Husstedt1, Wiebke Hilgerdenaar1,2, Marlitt Frenz1, Florian Denk1, Jürgen Tchorz3, Simone Wollermann1
1German Institute of Hearing Aids, Lübeck, Germany
2Universität zu Lübeck, Lübeck, Germany
3Technische Hochschule Lübeck, Lübeck, Germany

Hearing aid users often report that short, impulse-like noise signals with high sound pressure levels, such as the slamming of a door or the rattling of dishes, are particularly annoying. Although the risk of presenting excessive sound levels is covered by the output limiter of hearing aids, features such as general noise reduction or automatic gain control are expected to have no effect on such impulse signals. Therefore, some hearing aids provide an impulse (or transient) noise reduction that is meant to reduce loud and short noise signals without impairing desired signals, e.g., speech. In this work, we performed a technical evaluation of the impulse noise reduction of commercially available hearing aids from six different manufacturers. For this purpose, we made anechoic recordings of a set of impulse noises, presented them to the hearing aids attached to a head and torso simulator (KEMAR), and determined the C-weighted peak sound pressure levels at the output of the hearing aids. The hearing aids were fitted to a hearing loss of type N3 (according to IEC 60118-15), and the binaural coupling was activated. Various hearing aid configurations were considered in which the general noise reduction, the automatic gain control, the output limiter, and the impulse noise reduction were activated or deactivated. Moreover, the impulse noises were presented in quiet and within a speech pause of the international speech test signal (ISTS). The results give detailed insights into the performance of the impulse noise reduction in hearing aids of multiple manufacturers, for different impulse noises, in quiet and within speech, and into the interaction with other hearing aid features.
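
For illustration, the C-weighted peak level of a calibrated recording can be obtained roughly as sketched below, using the analog C-weighting response from IEC 61672 mapped to the digital domain with a bilinear transform (a common approximation that deviates from the standard near the Nyquist frequency); the test signal and its level are hypothetical.

```python
import numpy as np
from scipy import signal

def c_weighted_peak_db(x: np.ndarray, fs: float, p_ref: float = 20e-6) -> float:
    """C-weighted peak level in dB(C) of a pressure signal x in pascals."""
    f1, f4 = 20.598997, 12194.217  # IEC 61672 pole frequencies (Hz)
    z = [0.0, 0.0]                 # two zeros at DC
    p = [-2 * np.pi * f1] * 2 + [-2 * np.pi * f4] * 2
    # Normalize the analog gain to 0 dB at 1 kHz.
    w1k = 2j * np.pi * 1000.0
    k = 1.0 / abs(w1k**2 / np.prod(w1k - np.array(p)))
    zd, pd, kd = signal.bilinear_zpk(z, p, k, fs)
    y = signal.sosfilt(signal.zpk2sos(zd, pd, kd), x)
    return 20.0 * np.log10(np.max(np.abs(y)) / p_ref)

# Hypothetical 10-ms burst (~2 Pa peak) within a second of silence.
fs = 48000
x = np.zeros(fs)
x[24000:24480] = 2.0 * np.hanning(480)
print(f"{c_weighted_peak_db(x, fs):.1f} dB(C)")
```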

15:10 - 15:20Investigating the effects of short hearing-aid processing delays on perceived sound quality By Ellen LundorffUniversity of Southern Denmark

Ellen Lundorff1,2,3, Rasmus Skipper4, Tobias Neher1,2, Georg Stiefenhofer3
1Institute of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
2Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital, Odense, Denmark; University of Southern Denmark, Odense, Denmark
3Scientific Audiology, WS Audiology A/S, Lynge, Denmark
4Dept. of Audiology, Odense University Hospital, Odense, Denmark

Sound quality is a critical component of hearing-aid satisfaction and uptake (Kochkin, 2010). A key factor influencing sound quality is the inherent processing delay of hearing aids, which causes audible effects when the processed sound mixes with the direct (un-delayed) sound that enters the ear canal through a dome or vent of an earmold. Depending on the size of the delay, the resulting effects relate to changes in sound timbre, the perception of echoes, and altered spatial qualities. Modern hearing aids have delays of approximately 5-10 ms, with 10 ms being the current ‘gold standard’ for what is deemed tolerable (Stone et al., 2008). It is still unclear, however, how short the delay needs to be to ensure optimal sound quality for the user. This question is of interest because the latest advances in signal processing allow much shorter delays to be realized in hearing devices. The current study therefore explored the effects of six different hearing-aid delays ranging from 0.2 to 10 ms on perceived sound quality with the help of a realistic hearing-aid simulator. Three types of stimuli and two gain settings were tested. The participants were 10 normal-hearing subjects and 20 subjects with mild-to-moderate sensorineural hearing losses. Based on the results, it is expected that hearing-aid delay will be inversely related to perceived sound quality, with delays <1 ms resulting in the best outcomes overall. Hearing loss severity, stimulus type, and gain setting are expected to affect this relationship.

15:20 - 15:30(How) do transparent hearing aids impair speech intelligibility in normal hearing subjects? By Florian DenkDeutsches Hörgeräte Institut

Florian Denk1, Florian Miethling1, Jürgen Tchorz2, Hendrik Husstedt1
1Deutsches Hörgeräte Institut GmbH, Lübeck, Germany
2Technische Hochschule Lübeck, Germany

While hearing aids provide large benefits for hearing-impaired users, it has been repeatedly shown that wearing hearing aids also has detrimental effects on sound localization, spatial hearing, and speech perception. Such ‘side effects’ reduce the benefit of hearing aids and may outweigh it for potential users with a mild or moderate hearing loss, or for normal-hearing users of related devices like hearables. This also holds if the hearing aids are adjusted to a transparent setting, i.e., set to preserve the transmission behavior of the open ear as closely as possible given technical limitations like the microphone location, processing delays, or self-noise. To gain further insight into the mechanisms impairing speech intelligibility with hearing aids, we equipped normal-hearing subjects with behind-the-ear hearing aids in a transparent setting and measured speech intelligibility in different spatial listening situations with noise and speech maskers. These situations included co-located speech and masker, and speech spatially separated from four distributed masker sources, where the direction of the speech source was either static or changed randomly between sentences. The results show generally increased speech reception thresholds in the aided condition as compared to the open ear, a reduced spatial release from masking, and a decreased advantage of the speech masker with respect to the noise masker, particularly in the condition where the speech source direction was randomly varied. Based on an analysis of the KEMAR-recorded stimuli, we discuss which of these results can be attributed to changes in energetic masking and disturbed binaural cues, and whether wearing the hearing aids further impaired the segregation of speech and masker at a higher cognitive level.

15:30 - 15:45Panel discussion
14:40 - 14:50Rollover effects at higher-than-normal levels in speech materials with low but not high context By Michal FeręczkowskiUniversity of Southern Denmark

Michal Feręczkowski1,2, Benedikte Degn Mikkelsen1, Tobias Neher1,2
1Department of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
2Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital, Odense, Denmark; University of Southern Denmark, Odense, Denmark

The dependence of speech intelligibility on the presentation level has traditionally been modelled with non-decreasing functions such as sigmoids, for both normal-hearing listeners and listeners with sensorineural hearing loss (SNHL). A performance decrease at high presentation levels – so-called rollover (RO) in the performance-intensity function – has traditionally been interpreted as a sign of retro-cochlear hearing loss. In some recent research studies based on speech stimuli presented at high levels, RO was also observed in young listeners with normal audiograms (NAs), possibly reflecting synaptopathy. Overall, RO measurements could therefore be a useful tool for characterizing suprathreshold hearing abilities in various listener groups. While RO has been observed in a number of studies that employed monosyllabic words or low-context sentences, reports of RO in the recognition of high-context sentences are lacking. Here, we hypothesized that RO presence in the performance-intensity function is related to the amount of context information available in the employed speech material. To test this hypothesis, 22 young NA adults without any self-reported hearing problems were tested at two presentation levels (80 and 95 dB SPL, broadband). Three speech materials were used: monosyllabic words, low-context sentences and high-context sentences. To avoid ceiling effects and upward spread of masking, all measurements were performed in stationary speech-shaped noise with the stimuli highpass-filtered at 1.4 kHz. Significant RO effects were found in the speech scores collected with the two low-context materials but not the high-context material. Overall, this suggests that low-context sentences allow for sensitive measurements of rollover effects in individual listeners.

14:50 - 15:00Individual differences in auditory continuity perception and its relationship with phonemic restoration of speech in normal hearing individuals By Srikar VijayasarathyJSS Institute of Speech and Hearing

Srikar Vijayasarathy1, Animesh Barman2
1Department of Audiology, JSS Institute of Speech and Hearing, Mysuru, India
2Department of Audiology, All India Institute of Speech and Hearing, Mysuru, India

The study investigated the illusory auditory continuity percept of the vowel /a/ and its relationship with the phonemic restoration of speech in noise in 25 individuals with normal hearing. The perception of continuity for the interrupted vowel /a/ was measured at eight different signal-to-noise ratios of speech-shaped noise (-10 to +4 dB SNR in 2 dB steps). Subgroups were identified using cluster analysis (hierarchical and k-means). Phonemic restoration in noise was measured for sentences interrupted with silence and with -10 dB SNR speech-shaped noise. The correlation between continuity perception and phonemic restoration was investigated. Listeners could be classified into high and low continuity groups based on their continuity ratings at poor signal-to-noise ratios. They could be further divided into two subtypes within each group based on the pattern of change in continuity perception across signal-to-noise ratios. Correlational analysis indicated that those with a high continuity percept tended to benefit more from phonemic restoration. Individual differences in the dependence on top-down and bottom-up strategies can explain these findings.

Acknowledgements: We are grateful to the participants of the study for their patient co-operation.

15:00 - 15:10Spectral changes of Russian vowels pronounced in babble noise By Alisa GvozdevaSechenov Institute of Evolutionary Physiology and Biochemistry

Alexander Lunichkin1, Alisa Gvozdeva1, Larisa Zaitseva1, Elena Ogorodnikova2, Irina Andreeva1
1Laboratory of comparative sensory physiology, Sechenov Institute of Evolutionary Physiology and Biochemistry of Russian Academy of Sciences, Saint Petersburg, Russia
2Pavlov Institute of Physiology of Russian Academy of Sciences, Saint Petersburg, Russia

The Lombard reflex improves the quality of speech communication in noisy environments. The reflex modifies the amplitude-frequency parameters of speech, thus increasing its intelligibility. The aim was to evaluate the spectral changes characterizing the pronunciation of vowels in speech-like noise. Six female Russian native speakers without speech and hearing impairments, aged 20-58 years, took part in the study. Speech recordings were carried out in an anechoic chamber during a single session. Recordings of nine words with [a], [i] and [u] vowels in different stressed positions were made in quiet and in babble noise of 60 dB(A). Before the start of the recording, the speaker adjusted the auditory feedback to correspond to its natural level. The speaker pronounced the words addressing them to the experimenter, to simulate a communicative situation. Spectral characteristics (F0, F1, and F2) were analyzed on stationary intervals of stressed vowels using the Praat software. The Wilcoxon matched-pairs test was used to compare the individual spectral parameters of vowels in quiet and in babble noise. It was found that in noise F0 increased significantly compared to quiet for all stressed vowels in all positions; the group-averaged change amounted to 12%. We also observed a shift in the position of vowels in the two-dimensional F1-F2 vowel space. F1 increased in babble noise compared to quiet in all cases: for [u] by 12% (p<0.000001), for [a] by 8% (p<0.0001), for [i] by 14% (p<0.000001). F2 changed in different ways: for [a] it increased by 2% (p<0.01), for [i] it decreased by 1% (p<0.0001); for the vowel [u], F2 did not change. Note that in noise the scatter of the individual F1 and F2 values was less pronounced than in quiet. The data indicate improved articulatory planning and increased contrast between Lombard vowels.
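
The per-speaker comparison described above comes down to a paired nonparametric test; a minimal sketch with hypothetical F1 values (Hz) for [u] in quiet versus babble noise:

```python
from scipy import stats

# Hypothetical per-speaker F1 values (Hz) for the vowel [u],
# with the babble values roughly 12% higher, as reported.
f1_quiet = [310, 295, 320, 305, 300, 315]
f1_babble = [348, 331, 356, 342, 338, 352]

stat, p = stats.wilcoxon(f1_quiet, f1_babble)
print(f"Wilcoxon statistic = {stat}, p = {p:.4f}")
```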

Acknowledgements: The work was supported by State budget (themes no. AAAA-A18-118050790159-4, AAAA-A18-118013090245-6).

15:10 - 15:20Masking release with non-stationary speech-like stimuli By Hyojin KimTechnical University of Denmark

Hyojin Kim1, Viktorija Ratkute1, Bastian Epp1
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark

Comodulated masking noise and binaural cues can facilitate the detection of a target sound in noise. These cues induce a decrease in detection thresholds, quantified as comodulation masking release (CMR) and binaural masking level difference (BMLD), respectively. Previous studies have shown that CMR is affected by a preceding masker, possibly due to the adaptation of the auditory system to features of the preceding masker. However, its relevance to speech perception is unclear. Here, we investigated the effect of the duration of preceding maskers on CMR and BMLD, and its ecological validity, using sounds with speech-like spectro-temporal dynamics.
We hypothesized that the adaptation results from top-down processing and that both CMR and BMLD would be affected by an increased preceding-masker length. We measured CMR and BMLD with the length of the preceding masker varying from 0 ms (no preceding masker) to 500 ms. We used four masker conditions: a reference condition with an uncorrelated masker (RR), and three conditions in which a comodulated masker was preceded by one of three different maskers: an uncorrelated masker (RC), a comodulated masker (CC), or a masker with a comodulated flanking band (FC). For BMLD, we used an interaural phase difference (IPD) of π. Results showed that CMR was more affected by longer preceding maskers only in the FC condition, while the preceding masker did not affect BMLD. This indicates that the grouping of frequencies in the preceding masker influences subsequent frequency grouping by comodulation.
We further evaluated the ecological validity of this grouping effect with stimuli reflecting formant changes in speech. We set three masker bands at the formant frequencies F1, F2, and F3 based on the CV combinations /GU/, /FU/, and /PU/. We found that the CMR was small (~2 dB), while the BMLD was comparable to previous findings (~9 dB). In conclusion, we suggest that factors other than comodulation play a role in frequency grouping in speech.

15:20 - 15:30The role of fundamental-frequency dynamic-range contrast in competing-talker scenarios By Paolo A. MesianoTechnical University of Denmark

Paolo A. Mesiano1, Johannes Zaar1,2, Lars Bramsløw2, Helia Relaño-Iborra1,3, Torsten Dau1
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
2Eriksholm Research Centre, Snekkersten, Denmark
3Cognitive Systems Section, Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby, Denmark

The effects of fundamental-frequency (F0) cues on speech intelligibility in competing-talker scenarios have been investigated extensively. Most related studies have focused on the substantial intelligibility benefit induced by a difference in the average F0 between a target and an interfering voice when using non-realistic speech materials. However, recent findings suggest that the effect of an average F0 difference is rather moderate or even negligible when using more realistic speech stimuli. In contrast, it has been suggested that in everyday-life speech, the relevant F0-related cues might lie in the dynamics of the F0 trajectories and their differences between competing voices. In this respect, a contrast in F0 dynamic range between competing talkers has been shown to notably affect the intelligibility of the target speech but has not yet been investigated systematically. In this study, we explored the effect of F0-dynamic-range contrast between competing talkers in normal-hearing listeners, using Danish everyday-speech type sentences. In order to control the F0-dynamic-range contrast, we manipulated the acoustic stimuli by expanding or reducing their F0 dynamic range within the typical variability of natural speech found in the Danish language, thus maintaining the perceived naturalness of the voices. The results of this investigation are discussed and compared to previous studies with the aim of providing a better understanding of the role of F0 dynamics in competing-talker scenarios.
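
One plausible way to implement the dynamic-range manipulation described above is to scale the log-F0 contour around its median before resynthesis; the abstract does not detail the actual procedure, so the sketch below (with a hypothetical contour) is only illustrative.

```python
import numpy as np

def scale_f0_range(f0_hz: np.ndarray, factor: float) -> np.ndarray:
    """Expand (factor > 1) or compress (factor < 1) the dynamic range of an
    F0 contour around its median on a log-frequency scale; the median F0 is
    unchanged and unvoiced frames (NaN) are preserved."""
    log_f0 = np.log2(f0_hz)
    median = np.nanmedian(log_f0)
    return 2.0 ** (median + factor * (log_f0 - median))

# Hypothetical contour (Hz), NaN marking an unvoiced frame.
contour = np.array([180.0, 200.0, np.nan, 240.0, 210.0])
print(scale_f0_range(contour, 0.5))  # F0 dynamic range halved
```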

15:30 - 15:45Panel discussion
14:40 - 14:50Personality and age capture dissociations of subjective versus objective noise tolerance By Malte WöstmannUniversity of Lübeck

Malte Wöstmann1,2, Julia Erb1,2, Jens Kreitewolf1,2,3,4, Jonas Obleser1,2
1Department of Psychology, University of Lübeck, Lübeck, Germany
2Center of Brain, Behaviour, and Metabolism, University of Lübeck, Lübeck, Germany
3Department of Psychology, McGill University, Montreal, Quebec, Canada
4Department of Mathematics and Statistics, McGill University, Montreal, Quebec, Canada

Acoustic noise is pervasive in human environments, and some individuals are more tolerant to noise than others. We here demonstrate the considerable explanatory potential of non-auditory determinants of subjective and objective noise tolerance, namely the personality traits neuroticism (being emotionally unstable) and extraversion (being enthusiastic, outgoing). In an online study, we collected demographic information and Big-5 personality traits in a large, age-varying sample (N = 1,103; 18–74 years). Subjective noise tolerance was assessed with established self-report scales (i.e., the Weinstein Noise Sensitivity Scale, WNSS; the Speech, Spatial and Qualities of Hearing Scale, SSQ) and a self-adjustment of the maximal tolerable noise level (i.e., the Acceptable Noise Level, ANL). Objective noise tolerance was quantified as the reception threshold for digit triplets in noise (DTT). In agreement with pre-registered hypotheses (osf.io/fgyj9), higher neuroticism and lower extraversion independently explained lower scores on all subjective noise tolerance tests, while controlling for demographic factors (e.g., age, gender, education, self-reported hearing loss). Interestingly, this pattern reversed for objective noise tolerance, such that higher neuroticism explained lower (i.e., better) speech-in-noise reception thresholds. We quantified the degree to which listeners of different age and personality profiles over- or underrated their own objective noise tolerance. Older age was associated with overrated noise tolerance, characterized by decreasing objective but largely unchanged subjective noise tolerance. Orthogonal to the effects of older age, individuals scoring higher on neuroticism underrated their own noise tolerance (i.e., lower subjective than objective noise tolerance), while individuals scoring higher on extraversion overrated it. In sum, these results help build a framework for understanding individual differences in noise tolerance and will help tailor future audiological treatment: personality holds explanatory power for inter-individual differences in coping with acoustic noise, which is a ubiquitous source of distraction and a health hazard in human environments.

Acknowledgements: The present work was supported by the International Hearing Foundation (grant to MW and JO) and an ERC Consolidator grant (ERC-CoG-646696 AUDADPT) to JO.

14:50 - 15:00Changes in informational interference over time: Differences between native and non-native speakers By Alex MephamUniversity of York

Alex Mepham1, Yifei Bi2, Sven Mattys1
1Department of Psychology, University of York, York, UK
2College of Foreign Languages, University of Shanghai for Science and Technology, Shanghai, China

Speech-in-speech masking research has identified what is known as release from linguistic masking, i.e., better speech transcription performance against maskers in unknown than known languages. To test whether native (British) and non-native (Mandarin) speakers of English can learn to control the interference of a known language masker, we tracked their ability to transcribe English/Mandarin sentences against English or Mandarin competing talkers over the course of 50 trials. Both native and non-native listeners improved over time. Native listeners exhibited release from linguistic masking, with less masking in Mandarin than English masker conditions. The size of this effect increased over time. In contrast, non-native listeners showed no difference between the two language maskers, and this pattern was constant over time. Masker-to-target intrusion errors decreased over time for native listeners, whereas they were virtually absent for the non-native listeners. These results show that (1) Linguistic masking is worst when the masker language is known to the listener, whether that language is native or non-native, (2) Listeners show worse performance when the masker is the same language as the target speech, and (3) Native listeners become better at suppressing masker interference over time than non-native listeners, which we hypothesize results from reduced spare cognitive capacity in non-native listeners.

15:00 - 15:10Spectrotemporal modulation sensitivity and distortion product oto-acoustic emission, an explorative study By Max VæhrensAalborg University

Max Væhrens1, Rodrigo Ordoñez2
1Acoustics and Audio Technology, Aalborg University, Aalborg, Denmark
2Department of Electronic Systems, Aalborg University, Aalborg, Denmark

Loss of frequency selectivity can affect a person’s ability to understand speech in noisy environments. Therefore, being able to assess frequency resolution can be an important tool in the treatment of hearing disorders. In the present study, two indirect measures of frequency selectivity are compared: sensitivity to spectrotemporal modulation (STM) and the optimal ratio for distortion product otoacoustic emissions (DPOAE). Changes in STM sensitivity threshold between normal and impaired listeners have been correlated with speech intelligibility in noise and may represent deficiencies related to frequency selectivity and temporal fine structure in impaired hearing. By varying the ratio between the primary frequencies in a DPOAE measurement, an optimal primary ratio, i.e., the one yielding the highest DPOAE level for the specific measurement parameters, can be obtained. The optimal DPOAE ratio decreases with increasing frequency, similarly to the equivalent rectangular bandwidth of auditory filters. Changes in the optimal ratio, or the lack of a clear maximum, may indicate loss of frequency tuning in the basilar membrane (BM). STM sensitivity thresholds and optimal DPOAE ratios were measured at test frequencies of 1 and 4 kHz in eleven volunteers with hearing considered normal. The results show no systematic relationship between the two measures for individual subjects. Based on the DPOAE results, two groups were identified: five subjects showed well-defined optimal ratios and high DPOAE levels at both tested frequencies, and six subjects showed low DPOAE levels at 4 kHz and no clear optimal ratio at 1 kHz. A comparison of the average STM sensitivity thresholds for these two groups shows a statistically significant difference only for the 4 kHz stimulus with a spectral modulation of 2 cycles per octave.
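
The extraction of an optimal primary ratio from a ratio sweep can be illustrated with a simple parabolic fit; the levels below are synthetic, and the study's actual measurement parameters and fitting procedure are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical DPOAE levels (dB SPL) over a sweep of f2/f1 primary ratios,
# peaking near the commonly reported value of ~1.22.
ratios = np.arange(1.10, 1.41, 0.02)
levels = -400 * (ratios - 1.22) ** 2 + 5 + rng.normal(0, 1, ratios.size)

# Fit a parabola and take its vertex as the optimal primary ratio.
a, b, c = np.polyfit(ratios, levels, 2)
print(f"optimal f2/f1 ratio ~ {-b / (2 * a):.3f}")
```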

15:10 - 15:20Exploring auditory mechanisms of loudness. A modelling study on loudness-related deficits observed in different auditory profiles By Raul Sanchez-Lopez Technical University of Denmark

Raul Sanchez-Lopez1,2, Gerard Encina-Llamas1
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
2Interacoustics Research Unit, Kgs. Lyngby, Denmark

A data-driven auditory profiling approach has recently been proposed to provide a more accurate and sensitive clinical evaluation of hearing. This approach identified four groups of hearing-impaired listeners that reflected different degrees of two independent types of auditory deficits: speech-intelligibility deficits and loudness-perception deficits. Besides the differences between the four groups in several supra-threshold tasks, audiometric thresholds also differed significantly across the four auditory profiles. In the present study, loudness-growth functions obtained in 75 listeners with different hearing abilities, each classified as belonging to one of the four auditory profiles (A-D), are presented and discussed. Furthermore, a state-of-the-art computational model of the auditory nerve was used to explore the potential peripheral pathologies that could lead to the loudness-growth functions observed in each of the four profiles. The aims of the study were 1) to evaluate the association between neuronal rate and loudness perception in the four groups of hearing-impaired listeners, and 2) to investigate the role of outer-hair-cell loss and inner-hair-cell loss on the shape of the loudness-growth functions. The experimental results showed that loudness-related deficits were associated with steeper loudness functions and increased hearing thresholds. In contrast, the loudness functions of listeners belonging to profiles with speech-related deficits were shifted towards higher levels. The model simulations were qualitatively similar to the loudness-growth functions obtained for each of the profiles, even though the model was only fitted to the audiometric thresholds of each profile, without any auditory-nerve dysfunction. This suggests that abnormal loudness-growth functions can be mainly explained by peripheral processes and that the effect of high-frequency inner-hair-cell loss on the excitation patterns may dominate abnormal loudness perception, especially at high presentation levels.

Acknowledgements: This work was partially supported by the Better hEAring Rehabilitation project (BEAR) Innovation Fund Denmark Grand Solutions 5164-00011B, and UHEAL: Uncovering Hidden Hearing Loss funded by the Novo Nordisk foundation.

15:20 - 15:30Error types in static and dynamic cocktail party listening By Hartmut MeisterUniversity of Cologne

Hartmut Meister1, Moritz Wächtler1, Josef Kessler2, Martin Walger1,3
1Jean-Uhrmacher-Institute, University of Cologne, Cologne, Germany
2Department of Neurology, University Hospital Cologne, Cologne, Germany
3Clinic of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Germany

Listening situations with competing talkers pose high demands on both the auditory system as well as cognitive abilities. These “cocktail party situations” can be “static” or “dynamic”, the latter involving the target talker changing in a possibly unpredictable manner (Brungart & Simpson, 2007; Lin & Carlile, 2015; Meister et al., 2020). Here, different attentional mechanisms may play a role. In static cocktail party listening it is assumed that focusing attention on a known target is important. In dynamic listening situations, however, it is also important to monitor several potential targets and to switch attention if the talker of interest changes (Lin & Carlile, 2019; Meister et al., 2020).
In this study, we shed light on the underlying mechanisms by analyzing different error types in a listening situation with three competing talkers. Specifically, random errors (omitting or misunderstanding words) and confusion errors (mixing up target and masker) were determined, as they might give valuable information about the load in static and dynamic cocktail party situations. Moreover, different listener groups (young and older listeners with and without hearing loss) were considered in order to examine potential effects of age and hearing impairment. The presentation discusses how the different error types could reflect effects of auditory stream segregation and factors such as misdirected attention or loss of attentional focus, and whether this depends on the listener group.

Acknowledgements: Supported by Deutsche Forschungsgemeinschaft ME 2751/3-1.

15:30 - 15:45Panel discussion
  • Podium: Aging brain
  • Parallel: Clinical testing
  • Parallel: Synaptopathy and ANSD
  • Parallel: Neural imaging II
  • Parallel: Ecological validity and audio-visual
13:00 - 13:05Introduction
13:05 - 13:35Looking for the unhidden features of ‘hidden hearing loss’ in speech-in-noise performance By Stuart RosenUCL Speech

Stuart Rosen1, Tim Schoof1, Tim Green1
1UCL Speech, Hearing & Phonetic Sciences, London, U.K.

Much interest surrounds the possible contribution of synaptopathy and/or neuropathy (SNpathy) to hearing difficulties that occur despite normal audiometric thresholds. We made extensive measurements in two groups of listeners with near-normal audiograms who were expected to differ greatly in the likelihood of SNpathy: 19 Y(oung) adults aged 18-25 with limited noise exposure and 23 M(iddle-aged) adults aged 44-61 with significant noise exposure. Speech reception thresholds (SRTs) were measured binaurally in speech-spectrum-shaped noise for two tasks (recognition of complex sentences and consonant identification in VCVs). To assess the use of temporal fine structure (TFS), target and masker were presented either diotically (S0N0) or with the masker in phase at the two ears and the target out of phase (SπN0). Stimuli were presented at both a low and a high level (40 and 80 dB SPL) to assess claims that deficits due to SNpathy might be more prominent at high levels. Although the relationships among the various predictors and the outcomes were fairly complex, generally speaking: 1) The mean performance of the Y group was always better than that of the M group, with differences ranging from 0.3 to 3.5 dB across the 8 conditions. 2) SRTs worsened across age in the M group for some conditions, meaning that the younger M listeners could be performing similarly to the Y group. 3) Greater deficits were found for SπN0 than for S0N0 conditions, implicating some deficits in the M group for processing TFS. 4) The effect of level was small, especially when comparing the two groups. 5) Group differences in SRTs were not related to a small difference in audiometric thresholds at frequencies ≤ 4 kHz. It therefore seems possible that SNpathy could be a factor in these relatively small group differences, but that these effects differ little across level.

Acknowledgements: Funded by the RNID UK.

13:55 - 14:10Pupil diameter indexes time-course shifts in older adults’ sustained listening effort By Regina C. CallowayInstitute for Systems Research

Regina C. Calloway1, Lien Decruy1, I. M. Dushyanthi Karunathilake2, Jason L. Dunlap3, Samira Anderson3, Jonathan Z. Simon1,2,4, Stefanie E. Kuchinsky5
1Institute for Systems Research
2Department of Electrical and Computer Engineering
3Department of Hearing and Speech Sciences
4Department of Biology, University of Maryland, College Park, U.S.A.
5Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, U.S.A

Understanding speech in noisy environments is important in everyday life and requires effortful listening. Older adults’ listening effort may be especially affected by noise, being associated with a diminished ability to effectively track a target speaker among background speakers over longer periods of time. To further understand how different listening situations influence older adults’ ability to sustain effort while listening to meaningful speech, the present study used 60-second audiobook segments. The experiment investigated the effects of 1) different signal-to-noise ratios (SNRs: quiet, 0 dB, or −6 dB) and 2) the number of audiobook repetitions (three each in the two noisy conditions) on listening effort in 13 older adults with clinically normal hearing. We hypothesized that poorer SNRs would result in increased listening effort and that an increased number of repetitions would result in decreased listening effort. Participants heard the audiobook segments while pupillary measures were recorded, with larger pupil dilations indicating greater listening effort. Generalized additive mixed model (GAMM) results revealed that these older listeners showed evidence of increased listening effort in the noisy conditions and decreased listening effort between the first and second presentations. The temporal precision of pupillometry also indexed time-specific changes in listening effort, with increased effort at the beginning and end of the audiobook segments. Furthermore, listening effort varied nonlinearly with listeners’ subjective intelligibility ratings of each SNR block. In comparison to the quiet condition, listening effort in the noisy conditions was associated with higher self-reported intelligibility ratings, suggesting that individuals only engaged in listening to speech that was at least moderately intelligible. Taken together, our findings illustrate how SNR and repetition influence listening effort during sustained, continuous speech and how effort can deviate from intelligibility in some listening conditions. The study thereby aims to provide an ecologically valid account of older adults’ sustained listening effort.
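
As a toy version of the time-course modelling above, a generalized additive model with a smooth term over time can be fitted with pygam; this is a single-trial, fixed-effects sketch on a synthetic pupil trace, whereas the study's GAMMs additionally include random effects and condition terms such as SNR and repetition.

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 600)  # time (s) within a 60-s audiobook segment
# Hypothetical pupil trace: dilation early and late in the segment, plus noise.
pupil = (0.3 * np.exp(-t / 15) + 0.2 * np.exp((t - 60) / 10)
         + rng.normal(0, 0.05, t.size))

# A smooth over time captures nonlinear changes in effort across the segment.
gam = LinearGAM(s(0)).fit(t.reshape(-1, 1), pupil)
gam.summary()
```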

Acknowledgements: This work was supported by the National Institutes of Health (P01- AG055365). The views expressed in this abstract are those of the author and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government.

13:55 - 14:10Break
14:10 - 14:40Neural speech processing in the aging auditory brain By Jonathan Z. SimonUniversity of Maryland

Jonathan Z. Simon1,2,3
1Department of Electrical & Computer Engineering, University of Maryland, College Park, U.S.A.
2Department of Biology, University of Maryland, College Park, U.S.A.
3Institute for Systems Research, University of Maryland, College Park, U.S.A.

Compared to young adults, older adults often have increased difficulty comprehending speech, especially in challenging acoustic environments. However, previous research has surprisingly found that their cortical responses to speech demonstrate more robust tracking of the acoustic speech envelope than those of younger adults, even though the opposite holds for subcortical responses to speech. Here we have analyzed magnetoencephalography responses to continuous narrative speech in older and younger listeners with clinically normal hearing, acquired in two separate experiments. Responses to clean speech and to speech from two simultaneous talkers were used to distinguish between bottom-up and task-related brain activity. We show multiple lines of evidence that older adults exhibit exaggerated cortical responses compared to younger adults, at several distinct cortical processing stages. Exaggerated responses at early latencies are consistent with the excitation/inhibition imbalance seen in animal models, whereas exaggerated responses at longer latencies (which are also strongly dependent on selective attention) are consistent with the recruitment of additional neural resources to aid speech comprehension. Exaggerated responses are only seen for cortical processing of slow speech features (≲ 10 Hz), however, and not at faster rates associated with pitch tracking (≳ 80 Hz). Additional insight into the cortical processing of continuous speech is gained from the analysis of sustained pupillometric measures and non-phase-locked alpha-band neural activity obtained simultaneously during continuous speech listening.

Acknowledgements: Funding provided by NIH (P01-AG055365 & R01-DC014085), and NSF (SMA-1734892).

14:40 - 15:45Parallel sessions

Find more details by clicking on the ‘Parallel’ tabs above.

15:45 - 16:00Break
16:00 - 16:20Amplitude modulation frequency selectivity in older listeners with normal and impaired hearing By Jonathan RegevTechnical University of Denmark

Jonathan Regev1, Johannes Zaar1,2, Helia Relaño-Iborra1,3, Torsten Dau1
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
2Eriksholm Research Centre, Snekkersten, Denmark
3Cognitive Systems Section, Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark

The concept of a modulation filterbank has been shown to account well for psychophysical data from experiments assessing temporal envelope processing acuity in young normal-hearing (NH) listeners. Recent studies using functional imaging and physiological measurements have observed a loss of modulation tuning in older listeners and acoustically traumatized animals, suggesting that modulation frequency selectivity may be adversely affected by ageing or hearing impairment. However, behavioural evidence of reduced modulation frequency selectivity in older and/or hearing-impaired (HI) listeners has not yet been provided. The present study investigated modulation frequency selectivity in older NH and HI listeners, as compared to young NH listeners, using psychophysical paradigms. Data were collected in conditions of amplitude modulation (AM) detection, AM frequency discrimination, and modulation masking. All conditions used sinusoidal modulations applied to a sinusoidal carrier, with target modulation rates of 4, 16, 64, and 128 Hz. Masked modulation thresholds were obtained for fixed-bandwidth noise modulation maskers (bandwidth corresponding to ½ octave when on-frequency) centered at frequencies ranging from -5 to 2 octaves relative to the target modulation frequency. The results suggested a reduction in modulation frequency selectivity at all target modulation frequencies in older NH listeners as compared to the young NH group, particularly at the target modulation frequency of 4 Hz. Preliminary data indicate that modulation frequency selectivity is predominantly affected at low modulation rates in HI listeners. To quantify modulation frequency selectivity, the envelope power spectrum model of masking (EPSM) was used to derive modulation filters that account for the masking data. The differences in modulation filter shape and selectivity across listener groups are discussed and analyzed in connection with the AM detection and AM frequency discrimination data. A loss of modulation frequency selectivity, as observed in the present study, might have detrimental effects on higher-level tasks, such as speech intelligibility or stream segregation.
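To make the envelope-domain analysis concrete, the sketch below computes envelope power in a single bandpass modulation filter, in the spirit of EPSM-style models; the filter order and Q value are assumptions, not the authors’ implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_power_at(sig, fs, fmod, q=1.0):
    """Envelope power of `sig` in a bandpass modulation filter at fmod.

    A single second-order bandpass with quality factor q (~1 in EPSM-style
    models) applied to the AC-coupled Hilbert envelope.
    """
    env = np.abs(hilbert(sig))
    env = env - env.mean()                      # AC-coupled envelope
    bw = fmod / q
    lo, hi = max(0.1, fmod - bw / 2), fmod + bw / 2
    sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
    env_f = sosfiltfilt(sos, env)
    return np.mean(env_f ** 2)

# Example: envelope power of a 4-Hz AM tone in the 4-Hz modulation band
fs = 16000
t = np.arange(fs) / fs
am_tone = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
print(envelope_power_at(am_tone, fs, fmod=4.0))
```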

16:20 - 16:50Auditory cortical plasticity and temporal processing in older adulthood By Björn HerrmannRotman Research Institute

Björn Herrmann1,2, Vanessa C. Irsik3, Ingrid S. Johnsrude3,4
1Rotman Research Institute, Baycrest, Toronto, Canada
2Department of Psychology, University of Toronto, Toronto, Canada
3Department of Psychology, University of Western Ontario, London, Canada
4School of Communication Sciences & Disorders, University of Western Ontario, London, Canada

It is increasingly clear that age-related hearing impairment is a dysfunction of the entire auditory system, from periphery to cortex. Poorer auditory peripheral function is associated with a loss of neural inhibition along the auditory pathway, including auditory cortex, that renders neurons hyperactive and hyperresponsive to sounds. This auditory-system hyperactivity may impair speech intelligibility when background sound is present. Neural hyperactivity has been extensively studied using electrophysiological recordings in non-human mammals but is less explored in humans. In this presentation, we will describe work on hyperactivity associated with aging and hearing loss in humans. Specifically, we will describe how neural synchronization to low-frequency amplitude modulations in sounds differs between younger and older adults, likely as a result of hyperexcitability, and provide behavioral data that test predictions for speech-in-noise intelligibility derived from this electrophysiological work. Neural synchronization in auditory cortex is enhanced in older compared to younger adults. Further, the propensity of neural activity to synchronize with different amplitude-modulation shapes in sounds changes with age: auditory cortex of older adults is more sensitive to damped (sharp attack) compared to ramped (gradual attack) envelope shapes, whereas younger adults show the opposite pattern. Our behavioral data, in contrast, reveal better speech intelligibility when background noise is modulated with damped compared to ramped envelope shapes in both age groups. We also present recent work demonstrating that the way amplitude-modulated background maskers affect speech intelligibility in older compared to younger adults critically depends on the naturalness of speech (disconnected sentences vs. engaging stories). We will wrap up this presentation by briefly talking about open questions and challenges related to the study of auditory-system hyperactivity in humans.

Acknowledgements: This research was supported by the Canadian Institutes of Health Research (MOP133450 to I.S. Johnsrude). BH was supported by a BrainsCAN Tier I postdoctoral fellowship (Canada First Research Excellence Fund; CFREF) and the Canada Research Chair program.

14:40 - 14:50Using objective measures of binaural hearing in the clinic – stimulus, recording, and analysis parameters By Lindsey N. Van YperUniversity of Southern Denmark

Lindsey N. Van Yper1,2, Juan Pablo Faúndez2, Jaime A. Undurraga2,3, David McAlpine2
1Institute for Clinical Research, University of Southern Denmark, Odense, Denmark
2Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia
3Interacoustics Research Unit, Technical University of Denmark, Lyngby, Denmark

Binaural hearing – particularly the ability to process interaural time differences (ITDs) – underpins sound localization and speech perception in noise. Although ITD processing is known to be affected in various clinical populations, it is not routinely assessed in the clinic, primarily because behavioural measures of ITD sensitivity are time-consuming and difficult to perform. Recent studies have therefore proposed the acoustic change complex (ACC) and the interaural phase modulation following response (IPM-FR) as promising techniques for the objective evaluation of ITD processing. Here, we determine optimal stimulus, recording, and analysis parameters for clinical use of these measures. Results show that reliable ACCs and IPM-FRs can be obtained from clinically suitable electrode locations (e.g. mastoids referenced to Cz). However, when using the more practical Fpz location as a reference, eye-blink artefacts may need to be removed. We also show that stimulus parameters affect the IPM-FR and ACC differently, which may suggest that different neural mechanisms are involved in the generation of these responses.

Acknowledgements: This research has been funded by the Australian Government through the Australian Research Council (project number FL160100108).

14:50 - 15:00A digit-based behavioral and electrophysiological test battery for assessing different speech processing abilities By Pushkar Deshpande University of Southern Denmark

Pushkar Deshpande1,2, Christian Brandt1,2, Stefan Debener3, Tobias Neher1,2
1Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
2Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital, Odense, Denmark; University of Southern Denmark, Odense, Denmark
3Department of Psychology, University of Oldenburg, Oldenburg, Germany

Effective communication requires good speech perception abilities. Speech perception can be assessed with behavioral and electrophysiological methods, and relating these two types of measurements to each other can provide insights into the underlying cortical processes. The current study aimed to develop a digit-based test battery suited for eliciting different speech-evoked cortical responses and to relate these responses to behavioral measures of speech detection, discrimination, and comprehension. Thirty young normal-hearing native Danish speakers with normal or corrected-to-normal vision participated. The digit-triplet lists from the Dantale-I speech corpus were used as stimulus material. All measurements were carried out in the presence of stationary speech-shaped noise at 67 dB(C) SPL. The behavioral measurements included speech detection thresholds (SDTs), speech recognition thresholds (SRTs), and speech comprehension scores (SCSs). For the electrophysiological measurements, multi-channel electroencephalography (EEG) recordings were performed. N100 and P300 responses were evoked using an active auditory oddball paradigm. N400 and Late Positive Complex (LPC) responses were evoked using congruent and incongruent digit sequences that were presented using audio-only or audio-visual paradigms. All EEG components were successfully evoked. While no correlations between the SDTs and N100 responses were found, the SRTs were correlated with P300 responses (r = -0.45, p < 0.05), and the SCSs were correlated with the EEG responses to the congruent and incongruent digit sequences. Regarding the N400 and LPC responses, there were significant amplitude differences between the audio-only and audio-visual paradigms. Overall, the developed test battery was found to be usable and to produce reliable data. Follow-up studies with hearing-impaired individuals will provide further insights into the consequences of hearing loss for cortical speech processing. The audio-only and audio-visual paradigms will allow investigation of similarities and differences between unimodal and bimodal speech perception.

15:00 - 15:10A combinatorial solution of the probability of stopping in threshold audiometry By Katherine N. PalandraniUniversity of Maryland

Eric C. Hoover1, Katherine N. Palandrani1
1Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA

In pure-tone audiometry, threshold is defined as the lowest stimulus level that meets the stopping criteria and not, as in most other psychophysical procedures, as the stimulus level corresponding to a specific probability of detection. A mathematical relationship between audiometric thresholds and the probability of detection remains elusive. This limits our ability to control the audibility of signals amplified by hearing aids and prevents equivalent thresholds from being obtained using other psychophysical methods. Our hypothesis was that the relationship could be established through an analysis of the probability of meeting the stopping criteria at a given stimulus level. A combinatorial solution was obtained for standard audiometric stopping criteria. Results showed that the probability of detection at threshold is maximized at different stimulus levels depending on the events that occur during the test. This suggests that it is possible to relate audiometric thresholds to the probability of detection, but that there are multiple solutions, reflecting the multiple possible ways of satisfying the stopping criteria.
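The closed-form combinatorial solution is not reproduced in the abstract, but the question it answers can be illustrated with a Monte Carlo sketch: given an assumed psychometric function, at which detection probabilities does a down-10/up-5 track tend to stop? The stopping rule below (two responses at the same level on ascending runs) and all parameters are textbook-style simplifications, not the authors’ analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_detect(level, threshold=20.0, slope=1.0):
    """Logistic psychometric function for a hypothetical listener."""
    return 1.0 / (1.0 + np.exp(-slope * (level - threshold)))

def hughson_westlake(threshold, start=40.0):
    """One simulated down-10/up-5 track.

    Simplified stopping rule: the first level to collect two responses
    on ascending presentations is returned as the 'threshold'.
    """
    level, ascending = start, False
    asc_yes = {}
    for _ in range(200):                        # safety cap on trial count
        heard = rng.random() < p_detect(level, threshold)
        if ascending:
            asc_yes[level] = asc_yes.get(level, 0) + int(heard)
            if asc_yes[level] >= 2:             # stopping criterion met
                return level
        if heard:
            level, ascending = level - 10.0, False
        else:
            level, ascending = level + 5.0, True
    return level

stops = np.array([hughson_westlake(20.0) for _ in range(5000)])
for lv in np.unique(stops):                     # p(detection) at each stop level
    share = np.mean(stops == lv)
    print(f"{lv:5.0f} dB: stop prob {share:.3f}, p_detect {p_detect(lv):.2f}")
```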

Acknowledgements: Work supported by NIH/NIDCD R01 DC015051 to Frederick Gallun.

15:10 - 15:20Gamification approach to out-of-clinic hearing diagnostics By Palle RyeAalborg University

Palle Rye1, Dorte Hammershøi1
1Department of Electronic Systems, Aalborg University, Aalborg, Denmark

We are facing an expected increase in the number of people affected by hearing disorders and an ongoing ambition for improved efficiency, e.g. through expeditious or fewer clinical visits. Recent research suggests that classifying patients according to archetypal profiles based on additional supra-threshold diagnostic measures allows a consistent treatment approach targeting the needs and preferences of the specific archetypal profile. The potential benefits of such detailed individual patient profiles are likely to necessitate more diagnostic test time for proper classification. Out-of-clinic diagnostic hearing testing offers a tempting remedy for these challenges. However, the lack of in-person guidance during testing may lead to unnecessary confusion or frustration and result in poor reliability, proving detrimental to the goal. To counteract such shortcomings, the current study proposes a gamification approach to out-of-clinic hearing diagnostics. The gamification approach focuses on simple, effective instructions and seeks to provide intuitive interaction with the diagnostic tests. As an example of this approach, tablet-based implementations of a 3I-3AFC version of a Spectro-Temporal Modulation test with immediate trial feedback, as well as other supra-threshold measurement methods, are demonstrated. Acceptance scores for a sample of the intended patient group are reported using the System Usability Scale. Finally, the results are compared to the performance of a similar patient group without gamification.

Acknowledgements: This work was supported by Innovation Fund Denmark Grand Solutions 5164-00011B (Better hEAring Rehabilitation project), Oticon, GN Resound, Widex, and other partners (Aalborg University, University of Southern Denmark, the Technical University of Denmark, Force, Aalborg, Odense and Copenhagen University Hospitals). The funding and collaboration of all partners are sincerely acknowledged.

15:20 - 15:30Maximum aided word recognition score and rollover presence at higher-than-normal speech levels predict hearing-aid outcome effectively By Michal FeręczkowskiUniversity of Southern Denmark

Michal Feręczkowski1,2, Tobias Neher1,2
1Institute of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
2Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital, Odense, Denmark; University of Southern Denmark, Odense, Denmark

In current practice, hearing aids (HAs) are typically fitted based on audiometric thresholds only, even though research suggests that suprathreshold factors play a role in HA outcome. Measurements based on stimuli with linear, frequency-specific amplification (‘aided’) carried out at above-conversational levels may provide estimates of suprathreshold hearing deficits that are largely independent of the effects of audiometric thresholds. In some listeners, high presentation levels may lead to a performance decrease, i.e., rollover in the performance-intensity function. Here, we investigated potential links between word recognition scores (WRS) measured at above-conversational levels, uncomfortable levels (UCLs) for narrowband noise stimuli, and aided outcome as assessed using the Hearing-in-Noise Test (HINT) and the International Outcome Inventory for Hearing Aids (IOI-HA). The participants were 37 experienced HA users with symmetrical, sensorineural hearing losses. Unaided and aided WRS were measured monaurally under headphones with monosyllabic words presented between the most comfortable and uncomfortable level of each participant. The aided HINT measurements were analyzed using a linear mixed-effects model with the unaided and aided WRS, rollover presence, the UCL data, pure-tone average hearing loss, and age as predictors. The IOI-HA scores were analyzed using a multiple linear regression model with the unaided and aided WRS, pure-tone average hearing loss, and age as predictors. The aided WRS data predicted the two outcomes more effectively than any other predictor considered. Additionally, rollover presence was a significant predictor of HINT outcome. Overall, these results imply that suprathreshold deficits, as captured by aided WRS measurements performed at higher-than-normal levels, can be useful for developing more individualized HA fitting strategies.
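For context, rollover in a performance-intensity function is often summarized with a simple index; below is a minimal sketch of the textbook definition (the study’s exact rollover criterion is not stated in the abstract, and the example scores are invented).

```python
def rollover_index(levels_db, wrs):
    """PI-function rollover index: (WRSmax - WRSmin_after_max) / WRSmax.

    levels_db: presentation levels in ascending order; wrs: word
    recognition scores (0-100 %) measured at those levels.
    """
    i_max = max(range(len(wrs)), key=wrs.__getitem__)
    wrs_max = wrs[i_max]
    wrs_min_after = min(wrs[i_max:])   # poorest score at/above the PBmax level
    return (wrs_max - wrs_min_after) / wrs_max

# Example: scores rise to 88 % and then fall to 64 % at the highest level
print(rollover_index([40, 55, 70, 85], [52, 76, 88, 64]))  # -> ~0.27
```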

15:30 - 15:45Panel discussion
14:40 - 14:50Effect of non-traumatic noise exposure on unvoiced speech recognition: cochlear synaptopathy in human listeners? By Mengchao ZhangCardiff University

Mengchao Zhang1, Richard Stern2, Deborah Moncrieff3, Bharath Chandrasekaran4, Catherine Palmer4, Christopher Brown4
1School of Psychology, Cardiff University, Cardiff, United Kingdom
2Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, United States of America
3School of Communication Sciences and Disorders, University of Memphis, Memphis, United States of America
4Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, United States of America

Recent animal models have suggested that noise exposure can affect supra-threshold temporal envelope (TE) processing without disrupting absolute hearing thresholds, owing to selective auditory nerve deafferentation, a process that has been named ‘cochlear synaptopathy’. However, evidence of cochlear synaptopathy in human listeners has been inconsistent, sparking debate on its existence in humans. This study compares TE processing between young adults enrolled in a dental school and an unexposed control group, because dentistry students are exposed to drill noise that is distinctively high-frequency and non-traumatic in nature, and because the exposure schedule is systematic for students across different years of enrollment. Unvoiced speech recognition in noise modulated at 16 or 32 Hz was chosen to evaluate TE processing. To limit off-frequency contributions and reduce spectral redundancy of the speech, stimuli were band-pass filtered to lower- (0.8 – 2.3 kHz) or higher-frequency regions (1.7 – 12 kHz). The results showed that the group exposed to dental noise performed more poorly than the unexposed group for unvoiced speech in modulated noise. The group difference was more robust for the high-frequency speech than for the low-frequency speech, and when the noise was modulated at 32 Hz rather than at 16 Hz. Furthermore, a small but significant difference was found between the two groups when they recognized the speech in spectrally modulated noise for lower-frequency stimuli. Meanwhile, variables such as age, years of musical training, non-dental noise exposure history, and peripheral auditory screening results did not account for a significant amount of the variance in performance. The findings suggest that human listeners with non-traumatic noise exposure can show poor TE processing, as predicted by hypotheses related to cochlear synaptopathy, and that carefully designed measures are needed when exploring cochlear synaptopathy in humans.

Acknowledgements: This study was based on part of the first author’s dissertation work and was supported through funding received from the SHRS PhD Student Award, School of Health and Rehabilitation Sciences, University of Pittsburgh.

14:50 - 15:00Investigating the role of high spontaneous rate fibers in cochlear synaptopathy By Pernille HoltegaardTechnical University of Denmark

Pernille Holtegaard1, Mercedes C. Duvig1, Teresa M. C. Gallo1, Bastian Epp1
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, DK-2800, Kgs. Lyngby, Denmark

Cochlear synaptopathy (CS) has been suggested to predominantly target low spontaneous rate (SR) auditory-nerve (AN) fibers, and most CS research accordingly hypothesizes deficits at supra-threshold levels. However, recent work based on auditory models suggests that the loss also includes high-SR fibers, and that these make an off-frequency contribution to the coding of moderate-to-high-level sounds. This study aims to investigate the integrity of low- and high-SR fibers in CS. Two outcome measures were assumed to reflect AN-fiber integrity: gap detection thresholds (GDTs) and loudness functions. Low presentation levels were assumed to reflect the integrity of on-frequency high-SR fibers, while moderate-to-high levels were assumed to reflect the integrity of low-SR fibers and off-frequency high-SR fibers. Middle-ear muscle reflex (MEMR) strength was used as a proxy measure for CS, and a relationship between MEMR and the outcome measures was hypothesized. Listeners with high-frequency sensorineural hearing loss (SNHL), with and without noise-induced tinnitus, were recruited. It was hypothesized that listeners with tinnitus, a suggested perceptual consequence of CS, would exhibit higher GDTs and shallower loudness functions compared to listeners without tinnitus. GDTs and loudness functions were measured across a range of presentation levels at low frequencies where hearing sensitivity was normal (<25 dB HL). No significant differences were observed between groups for GDTs or MEMR strength. The slope of the upper part of the loudness function at 0.25 kHz was significantly shallower in the group with tinnitus, but no difference was found at 0.5 or 1 kHz. Also, no correlations were found between MEMR strength and the two outcome measures. Overall, the results did not support the hypotheses. However, the results were confounded by small variations in hearing sensitivity and can therefore neither confirm nor dismiss the role of high-SR fibers in CS.

Acknowledgements: William Demant Foundation.

15:00 - 15:10Individual variability in speech recognition with age: Isolating different aspects of sensorineural hearing loss By Sarineh KeshishzadehGhent University

Sarineh Keshishzadeh1, Heleen Van Der Biest2, Sarah Verhulst1
1Dept. of Information Technology, Ghent University, 9000 Ghent, Belgium
2Dept. of Rehabilitation Sciences–Audiology, Ghent University, 9000 Ghent, Belgium

Even though age-induced cochlear synaptopathy (CS) has been demonstrated in rodents and in human temporal bones, it remains challenging to determine whether this pathology causes speech intelligibility declines. Difficulties in studying the causality of this relationship in humans relate to the necessity of using indirect and non-invasive (often EEG-based) markers of CS. To study the relationship between speech intelligibility and CS in the ageing population, we adopted low- and high-pass filtered speech conditions (Flemish Matrix test in quiet and in noise), alongside an extended hearing screening battery that comprised distortion-product otoacoustic emissions, auditory brainstem responses (ABRs), envelope-following responses (EFRs), and extended high-frequency thresholds (EHFTs).

A total of 69 Flemish subjects participated in this study and were divided into two groups: (i) a young control group with normal audiograms (18-25 years) and (ii) an older group with normal audiograms (45-60 years), some with complaints of tinnitus or self-reported speech intelligibility problems. We compared our results to a German cohort of 45 listeners, which also included older participants, to study how hearing sensitivity, EHFTs, and potential biomarkers of CS (EFRs and ABRs) changed as a function of age. The EFR reductions observed in the older group are consistent with age-related CS, and EFR markers were more sensitive to individual differences than ABR markers. At the same time, it remains challenging to determine whether EHF hearing or CS is more important in predicting individual speech intelligibility declines. Based on model simulations, our EFR marker was only marginally sensitive to outer-hair-cell deficits, and hence we believe that CS plays an important functional role even though both EHFTs and EFRs were affected by the aging process. We conclude that early markers of sensorineural hearing loss (EFRs or EHFTs) are crucial for a timely diagnosis of speech intelligibility problems with age.
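As an illustration of how an EFR marker is commonly quantified from EEG (a generic sketch with assumed parameters, not the authors’ pipeline), the response amplitude at the modulation frequency can be compared against a noise floor estimated from neighbouring FFT bins:

```python
import numpy as np

def efr_snr(epochs, fs, f_mod, n_noise_bins=10):
    """EFR strength: spectral amplitude at f_mod vs. neighbouring-bin noise.

    epochs: (n_trials, n_samples) EEG epochs time-locked to the stimulus.
    Returns the signal amplitude and its SNR in dB.
    """
    avg = epochs.mean(axis=0)                   # averaging suppresses noise
    spec = np.abs(np.fft.rfft(avg)) / avg.size
    freqs = np.fft.rfftfreq(avg.size, 1 / fs)
    k = np.argmin(np.abs(freqs - f_mod))        # bin at modulation frequency
    noise = np.r_[spec[k - n_noise_bins:k], spec[k + 1:k + 1 + n_noise_bins]]
    snr_db = 20 * np.log10(spec[k] / noise.mean())
    return spec[k], snr_db
```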

Acknowledgements: This work was supported by the European Research Council (ERC) under the Horizon 2020 Research and Innovation Programme (grant agreement No 678120 RobSpear and No 899858 CochSyn).

15:10 - 15:20A probe into problems and life effects in auditory neuropathy spectrum disorder using the ICF classification By Prashanth PrabhuAll India Institute of Speech and Hearing

Gayathri Kalarikkal1, G. K. Jayasree1, Malavika Puthiyadam1, Abdul Bahis, K. V. Nisha1, Prashanth Prabhu1
1Department of Audiology, All India Institute of Speech and Hearing, Mysore, India

Individuals with auditory neuropathy spectrum disorder (ANSD) show multi-faceted ramifications of the disorder spanning many facets of life (emotional, personal, and other health-related issues) that are not readily apparent in conventional audiological assessment. The aim of the study was to determine the problems and life effects experienced by individuals diagnosed with ANSD. The study followed a cross-sectional survey design with 12 adult ANSD participants. Responses to two open-ended questions were classified using the International Classification of Functioning, Disability and Health (ICF) framework. The first question concentrated on the problems related to having ANSD (the problem question, PQ) and the second on its life effects (the life-effects question, LEQ). All responses were linked to the corresponding ICF categories according to the ICF’s established linking rules, using a simplified content analysis approach (Granberg, 2015). A Wilcoxon signed-rank test revealed that participants reported more difficulties on the PQ (Z = -2.39, p = 0.02, effect size r = 0.36) than on the LEQ; 70 responses were obtained for the problem question and 48 for the life-effects question. Most of the problems and life effects confronting individuals with ANSD were linked to activity limitations and participation restrictions (63/119), followed by body functions (46/119), with a small number of responses related to environmental factors (9/119). The most frequent responses related to activity limitations and participation restrictions pertained to “Communicating with – receiving – spoken messages” (d310); for body functions, “hearing functions” (b230) occurred most often, and “general products and technology for personal use in daily living” (e1150) occurred most often under environmental factors. Individuals with ANSD encounter hearing problems together with life effects in terms of activity limitations, participation restrictions, and emotional and environmental factors, which may hinder their integration into society. The study emphasizes a person-centred approach providing a holistic model of rehabilitation for individuals with ANSD.

15:20 - 15:45Panel discussion
14:40 - 14:50Tempo-dependent neural synchronization to different music features By Kristin WeineckMax Planck Institute

Kristin Weineck1,2, Olivia Xin Wen1, Molly J. Henry2
1Research Group “Neural and Environmental Rhythms”, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
2Institute for Cell Biology and Neuroscience, Goethe University Frankfurt am Main, Frankfurt am Main, Germany

Neural activity in the auditory system can synchronize to the rhythms of sounds, and successful synchronization improves perception. In the context of natural sounds, most previous studies have investigated neural tracking of speech, whereas only a few have examined neural responses to natural (polyphonic) music. Like speech-tracking studies, music-tracking studies have often investigated neural tracking of the music amplitude envelope. We hypothesized that the envelope alone might not best capture the acoustic fluctuations in music that evoke neural synchronization. This study aimed to investigate 1) neural tracking of different music features, 2) the tempo-dependence of neural tracking, and 3) the correlation between neural tracking and behavioral responses to music. We conducted an EEG study in which 37 participants listened to music segments (without vocals) at parametrically varied rates (1-4 Hz). Each trial consisted of the presentation of one music stimulus (attentive listening, no movement), a partial repetition of the same stimulus (finger tapping to the beat), and behavioral music ratings (enjoyment, familiarity, and beat-tapping difficulty). We applied converging neural analyses based on 1) temporal response functions and 2) Reliable Components Analysis combined with stimulus–response coherence and correlation. Our results demonstrate that spectral changes in music (“spectral novelty”), as opposed to the amplitude envelope, evoke the most reliable synchronized neural response. Moreover, music with slower beat rates elicited the strongest neural synchronization. We also found that neural synchronization was stronger for familiar than for unfamiliar music. Furthermore, a classifier analysis revealed that neural responses to music presented at a single tempo predicted the tempo that the participant would later tap, i.e., whether they tapped the stimulus tempo vs. double or half that rate. Overall, our results indicate that there are tempo-dependent effects on neural synchronization during natural music listening and that spectral fluctuations in music may be critical for communicating the beat.
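“Spectral novelty” is closely related to spectral flux, i.e., the frame-to-frame increase in spectral magnitude; below is a minimal STFT-based sketch (window and hop sizes are assumptions, not the authors’ exact feature):

```python
import numpy as np
from scipy.signal import stft

def spectral_novelty(audio, fs, nperseg=2048, hop=512):
    """Half-wave-rectified spectral flux: sums positive changes in log
    magnitude between consecutive STFT frames."""
    _, _, Z = stft(audio, fs=fs, nperseg=nperseg, noverlap=nperseg - hop)
    logmag = np.log1p(np.abs(Z))
    diff = np.diff(logmag, axis=1)
    novelty = np.maximum(diff, 0.0).sum(axis=0)   # keep only energy increases
    return novelty / (novelty.max() + 1e-12)      # normalized novelty curve
```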

Acknowledgements: This work was funded by a European Research Council Starting Grant awarded to Molly Henry.

14:50 - 15:00Behavioral and neural correlates of intensity discrimination of masked tonal signals By Hyojin KimTechnical University of Denmark

Hyojin Kim1, Viktorija Ratkute1, Bastian Epp1
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark

Comodulated masking noise and interaural phase disparities (IPDs) can enhance the detection of masked signals. Such enhancement in detection performance, or masking release, can be quantified as decreased detection thresholds: comodulation masking release (CMR) and binaural masking level difference (BMLD). While many studies have investigated CMR and BMLD, the relevance of masking release at supra-threshold levels is still unclear. Here, we used both psychoacoustic and electrophysiological measures to investigate how CMR and BMLD affect intensity discrimination at supra-threshold levels.

We designed eight masking release conditions, each inducing a different amount of CMR and BMLD. In the psychoacoustic experiment, we investigated whether the difference in the amount of masking release would affect listening at supra-threshold levels. We used the intensity just-noticeable difference (JND) to quantify an increase in the salience of the tone: the salience of the tone increases when its level is raised by one JND, so a condition with a lower JND should be more salient than one with a higher JND at the same supra-threshold level. As a physiological correlate of the JND, we investigated late auditory evoked potentials (LAEPs) with electroencephalography (EEG). We measured P2 at supra-threshold levels from +15 to +25 dB. Our hypothesis was that the increment in P2 with increasing tone level would be inversely proportional to the JND.

From the psychoacoustic experiment, we found that JNDs depend strongly on the level of the tone: JNDs were equal at the same tone intensity rather than at the same supra-threshold level. The results from the EEG experiment showed an inverse correlation between the intensity JND and the amplitude of P2. In conclusion, the amplitude of P2 can reflect the salience of a masked tone at supra-threshold levels.

15:00 - 15:10Musical experience partially counteracts temporal speech processing deficits in putative mild cognitive impairment: A pilot study By Caitlin N. PriceUniversity of Arkansas for Medical Sciences

Caitlin N. Price1, Gavin M. Bidelman2,3,4
1Dept. of Audiology & Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, AR USA
2Institute for Intelligent Systems, University of Memphis, Memphis, TN USA
3School of Communication Sciences & Disorders, University of Memphis, Memphis, TN USA
4Dept. of Anatomy & Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN USA

Mild cognitive impairment (MCI) commonly impacts older adults, resulting in more rapid cognitive and behavioral decline than typical aging. Individuals with MCI can exhibit impaired receptive speech abilities that may reflect neurophysiological changes in auditory-sensory processing preceding the usual cognitive deficits. Benefits from current interventions targeting communication difficulties in MCI are limited. Yet neuroplasticity associated with musical experience has been implicated in improving neural representations of speech and offsetting age-related declines in perception. Here, we asked whether these experience-dependent effects of musicianship might extend to aberrant aging and offer some degree of cognitive protection against MCI. During a vowel categorization task, we recorded single-channel EEGs in older adults with putative MCI to evaluate speech encoding across subcortical and cortical levels of the auditory system. Critically, listeners varied in their duration of formal musical training (0-21 years). Older musicians exhibited sharpened temporal precision in auditory cortical responses, suggesting that musical experience produces more efficient processing of acoustic features by offsetting age-related neural delays. Additionally, we found that the robustness of brainstem responses predicted the severity of cognitive decline, suggesting that early speech representations are sensitive to pre-clinical stages of cognitive impairment. Our preliminary results extend prior studies by demonstrating positive benefits of musical experience in older adults with emergent cognitive impairments.

Acknowledgements: This work was supported by grants from the GRAMMY® Foundation and National Institutes of Health (NIH/NIDCD R01DC016267 and R01DC016267-03S1) awarded to G.M.B. Requests for materials should be addressed to G.M.B [gmbdlman@memphis.edu].

15:10 - 15:20Measuring human hearing with functional near-infrared spectroscopy: Test-retest reliability By Anaïs Bouchet Technical University of Denmark

Anaïs Bouchet1,2, Abigail A. Kressner1,2, Erik F. Kjærbøl2, Maaike Van Eeckhoutte1,2
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
2Copenhagen Hearing and Balance Center, Ear, Nose, and Throat (ENT) & Audiology Clinic, Rigshospitalet, Copenhagen University Hospital

Functional near-infrared spectroscopy (fNIRS) is a relatively recent imaging technique that presents many advantages for clinical use in audiology: it is silent, compatible with cochlear implants, and child-friendly, and it can measure cortical activation in response to acoustic stimuli. Most studies report results at the group level, with responses averaged across participants. For clinical applications, however, good test-retest reliability at the individual level is required. Furthermore, a challenge when using fNIRS is to remove physiological confounds such as the heartbeat, as well as motion artefacts, from the data. While various processing methods have been developed, the procedure to follow is not yet standardized. One promising approach for improving test-retest reliability is to use short-separation channels. The aim of this study was therefore to investigate the individual-level test-retest reliability of fNIRS responses evoked by auditory stimulation in normal-hearing participants. The fNIRS responses of fourteen participants were recorded in response to silence, a speech-shaped modulated noise, a speech passage, and a music sample. In addition, the speech-shaped noise was presented at two different intensities. The stimuli were presented in a block paradigm, and the same testing procedure was repeated after an interval of one week. Preliminary results show high variability in the test-retest reliability of the fNIRS response across individuals. The results will be discussed further at the conference.
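A common way to exploit short-separation channels is to regress the superficial signal out of each long channel; here is a minimal least-squares sketch (illustrative only, not the authors’ processing pipeline):

```python
import numpy as np

def short_channel_regression(long_ch, short_ch):
    """Remove superficial physiology measured by a short channel.

    long_ch, short_ch: 1-D time series (e.g., HbO concentration).
    Returns the long-channel residual after projecting out the
    best-fitting scaled short-channel signal (plus an offset).
    """
    X = np.column_stack([short_ch, np.ones_like(short_ch)])
    beta, *_ = np.linalg.lstsq(X, long_ch, rcond=None)
    return long_ch - X @ beta
```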

15:20 - 15:45Panel discussion
14:40 - 14:50Modelling changes in the process of audiovisual integration By Christian J. SumnerNottingham Trent University

Samuel Smith1,2, Christian J. Sumner1, Thom Baguley1, Paula C. Stacey1
1NTU Psychology, Nottingham Trent University, Nottingham, U.K.
2Hearing Sciences, University of Nottingham, Nottingham, U.K.

The comprehension of speech, whether with normal hearing or aided, is often supplemented by watching a talker’s facial movements. How are auditory and visual cues combined to provide a speech benefit? Does audiovisual performance depend only on the unimodal information, or does the integration process itself vary? Does the integration vary depending on different aspects of the stimulus, or do individuals “integrate differently”? We developed a model based on signal detection theory (SDT) which allows us to test these possibilities quantitatively. Signal detection theory posits that performance is limited by sensory noise in the signal and by internal noise in the nervous system. The benefit of multiple cues depends on whether internal noise arises in unimodal processing or in later processing, after multisensory integration (Micheyl and Oxenham, 2012; J Acoust Soc Am. 131:3970). We propose a model whereby the proportions of unimodal (“early”) and post-integration (“late”) noise can be estimated from unimodal and multisensory performance. This allows us to test whether differences in multisensory performance across experimental variables are best explained by variations in unisensory performance or reflect a varying integration process. In previously published data (Stacey et al. 2016; Hear Res. 336:17), we found that SDT provided a good account of audiovisual speech perception overall. However, previous models were restricted to a single source of noise, and neither the unisensory- nor the multisensory-noise model predicted the data quantitatively. Our new model, which combines unisensory and multisensory internal noise, fits these data precisely. Furthermore, we find that the integration process shifts towards later multisensory internal noise when the temporal fine structure of speech is removed by tone-vocoding. Thus, we can quantify auditory-visual speech perception as an optimal integration of information with multiple sources of internal noise, where the integration itself varies depending on the unisensory signals.
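The two limiting cases of this SDT framework (cf. Micheyl and Oxenham, 2012) are compact enough to write down; the sketch below shows these standard bounds, not the authors’ full mixed-noise model, which interpolates between them.

```python
import numpy as np

def dprime_av_early(d_a, d_v):
    """All internal noise early (independent per modality): optimal
    combination adds sensitivities quadratically."""
    return np.hypot(d_a, d_v)

def dprime_av_late(d_a, d_v):
    """All internal noise late (shared, after integration): the cue means
    add while the common noise does not, so sensitivities add linearly."""
    return d_a + d_v

# Example: equal unimodal sensitivities d' = 1
print(dprime_av_early(1.0, 1.0))  # ~1.41
print(dprime_av_late(1.0, 1.0))   # 2.0
```

A measured audiovisual d′ lying between these bounds can then be used to estimate the relative proportions of early and late noise, which is the spirit of the model described above.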

14:50 - 15:00Increasing the ecological validity of speech intelligibility measures using conversational speech and comprehension-targeted questions By Martha M. ShiellEriksholm Research Centre

Martha M. Shiell1, Sergi Rotger-Griful1, Martin Skoglund1,2, Johannes Zaar1,3, Gitte Keidser1
1Eriksholm Research Centre, Oticon A/S, DK-3070 Snekkersten, Denmark
2Department of Electrical Engineering, Linköping University, Linköping, Sweden
3Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark

Although hearing aids can be effective at restoring sensory input, some users still struggle to benefit from their devices in real-world communication situations. To better understand and address the needs of these users, we need test paradigms that can capture more ecologically valid outcome measures. In this presentation, we will describe a range of recent efforts from Eriksholm Research Centre in which we explored methods to develop such paradigms, and we will share recommendations for continued work in this direction. Our efforts spanned seven independent experiments with various groups of hearing-impaired and normal-hearing participants. Over these experiments, we advanced traditional speech intelligibility tests with two additions: (1) stimuli that reflected some real-world complexity, and (2) an accuracy measure aimed at capturing the listener’s understanding of the speech material (i.e., comprehension). The stimuli were audiovisual recordings of three unscripted talkers, two of whom engaged in a Diapix task while the third improvised a monologue. This high level of realism produced the expected challenges to experimental control, but also produced high participant engagement – to the extent that engagement may even have distracted somewhat from the experimental tasks. The accuracy measure was calculated from responses to questions on the speech content. Three styles of questions, and iterations thereof, were implemented, in which we targeted the listener’s comprehension while avoiding formulations that rewarded word-recognition strategies. We speculate that, as a side effect of this strategy, our question-response systems placed increased demands on reading, semantic abstraction, and working memory. Overall, our experiences emphasize the necessity of developing stimuli that are tailored to the desired task. Furthermore, they highlight the challenges associated with measuring comprehension via extensions of traditional speech intelligibility tests. As such, we suggest that new innovations in behavioural testing may be required for more ecologically valid outcome measurements.

Acknowledgements: This work was financially supported by the Swedish Research Council (Vetenskapsrådet, grant VR 2017-06092, “Mekanismer och behandling vid åldersrelaterad hörselnedsättning” [Mechanisms and treatment of age-related hearing loss]).

15:00 - 15:10Bringing ecological validity to the technical evaluation of hearing aids By Cosima A. ErmertRWTH Aachen University

Cosima A. Ermert1, Lu Xia2, Brian Man Kai Loong2, Janina Fels1, Sébastien Santurette2,3
1Institute for Hearing Technology and Acoustics, RWTH Aachen University, Aachen, Germany
2Centre for Applied Audiology Research, Oticon A/S, Smørum, Denmark
3Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark

Focusing on target signals in noisy environments is a well-known challenge for hearing-impaired listeners. Therefore, a main goal of hearing aid (HA) technology is to enhance target signals and attenuate background noise. An established objective measure to quantify the contrast achieved by the HA is the output signal-to-noise ratio (SNR), commonly estimated using the phase-inversion method (Hagerman and Olofsson, 2004). Output SNR measurements are typically performed in artificial lab setups where target and noise signals are played from discrete directions. While such setups can be controlled and modified precisely, they do not adequately represent everyday listening situations. This study investigated the potential and limitations of bringing ecological validity to SNR measurements by using ambisonic reproduction of real sound scenarios as stimuli. Measurements with HAs using different directionality and noise-reduction strategies were performed in multiple pre-recorded 3D scenes. While this allowed a comparison of how the different strategies handled real-life background sounds against specific target sounds, it also introduced challenges for the phase-inversion method. Modern HAs rely increasingly on non-linear processing to adapt to different scenarios, whereas the phase-inversion method is most accurate under linear processing conditions. Thus, some HA features used for, e.g., feedback management may introduce errors into SNR measurements. The present results underlined the importance of defining an acceptance criterion for the error signal in output SNR measurements. They provided insights into how HA settings must be chosen to eliminate distortions while still measuring the devices at their full potential. When this is considered, measuring HA output signals under realistic conditions can have further useful applications for objective or subjective evaluation. As the increasing complexity of signal processing algorithms challenges established methods for assessing HAs, it is necessary to discuss which considerations must be made to ensure ecologically valid and future-proof evaluation techniques.
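The phase-inversion method itself is compact: the scene is processed twice, once with the noise polarity-inverted at the input, and the processed speech and noise are recovered from the sum and difference of the two output recordings. This is valid only under linear, time-invariant processing, which is exactly the limitation discussed above. A minimal sketch:

```python
import numpy as np

def phase_inversion_snr(y1, y2):
    """Hagerman & Olofsson (2004) separation of a HA output recording.

    y1: output for speech + noise; y2: output for speech - noise
    (noise polarity-inverted at the input). Linear processing is
    assumed; nonlinear HA features violate this assumption.
    """
    s_est = 0.5 * (y1 + y2)                    # processed speech estimate
    n_est = 0.5 * (y1 - y2)                    # processed noise estimate
    snr_db = 10 * np.log10(np.sum(s_est**2) / np.sum(n_est**2))
    return snr_db, s_est, n_est
```

The error signal mentioned in the abstract quantifies how badly the linearity assumption is violated, which motivates the acceptance criterion the authors discuss.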

Acknowledgements: The authors would like to thank Jens-Christian Britze Kijne and Boris Søndersted for lab setup, Alexandros Rompelakis for ambisonic sound environment rendering, Asger Heidemann Andersen and Jacob Aderhold for guidance on SNR measurements and post analysis.

15:10 - 15:20Data-driven optimization of parameterized head related impulse responses for the implementation in a real-time virtual acoustic rendering framework By Fenja SchwarkUniversität Oldenburg

Fenja Schwark1, Stephan D. Ewert1, Marc René Schädler1, Volker Hohmann1,2, Giso Grimm1,2
1Medizinische Physik, Universität Oldenburg, and Cluster of Excellence “Hearing4all”, Oldenburg, Germany
2HörTech gGmbH, Oldenburg, Germany

In real-time virtual acoustic rendering, the head-related directional properties of the receiver, i.e., the listener, are often modeled by convolving the signal with measured head-related impulse responses (HRIRs). However, the computational cost of HRIR convolution is rather high, even when implemented in the spectral domain, and interpolation needs to be applied to simulate all source directions, depending on the spatial resolution of the HRIR catalogues used. To reduce the computational cost in low-delay real-time virtual acoustic rendering, this study uses a parameterized digital filter model with delay lines to approximate the direction-dependent features of the head. A data-driven optimization method for the filter parameters is introduced that aims at matching the direction-dependent features of modeled and measured HRIRs using a spectral distance metric. Using an objective binaural speech intelligibility model, it was shown that the speech intelligibility estimate for the optimized model approaches that for the measured HRIRs. This suggests that the parameterized HRIR model may be sufficient to enable plausible spatial perception in virtual acoustic scenes. With the parameterized HRIR model, a reduction in computational cost of about two orders of magnitude is possible for virtual acoustic scenes with a small number of objects. Further work will include subjective testing of the model against measured HRIRs.
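To give a flavour of what a parameterized delay-line model can look like, here is a deliberately crude spherical-head sketch (Woodworth ITD plus an integer-sample delay line); all constants are assumptions, and the study’s optimized filter model is considerably more sophisticated.

```python
import numpy as np

def woodworth_itd(azimuth_deg, a=0.0875, c=343.0):
    """Spherical-head ITD (s) for a far source; a = head radius (m)."""
    th = np.deg2rad(azimuth_deg)               # 0 deg = front, +90 = right
    return (a / c) * (th + np.sin(th))

def render_itd(mono, fs, azimuth_deg):
    """Pan a mono signal using only an interaural delay line."""
    itd = woodworth_itd(abs(azimuth_deg))
    d = int(round(itd * fs))                   # integer-sample delay, no interp.
    delayed = np.concatenate([np.zeros(d), mono])[: mono.size]
    # Positive azimuth: source on the right, so the LEFT ear is delayed.
    left, right = (delayed, mono) if azimuth_deg > 0 else (mono, delayed)
    return left, right
```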

Acknowledgements: Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 352015383 – SFB 1330 B1 and C5.

15:20 - 15:45Panel discussion
  • Podium: Cognitive aging
  • Parallel: Psychoacoustics & Perception II
  • Parallel: Hearing aid evaluation
  • Parallel: Auditory modelling
  • Parallel: CIs and BAHS
13:00 - 13:05Introduction
13:05 - 13:35Understanding why performance on “cognitive” communication tasks often reflects sensory integrity By Barbara Shinn-CunninghamCarnegie Mellon University

Barbara Shinn-Cunningham1
1Neuroscience Institute, Carnegie Mellon University, Pittsburgh PA, United States

Everyday communication depends on an interplay between coding the auditory and visual signals reaching the ears and eyes and modulating the information contained in these signals through cortical networks for attention, working memory, and language processing. It is well established that aging affects both sensory coding and cognitive processes. However, teasing apart whether communication problems are due to sensory versus cognitive issues can be difficult. As an example, many middle-aged listeners have problems understanding speech in the presence of competing sound sources, even when they have no clinically quantifiable sensory deficits. Moreover, it is surprisingly difficult to identify simple psychophysical tasks that relate to difficulties understanding speech amidst competing sounds in middle-aged listeners without identifiable hearing loss. Together, this pattern of abilities seems to suggest that cognitive, rather than sensory, deficits are the root cause. We argue, however, that subtle sensory deficits (such as cochlear synaptopathy) may be to blame. Specifically, in naturalistic settings, sensory coding can impact how quickly a listener can focus on one sound source, extract information from that source, store meaning in memory, and switch attentional focus: all processes that are not exercised by simpler tasks. Thus, performance on “cognitive” tasks depends directly on sensory fidelity in complex, but not simple, listening scenarios, a realization that has important implications for the diagnosis of communication difficulties.

13:35 - 13:55Selective attention explains the variance in cochlear implant users’ speech in noise performance By Jae Hee LeeUniversity of Iowa

Jae Hee Lee1, Mallory Orr1, Hwan Shim1, Inyong Choi1
1Communication Sciences & Disorders, University of Iowa, Iowa, United States of America

Auditory selective attention is a crucial mechanism for understanding speech in everyday environments. Top-down selective attention allows expectations to enhance the neural representation of sounds collected by the auditory sensory system. As most cochlear implant (CI) users struggle to recognize speech in noise, it is imperative to understand whether CI users exhibit auditory selective attention activity that involves modulation of neural responses to target speech, and whether such attentional ability predicts their speech-in-noise performance. Our experiment was designed to assess the strength of attentional modulation within the human auditory pathway. Participants with normal hearing (NH) and with CIs were given a pre-stimulus visual cue that directed their attention to one of two sequences in stationary background noise and were asked to select a deviant syllable in the target stream. Because the syllable timings in the two streams were misaligned, we could examine the event-related potential (ERP), a proxy for attentional modulation, in response to each syllable in the stream. We hypothesized that the amplitude of the ERPs would be greater when a syllable was attended, if either group was capable of employing auditory selective attention, and that the difference in ERP amplitude between attended and unattended trials would predict performance in a speech-in-noise task. Our analysis showed that the amplitude of ERPs for attended syllables was greater than that for unattended syllables in the CI subjects, demonstrating that attention modulates CI users’ cortical responses to sounds. Moreover, the strength of attentional modulation showed a significant correlation with the same CI users’ speech-in-noise performance. The difference between ERP amplitudes for attended and unattended syllables was present but weaker in NH subjects. These results show that the attentional modulation of cortical auditory evoked responses provides a valuable neural marker for predicting CI users’ success in real-world communication.

13:55 - 14:10Break
14:10 - 14:40Hearing loss and dementia By Frank R. LinJohns Hopkins University

Frank R. Lin1

1Johns Hopkins University

Age-related hearing loss in older adults is often perceived as being an unfortunate but relatively inconsequential part of aging. However, the broader implications of hearing loss for the health and functioning of older adults are now beginning to surface in epidemiologic studies. This lecture will discuss recent epidemiologic research demonstrating that hearing loss is independently associated with accelerated cognitive decline, incident dementia, and brain aging. Mechanisms through which hearing loss may be causally linked with cognitive decline and dementia will be discussed as well as gaps in our current scientific knowledge. Current studies investigating the impact of hearing rehabilitative interventions on reducing cognitive decline and the risk of dementia in older adults will be explained and discussed.

Objectives

  1. To describe the mechanisms through which hearing loss may be related to risk of cognitive decline and dementia in older adults
  2. To discuss the epidemiological evidence demonstrating associations of hearing loss and dementia
  3. To explain gaps in our scientific knowledge of the relationship between hearing loss and dementia
14:40 - 15:45Parallel sessions

Find more details by clicking on the ‘Parallel’ tabs above.

15:45 - 16:00Break
16:00 - 16:20Does neural tracking of continuous speech indicate active distractor suppression? By Martin OrfUniversity of Lübeck

Martin Orf1, Ronny Hannemann2, Malte Wöstmann1, Jonas Obleser1
1Department of Psychology, University of Lübeck, Lübeck, Germany
2Audiological Research Unit, WS Audiology-Sivantos GmbH, Erlangen, Germany

A listener’s ability to deal with challenging multi-talker situations hinges on their attentional resources. While the neural implementation of target enhancement is comparatively well understood, the processes that enable distractor suppression are less clear. Typically, distractor suppression is quantified by the difference between the behavioural or neural response to distractors versus targets. However, such a difference can be driven by target enhancement, by distractor suppression, or by a combination of the two. Here, we designed a continuous speech paradigm to differentiate target enhancement (enhanced tracking of target versus neutral speech) from active distractor suppression (suppressed tracking of distractor versus neutral speech). In an electroencephalography (EEG) study, participants (N = 19) had to detect short repeats in the to-be-attended speech stream and ignore them in the two other speech streams, while also attending to the content of the to-be-attended stream. The ignored speech stream had been task-relevant (to-be-attended) in the previous trial and was task-irrelevant in the present trial; the neutral speech stream was always task-irrelevant. We used phase-locking of the EEG signal to the speech envelopes to investigate neural tracking via the temporal response function of the brain. Behavioural detection of repeats indicated the suitability of the paradigm for separating processes of attending and ignoring. Sensitivity of behavioral responses according to Signal Detection Theory revealed that the internal separation of attended versus neutral speech was larger than that of attended versus ignored speech. Neurally, the attended stream showed a significantly enhanced tracking response compared to neutral and ignored speech. Unexpectedly, neural tracking did not reveal sizeable differences between neutral and ignored speech. In sum, the present results show that the cognitive system processes to-be-ignored speech distractors differently from neutral speech, but this is not accompanied by active distractor suppression in the neural speech tracking response.
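At its core, neural tracking via the temporal response function is a regularized regression from time-lagged stimulus features to the EEG; below is a compact forward-TRF sketch (the lag range and regularization value are illustrative, not the authors’ exact analysis):

```python
import numpy as np

def trf_ridge(envelope, eeg, fs, tmin=0.0, tmax=0.4, lam=1e2):
    """Estimate a forward TRF by ridge regression.

    envelope: stimulus envelope (n_samples,); eeg: one EEG channel
    (n_samples,), both sampled at fs. Returns lag times and TRF weights.
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    n = envelope.size
    X = np.zeros((n, lags.size))
    for j, L in enumerate(lags):               # time-lagged design matrix
        if L >= 0:
            X[L:, j] = envelope[: n - L]
        else:
            X[: n + L, j] = envelope[-L:]
    w = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ eeg)
    return lags / fs, w
```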

Acknowledgements: We thank the WS Audiology – Sivantos GmbH for supporting this research.

16:20 - 16:50Plasticity in the aging brain? Interplay with performance and hypotheses By Diane LazardInstitut Pasteur

Luc Arnal1, Diane Lazard1,2
1Institut de l’Audition, Institut Pasteur, Paris, France
2ENT surgery department, Institut Arthur Vernes, Paris, France

Mature brains are able to adapt to acquired sensory dysfunction. In case of hearing loss and oral communication difficulties, postlingually deaf adults follow two main strategies. One relies on a left-lateralized, physiological, but relatively slow analytical pathway maintained by efficient lipreading skills. The second consists of engaging the right hemisphere in an accelerated, non-speech-dedicated network. This reorganization is efficient and rapid thanks to direct interaction with Broca’s area, bypassing the regular phonological steps, and relies on accelerated reading abilities. The first strategy is beneficial in case of hearing rehabilitation (cochlear implantation), while the second, though relevant during the period of auditory deprivation, turns maladaptive. This active plasticity was evidenced in adults younger than 65 years. Cochlear implantees older than 65 years perform significantly worse 2 years after surgery than a control sample aged 17-40 years. The reasons are still hypothetical. Among them, the relation between severe hearing loss in aging brains and dementia, in particular Alzheimer’s disease, is questioned. Moreover, modification of sensitivity to salient events presented at 40 Hz (i.e., belonging to the roughness range) appears to be a promising potential biomarker of Alzheimer’s disease. Thus, combining audiological and neurophysiological screening in the presymptomatic phase of Alzheimer’s disease may help prevent cognitive decline through personalized interventions.

Acknowledgements: La Fondation pour l’Audition supports this work at Institut de l’Audition, Institut Pasteur.

14:40 - 14:50Evaluation of two adaptive maximum likelihood methods for measuring frequency discrimination thresholds in naive participants By Moussa KousaAmerican University of Beirut

Moussa Kousa1, Julien Besle1
1Department of Psychology, American University of Beirut, Beirut, Lebanon

We evaluated two adaptive methods for fast and reliable measurement of frequency difference limens (DLFs) at multiple base frequencies in naive participants. Our ultimate goal is to correlate these DLF functions with cortical frequency magnification functions measured in the participants’ primary auditory cortex using fMRI. We first measured DLFs at 8 log-spaced base frequencies [0.2-8 kHz] using the maximum likelihood procedure (MLP; [1]) in 10 participants (2I2AFC task; 5-6 blocks of 30 trials per frequency). The MLP provided unreliable thresholds that varied between consecutive blocks of the same frequency, due to the MLP’s susceptibility to lapses of attention early in a block. Naive participants also found the MLP’s sudden changes in difficulty in the very first trials confusing. We then switched to the Updated Maximum Likelihood procedure (UML; [3]), which was developed to address these weaknesses of the MLP by simultaneously and adaptively estimating three parameters of the psychometric function (midpoint, slope, and lapse rate) and by increasing difficulty more gradually. To verify the validity of the UML thresholds, we compared them to thresholds obtained using the method of constant stimuli (CS) in 14 participants (8S2A task [2]; 8 log-spaced base frequencies between 0.2 and 8 kHz; 2 blocks of 75 trials per frequency per session; session 1: UML, sessions 2&3: UML or CS, counterbalanced across participants). Preliminary results in six participants suggest that the UML and CS procedures yield very similar thresholds and that only 75 trials are necessary to obtain reliable thresholds within a session, but that most participants’ thresholds improved substantially between sessions 1 and 2, suggesting that minimal training remains necessary to obtain accurate thresholds in naive participants.
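
As an illustration of the maximum-likelihood principle shared by the MLP and UML, the sketch below simulates a short 2AFC track: a likelihood over candidate psychometric-function midpoints is updated after each trial, and the next frequency difference is placed near the current best estimate. All parameters (slope, lapse rate, trial count, simulated listener) are illustrative assumptions, not those of [1] or [3].

```python
# Minimal maximum-likelihood adaptive track for a simulated 2AFC
# frequency-discrimination task (not the authors' implementation).
import numpy as np

rng = np.random.default_rng(1)
candidates = np.logspace(-1, 2, 200)   # candidate DLF midpoints (% of base f)

def p_correct(delta, midpoint, slope=1.5, guess=0.5, lapse=0.02):
    """Logistic psychometric function for 2AFC (chance = 0.5)."""
    p = 1.0 / (1.0 + (midpoint / delta) ** slope)
    return guess + (1.0 - guess - lapse) * p

true_dlf = 2.0                 # simulated listener's true DLF (%)
loglik = np.zeros_like(candidates)
delta = 20.0                   # first trial: easy frequency difference

for trial in range(30):
    correct = rng.random() < p_correct(delta, true_dlf)   # simulated response
    p = p_correct(delta, candidates)
    loglik += np.log(p if correct else 1.0 - p)           # Bayesian-style update
    m_hat = candidates[np.argmax(loglik)]                 # current ML midpoint
    # Place the next trial near the ~75%-correct "sweet point" of m_hat.
    inner = (0.75 - 0.5) / (1.0 - 0.5 - 0.02)
    delta = m_hat / ((1.0 / inner - 1.0) ** (1.0 / 1.5))

print(f"ML threshold estimate after 30 trials: {m_hat:.2f}% (true: {true_dlf}%)")
```

A lapse on an early trial pulls the likelihood, and hence the placement of subsequent trials, far off course when only the midpoint is estimated; estimating the lapse rate as a free parameter, as the UML does, is what makes the track robust to this.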

14:50 - 15:00Noise edge pitch and histogram of interpeak intervals By Václav VencovskýCzech Technical University in Prague

Václav Vencovský1
1Department of Radioelectronics, Czech Technical University in Prague, Czech Republic

Broadband noise with a sharp falling edge in the power spectrum evokes a monaural noise edge pitch. When matched against a pure tone, the same pitch sensation is evoked by a tone shifted slightly into the spectral region of the noise. In the frequency region between 100 Hz and 2.5 kHz, the matching pure-tone frequency is about 2% to 9% below the noise edge frequency for lowpass (LP) filtered noise and about 2% to 20% above the noise edge frequency for highpass (HP) filtered noise [Hartmann et al. (2019) J. Acoust. Soc. Am. 145:1993-2008]. The departure from the edge frequency grows as the edge approaches low frequencies. The noise edge pitch can be explained by temporal theories calculating autocorrelation or by a place theory employing lateral inhibition. It is shown here that the noise edge pitch can also be derived from the first-order intervals between successive peaks in the temporal fine structure of the noise filtered with a gammatone filterbank. The filterbank was composed of 300 filters distributed between 20 Hz and 10 kHz according to the Cam scale. A histogram of the first-order interpeak intervals reveals a hump centered around the noise edge pitch frequency. The hump is more visually apparent for LP noise than for HP noise as the edge frequency approaches low frequencies (below about 200 Hz), which corroborates experimental results in the literature. At 200 Hz, the ratio between the pitch frequency, estimated as the center of the hump, and the edge frequency is about 1.1, which agrees with the literature. These results support temporal theories of pitch perception employing first-order interspike intervals [Huang and Rinzel (2016) Front. Comput. Neurosci. 10:57(1-17)].

Acknowledgements: Supported by an internal grant at the Czech Technical University in Prague SGS20/180/OHK3/3T/13.
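
The interval analysis described above can be sketched as follows; a Butterworth bandpass bank stands in for the 300-channel Cam-scaled gammatone filterbank of the study, and all parameters are illustrative.

```python
# Sketch of a first-order interpeak-interval analysis of lowpass noise.
# A crude Butterworth "filterbank" replaces the study's gammatone bank.
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

fs = 20000
rng = np.random.default_rng(2)
noise = rng.standard_normal(fs)            # 1 s of broadband noise

edge = 500.0                               # lowpass-noise edge frequency (Hz)
sos = butter(8, edge, btype="low", fs=fs, output="sos")
lp_noise = sosfiltfilt(sos, noise)

intervals = []
for cf in np.geomspace(100, 2000, 40):     # toy bank of center frequencies
    band = butter(2, [cf * 0.85, cf * 1.15], btype="band", fs=fs, output="sos")
    out = sosfiltfilt(band, lp_noise)
    peaks, _ = find_peaks(out)             # peaks of the temporal fine structure
    intervals.extend(np.diff(peaks) / fs)  # first-order interpeak intervals (s)

hist, edges = np.histogram(intervals, bins=np.linspace(0.0005, 0.01, 120))
mode_interval = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(f"histogram mode ~ {1.0 / mode_interval:.0f} Hz (noise edge: {edge:.0f} Hz)")
```

Note that the abstract reads the pitch estimate from the center of the histogram’s hump; the raw mode printed here is only a crude stand-in for that estimate.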

15:00 - 15:10Impact of background noise on methods for categorical loudness scaling By Palle RyeAalborg University

Palle Rye1, Dorte Hammershøi1
1Department of Electronic Systems, Aalborg University, Aalborg, Denmark

The initial, crucial step in hearing rehabilitation is a reliable diagnosis of hearing impairment. Pure-tone threshold measurement in a quiet clinical setting is the current gold standard for this purpose. Its reliability depends on a quiet environment, especially when the test subject’s thresholds are near normal. As a relatively novel procedure for characterizing individual hearing, ISO 16832 defines a reference method for categorical loudness scaling, in which the test subject rates the subjective loudness of a set of narrowband auditory stimuli that are adapted during the test to cover the individual’s full dynamic range. The response is a subjective rating on an 11-point scale from not heard to extremely loud. In the outlined reference method, most of the presented stimuli are above threshold by design, ideally with only one or two presentations rated as not heard. The research behind the reference method suggests that the threshold can be reliably predicted by fitting an appropriate model to the collected set of responses, even when the presented stimuli are mainly above threshold. The present study investigates the effects of performing categorical loudness scaling in a noisy environment. Simulations of background noise added to an existing dataset from a clinical setting illustrate the difficulties in estimating auditory thresholds and loudness growth. For test subjects with elevated thresholds, the stimulus levels may often be sufficiently above the environmental background noise that little or no influence is expected. At low audiometric frequencies, however, the predicted hearing threshold is likely to deviate by more than 5 dB for a significant number of patients.

Acknowledgements: This work was supported by Innovation Fund Denmark Grand Solutions 5164-00011B (Better hEAring Rehabilitation project), Oticon, GN Resound, Widex, and other partners (Aalborg University, University of Southern Denmark, the Technical University of Denmark, Force, Aalborg, Odense and Copenhagen University Hospitals). The funding and collaboration of all partners are sincerely acknowledged.
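
The masking problem the study simulates can be caricatured in a few lines: generate categorical loudness ratings from an assumed linear loudness-growth function, discard presentations masked by a background-noise floor, and compare fitted thresholds. The numbers are invented for illustration and do not reflect the clinical dataset or the ISO 16832 model.

```python
# Toy simulation: a noise floor removes low-level presentations, biasing the
# threshold read from a straight-line fit to the categorical ratings.
import numpy as np

rng = np.random.default_rng(3)
true_threshold = 20.0                      # dB HL, simulated listener
slope = 0.5                                # categorical units (CU) per dB

def rate(levels, noise_floor=None):
    cu = np.clip(slope * (levels - true_threshold), 0, 50)
    cu += rng.normal(0, 2.5, size=levels.size)      # response scatter
    if noise_floor is not None:
        cu[levels < noise_floor] = 0       # masked presentations: "not heard"
    return np.clip(cu, 0, 50)

levels = rng.uniform(10, 100, 60)
for floor, label in [(None, "quiet"), (35.0, "noisy room")]:
    cu = rate(levels, floor)
    heard = cu > 0
    a, b = np.polyfit(levels[heard], cu[heard], 1)  # CU = a*L + b
    print(f"{label:10s} estimated threshold: {-b / a:5.1f} dB "
          f"(true: {true_threshold})")
```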

15:10 - 15:20The influence of bilingualism and hearing loss on cognition in older adults By Katrien VermeireLong Island University

Katrien Vermeire1, Adrian Fuente2
1Long Island University, New York City, USA
2Institut Universitaire de Geriatrie de Montreal Research Center, Montreal, Canada

Healthy aging is a priority as the proportion of older adults is drastically increasing. In developed regions, the population aged 60 or over is expected to nearly double in 50 years. The prevalence of cognitive decline and dementia increases with age. Age-related hearing impairment (ARHI) is a potential contributor to cognitive decline in older adults and has been shown to relate to poor cognitive performance and dementia. Considering the high prevalence of ARHI and its impact on cognitive performance, there is a need to find protective factors against the effect of ARHI on cognitive performance.

Today, more of the world’s population is bilingual or multilingual than monolingual. Bilingualism has positive effects on cognition and bilingual older adults experience less cognitive decline. However, previous studies on the relationship between bilingualism and cognition have not controlled for ARHI and thus it cannot be determined if bilingualism has a protective effect on cognitive performance in older adults with ARHI.

The aim of this study is to determine if bilingualism offers protection against the negative effect of ARHI on cognition. We hypothesize that bilingual older adults will have better cognitive capacities than their monolingual peers with comparable hearing.

In this study, we will compare cognitive capacities between two groups of older adults (>65 years) with ARHI who do not use hearing aids. Individuals with hearing thresholds in the mild-to-moderate hearing loss range are considered potential participants. Participants complete two working memory tests, the Reading Span Test and the Corsi Block Tapping Test, to assess cognitive abilities. Results are collected and analyzed to determine whether bilingual older adults have better cognitive abilities than monolingual older adults with comparable hearing.

15:20 - 15:30Broadband amplification as tinnitus treatment By Mie L. JørgensenTechnical University of Denmark

Mie L. Jørgensen1,2, Petteri Hyvärinen1, Sueli Caporali2, Torsten Dau1
1Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
2WS Audiology, Lynge, Denmark

The objective of the study was to investigate the effect of broadband amplification (125 Hz to 10 kHz) as a tinnitus treatment for subjects with high-frequency hearing loss, and to compare these effects with an active placebo condition using band-limited amplification (125 Hz to 3-4 kHz). The study was a double-blinded crossover study. 22 subjects with a high-frequency (≥ 3 kHz) hearing loss and chronic tinnitus were included, and 17 completed the full treatment protocol. Two different hearing aid treatments were provided for 3 months each: broadband amplification, providing gain in the frequency range from 125 Hz to 10 kHz, and band-limited amplification, providing gain only in the low-frequency range (≤ 3-4 kHz). The effect of the two treatments on tinnitus distress was evaluated with the tinnitus handicap inventory (THI) and tinnitus functional index (TFI) questionnaires. The effect of the treatment on tinnitus loudness was evaluated with a visual analog scale (VAS) for loudness and a psychoacoustic loudness measure. Furthermore, tinnitus annoyance was evaluated with a VAS for annoyance. A statistically significant difference was found between the two treatments (broadband vs. band-limited amplification) for the treatment-related change in THI and TFI with respect to baseline. Furthermore, a statistically significant difference was found between the two treatment conditions for the annoyance measure. Regarding the loudness measures, no statistically significant differences were found between the treatments, although there was a trend towards a lower VAS-based loudness rating with broadband amplification. No changes in tinnitus pitch were observed between the conditions. Overall, the results of the present study suggest that tinnitus patients with high-frequency hearing loss can experience a decrease in tinnitus-related distress, annoyance, and loudness from high-frequency amplification.

15:30 - 15:45Panel discussion
14:40 - 14:50Comparison of an open-source hearing aid prototype with commercially available hearing aids By Lukas JürgensenUniversität zu Lübeck

Lukas Jürgensen1,2, Hendrik Husstedt2, Florian Denk2
1Universität zu Lübeck, Lübeck, Germany
2German Institute of Hearing Aids, Lübeck, Germany

As medical devices, hearing aids compensate for a hearing loss but also include several interacting features such as directional microphones, noise reduction, or feedback reduction. Detailed knowledge of and access to these features would be beneficial for researchers. However, the degree of insight into, and control over, the signal processing strategies of commercially available hearing aids is often quite limited. One approach that could possibly be used as a research hearing aid is the open Master Hearing Aid (openMHA), which has recently been made available as an open-source tool. In combination with the Portable Hearing Lab (PHL), a portable miniature computer to which realistic hearing aid headsets can be connected, the openMHA can be seen as a full hearing aid prototype. In this contribution, the frequency-dependent gains, the latencies, the performance of the feedback reduction, and the characteristics of the hearing aid channels of the PHL, equipped with BTE-RIC headsets and running a standard openMHA configuration, were measured in a test-box setup, and partly in a KEMAR setup as well. The test-box results were also compared to those achieved with two technology levels of a recent commercial hearing aid series. The results show functional feedback reduction for both the prototype and the commercial hearing aids. The latencies of the prototype were 4 ms longer, and the maximum provided gain was 10 dB higher, compared to the commercial devices. The results regarding the hearing aid channels show different strategies used in the tested devices, without indicating better or worse performance in either of them. All in all, the hearing aid prototype performed at a level comparable to the commercial hearing aids for the assessed metrics and can thus be viewed as a fully functional hearing aid for research purposes.
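
One of the measurements listed above, device latency, is commonly estimated by cross-correlating the test-box input and output signals. A minimal sketch follows; the signals are simulated and the 4-ms delay is injected purely as an example value.

```python
# Latency estimation by cross-correlation of a noise probe with the
# device output (simulated here; not the institute's measurement setup).
import numpy as np

fs = 44100
rng = np.random.default_rng(7)
probe = rng.standard_normal(fs // 2)               # 0.5-s noise probe
delay = int(0.004 * fs)                            # simulate a 4-ms latency
recorded = np.concatenate([np.zeros(delay), 0.8 * probe])[: probe.size]
recorded += 0.01 * rng.standard_normal(probe.size) # measurement noise

xcorr = np.correlate(recorded, probe, mode="full")
lag = np.argmax(xcorr) - (probe.size - 1)          # peak offset = delay
print(f"estimated latency: {lag / fs * 1000:.2f} ms")
```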

14:50 - 15:00Objective assessment of hearing aid noise reduction schemes and configurations using EEG-based auditory attention decoding By Brian K. ManOticon A/S

Brian K. Man1, Elaine H. N. Ng1,2, Emina Alickovic3,4
1Oticon A/S, Smørum, Denmark
2Department of Behavioral Sciences and Learning, Linkoping University, Linkoping, Sweden
3Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
4Department of Electrical Engineering, Linkoping University, Linkoping, Sweden

Everyday listening situations that require listeners to selectively attend to a talker of interest in noisy environments with multiple competing talkers are among the most challenging situations encountered by hearing-impaired listeners. Such challenges become even more pronounced with increasing background noise level and may partially be overcome by adequate hearing aid (HA) amplification and noise reduction (NR) support. Using electroencephalography (EEG), it has been demonstrated that the auditory cortex selectively represents the target talker with significantly higher fidelity than other competing talkers in normal-hearing and hearing-impaired listeners. An NR scheme in commercial HAs was also found to enhance the neural representation of the foreground and the entire acoustic scene in early (<85 ms) EEG responses, and to enhance the neural representations of the target and masker speech and suppress the neural representation of the background noise in late (>85 ms) EEG responses. Motivated by these findings, in this review we investigate whether the neural representation of speech at distinct hierarchical stages is affected by the NR scheme and configuration in commercial HAs. We show that both the choice of NR scheme and its configuration can significantly affect neural speech processing. Using auditory attention decoding methods, we show that selecting an adequate NR configuration leads to a significantly better representation of the entire acoustic scene and foreground in early EEG responses, and to a significantly better representation of target and masker speech in late EEG responses, in noisy environments. These results suggest that EEG-based auditory attention decoding methods may be sensitive to the choice of HA signal processing configurations in HA users.
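
The decoding approach referred to above can be sketched with a backward model: a linear decoder is trained to reconstruct the attended-speech envelope from multichannel EEG, and the talker whose envelope correlates best with the reconstruction is labelled attended. The sketch below uses synthetic signals and omits the time lags a real decoder would include; nothing here reproduces the study’s pipeline.

```python
# Minimal backward-model (stimulus reconstruction) sketch of EEG-based
# auditory attention decoding, on synthetic placeholder data.
import numpy as np

fs, n = 64, 64 * 120
rng = np.random.default_rng(4)
env_att = np.abs(rng.standard_normal(n))    # attended-talker envelope
env_ign = np.abs(rng.standard_normal(n))    # ignored-talker envelope

# Fake EEG: 16 channels that track the attended envelope more strongly.
eeg = rng.standard_normal((16, 1)) @ env_att[None, :]
eeg += 0.3 * rng.standard_normal((16, 1)) @ env_ign[None, :]
eeg += 2.0 * rng.standard_normal((16, n))

half = n // 2                               # train on first half, test on second
X_tr, X_te = eeg[:, :half].T, eeg[:, half:].T
lam = 1e2                                   # ridge regularization (assumed)
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(16), X_tr.T @ env_att[:half])

recon = X_te @ w                            # reconstructed envelope
r_att = np.corrcoef(recon, env_att[half:])[0, 1]
r_ign = np.corrcoef(recon, env_ign[half:])[0, 1]
print(f"r(attended) = {r_att:.2f}, r(ignored) = {r_ign:.2f} -> decoded: "
      f"{'attended' if r_att > r_ign else 'ignored'} talker")
```

Comparing such decoding accuracies across NR schemes and configurations is, in outline, how an HA setting can be assessed objectively from EEG.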

15:00 - 15:10Perceived quality of acoustic transparency in a simulated hearing device: Measures and models By Kristin OhlmannUniversity of Oldenburg

Kristin Ohlmann1, Florian Denk2, Birger Kollmeier1
1Medical Physics, University of Oldenburg, Oldenburg, Germany
2German Institute of Hearing Aids, Lübeck, Germany

Acoustic transparency – i.e., the ability of a hearing device to “just amplify without side effects” by providing an overall transfer function to the eardrum that mimics the open ear – is a desirable feature for increasing spontaneous acceptance. However, an individual equalization filter accounting for individual ear acoustics is difficult to achieve in real hearing devices, because the required individualized acoustic transfer functions are not available for most applications unless measurements at the eardrum are performed. Furthermore, the processing latency and the leakage of external sound into the ear canal can deteriorate the perceived sound quality. We therefore assessed how sound quality is influenced by different parameters, such as using individually measured or generic (dummy head) data to compute the equalization filter, as well as different latencies or vent configurations. We also evaluated how a direction-independent approximation of the direction-dependent transfer function from hearing device microphone to eardrum can be obtained, which allows for a good estimate of the signal at the open ear. Based on individually measured data, a hearing device was simulated in the ears of 10 normal-hearing participants using individual binaural synthesis, allowing all hearing device parameters to be varied freely. Acoustic scenes “heard” through the different virtual hearing devices were presented via headphones in a MUSHRA-like framework, and the overall sound quality was rated by the participants. Quality ratings for conditions with individualized transfer functions tended to be higher than for generic data, while latency and leakage showed no large overall influence. Differences tended to be larger for complex scenes. In addition to the subjective listening tests, objective quality measures were obtained through a set of models. Good agreement between subjective and objective data was achieved using the GPSMq model. This will support the interpretation and parameter optimization for acoustic transparency in the future.
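
The equalization idea at the heart of acoustic transparency can be written down compactly: the required filter is the open-ear transfer function divided by the device-path transfer function. The sketch below uses invented toy responses and ignores phase; it is not the study’s individualized procedure.

```python
# Deriving an equalization filter so that the aided path mimics the open
# ear. Transfer functions here are invented toys, not measured responses.
import numpy as np

n_fft = 512
f = np.fft.rfftfreq(n_fft, d=1 / 16000.0)

# Toy magnitude responses (dB): open ear with a canal resonance near 3 kHz,
# device path with a flatter, slightly low-passed characteristic.
open_ear_db = 12.0 * np.exp(-((f - 3000.0) / 1200.0) ** 2)
device_db = -0.0002 * f

# Required equalization (magnitude only, for this illustration):
eq_db = open_ear_db - device_db
eq_gain = 10.0 ** (eq_db / 20.0)

# An FIR approximation of the equalizer via the inverse real FFT:
fir = np.fft.irfft(eq_gain, n=n_fft)
fir = np.roll(fir, n_fft // 2)             # make it causal (adds latency)
print(f"max EQ gain: {eq_db.max():.1f} dB at {f[np.argmax(eq_db)]:.0f} Hz; "
      f"FIR length: {fir.size} taps")
```

Making such a filter causal, as in the last step, is one source of the processing latency whose perceptual effect the study assessed.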

15:10 - 15:20Impact of hearing aid technology level at first fit on reported outcomes in older adults with presbycusis: A randomized controlled trial By Sabina S. HoumoellerUniversity of Southern Denmark

Sabina S. Houmoeller1,2, Anne Wolff3, Li-Tang Tsai1, Sreeram K. Narayanan5, Dan D. Hougaard3,4, Michael Gaihede3,4, Tobias Neher1, Christian Godballe1,2, Jesper H. Schmidt1,2
1Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital, Odense, Denmark; University of Southern Denmark, Odense, Denmark
2OPEN, Odense Patient data Explorative Network, Odense University Hospital, Odense, Denmark
3Department of Otolaryngology, Head & Neck Surgery and Audiology, Aalborg University Hospital, Aalborg, Denmark
4Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
5Department of Electronic Systems, Aalborg University, Aalborg, Denmark

Independent research supporting the choice of hearing aid technology level is lacking. Thus, the main purpose of this study was to explore reported outcomes for older adults with presbycusis using premium-feature and basic-feature hearing aids. Secondly, we investigated whether differences in gain prescription, measured with real-ear measurements, explain differences in self-reported outcomes. The study was designed as a randomized controlled trial in which 190 first-time hearing aid users (≥ 60 years) with symmetric bilateral presbycusis were allocated to either a premium-feature or a basic-feature hearing aid. The randomization was stratified on age, sex, and word recognition score. The outcomes were designed to assess perceived hearing abilities and the effectiveness of the hearing aids. Two self-reported questionnaires were used: the International Outcome Inventory for Hearing Aids (IOI-HA) and the short form of the Speech, Spatial, and Qualities of Hearing Scale (SSQ-12). In addition, insertion gain at first fit was measured for all fitted hearing aids. Premium-feature hearing aid users reported 0.7 (95% CI: 0.2;1.1) scale points higher overall SSQ-12 scores per item compared to basic-feature hearing aid users. Differences in the prescribed gain at 1 and 2 kHz were observed between premium and basic hearing aids within each company but did not explain the differences in reported outcomes. No statistically significant difference in reported hearing aid effectiveness between the two levels of technology was found. Overall, this study found evidence that premium-feature devices yielded better self-reported outcomes than basic-feature devices.

Acknowledgements: This research was funded by Innovation Fund Denmark Grand Solutions 5164-00011B (‘BEAR project’), GN Hearing, Oticon and WS Audiology. The collaboration with other partners (Aalborg University, Force as well as the university hospitals in Odense, Copenhagen and Aalborg) is sincerely acknowledged.

15:20 - 15:30Timeline and preference of the hearing aid adjustments over a year of rehabilitation and relation to self-reported outcome By Sreeram K. NarayananAalborg University

Sreeram K. Narayanan1, Anne Wolff3, Sabina S. Houmoeller4,5, Li-Tang Tsai5, Dan Hougaard2,3, Michael Gaihede2,3, Jesper H. Schmidt4,5, Dorte Hammershøi1
1Department of Electronics System, Aalborg University, Aalborg, Denmark
2Department of Otolaryngology, Head and Neck Surgery, Aalborg University Hospital, Aalborg, Denmark
3Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
4Department of Oto-rhino-laryngology, Odense University Hospital, Odense, Denmark
5Institute of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark

Adjustments to hearing aids (HAs) are customarily performed according to patients’ preferences, either during or after fitting. The adjustments can be either functional or peripheral. Some patients seek immediate help due to a higher degree of challenges or dissatisfaction with HA performance. However, most do not seek help in due time, which can delay HA rehabilitation. The adjustment timeline and the type of adjustments performed can provide insight into patients’ behavior and preferences. It is also worth understanding the relationship between the time of adjustment and self-reported outcomes. In the Better hEAring Rehabilitation (BEAR) project, information about adjustments performed within the first two months after the initial fit, during the two-month follow-up visit, and thereafter within one year of fitting was collected using a non-standardized questionnaire. Data from 617 HA users who answered all questions in the abbreviated version of the Speech, Spatial, and Qualities of Hearing (SSQ-12) questionnaire and the International Outcome Inventory for Hearing Aids (IOI-HA), gathered at the two-month follow-up visit and the one-year follow-up, were included in the analysis. The users were divided into eight groups according to the time of HA adjustment. Results show that most patients had their hearing aids adjusted during the follow-up visit, indicating a widespread intention or need for HA adjustment when given the opportunity. This is supported by fewer people adjusting the HA after the two-month follow-up visit. Patients who had their HAs adjusted at various timelines reported significantly lower IOI-HA Factor 1, IOI-HA Factor 2, and SSQ speech domain scores than patients who used the initial fit without adjustments during the first year of use. Thus, the study establishes a relation between HA adjustments and self-reported outcomes, emphasizing the importance of a correct initial fit and follow-up in rehabilitation.

Acknowledgements: Collaboration and support by Innovation Fund Denmark (Grand Solutions 5164-00011B); Oticon, G.N. Hearing, Widex-Sivantos Audiology and other partners (Aalborg University Hospital, Odense University Hospital, Aalborg University, Technical University of Denmark, FORCE Technology; and, Copenhagen University Hospital) is sincerely acknowledged.

15:30 - 15:45Panel discussion
14:40 - 14:50Decoding the auditory nerve for the differential diagnosis of sensorineural pathologies By Jacques GrangeCardiff University

Jacques Grange1, John Culling1 
1School of Psychology, Cardiff University, Cardiff, United Kingdom

Pathologies underlying sensorineural hearing loss (SNHL) cannot yet all be differentially diagnosed. We are developing means of pathology discrimination with an advanced SNHL simulator. The physiologically inspired model of the auditory periphery (MAP, Meddis et al., 1986~2018) simulates stimulus encoding at the auditory nerve (AN) level through the firing patterns of 30,000 fibers arranged across 30 best frequencies (56 Hz to 8 kHz) and 3 spontaneous rates. This encoding is then decoded/converted back into an acoustic signal to be presented to young, normally hearing listeners in psychophysical tasks. Simulator validation was obtained with a speech-in-noise intelligibility task, for which simulated-normal-hearing speech reception thresholds (SRTs) were just 1 dB higher than those obtained with unprocessed stimuli. By inserting specific pathologies into the model, we believe one can reveal their psychophysical signatures. We first demonstrated the importance of efferent reflexes to the faithful coding of the temporal modulations that carry speech information: with both reflexes disabled, SRTs grew by 3-4 dB. Simulated AN rate-level functions illustrate how efferent reflexes enable the AN dynamic-range adaptation to context level that prevents information loss. While deactivating 70% of AN fibers or halving the endocochlear potential (EP) did not lead to any appreciable SRT inflation, total outer hair cell (OHC) knockout led to a smaller SRT inflation than that found when efferent reflexes were disabled. Circa 90% general deafferentation was required to reflect performance found in hearing-impaired listeners, an outcome consistent with the stochastic under-sampling predictions of Lopez-Poveda and Barrios (2013). Both ITD discrimination and MAA thresholds exhibited near-linear growth with the log of the number of remaining fibers. With decreasing EP or OHC count, SRTs inflated and dip-listening and F0-segregation benefits decreased. However, such benefits did not decrease with up to 96% deafferentation, despite SRT inflation. Overall, our simulator paves the way to enabling differential diagnosis of SNHL.

Acknowledgements: This work was funded by EPSRC.

14:50 - 15:00Modeling the effect of age on concurrent vowel scores for shorter durations By Harshavardhan SettibhaktiniBirla Institute of Technology and Science

Harshavardhan Settibhaktini1, Michael G. Heinz2, Ananthakrishna Chintanpalli1
1Department of Electrical and Electronics Engineering, Birla Institute of Technology and Science, Pilani Campus, Rajasthan 333031, India.
2Department of Speech, Language and Hearing Sciences, and Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana 47907-2028, USA.

In real-world environments, listeners hear speech sounds of varying durations. A fundamental frequency (F0) difference is an important cue that younger adults with normal hearing use to segregate multiple simultaneously presented talkers. Concurrent vowel identification is studied to understand the effect of an F0 difference on identifying two simultaneously presented vowels. Behavioral studies on concurrent vowel identification indicate that the ability to utilize an F0 difference cue is reduced when the stimulus duration is reduced from 200 ms to 50 ms. Using computational modeling, Settibhaktini and Chintanpalli [2020, Speech Commun.] showed that the lower concurrent-vowel scores across F0 differences for 50 ms could be predicted by limiting the F0-guided segregation cue in a modified Meddis and Hewitt algorithm [1992, JASA]. Behavioral studies on concurrent vowels with longer (>200 ms) durations indicate that older adults have difficulty exploiting the F0 difference cue for identification compared with younger adults. To understand this age effect, Settibhaktini et al. [under review, JASA] developed an older normal-hearing model by cascading a physiologically realistic auditory-nerve model [Bruce et al., 2018, Hear. Res.] with a modified Meddis and Hewitt algorithm to predict concurrent-vowel scores. The peripheral model included endocochlear-potential loss and cochlear synaptopathy. The model successfully captured the reduced scores across F0 differences observed in older adults. The goal of the current study is to predict concurrent-vowel scores for older adults at the shorter duration. The same older normal-hearing model was used, but with a limited F0-guided cue in the modified Meddis and Hewitt algorithm. Our preliminary results suggest that older adults have greater difficulty than younger adults in exploiting the F0 difference cue, resulting in reduced concurrent-vowel scores. These predictions can motivate a future behavioral study at shorter durations, for which data are currently unavailable.

Acknowledgements: This work was supported by the third author’s Outstanding Potential for Excellence in Research and Academics Grant (FR/SCM/160714/EEE) and Research Initiation Grant (no. 68), awarded by BITS Pilani, Pilani campus, Rajasthan, India and by the second author’s NIH Grant (R01-DC009838).

15:00 - 15:10Simulation of the impact of simulated cochlear synaptopathy on temporal envelope perception By Mengchao ZhangCardiff University

Mengchao Zhang1, Jacques Grange1, John Culling1
1School of Psychology, Cardiff University, Cardiff, United Kingdom

Cochlear synaptopathy is a selective loss of auditory nerve fibers with low spontaneous rates (SR) after noise exposure or aging, which is thought to contribute to hidden hearing deficits, especially in the ability to process suprathreshold temporal envelopes (TEs). However, evidence of cochlear synaptopathy in humans is unclear due to the difficulty of documenting noise exposure history and selecting sensitive measures. The present study uses a computational model to simulate and examine the impact of cochlear synaptopathy on TE perception. Auditory nerve fibers from different SR classes were selectively deactivated in a physiologically inspired auditory model, and the neural signals of the model were then decoded into soundwaves for perceptual evaluation. Simulated synaptopathy (deactivating low-SR fibers) was compared to a normal condition, a more severe version of cochlear synaptopathy (deactivating low- and medium-SR fibers), and a loss of high-SR fibers. TE perception was evaluated through amplitude modulation detection, speech recognition in modulated noise, and recognition of unvoiced speech in modulated noise. Overall, cochlear synaptopathy impaired TE perception, but deactivating high-SR fibers showed no significant difference from the normal condition. The severity of the impact of synaptopathy differed with the task parameters. The modulation detection threshold difference between the normal and synaptopathy conditions decreased from about 13 dB at 16 Hz to about 9 dB at 64 Hz. For the speech tasks, loss of low-SR fibers alone degraded the speech recognition threshold relative to the normal condition by about 1 dB for natural speech but about 4.6 dB for unvoiced speech. In summary, the simulation supports the theoretical role of low-SR fibers in coding suprathreshold TEs and shows that sensitive TE measures of cochlear synaptopathy require a careful selection of task.

Acknowledgements: The study is supported through the EPSRC grant (Grant No.: EP/R010722/1). 

15:10 - 15:20AMT 1.0: The Auditory Modeling Toolbox for reproducible research By Piotr MajdakAustrian Academy of Sciences

Piotr Majdak1, Clara Hollomey1, Robert Baumgartner1
1Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria

The auditory modeling toolbox (AMT) is a Matlab/Octave toolbox for the development and application of auditory computational models with a particular focus on binaural hearing. The AMT aims at reproducing model predictions and at providing user-friendly access, allowing students and researchers to work with and to advance existing models. To this end, it consists of implementations of auditory models, structured in-code documentation, and the auditory data required to run the models. Model implementations can be evaluated by running so-called demonstrations which are quick presentations of a model and by starting so-called experiments aiming at reproducing results from the corresponding publications. With its version 1.0, the AMT provides a sophisticated framework including caching mechanisms, online repositories, general purpose functions, and plotting functionality, all intended to encourage the enhancement of existing models. For future contributions, the AMT offers multi-licensing of the model implementations, clear display of authorship, and citations to their authors’ publications. The AMT 1.0 includes over 50 models and is freely available as an open-source package from http://www.amtoolbox.org.

Acknowledgements: Partially funded from the European Union’s Horizon 2020 research and innovation programme, project SONICOM, under grant agreement No 101017743.

15:20 - 15:30Cooperativity and synchrony in the auditory periphery By Christopher BergevinYork University

Christopher Bergevin1 
1Department of Physics & Astronomy, York University, Toronto, Canada

The auditory periphery comprises many disparate elements actively working together to achieve high sensitivity and selectivity. For example, the eardrum appears to move a fraction of the atomic diameter of hydrogen in response to incident sounds at threshold. To achieve such sensitivity, especially in light of thermal noise, biophysical cooperation amongst the elements of the ear is crucial. We argue here that a key principle governing those complex interactions is synchrony, by which we mean the dynamics associated with weakly coupled, self-sustained (i.e., active) oscillators. We describe a non-mammalian model, the Anolis lizard, to elucidate how synchrony manifests concurrently at different levels of the periphery. First, within a given inner ear, evidence suggests that hair cells metabolically use energy to behave as limit-cycle oscillators. Further, they couple together to form groups (or “clusters”) that synchronize, effectively allowing them to increase their sensitivity and selectivity to low-level sounds. Second, by virtue of direct coupling between the tympanic membranes via the interaural canal, the two active ears can also synchronize, possibly allowing for improved localization of sounds close to threshold. In parallel, lizards offer an opportunity to explore how synchrony changes across the lifespan, given the ability of hair cell regeneration to repair damage to the sensory epithelium. An overarching goal is to explore how these results elucidating cooperativity might generalize more broadly to the mammalian auditory pathway (e.g., feedback loops in cortical networks).
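
The notion of synchrony invoked here, weakly coupled self-sustained oscillators phase-locking once coupling exceeds their detuning, can be illustrated with a textbook two-oscillator Kuramoto model; the parameters below are generic, not fitted to lizard-ear data.

```python
# Two weakly coupled phase oscillators: they lock when the coupling
# exceeds half their frequency detuning (here, 10 Hz).
import numpy as np

def kuramoto(coupling, w1=2 * np.pi * 1000.0, w2=2 * np.pi * 1020.0,
             dt=1e-6, steps=100000):
    """Euler-integrate two coupled phase oscillators; return how much the
    phase difference still drifts over the final 10 ms."""
    th1 = th2 = 0.0
    d_hist = np.empty(steps)
    for i in range(steps):
        d = th2 - th1
        th1 += dt * (w1 + coupling * np.sin(d))
        th2 += dt * (w2 - coupling * np.sin(d))
        d_hist[i] = d
    return abs(d_hist[-1] - d_hist[-10000])

for k_hz in [0, 5, 50]:
    drift = kuramoto(2 * np.pi * k_hz)
    state = "locked" if drift < 1e-3 else "drifting"
    print(f"coupling {k_hz:2d} Hz: phase difference {state}")
```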

15:30 - 15:45Panel discussion
14:40 - 14:50Research on the influence of actuators excitation position on acoustic performance of bone conduction By Yu ZhaoChina University of Mining and Technology

Yu Zhao1, Yumeng Zhang1, Houguang Liu1, Xinsheng Huang2 
1School of Mechatronic Engineering, China University of Mining and Technology, Xuzhou, P. R. China.
2Department of Otolaryngology, Zhongshan Hospital affiliated to Fudan University, Shanghai, P.R. China.

To study the effect of the actuator’s stimulation position on the performance of bone-conduction hearing aids, the mechanism of bone-conducted sound transmission was investigated. To facilitate this, a coupled finite element (FE) model of the human head and ear was built. First, a series of micro-CT images of a human head was used to establish an FE model of the head, and the reliability of the model was verified by comparing the model-predicted mechanical impedance at the excitation position and the acceleration response of the promontory with experimental data. Second, an FE model of the human ear, consisting of the ear canal, the middle ear, and the spiral cochlea incorporating the cochlear third windows, was also established and verified. Then, based on these two FE models, we constructed a coupled FE model of the human head and ear by coupling the corresponding nodes, and applied force excitation at different sites on the model’s head. Finally, through comparative analysis of the dynamic response characteristics of the basilar membrane in the coupled FE model, the effect of different actuator excitation positions on the hearing-compensation performance of the bone-conduction hearing aid was studied. The results show that the best excitation position for the bone-conduction actuator is near the cochlea, especially close to the mastoid. Moreover, stimulating the head close to the cochlea would improve bone-conduction performance at low-to-mid frequencies.

Acknowledgements: This work was supported by the National Natural Science Foundation of China (grant number 51775547).

14:50 - 15:00Binaural summation of soft speech in single-sided deafness patients with cochlear implants By Francis SmithUniversity of Iowa

Francis X. Smith1, Bob McMurray2,3, Ruth Litovsky4, Inyong Choi1,3
1Department of Otolaryngology, University of Iowa, Iowa City, United States
2Department of Psychology and Brain Sciences, University of Iowa, Iowa City, United States
3Department of Communication Sciences and Disorders, University of Iowa, Iowa City, United States
4Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, United States

The normal auditory system has remarkable mechanisms that integrate sounds from the two ears, enabling listeners to utilize binaural information for critical tasks in their everyday communication. Such binaural benefits include localization (which depends on the computation of binaural disparity) and the detection of weak sounds (based on binaural redundancy processing and summation). For patients with single-sided deafness (SSD), a cochlear implant (CI) helps to improve sound localization and speech recognition in some noisy scenarios. However, it is not yet known whether a CI restores binaural summation mechanisms for detecting weak sounds in SSD listeners. In this study, the binaural benefit for soft-speech recognition was tested in six SSD CI patients and thirty normal-hearing (NH) listeners. SSD CI patients were tested in a four-alternative forced-choice speech recognition task using soft speech in three conditions: using only their acoustic ear, only their electrically stimulated ear, or both ears. NH subjects performed the same speech recognition task in four conditions: one ear with normal speech, one ear with noise-vocoded speech, a combination of normal speech to one ear and vocoded speech to the other, or normal speech played to both ears. Electroencephalographic recordings were obtained from all participants as they completed the task. As expected, we observed an accuracy benefit for speech recognition when listening with both ears, even when the input to one ear was CI electrical stimulation or noise-vocoded speech, compared to listening with only one ear. Surprisingly, however, the early auditory response observed in the N1P2 complex was lower in amplitude in the combined conditions than when listening with either one or two normal acoustic inputs. This suggests that the binaural summation benefit in SSD CI users is not solely due to enhanced encoding during early auditory processing, but also due to late cognitive processes.

15:00 - 15:10Temporal coherence detection predicts cochlear implant users’ speech-in-noise performance By Jean HongUniversity of Iowa Hospitals and Clinics

Jean Hong1, Phil Gander2, Joel Berger2, Timothy Griffiths3, Inyong Choi1,4
1Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, USA 
2Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, USA 
3Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK  
4Department of Communication Sciences and Disorders, University of Iowa

Most cochlear implant (CI) users struggle to understand speech in noisy environments. This challenge may be due to difficulty in separating the target stream from the mixture of sounds. Our previous work demonstrated that figure-ground perception, the ability to detect a ‘figure’ consisting of temporally coherent tones among less coherent ‘background’ components, is a strong predictor of successful speech-in-noise perception in normal-hearing listeners (Holmes & Griffiths, 2019). However, it is unclear whether this temporal-coherence-based grouping mechanism also predicts CI users’ speech-in-noise ability. To address this question, we recruited forty-seven CI users who completed speech-in-noise and figure-ground tasks. In the figure-ground task, a sound complex consisting of a figure component that was either fixed (present) or random (absent) in frequency was played simultaneously with a background component of random tones; the participant was asked to detect the presence of the figure component. To isolate the contribution of electric hearing to figure detection, we band-passed the figure component from 1-8 kHz and ensured half-octave separation among the coherent frequencies to reduce cochlear implant channel interaction. The speech-in-noise task consisted of single words presented in multi-talker babble (4AFC). Across-subject correlation analysis indicated that successful figure-ground performance was also a strong predictor of successful speech-in-noise performance among CI users. This research can provide new insights into how to better assess speech-in-noise deficits among CI users.
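
A stochastic figure-ground stimulus of the kind described, coherent tones repeating across chords within a random tone cloud, can be generated along the following lines. Chord duration, tone counts, and the exact frequency grid are illustrative; only the 1-8 kHz range and the half-octave figure spacing are taken from the abstract.

```python
# Generate a toy figure-ground stimulus: 3 "figure" tones fixed across
# chords, 4 "background" tones redrawn at random for every chord.
import numpy as np

fs = 32000
chord_dur = 0.05                                     # 50-ms chords (assumed)
n_chords = 20
rng = np.random.default_rng(5)

pool = 1000.0 * 2 ** np.arange(0, 3.25, 0.25)        # quarter-octave grid, 1-8 kHz
half_octave = pool[::2]                              # half-octave subgrid
figure_freqs = rng.choice(half_octave, size=3, replace=False)

def chord(freqs):
    t = np.arange(int(fs * chord_dur)) / fs
    ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.005)   # 5-ms on/off ramps
    return ramp * sum(np.sin(2 * np.pi * f * t) for f in freqs)

chords = []
for _ in range(n_chords):
    rest = pool[~np.isin(pool, figure_freqs)]
    background = rng.choice(rest, size=4, replace=False)
    chords.append(chord(np.concatenate([figure_freqs, background])))
signal = np.concatenate(chords)
print(f"figure tones: {np.sort(figure_freqs).round().astype(int)} Hz; "
      f"stimulus length: {signal.size / fs:.2f} s")
```

In the ‘absent’ trials of such a task, the figure tones would simply be redrawn at random on every chord, removing the temporal coherence that defines the figure.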

15:10 - 15:20Forward-masked psychophysical tuning curves via wireless Bluetooth to evaluate frequency selectivity of cochlear implant channels By Meisam ArjmandiHarvard Medical School

Meisam Arjmandi1,2, Julie Arenberg1,2 
1Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
2Audiology Division, Massachusetts Eye and Ear, Boston, MA, USA

Psychophysical tuning curves (PTCs) are effective measures of frequency selectivity in listeners with cochlear implants (CIs). We are developing a fast and reliable method to measure 5-point PTCs using acoustic stimulation via Bluetooth in CI listeners. The procedure reduces the number of measurements required by eliminating the time needed to measure thresholds and most comfortable listening levels for each electrode. The pure tones used as masker and probe signals matched the corner frequencies of the analysis filter bands in the processor settings, such that mainly one electrode is stimulated. Rather than adaptively varying the level of the masker tone, the masker level is fixed at 56 and 46 dB SPL. Using a forward-masking paradigm, the masker frequency is adaptively varied across electrodes/frequencies using a two-alternative, two-interval forced-choice procedure, and frequency thresholds are obtained for the low- and high-frequency sides relative to the probe electrode. The tip of the PTC is measured by on-frequency masking, adaptively varying the masker level. Stimuli were controlled through a custom Matlab interface with the AFC software running in the background (Ewert, 2013). Data obtained with this method were compared with the gold standard, a direct electrical stimulation procedure (Kreft et al., 2019). Preliminary data from two adult CI listeners showed morphologically similar PTCs between acoustic stimulation via Bluetooth and direct electrical stimulation. We characterized PTCs using the apical, basal, and mean slopes. The measures of PTCs calculated from the Bluetooth and direct stimulation procedures were highly correlated for the apical and mean slopes (Pearson r > .9, p < .05). This new tool may be useful for rapid and reliable measurement of PTCs to evaluate frequency selectivity at individual electrode sites. The implementation is for Advanced Bionics devices but could be modified for other manufacturers. Developing patient-specific strategies based on PTCs to determine optimal fittings is thus becoming more feasible.

Acknowledgements: This work was supported by the NIH National Institute on Deafness and Other Communication Disorders Grant RO1 DC012142 (JGA).

15:20 - 15:45Panel discussion
  • Podium: Rehabilitation and compensation
13:00 - 13:05Introduction
13:05 - 13:35Restoring bilateral hearing to children and adults: Spatial hearing in complex listening environments and listening effort By Ruth Y. LitovskyUniversity of Wisconsin

Ruth Y. Litovsky1,2
1Waisman Center, University of Wisconsin, Madison, USA
2Communication Sciences and Disorders, University of Wisconsin, Madison, USA

Our work focuses on patients with bilateral deafness who are eligible to receive bilateral cochlear implants (BiCIs), and patients with single-sided deafness who receive a cochlear implant (SSD-CI) in the deaf ear. In both the BiCI and SSD-CI populations there is a potential benefit from the integration of inputs arriving at the two ears. Benefits include an improved ability to localize sounds and to segregate speech from background noise, compared with unilateral listening. However, patients typically perform worse than normal-hearing listeners. We use several approaches to understand the mechanisms driving these gaps in performance. We assess patients’ ability to process the auditory cues that are most essential for spatial hearing, and the role of age and auditory experience. We also use research processors to test novel stimulation paradigms designed to restore binaural sensitivity and speech understanding in noise. Our studies provide evidence for the role of auditory plasticity in driving binaural hearing. In addition, patients report that bilateral hearing reduces their cognitive load and fatigue, but few studies have addressed this issue. Pupillometry and functional near-infrared spectroscopy may be used as objective tools to provide insight into the impact of integrating inputs from the two ears, whereby in some instances improved performance with two ears can be “costly” in the listening-effort domain.

Acknowledgements: The work was conducted in collaboration with Lukas Suveg, Emily Burg, Tanvi Thakkar, Shelly Godar, Ellen Peng, Alan Kan and Dan Lee. NIH-NIDCD (R03DC015321 to AK and R01DC003083 to RYL), and NIH-NICHD (U54HD09256 to Waisman Center).

13:35 - 13:55Development and evaluation of hearing devices with online ratio mask computation for real-time speech enhancement By Marcos A. CantuUniversity of Oldenburg

Marcos A. Cantu1, H. Steven Colburn2, Volker Hohmann1
1University of Oldenburg, Department of Medical Physics and Acoustics and the Cluster of Excellence Hearing4All, Oldenburg, Germany
2Boston University, Department of Biomedical Engineering, Boston, MA, USA

Interfering speech has rapid spectrotemporal fluctuations that established noise reduction algorithms have difficulty suppressing without a concomitant loss, or distortion, of the binaural cues for spatial hearing. An ongoing project at the University of Oldenburg (UOL) involves the development and evaluation of prototype Short-Time Target Cancellation (STTC) assistive listening devices that can enhance the speech intelligibility of a target talker, and attenuate interfering talkers, while still preserving binaural cues for spatial hearing. The STTC processing computes a ratio mask (i.e., a time-varying spectral gain) that can be applied to the binaural signals at the left and right ears, thereby attenuating the interfering talkers. The STTC processing is causal and memoryless, with low requirements in terms of memory size and computational power, and is designed to run online in real time without training or any a priori knowledge about the number or locations of interfering sound sources; only an assumed “look” direction is needed. The STTC processing can be used either to filter the binaural signals at the left and right ears or as a postfilter for adaptive beamforming. Whereas adaptive beamforming computes a complex-valued filter vector, the STTC processing computes a real-valued time-varying spectral gain; the two approaches are compatible, and our evaluation results indicate that their combination has an additive effect. Although the STTC processing, and adaptive beamforming, can be implemented with standard in-ear or behind-the-ear (BTE) hearing aid earpieces, better performance can be achieved via a small microphone array integrated into the frame of a pair of eyeglasses. Evaluation results, using simulations in virtual acoustic environments, indicate that these prototype STTC assistive listening devices can enhance a target talker, and attenuate interfering talkers, in both anechoic space and reverberation.

Acknowledgements: This work was supported by the Cluster of Excellence EXC 2177 Hearing4All, funded by the German Research Foundation (DFG), and by NIH/NIDCD grants R01DC000100 and R01DC015429. The content herein is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
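
The core idea named in the abstract, a single real-valued time-frequency gain applied identically to both ears so that interaural cues survive, can be sketched as below. The mask computation here is a generic sum/difference placeholder for a straight-ahead look direction, not the STTC algorithm itself.

```python
# Apply one real-valued ratio mask to both channels of a binaural signal.
# The mask estimate is a simple placeholder, not the STTC method.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(6)
left = rng.standard_normal(fs)             # stand-in binaural signals
right = rng.standard_normal(fs)

f, t, L = stft(left, fs=fs, nperseg=256)
_, _, R = stft(right, fs=fs, nperseg=256)

# Placeholder: for a look direction straight ahead, target energy is
# in-phase across ears (L + R) while (L - R) cancels it, giving a crude
# interference estimate per time-frequency bin.
target_e = np.abs(L + R) ** 2
interf_e = np.abs(L - R) ** 2
mask = target_e / (target_e + interf_e + 1e-12)   # real gain in [0, 1]

_, left_out = istft(mask * L, fs=fs, nperseg=256)
_, right_out = istft(mask * R, fs=fs, nperseg=256)
print(f"mean mask gain: {mask.mean():.2f}")
```

Because the same real gain multiplies the left and right STFTs, the interaural time and level differences within each time-frequency bin are left untouched, which is the binaural-cue-preserving property the abstract emphasizes.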

13:55 - 14:10Break
14:10 - 14:40The speech cue profile and its consequences for hearing aid processing By Pamela SouzaNorthwestern University

Pamela Souza1, Gregory Ellis1, Frederick Gallun2, Richard Wright3
1Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
2Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Sciences University, Portland, OR, USA
3Department of Linguistics, University of Washington, Seattle, WA, USA

Clinical hearing aid fittings depend primarily on the pure-tone audiogram, and audiologists can choose among well-validated prescriptions to create an appropriate frequency-gain response for each listener. However, there are no definitive guidelines for other aspects of the hearing aid response, such as compression speed and the strength of digital noise reduction. This presentation describes a series of studies in which a “cue profile” test based on synthetic speech sounds is used to assess a hearing-impaired listener’s use of specific speech cues. The resulting profile quantifies how well individual listeners can utilize higher-precision spectro-temporal information, or whether they rely on lower-precision temporal (envelope) cues for consonant identification. We review the consequences of different cue profiles for the perception of speech under different types of signal processing (fast- versus slow-acting WDRC, and strategies designed to preserve the speech envelope). Aided speech recognition is influenced by the amount of hearing loss, the cue profile, and the extent of envelope modification in the signal. Listeners with more temporally reliant cue profiles tend to have poorer aided speech recognition. Those listeners receive the greatest benefit when modulation cues are preserved, compared to listeners with more spectrally reliant cue profiles. These data suggest that a better understanding of how different amplification strategies interact with the listener’s auditory abilities may allow clinicians to target the strategies of greatest benefit to an individual.

Acknowledgements: US National Institute on Deafness and Other Communication Disorders.

14:40 - 15:10Assessment and simulations of aided speech recognition in acoustically challenging conditions By Anna WarzybokUniversität Oldenburg

Anna Warzybok1, Florian Kramer1, David Hülsmeier1, Birger Kollmeier1
1Medical Physics and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany

Precise methods in audiological diagnostics and an understanding of the impact of threshold and suprathreshold deficits on audiological outcomes are crucial for individualized and successful treatment with hearing devices. Model-based approaches may support the selection and fitting of hearing devices. This study examines the speech recognition and loudness perception of hearing-impaired listeners in acoustic conditions of increasing complexity, i.e., from well-controlled laboratory conditions such as a stationary masker to acoustically more complex and ecologically valid scenes such as a cafeteria ambience. Furthermore, in order to better understand the contributions of the individual sensitivity loss and suprathreshold deficits to speech recognition, the subjective data are simulated with the framework for auditory discrimination experiments (FADE), considering different components of hearing impairment. For aided measurements, two prescription rules, the pure-tone threshold-based NAL-NL2 and the individual loudness perception-based trueLOUDNESS, are compared in terms of speech recognition and loudness perception. The outcomes of speech recognition measurements with hearing-impaired listeners show significant correlations of unaided speech recognition thresholds across the “simple” laboratory masking conditions. Performance in these conditions, however, shows no significant correlation with performance in realistic cafeteria scenes. The benefit from hearing devices, defined as the difference in speech recognition threshold between the unaided and aided conditions, differs across maskers and shows no correlation between laboratory and cafeteria maskers. While NAL-NL2 and trueLOUDNESS result in a comparable benefit in terms of speech recognition, loudness perception is restored better with the trueLOUDNESS prescription rule. The accuracy of the FADE simulations in unaided and aided conditions is highest when both components of hearing impairment (sensitivity loss and suprathreshold deficits) are accounted for. In summary, a model-based interpretation with a distinction between threshold and suprathreshold distortion components might not only be useful for diagnostic purposes but also help to predict the benefit from a hearing device in acoustically challenging conditions.

15:10 - 15:25Break
15:25 - 15:45Model-based hearing restoration strategies for cochlear synaptopathy pathologies By Fotios DrakopoulosGhent University

Fotios Drakopoulos1, Viacheslav Vasilkov1, Heleen Van Der Biest2, Sarah Verhulst1
1Dept. of Information Technology, Ghent University, 9000 Ghent, Belgium
2Dept. of Rehabilitation Sciences – Audiology, Ghent University, 9000 Ghent, Belgium

With age, our hearing ability starts to decline; communicating in noisy environments becomes challenging, and hearing faint sounds difficult. Part of this decline stems from outer-hair-cell damage, and another factor relates to synaptic damage at the auditory nerve, i.e., cochlear synaptopathy (CS). Despite the suspected high prevalence of CS among people with self-reported hearing difficulties but normal audiograms, or those with impaired audiograms, conventional hearing-aid algorithms do not specifically compensate for the functional deficits associated with CS. Here, we present and evaluate a number of hearing restoration algorithms that maximally restore auditory-nerve coding in CS-affected peripheries. Using a biophysical model of the auditory periphery, we designed real-time signal-processing algorithms for three different CS types that operate on the time-domain signal. The algorithms preserve the stimulus envelope peaks but modify sound onsets and offsets to increase the resting periods between stimulation. We evaluated the developed algorithms in subjects with and without suspected age-related CS (N=30) to test whether they enhanced envelope-following responses (EFRs), amplitude-modulation (AM) detection sensitivity, and speech intelligibility. Volunteers with normal-hearing (NH) audiograms and ages between 18-25 (yNH) or 45-65 (oNH) years participated in our study, and the difference between processed and unprocessed stimuli was assessed. Our data show that EFRs and perceptual AM sensitivity were enhanced in both yNH and oNH listeners when using our CS-compensation algorithms. Speech recognition in the Matrix test showed a small improvement that was not consistent across participants, with the yNH group and those with high AM detection sensitivity benefiting the most from the processed speech, suggesting that different approaches might be necessary when applying the algorithms to speech. This new type of sound processing may extend the application range of current hearing aids and improve temporal envelope processing while leaving sound amplification unaffected.

Acknowledgements: This work was supported by the European Research Council (ERC) under the Horizon 2020 Research and Innovation Programme (grant agreement No 678120 RobSpear).
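
The processing principle stated above, keep the envelope peaks but lengthen the resting periods between them, is illustrated by the generic envelope expander below. This is a toy stand-in, not the authors’ model-optimized CS-compensation algorithms; all parameters are assumptions.

```python
# Toy envelope expander: peaks pass unchanged, troughs are attenuated,
# lengthening the low-stimulation ("resting") periods between peaks.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

fs = 16000
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 1000 * t)
am = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # 4-Hz amplitude modulation
x = am * carrier

# Smooth Hilbert envelope.
env = np.abs(hilbert(x))
sos = butter(2, 32, btype="low", fs=fs, output="sos")
env = np.maximum(sosfiltfilt(sos, env), 1e-6)

# Expansive gain: samples near the envelope maximum get gain ~1, lower
# envelope samples are attenuated progressively.
exponent = 2.0                               # > 1 expands; 1 leaves x unchanged
gain = (env / env.max()) ** (exponent - 1.0)
y = x * gain

out_env = env * gain
print(f"envelope min/max ratio before: {env.min() / env.max():.3f}, "
      f"after: {out_env.min() / out_env.max():.3f}")
```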

15:45 - 16:15Towards auditory profile-based hearing-aid fittings: Insights from the BEAR project By Tobias NeherUniversity of Southern Denmark

Raul Sanchez-Lopez1, Mengfan Wu2,3, Michal Fereczkowski2,3, Sébastien Santurette1,4, Torsten Dau1, Tobias Neher2,3

1Hearing Systems Section, Dept. of Health Technology, Technical University of Denmark, Kgs. Lyngby, DK
2Institute of Clinical Research, University of Southern Denmark, Odense, DK
3Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital, Odense, DK; University of Southern Denmark, Odense, DK
4Centre for Applied Audiology Research, Oticon A/S, Smørum, DK

In current clinical practice, hearing aids are typically fitted based on audiometric thresholds only, even though research suggests that suprathreshold factors play a role for aided outcome, too. In 2016, the Danish ‘Better hEAring Rehabilitation’ (BEAR) project was initiated with the overall goal of improving hearing-aid rehabilitation. A focus area in that project has been the development of a method for classifying hearing-impaired listeners into four profiles capturing distinct differences in terms of audiometric hearing loss and suprathreshold hearing abilities. Additional focus areas have been the development of an auditory profile-based fitting strategy and the investigation of aided speech-in-noise outcome in those profiles. In this contribution, we will provide an overview of these research activities. Emphasis will be placed on insights gained with respect to the characterization of individual hearing losses and the translation of the resultant findings into solutions that are implementable in clinically available hearing devices.

Acknowledgements: This research was funded by Innovation Fund Denmark Grand Solutions 5164-00011B (‘BEAR project’), GN Hearing, Oticon and WS Audiology. The collaboration with other partners (Aalborg University, Force as well as the university hospitals in Odense, Copenhagen and Aalborg) is sincerely acknowledged.

16:15 - 16:30Closing By Torsten DauTechnical University of Denmark