Loss of afferent synapses between inner hair cells and auditory-nerve (AN) fibers, called cochlear synaptopathy, is prevalent in aging ears and potentially occurs following overexposure to loud sounds (Kujawa & Liberman, 2009; Makary et al., 2011; Wu et al., 2019). Cochlear synaptopathy is widely hypothesized to cause perceptual abnormalities including impaired speech comprehension in real-world noisy listening scenarios, tinnitus, and hyperacusis (Bharadwaj et al., 2014; Hickox & Liberman, 2014; Lopez-Poveda, 2014; Lopez-Poveda & Barrios, 2013; Plack et al., 2014; Schaette & McAlpine, 2011). These problems are collectively referred to as "hidden" hearing loss because defects are thought to occur without overt hair-cell damage or associated elevation of listeners' audiometric thresholds (Makary et al., 2011). Despite intensifying research on these topics in recent years, whether AN-fiber loss causes hidden hearing loss is still actively debated.
Human studies of hidden hearing loss have so far yielded mixed findings. Several studies have successfully linked indicators of AN health or synaptopathy risk to abnormalities in speech or complex-sound perception (Liberman et al., 2016; Shehorn et al., 2020). On the other hand, many similar studies have found no association between synaptopathy metrics and hidden hearing loss, for reasons that remain poorly understood (Grose et al., 2017; Johannesen et al., 2019; Le Prell et al., 2018; Marmel et al., 2020; Prendergast et al., 2019; Prendergast et al., 2017; Yeend et al., 2017). This disagreement could result in part from the frequent use of wave I of the auditory brainstem response (ABR) as a synaptopathy metric; ABR wave I is an AN-generated signal that, while effective in many animal models, has an unfavorable signal-to-noise ratio in human subjects. Furthermore, the majority of participants in these studies were young people with normal hearing, who might not have sufficient AN loss to cause substantial hidden hearing loss, and the link between noise exposure and synaptopathy in humans remains controversial (Bramhall et al., 2019). Finally, model simulations suggest that moderate cochlear synaptopathy might cause threshold shifts too small to be measurable with standard psychophysical paradigms (Oxenham, 2016).
Non-human animal models provide a valuable alternative approach for investigating the hypothesis that synaptopathy causes hidden hearing loss. Animal models allow researchers to deliberately induce and quantify controlled injuries to the AN using a broad range of approaches. Operant-conditioning experiments can provide clear insight into perceptual impairment associated with synaptopathy, but only a handful of studies have leveraged this approach (Henry, 2022). In CBA/CaJ mice, several studies have investigated the impact of synaptopathy induced with ouabain on behavioral hearing sensitivity. Strikingly, profound (>95%) synaptic loss was found to have no impact on behavioral detection of tones in quiet in most animals (Chambers et al., 2016). In contrast, a follow-up study showed that slightly smaller ouabain-induced AN lesions (60-70%) elevate thresholds for detection of tones in noise by ∼15 dB (Resnik & Polley, 2021). Because perceptual impairment was limited to the more demanding masked-detection task, these results in mice support the hypothesis that cochlear synaptopathy can cause substantial hidden hearing loss.
The mouse model has many strengths, including the wide variety of genetic tools and imaging techniques available for investigating hearing function. Mice can also be trained to behaviorally discriminate complex sounds, including modulated signals and natural vocalizations (Cai & Dent, 2020). On the other hand, the frequency range of hearing sensitivity in mice is shifted upward, largely into the ultrasonic range, presenting a challenge for studies using speech stimuli. Furthermore, relatively rapid age-related hearing loss in some mouse strains (Q. Zheng et al., 1999), and perhaps limited flexibility to adapt to new behavioral tasks, complicate animal training and the collection of behavioral data, which can take weeks or months depending on the task and the number of stimulus conditions (Kobrina et al., 2020). Outside of mice, the only other species used in published operant-conditioning studies of well-isolated cochlear synaptopathy, to our knowledge, is the budgerigar (Melopsittacus undulatus). The budgerigar is a parakeet species with highly developed complex-sound processing capabilities and the capacity to mimic many sounds, including human speech. A large body of animal behavioral literature going back several decades shows that budgerigars perceive many simple and complex sounds with sensitivity similar to that of human listeners (Dooling et al., 2000). For example, budgerigars and humans have similar behavioral thresholds for frequency discrimination of tones and vowel formants (Dent et al., 2000; Henry, Abrams, et al., 2017; Henry, Amburgey, et al., 2017), amplitude-modulation detection (Dooling & Searcy, 1981; Henry et al., 2016), and tone-in-noise detection (Dooling & Saunders, 1975). Budgerigars and humans also use similar combinations of energy- and envelope-based cues for tone detection in roving-level noise (Henry et al., 2020).
Recently, several studies in budgerigars have begun to evaluate the impact of cochlear synaptopathy on hearing abilities. Budgerigars with up to 70% estimated synaptopathy induced with kainic acid (KA), a glutamate analog that causes synaptopathy through excitotoxicity (Pujol et al., 1985), show normal audiometric thresholds for detection of tones in quiet (Wong et al., 2019). These results are consistent with the findings in mice described above, as well as with retrospective human temporal-bone studies (Makary et al., 2011). Perhaps surprisingly, budgerigars with KA-induced synaptopathy also exhibit normal behavioral performance for detecting tones in roving-level noise (Henry & Abrams, 2021) and brief unmasked tones as short as 20 ms (Wong et al., 2019). The absence of impaired tone-in-noise detection in budgerigars with synaptopathy contrasts with the mouse results (Resnik & Polley, 2021), for unknown reasons that require further investigation. Many factors differed between these studies beyond the choice of model species and neurotoxin (i.e., ouabain in mice; KA in budgerigars). For example, the budgerigar study used longer stimulus durations, smaller noise bandwidth, longer behavioral testing periods, and greater acclimation time of animals to AN lesions.
Following initial findings in guinea pigs that synaptopathy might cause statistically greater loss of low spontaneous-rate AN fibers (Furman et al., 2013), most human synaptopathy investigations have focused on aspects of perception putatively linked to this neural population, such as perception of degraded speech and of complex sounds at high sound levels. On the other hand, recent findings that synaptopathy is not selective for low spontaneous-rate fibers in mice (Suthakar & Liberman, 2021), and long-standing controversy over the role of different AN-fiber populations in auditory perception (Carney, 2018), highlight the need for additional research directions. The present study tested the hypothesis that cochlear synaptopathy impairs envelope-based discrimination of low-level, narrowband signals. Low-level narrowband signals minimize the spread of cochlear excitation across characteristic frequencies, a well-known phenomenon (Costalupes, 1985) that could feasibly mask the effects of synaptopathy for signals with wider bandwidths and/or higher presentation levels (Encina-Llamas et al., 2019; Johannesen et al., 2022). Stimuli were 100-Hz bands of Gaussian noise (GN), either unprocessed or processed with a "low-noise noise" (LNN) algorithm (Pumplin, 1985) to reduce envelope fluctuations (Fig. 1), as in a previous human study (Stone et al., 2008). Possible acoustic cues for LNN discrimination include the flatter envelope and slower changes in instantaneous frequency compared to GN. Synaptopathy was expected to degrade discrimination performance by reducing the population of responsive AN fibers encoding these cues in their discharge activity.
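As a concrete illustration of the two stimulus classes, the Python sketch below generates a 100-Hz band of Gaussian noise and derives an approximately low-noise version by iteratively dividing out the Hilbert envelope and re-restricting the spectrum to the noise band. This iterative envelope-flattening scheme is a common approximation to low-noise noise and is not necessarily the exact Pumplin (1985) construction used in the study; the sampling rate, token duration, and 2.8-kHz center frequency are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def narrowband_gn(fs, dur, fc, bw, rng):
    """Narrowband Gaussian noise (GN) token: sum of equal-amplitude tones
    with random phases, spaced at the inverse of the token duration."""
    n = int(round(fs * dur))
    t = np.arange(n) / fs
    freqs = np.arange(fc - bw / 2, fc + bw / 2 + 1e-9, 1.0 / dur)
    phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
    x = np.sum(np.cos(2 * np.pi * freqs[:, None] * t + phases[:, None]), axis=0)
    return x / np.sqrt(np.mean(x ** 2))  # normalize to unit RMS

def low_noise_noise(x, fs, fc, bw, n_iter=10):
    """Flatten the envelope of a narrowband noise by repeatedly dividing by
    its Hilbert envelope and confining the spectrum to the original band."""
    n = x.size
    in_band = np.abs(np.fft.rfftfreq(n, 1 / fs) - fc) <= bw / 2
    y = x.copy()
    for _ in range(n_iter):
        env = np.abs(hilbert(y))
        y = y / np.maximum(env, 1e-12)   # divide out envelope fluctuations
        Y = np.fft.rfft(y)
        Y[~in_band] = 0.0                # restrict energy to the 100-Hz band
        y = np.fft.irfft(Y, n)
    return y / np.sqrt(np.mean(y ** 2))  # normalize to unit RMS

# Example: 100-Hz-wide GN and LNN tokens (parameters are illustrative)
rng = np.random.default_rng(1)
fs, dur, fc, bw = 44100, 0.4, 2800.0, 100.0
gn = narrowband_gn(fs, dur, fc, bw, rng)
lnn = low_noise_noise(gn, fs, fc, bw)
```

Comparing the Hilbert envelopes of `gn` and `lnn` shows the flatter envelope and slower instantaneous-frequency changes that serve as candidate discrimination cues.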
The present study in budgerigars quantified the effect of bilateral cochlear synaptopathy induced with KA on behavioral discrimination of LNN and GN stimuli. Behavioral performance was evaluated using operant-conditioning procedures, a single-interval two-alternative discrimination task, and two-down one-up psychophysical tracking procedures that estimated the threshold (minimum) stimulus duration required for reliable LNN-GN discrimination. Based on the reasoning outlined by Stone and colleagues in their investigation of subclinical inner-hair-cell lesions (Stone et al., 2008), animals with KA-induced synaptopathy were expected to perceive LNN stimuli as "more noisy" because of the smaller number of AN channels encoding the stimulus envelope in the peripheral auditory system [i.e., the damaged AN essentially "undersamples" the stimulus (Lopez-Poveda, 2014; Lopez-Poveda & Barrios, 2013), resulting in a noisier across-fiber representation of envelope and FM cues]. Consequently, animals in this group were expected to require a longer stimulus duration to correctly differentiate the two stimulus classes. Finally, we used decision-variable correlation (DVC) analyses of trial-by-trial behavioral responses in individual animals (Henry et al., 2020; Sebastian et al., 2017; Sebastian & Geisler, 2018) to shed light on the specific processing strategies used by individual animals to perform the LNN vs. GN discrimination task and to test whether animals with cochlear synaptopathy used different acoustic cues to perform the task.
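For readers unfamiliar with the adaptive procedure, the sketch below simulates a generic two-down one-up track over stimulus duration, which converges near 70.7% correct. The starting duration, step factor, number of reversals, and the stand-in psychometric function are illustrative assumptions, not the parameters used in the study, and the simulated observer replaces the animal's trial-by-trial responses.

```python
import numpy as np

def two_down_one_up_track(p_correct, start_dur=0.40, step_factor=1.5,
                          min_dur=0.005, n_reversals=10, rng=None):
    """Two-down one-up adaptive track over stimulus duration: duration is
    shortened after two consecutive correct trials and lengthened after each
    incorrect trial. `p_correct(dur)` stands in for the observer's behavior."""
    rng = rng if rng is not None else np.random.default_rng()
    dur, correct_in_row, direction = start_dur, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        correct = rng.random() < p_correct(dur)
        if correct:
            correct_in_row += 1
            if correct_in_row == 2:          # two correct -> harder (shorter)
                correct_in_row = 0
                if direction == +1:          # direction change = reversal
                    reversals.append(dur)
                direction = -1
                dur = max(dur / step_factor, min_dur)
        else:
            correct_in_row = 0               # one incorrect -> easier (longer)
            if direction == -1:
                reversals.append(dur)
            direction = +1
            dur = dur * step_factor
    # Threshold estimate: geometric mean of durations at the final reversals
    return np.exp(np.mean(np.log(reversals[-6:])))

# Hypothetical psychometric function for LNN vs. GN discrimination
psychometric = lambda d: 0.5 + 0.5 / (1.0 + (0.05 / d) ** 2)
threshold_dur = two_down_one_up_track(psychometric,
                                      rng=np.random.default_rng(0))
```

Averaging thresholds over repeated tracks, as is typical in such paradigms, gives a stable estimate of the minimum duration supporting reliable LNN-GN discrimination for a given simulated sensitivity.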