Semin Hear 2022; 43(03): 137-148
DOI: 10.1055/s-0042-1756160
Review Article

Auditory Evoked Potentials in Communication Disorders: An Overview of Past, Present, and Future

Akshay R. Maggu
1   Department of Speech-Language-Hearing Sciences, Hofstra University, Hempstead, New York
 

Abstract

This article provides a brief overview of auditory evoked potentials (AEPs) and their application in research and clinical practice within the field of communication disorders. The article begins by providing a historical perspective within the context of the key scientific developments that led to the emergence of numerous types of AEPs. It then discusses the different AEP techniques in light of their feasibility in the clinic. As AEPs, because of their versatility, find use across disciplines, this article also discusses some of the research questions that are currently being addressed using AEP techniques in the field of communication disorders and beyond. Finally, this article summarizes the shortcomings of the existing AEP techniques and provides a general perspective on future directions. The article is aimed at a broad readership including (but not limited to) students, clinicians, and researchers. Overall, this article may act as a brief primer for new AEP users and, for those who already use AEPs on a routine basis, as an overview of the progress in the field of AEPs along with future directions.



Auditory evoked potentials (AEPs) in humans, as the name suggests, refer to "potentials," that is, voltages that are "evoked" and recorded via a set of electrodes (predominantly from the scalp) in response to "auditory" stimulation. Following the breakthrough discovery of α rhythms in electroencephalography (EEG),[1] and with the advances in technology over the past half century, research in the area of AEPs has surged and found use in the fields of (but not limited to) linguistics, psychology, neuroscience, and communication disorders. The current review is aimed at briefly summarizing the historical viewpoints; the current clinical applications of AEPs in the field of communication disorders; AEP research investigating questions related to speech, language, and hearing; and contemporary AEP research that exhibits potential for use in the clinic. The text in this article may serve as a starting point for those who are beginning their journey in the field of AEPs as students, clinicians, and/or researchers, and as a snapshot of the progress of the AEP field in general for those who already possess some knowledge about AEPs.

HISTORICAL PERSPECTIVES: CLASSIFICATIONS OF AEPs

Since the discovery of AEPs, researchers have proposed several ways to classify them. One of the first classifications was based on the latency of the replicable waves following stimulus onset. AEPs were then primarily classified as early, middle, and late.[2] AEP responses elicited within 10 ms of stimulus onset were classified as early responses (e.g., auditory brainstem response [ABR]; [Fig. 1A]), those between 10 and 50 ms as the middle-latency response (MLR; [Fig. 1B]), and those between 60 and 500 ms as the slow- and long-latency response (LLR; [Fig. 1C]).[3] This classification is still popular in the scientific and clinical fields. Another way to classify AEPs was based on their recording sites. While most AEPs were recorded using vertex and mastoid or neck electrodes, certain AEPs were recorded from alternate sites such as the ear canal, as in the case of electrocochleography (ECochG). AEPs have also been classified as exogenous versus endogenous potentials. Exogenous potentials are mainly those that are more sensory in nature, do not depend on the subject's level of consciousness, and are not influenced by higher-order linguistic and cognitive processes (e.g., the click-evoked ABR). On the other hand, endogenous potentials are those that are affected by the subject's level of consciousness and are influenced by higher-order linguistic and cognitive processes (e.g., P300).

Figure 1 Human scalp–recorded auditory evoked potentials representing the different levels of the auditory nervous system: (A) auditory brainstem responses to 100 µs click stimuli collected at intensities ranging from 90 to 10 dB in 10 dB steps at a 30.1/s repetition rate. Waves I, II, III, and V are clearly visible in this individual's data. A decrease in amplitude and increase in latency of the peaks can be observed as the presentation intensity decreases; (B) middle latency response to 500 Hz tone burst stimuli collected at a 7.1/s repetition rate at 70 dB. Na, Pa, and Nb peaks in the latency range of 15 to 40 ms are clearly visible; and (C) long latency response to 500 Hz tone burst stimuli collected at a 1.1/s repetition rate at 70 dB. P1, N1, P2, and N2 peaks in the latency range of 60 to 200 ms are clearly visible.

While continuous EEG had already been discovered by the late 1920s, one of the key issues that scientists faced was the presence of background noise that obscured the desired EEG responses.[3] It was only after the discovery of the averaging technique to enhance the signal-to-noise ratio (SNR) of the desired EEG responses[4] that there was a surge of research studies in the field of AEPs. Following the improvement in SNR, changes in the high-pass filter settings from 50 to 150 Hz led to visualization of clear MLR waveforms, with the nomenclature of the three negative and two positive peaks as No–Po–Na–Pa–Nb (see [Fig. 1B]) and generators in the thalamocortical pathways.[5] [6] [7] Around this time, there was also interest in the late AEPs, which led to the discovery of the P1–N1–P2–N2 waves of the LLR (see [Fig. 1C]), with generators predominantly in the cortex.[8] [9] These waves were found to be affected by attention and sleep.
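As a concrete illustration of the averaging principle, the following minimal sketch (in Python, with entirely synthetic values; not taken from the article) shows how averaging stimulus-locked sweeps improves the SNR of a small evoked response roughly in proportion to the number of sweeps (in power terms).

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                        # sampling rate in Hz (assumed)
t = np.arange(0, 0.010, 1 / fs)   # 10-ms post-stimulus window (early-response range)

# Hypothetical evoked response: a small damped oscillation buried in ongoing EEG noise
evoked = 0.2 * np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.004)

def record_sweep():
    """One stimulus-locked sweep: the evoked response plus much larger background activity."""
    return evoked + rng.normal(0, 1.0, size=t.size)

for n_sweeps in (1, 100, 2000):
    avg = np.mean([record_sweep() for _ in range(n_sweeps)], axis=0)
    residual_noise = avg - evoked
    snr = np.var(evoked) / np.var(residual_noise)
    print(f"{n_sweeps:5d} sweeps -> power SNR ~ {snr:8.1f} (grows roughly with the sweep count)")
```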

While much of the work in AEPs at this time was being done using scalp electrodes, researchers from different parts of the world, independently of one another, were trying to obtain recordings from the then nonconventional sites, that is, the external auditory canal,[10] the cochlear promontory,[11] and the earlobe.[12] They were all able to record waves of the cochlear nerve potential that reduced in amplitude and increased in latency with reductions in stimulus intensity. Around this time, it was reported that there were some "late" waves in the recordings of the cochlear nerve potential that were too "early" to be considered part of the MLR or LLR.[12] These waves were described by Jewett and colleagues[13] [14] and later became what we now know as the "ABR." Soon after, studies confirmed the excellent reliability of these "Jewett bumps," which led to the beginning of widespread use of the ABR across the disciplines of audiology,[15] neurology,[16] and psychology,[17] for understanding sound processing in the subcortical auditory nervous system. Following the work on the ABR, it was found that responses from the auditory brainstem could also replicate the acoustic waveform of low-frequency tone-pip stimuli, which came to be known as the "frequency following response" (FFR).[18] Furthermore, studies investigating the effects of attention on the late AEPs led to the discovery of mismatch negativity (MMN) and P300 in an oddball paradigm.[19] While there have been numerous events throughout scientific history that have placed the AEP subfield where it presently stands, and discussing all of them is beyond the scope of this article, this section has tried to capture some of the key milestones in AEP history.



AUDITORY EVOKED POTENTIALS IN CLINICS

Use of Auditory Evoked Potentials in Diagnostics

As clinics are usually very busy catering to patients with a variety of speech, language, and hearing disorders, time is of the essence when it comes to testing. A measure that is fast, reliable, and replicable is usually the one to succeed in the clinic. In audiology, the click-evoked ABR has met these expectations and is thus frequently used in clinics for threshold estimation and anatomical site-of-lesion testing. The click-evoked ABR, when recorded at a sufficiently high intensity level (e.g., 90 dB), elicits five major peaks (I, II, III, IV, and V) that are approximately 1 ms apart from each other. Broadly, wave I originates from the auditory nerve, wave II from the cochlear nucleus, wave III from the superior olivary complex, wave IV from the lateral lemniscus, and wave V from the inferior colliculus.[20] [21] Waves I, III, and V have been found to be the most reliable and replicable of the five major waves of the ABR. With a decrease in the intensity of click stimulation, the ABR waves increase in latency and decrease in amplitude ([Fig. 1A]).[22] These properties of the ABR led to the establishment of normative latency-intensity and amplitude-intensity functions that quickly became a part of ABR analysis. These functions, especially those pertaining to wave V—the most dominant wave of the ABR—are sensitive to hearing difficulties.[23] While wave V amplitude and latency are widely used in screening and differential diagnosis of hearing loss, more complex measures such as the interpeak latency differences between waves I and III, III and V, and I and V; the amplitude ratio between waves V and I; and the interaural wave V amplitude ratio are routinely used for investigating the site of lesion in the subcortical auditory system.[24] For example, an increased latency difference between waves I and III with a near-normative latency difference between waves III and V may be indicative of a lesion at the lower brainstem and should be followed up by further neurological evaluation.
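The measures described above can be made concrete with a short sketch; the peak latencies, amplitudes, and screening cutoffs below are hypothetical example values for illustration only, not clinical norms.

```python
# Interpeak latency differences and the V/I amplitude ratio from hypothetical peak picks.
peaks = {            # latency (ms), amplitude (µV): assumed example values
    "I":   {"lat": 1.6, "amp": 0.25},
    "III": {"lat": 3.8, "amp": 0.30},
    "V":   {"lat": 5.7, "amp": 0.45},
}

ipl_i_iii = peaks["III"]["lat"] - peaks["I"]["lat"]    # roughly 2.2 ms in typical ears
ipl_iii_v = peaks["V"]["lat"] - peaks["III"]["lat"]    # roughly 1.9 ms in typical ears
ipl_i_v   = peaks["V"]["lat"] - peaks["I"]["lat"]      # roughly 4.0 ms in typical ears
v_i_ratio = peaks["V"]["amp"] / peaks["I"]["amp"]      # usually greater than 1

print(f"I-III: {ipl_i_iii:.1f} ms, III-V: {ipl_iii_v:.1f} ms, "
      f"I-V: {ipl_i_v:.1f} ms, V/I amplitude ratio: {v_i_ratio:.2f}")

# Illustrative screening logic; the cutoffs are assumptions, not validated criteria.
if ipl_i_iii > 2.5 and ipl_iii_v <= 2.3:
    print("Prolonged I-III with near-normal III-V: consider lower-brainstem follow-up.")
```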

Over the years, different ABR protocols have been developed to identify lesions in the subcortical auditory nervous system. One of the shortcomings of using a click stimulus to record the ABR is that clicks predominantly stimulate the high-frequency regions of the basilar membrane. As a result, an ABR elicited with clicks represents the neural processing of high frequencies while excluding the lower frequencies. To obtain an ABR that is representative of a broad range of frequencies, the "stacked ABR" was developed. Stacked ABR testing involves deriving ABRs for different frequency bands by presenting clicks in conjunction with high-pass masking noise at different cutoff frequencies (e.g., 500 Hz high-pass, 8,000 Hz high-pass). By doing this, separate ABRs are obtained for specific frequency bands corresponding to 500, 1,000, 2,000, 4,000, and 8,000 Hz. These separate frequency-specific ABRs are then "stacked" (aligned on their wave V peaks) and added together to obtain a sum of the synchronous activity of the neurons responsible for encoding this broad range of frequencies. The stacked ABR, owing to its fine-grained testing, has been found to be sensitive in detecting small auditory nerve tumors.[25]
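A minimal sketch of the stacking idea is shown below, using synthetic single-peak waveforms in place of real derived-band ABRs; the band latencies and waveform shapes are illustrative assumptions, not recorded data.

```python
import numpy as np

fs = 20000
t = np.arange(0, 0.012, 1 / fs)                  # 12-ms analysis window

def synthetic_band_abr(wave_v_latency_s):
    """Toy derived-band ABR: a single Gaussian 'wave V' at the given latency."""
    return np.exp(-0.5 * ((t - wave_v_latency_s) / 0.0004) ** 2)

# Lower-frequency bands show later wave V latencies (apical travel time); values assumed.
band_latencies_ms = {"8k": 5.6, "4k": 6.0, "2k": 6.5, "1k": 7.1, "500": 7.9}
band_abrs = {band: synthetic_band_abr(ms / 1000) for band, ms in band_latencies_ms.items()}

# Align each derived-band response on its wave V peak, then sum ("stack") the waveforms.
ref_idx = np.argmax(band_abrs["8k"])
stacked = np.zeros_like(t)
for waveform in band_abrs.values():
    shift = ref_idx - np.argmax(waveform)
    stacked += np.roll(waveform, shift)

print(f"Largest single-band wave V amplitude: {max(w.max() for w in band_abrs.values()):.2f}")
print(f"Stacked wave V amplitude: {stacked.max():.2f}")
```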

Furthermore, a comparison between click-ABRs collected at a slower repetition rate (e.g., 11.1/s) and a faster repetition rate (e.g., 90.1/s) has been found to be sensitive in detecting auditory neuropathy spectrum disorder (ANSD).[26] The rationale behind this approach is that the synchrony of auditory nerve fiber firing is challenged more by faster stimulation rates than by slower ones. As a result, a patient with ANSD, who exhibits asynchronous firing, will have vastly different ABRs for low versus high repetition rates as compared with a person with typical auditory nerve synchrony.
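As an illustration only, the sketch below compares a synthetic slow-rate and fast-rate ABR with a simple waveform correlation; the waveforms and the decision threshold are assumptions, not a validated clinical criterion.

```python
import numpy as np

def waveform_similarity(abr_slow, abr_fast):
    """Pearson correlation between slow-rate (e.g., 11.1/s) and fast-rate (e.g., 90.1/s) ABRs."""
    return np.corrcoef(abr_slow, abr_fast)[0, 1]

# Synthetic example: the fast-rate response modeled as a slightly delayed, attenuated copy.
fs = 20000
t = np.arange(0, 0.012, 1 / fs)
abr_slow = np.exp(-0.5 * ((t - 0.0057) / 0.0004) ** 2)
abr_fast = 0.8 * np.exp(-0.5 * ((t - 0.0060) / 0.0004) ** 2)

r = waveform_similarity(abr_slow, abr_fast)
print(f"Slow- vs. fast-rate waveform correlation: r = {r:.2f}")
print("Pattern consistent with preserved synchrony" if r > 0.7
      else "Marked rate effect: consider ANSD work-up")
```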

Although the traditional click-ABR is an excellent clinical tool for understanding the basic sensory functioning of the auditory nervous system, it is not capable of evaluating speech sound processing in the auditory system. To fill this clinical gap, the FFR elicited with a 40-ms /da/ stimulus at a relatively high repetition rate (e.g., 10.9/s), also known as BioMARK (Biological Marker for Auditory Processing), was developed.[27] BioMARK, and FFRs in general, have been found to be sensitive to a variety of disorders, including dyslexia,[28] [29] autism,[30] [31] auditory processing disorder (APD),[32] [33] [34] [35] and concussion,[36] and exhibit potential for use in the clinic.

Another early-latency AEP technique that is frequently used in the clinic is ECochG. The ECochG technique can be used to measure the cochlear microphonic (CM), the summating potential (SP), and the compound action potential (AP), either noninvasively using electrodes on the scalp and in the ear canal or semi-invasively by placing an electrode on the tympanic membrane. While the CM is predominantly generated by the outer hair cells[37] and the SP results from contributions of both outer and inner hair cells,[38] the AP has its generators in the auditory nerve.[39] Using the SP/AP amplitude ratio, ECochG is routinely used in the diagnosis of Meniere's disease.[40] Furthermore, the presence of a long-ringing CM (i.e., one with an extended latency range) has been reported to be indicative of ANSD.[37]
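The SP/AP computation itself is straightforward, as the following sketch with assumed example amplitudes shows; the 0.4 cutoff is illustrative rather than a clinical norm, and clinics often use their own normative data or area-based ratios.

```python
# SP/AP amplitude ratio from electrocochleography, with hypothetical example values.
sp_amplitude_uv = 0.35   # summating potential amplitude from baseline (assumed value)
ap_amplitude_uv = 0.70   # compound action potential (N1) amplitude from baseline (assumed value)

sp_ap_ratio = sp_amplitude_uv / ap_amplitude_uv
print(f"SP/AP amplitude ratio: {sp_ap_ratio:.2f}")
if sp_ap_ratio > 0.4:    # illustrative cutoff, not a validated criterion
    print("Elevated SP/AP ratio: pattern often reported in Meniere's disease.")
```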



Use of Auditory Evoked Potentials in Intervention

AEPs are useful not only in the diagnosis of communication disorders but also in their intervention. The electrically evoked compound action potential (ECAP) and the electrically evoked ABR (EABR) are regularly used in cochlear implant clinics. The ECAP measures the compound AP, an indicator of sufficient auditory nerve functioning, by stimulating the auditory nerve with electrical impulses delivered via the cochlear implant.[41] The EABR measures auditory brainstem functioning in response to the electrical impulses.[42] Both ECAP and EABR can be measured intra- and postoperatively to understand functioning and changes following cochlear implant use. Similarly, the auditory steady-state response has been found to be useful for understanding the success of a hearing aid fitting over time.[43]

Along with the early-latency AEPs, evidence suggests that the cortical AEPs, including the MLR,[44] [45] [46] [47] LLR,[48] [49] [50] [51] MMN,[52] [53] [54] [55] and P300,[56] can be promising in the diagnosis and intervention of communication disorders. However, factors such as individual variability, time demands, and the need for specialized equipment and training make these techniques less appealing, at least for now, for routine clinical use.



AUDITORY EVOKED POTENTIALS IN RESEARCH

AEPs, due to their versatile characteristics, have been popularly used across the disciplines of speech, language, and hearing to address a variety of research questions.

Use of Auditory Evoked Potentials in Audiology Research

In the past decade, there has been a growing interest in investigating the ABR for establishing biomarkers of cochlear synaptopathy, which is hypothesized to be one of the leading causes of "hidden hearing loss."[57] Animal studies suggest that the wave I amplitude of the ABR is diminished in cases of cochlear synaptopathy,[58] mainly due to damage to low-spontaneous-rate, high-threshold auditory nerve fibers.[59] [60] In humans, however, the reliability of wave I from a scalp-recorded ABR is questionable, mainly because of high variability and individual differences in wave I amplitude; as a result, the ABR wave I has not been established as a definitive neural marker of "hidden hearing loss" in humans.[61] [62] [63] [64] [65] To circumvent the variability issue, attempts have been made to use the ratio of the SP and wave I amplitudes as a way to identify "hidden hearing loss."[66] The rationale behind this approach was that the SP, which is a cochlear potential, would be unaffected in hidden hearing loss while the wave I amplitude would be reduced, and thus the SP would act as a normalization factor for the wave I amplitude. However, the SP amplitude faces similar problems of high interindividual variability, potentially due to its low magnitude[67] resulting in low SNR. Nevertheless, attempts at finding assays for identifying hidden hearing loss in humans are ongoing and hold promise for the future.

In regard to intervention-based research, MLRs and LLRs have been found to be useful in evaluating cortical plasticity resulting from auditory training paradigms.[47] [68] In cochlear implant (CI) research, auditory neuroplasticity as a result of using cochlear implants has been investigated using the late AEPs (e.g., LLR, MMN). Overall, research findings reveal changes in the P1, N1, and P2 components of the LLR and improved detection of frequency contrasts on the MMN following the use of a cochlear implant. However, one of the most challenging tasks in CI-based AEP research is eliminating the CI-induced electrical artifact in AEP recordings. While there have been recent attempts at developing techniques that could aid in removing CI-induced artifacts,[69] [70] [71] more research is needed to bring CI-based EEG into mainstream research and clinics. While the CI-induced electrical artifact is a problem for scalp-recorded potentials evoked by acoustic stimuli, potentials evoked by electrical stimuli do not usually present such a problem. For example, the EABR has been used as an index of neuroplasticity in the auditory nervous system following the use of cochlear implants.[72] [73] [74]



Use of AEPs in Linguistics and Cognitive Neuroscience Research

Alongside their use in hearing research, AEPs have been immensely useful in research related to speech and language perception. Click-ABRs have been found to be predictive of speech and language development in children.[75] Speech-evoked FFRs have been used to investigate experience-dependent effects on the brain, including those of musical training, bilingualism, socioeconomic status, auditory training, the language–music relationship, and absolute pitch.[76] [77] [78] [79] In general, evidence suggests that auditory experiences enhance the neural encoding of sounds, as depicted on the FFR.[76] [77] [79] [80] As the FFR is known to recapitulate the acoustics of the stimulus remarkably well (e.g., the fundamental frequency [F0]) and is influenced by language experience, it has been used to study the processing of tone languages (e.g., Mandarin, Cantonese) in the auditory nervous system.[76] [77] [78] [79] For example, [Fig. 2] depicts a comparison of the Cantonese Tone 2 stimulus ([Fig. 2A, C]) and the corresponding FFR ([Fig. 2B, D]), and a pitch-tracking comparison of the FFR and the stimulus pitch ([Fig. 2E]), where the participant was a native speaker of Cantonese. It is worth noting how closely the pitch of the FFR tracks the pitch of the stimulus, making the FFR an excellent candidate for studying the neural processing of lexical tones. In studies pertaining to tone language processing, the FFR has been utilized to investigate linguistic sound change,[78] the interactive effects of tone language and musical experience,[76] and the additive effects of absolute pitch and tone language experience.[79] Furthermore, the FFR has been found to be predictive of the acquisition of tone languages.[81] The long-latency counterpart of the FFR, known as the "cortical pitch response," which entails peaks and troughs in the latency range of 600 to 900 ms, has also been found to be sensitive to tone language experience.[82] [83] [84]

Figure 2 Frequency following response collected with a rising lexical tone stimulus (/ji/ T2) from Cantonese. (A) Waveform of the 175-ms stimulus; (B) waveform of the frequency following response (FFR), showing a 50-ms pre-stimulus baseline, the FFR, and the post-stimulus baseline; (C) power spectral density of the stimulus; (D) power spectral density of the FFR; and (E) comparison of the pitch contours of the FFR and the stimulus. In this case, the FFR pitch contour bears a near-perfect resemblance to the stimulus pitch contour.
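A minimal sketch of the kind of pitch-tracking comparison shown in Fig. 2E is given below. It assumes the stimulus and the averaged FFR are available as NumPy arrays at the same sampling rate and uses a simple autocorrelation-based F0 estimator; the frame sizes, frequency limits, and the synthetic demo signal are assumptions, not the article's data or analysis code.

```python
import numpy as np

def f0_contour(signal, fs, frame_ms=40, hop_ms=10, fmin=80, fmax=400):
    """Frame-by-frame F0 estimate via autocorrelation peak picking."""
    frame, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    lags = np.arange(int(fs / fmax), int(fs / fmin) + 1)   # candidate pitch periods
    f0s = []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame]
        x = x - x.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation, lags >= 0
        best_lag = lags[np.argmax(ac[lags])]
        f0s.append(fs / best_lag)
    return np.array(f0s)

def pitch_tracking_accuracy(stimulus, ffr, fs):
    """Correlation between stimulus and FFR F0 contours (higher = closer pitch tracking)."""
    f0_stim, f0_ffr = f0_contour(stimulus, fs), f0_contour(ffr, fs)
    n = min(len(f0_stim), len(f0_ffr))
    return np.corrcoef(f0_stim[:n], f0_ffr[:n])[0, 1]

# Tiny synthetic check: a 175-ms rising tone (110 to 160 Hz, illustrative values) plus noise.
fs = 16000
t = np.arange(0, 0.175, 1 / fs)
f0_true = 110 + (160 - 110) * t / t[-1]
stim = np.sin(2 * np.pi * np.cumsum(f0_true) / fs)
ffr = stim + 0.5 * np.random.default_rng(4).normal(size=stim.size)
print(f"Stimulus-to-FFR pitch tracking r = {pitch_tracking_accuracy(stim, ffr, fs):.2f}")
```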

Other cortical AEPs that are recorded via oddball presentation of stimuli (e.g., MMN and P300) have been very popular in examining speech sound processing. For example, MMN and P300 have been widely used to understand native versus nonnative speech discrimination[85] and categorical perception.[86] [Fig. 3] depicts the P300 ([Fig. 3A]) and MMN ([Fig. 3B]) collected via an 80:20 (standard:deviant) oddball presentation. In this example ([Fig. 3B]), the MMN is represented by the shaded region of the waveform in the latency range of 100 to 300 ms. While the FFR, MMN, and P300 entail presentation of very short stimuli such as monosyllables, very late AEPs such as the N400 and P600 make use of sentence-level stimuli. The N400 is used for examining the semantics of a sentence.[87] A semantically incongruent sentence leads to a slow negative wave predominantly ranging from 300 to 700 ms. For example, "Peter eats bread and butter" will not elicit an N400, but "Peter eats bread and shoe" will, because while the former is a semantically congruent sentence, the latter is semantically incongruent. In comparison, the P600 is a very late positive wave in the latency range of 500 to 1,000 ms that is elicited as a result of syntactic violations in sentences.[88] For example, violations in subject–verb agreement (e.g., "The boy *throw the ball") may result in a P600 component.

Figure 3 Auditory evoked responses elicited in 80:20 oddball paradigms: (A) P300 (labeled) in the latency range of 300 to 400 ms and (B) mismatch negativity (MMN) (shaded) in the latency range of 100 to 300 ms.
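The oddball responses in Fig. 3 are typically derived from a difference wave (deviant minus standard), as in the following minimal sketch with synthetic epochs; the sampling rate, trial counts, and the injected "MMN" are assumptions for illustration only.

```python
import numpy as np

def difference_wave(standard_epochs, deviant_epochs):
    """standard_epochs, deviant_epochs: arrays of shape (n_trials, n_samples)."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def mmn_amplitude(diff_wave, times_ms, window=(100, 300)):
    """Mean amplitude of the difference wave in the MMN latency window."""
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    return diff_wave[mask].mean()

# Example with synthetic epochs (500 Hz sampling, -100 to 500 ms epoch window).
fs = 500
times_ms = np.arange(-100, 500, 1000 / fs)
rng = np.random.default_rng(1)
standard = rng.normal(0, 1, (160, times_ms.size))           # 80% standards
deviant = rng.normal(0, 1, (40, times_ms.size))              # 20% deviants
deviant[:, (times_ms >= 100) & (times_ms <= 300)] -= 2.0     # injected negativity ("MMN")

dw = difference_wave(standard, deviant)
print(f"MMN mean amplitude (100-300 ms): {mmn_amplitude(dw, times_ms):.2f} µV")
```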


CONTEMPORARY RESEARCH AND FUTURE DIRECTIONS IN AEPs

Quite recently, there have been attempts to use AEPs to resolve some of the longstanding issues in the subfield of APD. APD is arguably one of the most intriguing and controversial topics in the field of audiology. The main issues that make APD controversial are the lack of specificity to the auditory domain and comorbidity with nonauditory disorders (e.g., developmental language delay).[89] [90] [91] [92] Recently, it has been argued that these issues might be a result of the use of existing behavioral test batteries for APD, which require clients' attention, memory, and/or linguistic skills and are thus confounded by the domains of language and cognition.[93] In other words, a client with a reduced attention span may fail on the current APD test batteries. To circumvent these shortcomings of the existing behavioral test batteries, there has been a proposal to set up an objective test battery that contains AEPs targeting the subcortical auditory nervous system. AEP testing in this objective test battery does not require active participation from the client and thus limits the influences of other domains (language and cognition) on auditory processing testing.[93] However, more research is needed to understand the relationship between the proposed test battery and auditory behavior.

Traditional AEP testing, though popular in both clinics and research, mostly due to its excellent reliability, entails repeated presentation of stimuli to elicit neural responses that can be averaged together to obtain meaningful, interpretable waveforms. While this approach is appealing and has endured the test of time, it also has a few drawbacks. First, this methodology may limit the variety of auditory stimuli that can be used for research. As repeated presentation of stimuli is a prerequisite for this technique, it imposes a limitation on the nature (type and length) of stimuli that can be presented. For example, if the subcortical representation of sentences (of several seconds in duration) is to be examined using this technique, around 2,000 presentations of the sentences may be needed. An obvious problem with that is the time taken during the whole process, which might further degrade data quality due to subject-related factors (e.g., fatigue from the long duration). Second, the current methodology requires the stimuli to be controlled (if not fully synthesized) across a set of parameters before they can be utilized in AEP experiments. However, an obvious problem with using an artificial or synthesized stimulus is its reduced ecological validity, that is, how well the synthesized stimulus represents a natural stimulus.

To get around these problems, there has recently been a surge of studies focusing on machine learning approaches applied to EEG data collected with natural auditory stimuli.[94] [95] [96] [97] One useful approach involves obtaining a temporal response function, which is derived by extracting speech features (e.g., envelope, phonetic, and semantic features) from the natural stimuli and is used to predict the EEG response to the stimulus. In this approach, a Pearson's correlation (r) is then calculated between the predicted and the recorded EEG. A higher correlation value is indicative of enhanced neural encoding of the natural stimuli.[97] [Fig. 4] depicts an example of this process.

Figure 4 An encoding model in which phonetic speech features are extracted and analyzed alongside training electroencephalography (EEG) data to obtain the temporal response function, which is then employed to predict the EEG. In the final step, Pearson's r is calculated between the predicted EEG and the recorded (test) EEG.
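A minimal sketch of this forward-modeling (temporal response function) approach is shown below, using simple ridge regression on a single stimulus feature and a single EEG channel with synthetic data; the feature, sampling rate, lag range, and regularization value are assumptions, and publicly available toolboxes (e.g., the mTRF Toolbox) implement more complete versions of this analysis.

```python
import numpy as np

def lagged_design_matrix(stimulus, n_lags):
    """Stack time-lagged copies of the stimulus feature (lags 0..n_lags-1 samples)."""
    X = np.zeros((len(stimulus), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:len(stimulus) - lag]
    return X

def fit_trf(stimulus, eeg, n_lags, ridge_lambda=1.0):
    """Ridge regression: TRF weights mapping the stimulus feature to the EEG."""
    X = lagged_design_matrix(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + ridge_lambda * np.eye(n_lags), X.T @ eeg)

def predict_and_score(trf, stimulus, eeg, n_lags):
    """Predict EEG from the stimulus feature and return Pearson's r against the recorded EEG."""
    pred = lagged_design_matrix(stimulus, n_lags) @ trf
    return np.corrcoef(pred, eeg)[0, 1]

# Synthetic demo: "EEG" generated as a filtered version of a stand-in envelope plus noise.
rng = np.random.default_rng(2)
fs, n_lags = 64, 32                        # 64 Hz feature/EEG rate, lags spanning 0-500 ms
envelope = rng.random(fs * 120)            # 2 minutes of a stand-in envelope feature
true_trf = np.hanning(n_lags)
eeg = lagged_design_matrix(envelope, n_lags) @ true_trf + rng.normal(0, 1, envelope.size)

split = envelope.size // 2                 # first half for training, second half held out
trf = fit_trf(envelope[:split], eeg[:split], n_lags)
r = predict_and_score(trf, envelope[split:], eeg[split:], n_lags)
print(f"Prediction accuracy (Pearson's r) on held-out data: {r:.2f}")
```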

Furthermore, one of the main goals in the field of communication disorders is to develop efficient assessment protocols that can accurately detect the presence of communication difficulties in a time- and cost-efficient manner. A recent study,[98] using a machine learning approach (support vector machine classification) with cross-validation, developed an objective method to predict communication difficulties based on the functioning of the auditory nervous system. Similar machine learning approaches have been validated for the identification of lexical tone contours in tone languages.[99] These machine learning approaches exhibit potential for quick and accurate identification of communication disorders in the future. However, more research is needed to bring these techniques into mainstream clinics.
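A minimal sketch of that kind of pipeline, using scikit-learn with synthetic AEP-derived features, is given below; the feature set, group sizes, and effect size are illustrative assumptions and do not reproduce the cited study's data or model.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Hypothetical features per child, e.g., FFR F0-tracking accuracy, wave V latency, etc.
n_per_group, n_features = 40, 6
typical = rng.normal(0.0, 1.0, (n_per_group, n_features))
at_risk = rng.normal(0.6, 1.0, (n_per_group, n_features))    # shifted distribution (assumed)

X = np.vstack([typical, at_risk])
y = np.array([0] * n_per_group + [1] * n_per_group)          # 0 = typical, 1 = at risk

# Support vector machine classification with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```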



CONCLUSION

This article is a brief summary of the evolution of AEP technology, its current state, and the future of AEPs in the field of communication disorders. Since the advent of continuous EEG almost a century ago, there have been several landmark discoveries and inventions that have shaped the field of AEPs into its current state. This article summarizes the key milestones in the history of AEPs, followed by a discussion of the clinical and research applications of current AEP technology. While the existing AEP methodology is immensely popular and contributes to resolving some of the longstanding research questions in the area of communication disorders, some of its limiting aspects are also discussed in this article. Furthermore, this article touches on some of the machine learning approaches and their potential use with AEP data in developing neural markers for detecting communication disorders. Overall, this article can be useful for both beginners and regular users in the area of AEPs, as it provides them with an overview of AEPs: their history, current state, and future directions.



CONFLICT OF INTEREST

None declared.

References

  • 1 Berger H. Über das Elektrenkephalogramm des Menschen. Arch Für Psychiatr Nervenkrankh 1929; 87 (01) 527-570
  • 2 Davis H. Electric response audiometry, with special reference to the vertex potentials. In: de Boer E, Connor WK, Davis H. et al., eds. Auditory System: Clinical and Special Topics. Handbook of Sensory Physiology. Springer; 1976: 85-103
  • 3 Picton TW. Human Auditory Evoked Potentials. Plural Publishing; 2010
  • 4 Dawson GD. A summation technique for the detection of small evoked potentials. Electroencephalogr Clin Neurophysiol 1954; 6 (01) 65-84
  • 5 Mendel MI, Goldstein R. Stability of the early components of the averaged electroencephalic response. J Speech Hear Res 1969; 12 (02) 351-361
  • 6 Geisler CD, Frishkopf LS, Rosenblith WA. Extracranial responses to acoustic clicks in man. Science 1958; 128 (3333): 1210-1211
  • 7 Musiek F, Nagle S. The middle latency response: a review of findings in various central nervous system lesions. J Am Acad Audiol 2018; 29 (09) 855-867
  • 8 Davis H, Mast T, Yoshie N, Zerlin S. The slow response of the human cortex to auditory stimuli: recovery process. Electroencephalogr Clin Neurophysiol 1966; 21 (02) 105-113
  • 9 Williams HL, Tepas DI, Morlock Jr HC. Evoked responses to clicks and electroencephalographic stages of sleep in man. Science 1962; 138 (3541): 685-686
  • 10 Yoshie N, Ohashi T, Suzuki T. Non-surgical recording of auditory nerve action potentials in man. Laryngoscope 1967; 77 (01) 76-85
  • 11 Portmann M, Aran JM. Electro-cochleography. Laryngoscope 1971; 81 (06) 899-910
  • 12 Sohmer H, Feinmesser M. Cochlear action potentials recorded from the external ear in man. Ann Otol Rhinol Laryngol 1967; 76 (02) 427-435
  • 13 Jewett DL, Romano MN, Williston JS. Human auditory evoked potentials: possible brain stem components detected on the scalp. Science 1970; 167 (3924): 1517-1518
  • 14 Jewett DL, Williston JS. Auditory-evoked far fields averaged from the scalp of humans. Brain 1971; 94 (04) 681-696
  • 15 Hecox K, Galambos R. Brain stem auditory evoked responses in human infants and adults. Arch Otolaryngol 1974; 99 (01) 30-33
  • 16 Starr A, Achor J. Auditory brain stem responses in neurological disease. Arch Neurol 1975; 32 (11) 761-768
  • 17 Picton TW, Hillyard SA. Human auditory evoked potentials. II. Effects of attention. Electroencephalogr Clin Neurophysiol 1974; 36 (02) 191-199
  • 18 Moushegian G, Rupert AL, Stillman RD. Laboratory note. Scalp-recorded early responses in man to frequencies in the speech range. Electroencephalogr Clin Neurophysiol 1973; 35 (06) 665-667
  • 19 Näätänen R, Gaillard AW, Mäntysalo S. Early selective-attention effect on evoked potential reinterpreted. Acta Psychol (Amst) 1978; 42 (04) 313-329
  • 20 Melcher JR, Kiang NY. Generators of the brainstem auditory evoked potential in cat. III: Identified cell populations. Hear Res 1996; 93 (1-2): 52-71
  • 21 Melcher JR, Knudson IM, Fullerton BC, Guinan Jr JJ, Norris BE, Kiang NY. Generators of the brainstem auditory evoked potential in cat. I. An experimental approach to their identification. Hear Res 1996; 93 (1-2): 1-27
  • 22 Hood LJ. Clinical Applications of the Auditory Brainstem Response. Singular Publishing Group; 1998
  • 23 Sharma M, Bist SS, Kumar S. Age-related maturation of wave V latency of auditory brainstem response in children. J Audiol Otol 2016; 20 (02) 97-101
  • 24 Young A, Cornejo J, Spinner A. Auditory brainstem response. In: StatPearls. StatPearls Publishing; 2022. Accessed March 10, 2022 at: http://www.ncbi.nlm.nih.gov/books/NBK564321/
  • 25 Don M, Kwong B, Tanaka C, Brackmann D, Nelson R. The stacked ABR: a sensitive and specific screening tool for detecting small acoustic tumors. Audiol Neurotol 2005; 10 (05) 274-290
  • 26 Chandan HS, Prabhu PP. Speech perception abilities in individuals with auditory neuropathy spectrum disorder with preserved temporal synchrony. J Hear Sci 2013; 3 (02) 16-21
  • 27 Skoe E, Kraus N. Auditory brain stem response to complex sounds: a tutorial. Ear Hear 2010; 31 (03) 302-324
  • 28 Kumar U, Maggu AR, Mamatha NM. Effect of noise on BioMARK in individuals with learning disability. J India Inst Speech Hear 2012; 31
  • 29 Hornickel J, Zecker SG, Bradlow AR, Kraus N. Assistive listening devices drive neuroplasticity in children with dyslexia. Proc Natl Acad Sci U S A 2012; 109 (41) 16731-16736
  • 30 Russo N, Nicol T, Trommer B, Zecker S, Kraus N. Brainstem transcription of speech is disrupted in children with autism spectrum disorders. Dev Sci 2009; 12 (04) 557-567
  • 31 Russo NM, Hornickel J, Nicol T, Zecker S, Kraus N. Biological changes in auditory function following training in children with autism spectrum disorders. Behav Brain Funct 2010; 6 (01) 60
  • 32 Banai K, Kraus N. The dynamic brainstem: implications for CAPD. In: Current Controversies in Central Auditory Processing Disorder. San Diego: Plural Publishing Inc; 2008: 269-289
  • 33 Kraus N, Anderson S. Auditory processing disorder: biological basis and treatment efficacy. In: Le Prell CG, Lobarinas E, Popper AN, Fay RR. eds. Translational Research in Audiology, Neurotology, and the Hearing Sciences. Springer Handbook of Auditory Research. Springer International Publishing; 2016: 51-80
  • 34 Rocha-Muniz CN, Befi-Lopes DM, Schochat E. Investigation of auditory processing disorder and language impairment using the speech-evoked auditory brainstem response. Hear Res 2012; 294 (1-2): 143-152
  • 35 Kumar P, Singh NK. BioMARK as electrophysiological tool for assessing children at risk for (central) auditory processing disorders without reading deficits. Hear Res 2015; 324: 54-58
  • 36 White-Schwoch T, Krizman J, McCracken K. et al. Baseline profiles of auditory, vestibular, and visual functions in youth tackle football players. Concussion 2020; 4 (04) CNC66
  • 37 Santarelli R, Scimemi P, Monte ED, Arslan E. Cochlear microphonic potential recorded by transtympanic electrocochleography in normally-hearing and hearing-impaired ears. Acta Otorhinolaryngol Ital 2006; 26 (02) 78-95
  • 38 Durrant JD, Wang J, Ding DL, Salvi RJ. Are inner or outer hair cells the source of summating potentials recorded from the round window?. J Acoust Soc Am 1998; 104 (01) 370-377
  • 39 Eggermont JJ. Electrocochleography. In: de Boer E, Connor WK, Davis H. et al., eds. Auditory System: Clinical and Special Topics. Handbook of Sensory Physiology. Springer; 1976: 625-705
  • 40 Ferraro JA, Tibbils RP. SP/AP area ratio in the diagnosis of Ménière's disease. Am J Audiol 1999; 8 (01) 21-28
  • 41 Tanamati LF, Bevilacqua MC, Costa OA. Longitudinal study of the ECAP measured in children with cochlear implants. Rev Bras Otorrinolaringol (Engl Ed) 2009; 75 (01) 90-96
  • 42 Gallégo S, Frachet B, Micheyl C, Truy E, Collet L. Cochlear implant performance and electrically-evoked auditory brain-stem response characteristics. Electroencephalogr Clin Neurophysiol 1998; 108 (06) 521-525
  • 43 Damarla VK, Manjula P. Application of ASSR in the hearing aid selection process. Aust N Z J Audiol 2007; 29 (02) 89-97
  • 44 Arehole S, Augustine LE, Simhadri R. Middle latency response in children with learning disabilities: preliminary findings. J Commun Disord 1995; 28 (01) 21-38
  • 45 Brown DD. The use of the middle latency response (MLR) for assessing low-frequency auditory thresholds. J Acoust Soc Am 1982; 71 (Suppl. 01) S99
  • 46 Kraus N, Smith DI, Reed NL, Stein LK, Cartee C. Auditory middle latency responses in children: effects of age and diagnostic category. Electroencephalogr Clin Neurophysiol 1985; 62 (05) 343-351
  • 47 Schochat E, Musiek FE, Alonso R, Ogata J. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder. Braz J Med Biol Res 2010; 43 (08) 777-785
  • 48 Swink S, Stuart A. Auditory long latency responses to tonal and speech stimuli. J Speech Lang Hear Res 2012; 55 (02) 447-459
  • 49 Regaçone SF, Gução ACB, Giacheti CM, Romero ACL, Frizzo ACF. Long latency auditory evoked potentials in students with specific learning disorders. Audiol Commun Res 2014; 19: 13-18
  • 50 Dorman MF, Sharma A, Gilley P, Martin K, Roland P. Central auditory development: evidence from CAEP measurements in children fit with cochlear implants. J Commun Disord 2007; 40 (04) 284-294
  • 51 Leite RA, Wertzner HF, Gonçalves IC, Magliaro FCL, Matas CG. Auditory evoked potentials: predicting speech therapy outcomes in children with phonological disorders. Clinics (São Paulo) 2014; 69 (03) 212-218
  • 52 Holopainen IE, Korpilahti P, Juottonen K, Lang H, Sillanpää M. Attenuated auditory event-related potential (mismatch negativity) in children with developmental dysphasia. Neuropediatrics 1997; 28 (05) 253-256
  • 53 Korpilahti P, Lang HA. Auditory ERP components and mismatch negativity in dysphasic children. Electroencephalogr Clin Neurophysiol 1994; 91 (04) 256-264
  • 54 Kraus N, McGee TJ. Mismatch negativity in the assessment of central auditory function. Am J Audiol 1994; 3 (02) 39-51
  • 55 Bishop DVM. Using mismatch negativity to study central auditory processing in developmental language and literacy impairments: where are we, and where should we be going?. Psychol Bull 2007; 133 (04) 651-672
  • 56 Schochat E, Scheuer CI, Andrade ER. ABR and auditory P300 findings in children with ADHD. Arq Neuropsiquiatr 2002; 60 (3-B): 742-747
  • 57 Shi L, Chang Y, Li X, Aiken S, Liu L, Wang J. Cochlear synaptopathy and noise-induced hidden hearing loss. Neural Plast 2016; 2016: 6143164
  • 58 Kujawa SG, Liberman MC. Adding insult to injury: cochlear nerve degeneration after “temporary” noise-induced hearing loss. J Neurosci 2009; 29 (45) 14077-14085
  • 59 Liberman LD, Suzuki J, Liberman MC. Dynamics of cochlear synaptopathy after acoustic overexposure. J Assoc Res Otolaryngol 2015; 16 (02) 205-219
  • 60 Furman AC, Kujawa SG, Liberman MC. Noise-induced cochlear neuropathy is selective for fibers with low spontaneous rates. J Neurophysiol 2013; 110 (03) 577-586
  • 61 Bramhall NF. Use of the auditory brainstem response for assessment of cochlear synaptopathy in humans. J Acoust Soc Am 2021; 150 (06) 4440-4451
  • 62 Guest H, Munro KJ, Prendergast G, Plack CJ. Reliability and interrelations of seven proxy measures of cochlear synaptopathy. Hear Res 2019; 375: 34-43
  • 63 Barbee CM, James JA, Park JH. et al. Effectiveness of auditory measures for detecting hidden hearing loss and/or cochlear synaptopathy: a systematic review. Semin Hear 2018; 39 (02) 172-209
  • 64 Hickox AE, Larsen E, Heinz MG, Shinobu L, Whitton JP. Translational issues in cochlear synaptopathy. Hear Res 2017; 349: 164-171
  • 65 Guest H, Munro KJ, Prendergast G, Millman RE, Plack CJ. Impaired speech perception in noise with a normal audiogram: no evidence for cochlear synaptopathy and no relation to lifetime noise exposure. Hear Res 2018; 364: 142-151
  • 66 Liberman MC, Epstein MJ, Cleveland SS, Wang H, Maison SF. Toward a differential diagnosis of hidden hearing loss in humans. PLoS One 2016; 11 (09) e0162726
  • 67 Prendergast G, Tu W, Guest H. et al. Supra-threshold auditory brainstem response amplitudes in humans: test-retest reliability, electrode montage and noise exposure. Hear Res 2018; 364: 38-47
  • 68 Alonso R, Schochat E. The efficacy of formal auditory training in children with (central) auditory processing disorder: behavioral and electrophysiological evaluation. Rev Bras Otorrinolaringol (Engl Ed) 2009; 75 (05) 726-732
  • 69 Viola FC, De Vos M, Hine J. et al. Semi-automatic attenuation of cochlear implant artifacts for the evaluation of late auditory evoked potentials. Hear Res 2012; 284 (1-2): 6-15
  • 70 Kim K, Punte AK, Mertens G. et al. A novel method for device-related electroencephalography artifact suppression to explore cochlear implant-related cortical changes in single-sided deafness. J Neurosci Methods 2015; 255: 22-28
  • 71 Miller S, Zhang Y. Validation of the cochlear implant artifact correction tool for auditory electrophysiology. Neurosci Lett 2014; 577: 51-55
  • 72 Van Den Abbeele T, Crozat-Teissier N, Noel-Petroff N, Viala P, Frachet B, Narcy P. Neural plasticity of the auditory pathway after cochlear implantation in children. Cochlear Implants Int 2005; 6 (1, Suppl 1): 56-59
  • 73 Kral A, Tillein J. Brain plasticity under cochlear implant stimulation. Adv Otorhinolaryngol 2006; 64: 89-108
  • 74 Kileny PR. Evoked potentials in the management of patients with cochlear implants: research and clinical applications. Ear Hear 2007; 28 (2, Suppl): 124S-127S
  • 75 Chonchaiya W, Tardif T, Mai X. et al. Developmental trends in auditory processing can provide early predictions of language acquisition in young infants. Dev Sci 2013; 16 (02) 159-172
  • 76 Maggu AR, Wong PC, Antoniou M, Bones O, Liu H, Wong FC. Effects of combination of linguistic and musical pitch experience on subcortical pitch encoding. J Neurolinguist 2018; 47: 145-155
  • 77 Maggu AR, Zong W, Law V, Wong PC. Learning two tone languages enhances the brainstem encoding of lexical tones. Proc Interspeech 2018; 1437-1441
  • 78 Maggu AR, Liu F, Antoniou M, Wong PCM. Neural correlates of indicators of sound change in Cantonese: evidence from cortical and subcortical processes. Front Hum Neurosci 2016; 10: 652
  • 79 Maggu AR, Lau JCY, Waye MMY, Wong PCM. Combination of absolute pitch and tone language experience enhances lexical tone perception. Sci Rep 2021; 11 (01) 1485
  • 80 Wong PCM, Skoe E, Russo NM, Dees T, Kraus N. Musical experience shapes human brainstem encoding of linguistic pitch patterns. Nat Neurosci 2007; 10 (04) 420-422
  • 81 Novitskiy N, Maggu AR, Lai CM. et al. Early development of neural speech encoding depends on age but not native language status: evidence from lexical tone. Neurobiol Lang 2022; 3 (01) 67-86
  • 82 Krishnan A, Gandour JT, Suresh CH. Cortical pitch response components show differential sensitivity to native and nonnative pitch contours. Brain Lang 2014; 138: 51-60
  • 83 Krishnan A, Gandour JT, Ananthakrishnan S, Vijayaraghavan V. Language experience enhances early cortical pitch-dependent responses. J Neurolinguist 2015; 33: 128-148
  • 84 Krishnan A, Gandour JT, Suresh CH. Pitch processing of dynamic lexical tones in the auditory cortex is influenced by sensory and extrasensory processes. Eur J Neurosci 2015; 41 (11) 1496-1504
  • 85 Chandrasekaran B, Krishnan A, Gandour JT. Mismatch negativity to pitch contours is influenced by language experience. Brain Res 2007; 1128 (01) 148-156
  • 86 Shen G, Froud K. Electrophysiological correlates of categorical perception of lexical tones by English learners of Mandarin Chinese: an ERP study. Biling Lang Cogn 2019; 22 (02) 253-265
  • 87 Kutas M, Federmeier KD. Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annu Rev Psychol 2011; 62: 621-647
  • 88 Regel S, Meyer L, Gunter TC. Distinguishing neurocognitive processes reflected by P600 effects: evidence from ERPs and neural oscillations. PLoS One 2014; 9 (05) e96840
  • 89 de Wit E, Visser-Bochane MI, Steenbergen B, van Dijk P, van der Schans CP, Luinge MR. Characteristics of auditory processing disorders: a systematic review. J Speech Lang Hear Res 2016; 59 (02) 384-413
  • 90 de Wit E, van Dijk P, Hanekamp S. et al. Same or different: the overlap between children with auditory processing disorders and children with other developmental disorders: a systematic review. Ear Hear 2018; 39 (01) 1-19
  • 91 Dawes P, Bishop D. Auditory processing disorder in relation to developmental disorders of language, communication and attention: a review and critique. Int J Lang Commun Disord 2009; 44 (04) 440-465
  • 92 Dawes P, Bishop DV. Psychometric profile of children with auditory processing disorder and children with dyslexia. Arch Dis Child 2010; 95 (06) 432-436
  • 93 Maggu AR, Overath T. An objective approach toward understanding auditory processing disorder. Am J Audiol 2021; 30 (03) 790-795
  • 94 Khalighinejad B, Cruzatto da Silva G, Mesgarani N. Dynamic encoding of acoustic features in neural responses to continuous speech. J Neurosci 2017; 37 (08) 2176-2185
  • 95 Broderick MP, Anderson AJ, Di Liberto GM, Crosse MJ, Lalor EC. Electrophysiological correlates of semantic dissimilarity reflect the comprehension of natural, narrative speech. Curr Biol 2018; 28 (05) 803-809.e3
  • 96 Di Liberto GM, Lalor EC. Indexing cortical entrainment to natural speech at the phonemic level: methodological considerations for applied research. Hear Res 2017; 348: 70-77
  • 97 Di Liberto GM, O'Sullivan JA, Lalor EC. Low-frequency cortical entrainment to speech reflects phoneme-level processing. Curr Biol 2015; 25 (19) 2457-2465
  • 98 Wong PCM, Lai CM, Chan PHY. et al. Neural speech encoding in infancy predicts future language and communication difficulties. Am J Speech Lang Pathol 2021; 30 (05) 2241-2250
  • 99 Xie Z, Reetzke R, Chandrasekaran B. Machine learning approaches to analyze speech-evoked neurophysiological responses. J Speech Lang Hear Res 2019; 62 (03) 587-601

Address for correspondence

Akshay R. Maggu, Ph.D.
Department of Speech-Language-Hearing Sciences, Hofstra University
110 Hofstra University, 110 Davison Hall, Hempstead, NY 11549

Publication History

Article published online:
October 26, 2022

© 2022. Thieme. All rights reserved.

Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA

  • 92 Dawes P, Bishop DV. Psychometric profile of children with auditory processing disorder and children with dyslexia. Arch Dis Child 2010; 95 (06) 432-436
  • 93 Maggu AR, Overath T. An objective approach toward understanding auditory processing disorder. Am J Audiol 2021; 30 (03) 790-795
  • 94 Khalighinejad B, Cruzatto da Silva G, Mesgarani N. Dynamic encoding of acoustic features in neural responses to continuous speech. J Neurosci 2017; 37 (08) 2176-2185
  • 95 Broderick MP, Anderson AJ, Di Liberto GM, Crosse MJ, Lalor EC. Electrophysiological correlates of semantic dissimilarity reflect the comprehension of natural, narrative speech. Curr Biol 2018; 28 (05) 803-809.e3
  • 96 Di Liberto GM, Lalor EC. Indexing cortical entrainment to natural speech at the phonemic level: methodological considerations for applied research. Hear Res 2017; 348: 70-77
  • 97 Di Liberto GM, O'Sullivan JA, Lalor EC. Low-frequency cortical entrainment to speech reflects phoneme-level processing. Curr Biol 2015; 25 (19) 2457-2465
  • 98 Wong PCM, Lai CM, Chan PHY. et al. Neural speech encoding in infancy predicts future language and communication difficulties. Am J Speech Lang Pathol 2021; 30 (05) 2241-2250
  • 99 Xie Z, Reetzke R, Chandrasekaran B. Machine learning approaches to analyze speech-evoked neurophysiological responses. J Speech Lang Hear Res 2019; 62 (03) 587-601

Figure 1 Human scalp-recorded auditory evoked potentials representing different levels of the auditory nervous system: (A) auditory brainstem responses to 100-µs click stimuli collected at intensities ranging from 90 to 10 dB in 10-dB steps at a 30.1/s repetition rate. Waves I, II, III, and V are clearly visible in this individual's data. A decrease in amplitude and an increase in latency of the peaks can be observed as the presentation intensity decreases; (B) middle latency response to 500-Hz tone-burst stimuli collected at a 7.1/s repetition rate at 70 dB. Na, Pa, and Nb peaks in the latency range of 15 to 40 ms are clearly visible; and (C) long latency response to 500-Hz tone-burst stimuli collected at a 1.1/s repetition rate at 70 dB. P1, N1, P2, and N2 peaks in the latency range of 60 to 200 ms are clearly visible.
Figure 2 Frequency following response collected with a rising lexical tone stimulus (/ji/ T2) from Cantonese: (A) waveform of the 175-ms stimulus; (B) waveform of the frequency following response (FFR), showing a 50-ms pre-stimulus baseline, the FFR itself, and the post-stimulus baseline; (C) power spectral density of the stimulus; (D) power spectral density of the FFR; and (E) comparison of the pitch contours of the FFR and the stimulus. In this case, the FFR pitch contour closely resembles the stimulus pitch contour.
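For readers who wish to try the pitch-contour comparison shown in panel (E), the following is a minimal Python sketch, not the analysis pipeline used for the figure: it estimates a frame-by-frame F0 contour with short-time autocorrelation and correlates the stimulus and FFR contours. The frame length, hop size, F0 search range, and the simulated signals are illustrative assumptions.

```python
# Minimal sketch: autocorrelation-based pitch contours for a stimulus and an FFR,
# followed by a stimulus-to-response pitch correlation. All parameter values are
# illustrative assumptions, not the settings used for Figure 2.
import numpy as np

def pitch_contour(signal, fs, frame_ms=40, hop_ms=10, f0_min=80, f0_max=400):
    """Return a frame-by-frame F0 estimate (Hz) from the autocorrelation peak."""
    frame, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    lag_min, lag_max = int(fs / f0_max), int(fs / f0_min)
    f0 = []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame] - np.mean(signal[start:start + frame])
        ac = np.correlate(x, x, mode="full")[frame - 1:]        # autocorrelation, lags >= 0
        best_lag = lag_min + np.argmax(ac[lag_min:lag_max])     # strongest periodicity in range
        f0.append(fs / best_lag)
    return np.array(f0)

# Example with simulated data: a rising pitch sweep standing in for the stimulus,
# and a noisy copy standing in for the FFR.
fs, dur = 16000, 0.175
t = np.arange(int(fs * dur)) / fs
stim = np.sin(2 * np.pi * (100 * t + 0.5 * (50 / dur) * t ** 2))   # 100 -> 150 Hz sweep
ffr = stim + 0.5 * np.random.randn(len(stim))
r = np.corrcoef(pitch_contour(stim, fs), pitch_contour(ffr, fs))[0, 1]
print(f"stimulus-to-response pitch correlation: {r:.2f}")
```

A contour correlation close to 1 would correspond to the "near-perfect resemblance" described above; in practice, published FFR studies often also report pitch error metrics alongside the correlation.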
Figure 3 Auditory evoked responses elicited in 80:20 oddball paradigms: (A) P300 (labeled) in the latency range of 300 to 400 ms and (B) mismatch negativity (MMN) (shaded) in the latency range of 100 to 300 ms.
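As a rough illustration of how a response like the MMN in panel (B) is typically derived from an oddball recording, the sketch below averages standard and deviant epochs, subtracts them, and searches the 100 to 300 ms window of the difference wave. The epoch counts, sampling rate, and simulated data are assumptions for demonstration only.

```python
# Minimal sketch: deriving a mismatch-negativity difference wave from an 80:20
# oddball recording. Epoch layout, sampling rate, and data are assumed/simulated.
import numpy as np

fs = 500                                              # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)                      # epoch time axis: -100 to 500 ms
standards = np.random.randn(400, len(t))              # simulated epochs (trials x samples)
deviants = np.random.randn(100, len(t))               # 80:20 ratio of standards to deviants

difference_wave = deviants.mean(axis=0) - standards.mean(axis=0)
window = (t >= 0.1) & (t <= 0.3)                      # MMN search window, 100-300 ms
mmn_amplitude = difference_wave[window].min()
mmn_latency = t[window][np.argmin(difference_wave[window])]
print(f"MMN: {mmn_amplitude:.2f} (a.u.) at {mmn_latency * 1000:.0f} ms")
```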
Figure 4 An encoding model in which phonetic speech features are extracted and analyzed alongside training electroencephalography (EEG) data to obtain a temporal response function, which is then used to predict the EEG. In the final step, Pearson's r is calculated between the predicted EEG and the held-out (testing) portion of the recorded EEG.
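To make the pipeline in this legend concrete, the following is a minimal NumPy sketch of a temporal response function (TRF) encoding model under stated assumptions; it is not the authors' implementation. Feature dimensionality, lag range, regularization value, and the simulated data are illustrative, and dedicated toolboxes exist for this purpose.

```python
# Minimal sketch of the Figure 4 encoding-model pipeline: build time-lagged stimulus
# features, fit a TRF with ridge regression on training EEG, predict held-out EEG,
# and compute Pearson's r per channel. All parameter values are assumptions.
import numpy as np

def lagged_design_matrix(stimulus, n_lags):
    """Build a [time x (features * lags)] matrix of time-lagged stimulus features."""
    n_samples, n_features = stimulus.shape
    X = np.zeros((n_samples, n_features * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_features:(lag + 1) * n_features] = stimulus[:n_samples - lag, :]
    return X

def fit_trf(stimulus, eeg, n_lags, lam=1e3):
    """Estimate TRF weights ((features * lags) x channels) with ridge regression."""
    X = lagged_design_matrix(stimulus, n_lags)
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)

def predict_and_correlate(stimulus, eeg, trf, n_lags):
    """Predict EEG from stimulus features and return Pearson's r for each channel."""
    pred = lagged_design_matrix(stimulus, n_lags) @ trf
    return np.array([np.corrcoef(pred[:, ch], eeg[:, ch])[0, 1]
                     for ch in range(eeg.shape[1])])

# Example with simulated data: 10 s of phonetic-feature time series at 128 Hz, 32 EEG channels.
fs, n_feat, n_chan = 128, 19, 32
train_stim, test_stim = np.random.randn(2, fs * 10, n_feat)
train_eeg, test_eeg = np.random.randn(2, fs * 10, n_chan)
n_lags = int(0.4 * fs)                                   # assumed lag window of 0-400 ms
trf = fit_trf(train_stim, train_eeg, n_lags)
r_per_channel = predict_and_correlate(test_stim, test_eeg, trf, n_lags)
print(f"mean prediction accuracy (Pearson's r): {r_per_channel.mean():.3f}")
```

With random data the correlations hover around zero; with real continuous-speech EEG, channel-wise prediction accuracy is typically small but reliably above chance, which is what such encoding models are evaluated on.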