Semin Hear 2022; 43(03): 240-250
DOI: 10.1055/s-0042-1756166
Review Article

Translational Applications of Machine Learning in Auditory Electrophysiology

Spencer Smith
1   Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, Texas
FUNDING This work was supported in part by NIH grant #K01DC017192.

Abstract

Machine learning (ML) is transforming nearly every aspect of modern life, including medicine and its subfields such as hearing science. This article presents a brief conceptual overview of selected ML approaches and describes how these techniques are being applied to outstanding problems in hearing science, with a particular focus on auditory evoked potentials (AEPs). Two vignettes are presented in which ML is used to analyze subcortical AEP data. The first vignette demonstrates how ML can be used to determine whether auditory learning has influenced auditory neurophysiologic function. The second vignette demonstrates how ML analysis of AEPs may be useful in determining whether hearing devices are optimized for discriminating speech sounds.
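The decoding logic underlying both vignettes can be illustrated conceptually: if a classifier trained on neural responses can identify the evoking stimulus at above-chance accuracy, the neural representations of those stimuli are discriminable. The sketch below is a hypothetical, minimal Python illustration of that logic, not the article's actual pipeline: it uses simulated frequency-following-response-like epochs, a simple nearest-centroid classifier (published AEP work often uses SVMs or other classifiers), and a label-permutation test to estimate chance performance.

```python
# Hypothetical sketch: decoding stimulus identity from simulated subcortical
# AEP epochs. Above-chance cross-validated accuracy is taken as evidence that
# the neural responses discriminate the two stimuli.
import numpy as np

rng = np.random.default_rng(0)

def simulate_epochs(n_epochs, freq, fs=2000, dur=0.1, noise=1.0):
    """Simulate FFR-like epochs: a sinusoid at the stimulus F0 in noise."""
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * freq * t) + noise * rng.standard_normal((n_epochs, t.size))

def decode_accuracy(X, y, n_folds=5):
    """Cross-validated nearest-centroid decoding accuracy."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        # Class centroids estimated from training epochs only.
        cents = {c: X[train][y[train] == c].mean(axis=0) for c in np.unique(y)}
        for i in fold:
            pred = min(cents, key=lambda c: np.linalg.norm(X[i] - cents[c]))
            correct += pred == y[i]
    return correct / len(y)

# Two simulated "speech sound" classes with different F0s, 40 epochs each.
X = np.vstack([simulate_epochs(40, 100.0), simulate_epochs(40, 120.0)])
y = np.repeat([0, 1], 40)

acc = decode_accuracy(X, y)

# Permutation test: shuffle labels to estimate the chance distribution.
null = [decode_accuracy(X, rng.permutation(y)) for _ in range(200)]
p = (1 + sum(a >= acc for a in null)) / (1 + len(null))
print(f"decoding accuracy = {acc:.2f}, permutation p = {p:.3f}")
```

The permutation p-value uses the add-one correction so that it can never be exactly zero with a finite number of shuffles. In a learning study, the same analysis run on pre- versus post-training responses would reveal whether training increased neural discriminability; in a device-fitting context, it would indicate whether a given configuration preserves the distinction between speech sounds.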



Publication History

Article published online:
26 October 2022

© 2022. Thieme. All rights reserved.

Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA

 