CC BY-NC-ND 4.0 · Int Arch Otorhinolaryngol 2023; 27(02): e203-e210
DOI: 10.1055/s-0043-1761167
Original Research

The Effects of Monaural Stimulation on Frequency-Following Responses in Adults Who Can Sing in Tune and Those Who Cannot

Francine Honorio4

Author Affiliations:
1   Department of Audiology, Albert Einstein Instituto Israelita de Ensino e Pesquisa, São Paulo, SP, Brazil
2   Department of Electrophysiology, Centro de Estudos da Voz (CEV), São Paulo, SP, Brazil
3   Department of Hearing, Centro de Estudos da Voz (CEV), São Paulo, SP, Brazil
4   Department of Voice, Centro de Estudos da Voz (CEV), São Paulo, SP, Brazil
5   Department of Statistics, Instituto de Pesquisa Eldorado, Campinas, SP, Brazil
6   Department of Hearing, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
7   Department of Heart Failure and Cardiac Rehabilitation, University of Warsaw, Warsaw, Poland
8   Department of Otolaryngology, Institute of Sensory Organs, Warsaw, Poland
9   Department of Hearing, Institute of Physiology and Pathology of Hearing, World Hearing Center, Kajetany, Poland
10   Department of Hearing, Center of Hearing and Speech, Kajetany, Poland
11   Department of Voice, Universidade Federal de São Paulo, São Paulo, Brazil

Abstract

Introduction Musicians have an advantage over non-musicians in detecting, perceiving, and processing nonverbal sounds (e.g., environmental sounds and tones) and verbal sounds (e.g., consonants, vowels, and phrases), as well as instrumental sounds. In contrast to the high skill of musicians, there is another group of people who are tone-deaf and have difficulty distinguishing musical sounds or singing in tune, regardless of whether the sound comes from a musical instrument, an orchestra, or the human voice.

Objective The objective of the present work is to study frequency-following responses (FFRs) in individuals who can sing in tune and those who sing off tune.

Methods Electrophysiological responses were recorded in 37 individuals divided into two groups: (i) a control group (CG) of professional musicians, and (ii) an experimental group (EG) of non-musicians.

Results The two groups were homogeneous regarding age and gender. Compared with the EG, the CG showed more homogeneous FFR wave latencies when the responses of the right and left ears were compared.

Conclusion This study showed that monaural stimulation (right or left) in an FFR test is useful for demonstrating impairment of speech perception in individuals who sing off tune. The response of the left ear appears to be a subtler and more reliable indicator of how speech sounds are encoded in individuals who sing off tune.



Introduction

It is well known that musicians have an advantage over non-musicians in detecting, perceiving, and processing sounds. Recent research has shown that this improved ability applies to nonverbal and verbal sounds as well as instrumental sounds.[1] [2] [3] In contrast to the high skill of musicians, there is another group of people who are tone-deaf and have difficulty distinguishing musical sounds or singing in tune, regardless of whether the sound comes from a musical instrument, an orchestra, or the human voice.[4] [5] [6] [7]

Singing off tune may be due to a lack of exposure to music. Although it is extremely difficult to say why certain individuals cannot sing in tune, two factors are probably at work, namely difficulties in sound perception and/or in vocal production.[8] Other authors, however, emphasize additional potential problems, such as memory and language.[9] Musicians seem to have a well-developed ability to process and perceive sounds, while individuals who sing off tune show impaired skills in these areas.[10]

Off-tune singing is usually assessed by vocal emission techniques, which assume there is a gap in function somewhere along the auditory pathway. Based on this assumption, individuals who sing off tune should have both their hearing and their vocal abilities monitored. One way to evaluate and monitor a person's synchronized neural activity in response to sound is through noninvasive electrophysiological testing. Among the different electrophysiological procedures, we highlight the frequency-following response (FFR), which reflects the phase-locked activity of neural populations in the rostral brainstem and can track the fundamental frequency of a sound and its harmonics. Clinically, FFR responses are highly replicable, both within and across individuals.[11] [12]

The right hemisphere is predominantly involved in perceiving music: it processes the prosodic and melodic characteristics of a sound, unlike language processing, which largely involves the left hemisphere.[13] It would, therefore, be interesting to see whether there is any difference in FFR responses between the right and left ears when they are stimulated monaurally.

Studies on the detection of neurophysiological changes resulting from the processing of speech sounds in individuals who sing off tune are scarce.[14] Our study group hypothesized that there could be a difference in FFR responses between the ears, with better performance in the right ear of individuals who sing in tune compared with those who sing off tune. Thus, the objective of the present work was to study FFR responses in individuals who can sing in tune and those who sing off tune. We used monaural stimulation (sounds delivered to the right and left ears separately) to try to understand how speech sounds are coded in subcortical regions in these two groups.



Materials and Methods

Statement of Ethics

This study was approved by the research ethics committee under protocol number 1.191.303 (CAEE: 41305515.9.0000.5511). Informed consent was obtained in writing from all participants after an explanation of the nature, purpose, and expected results of the study.



Participants

A total of 37 individuals participated in this study, 20 female and 17 male, aged between 20 and 57 years, all of whom were seen at an institute for voice treatment. The subjects were divided into two groups according to the inclusion criteria described below.

The control group (CG) consisted of 17 professional musicians (10 females and 7 males) who could sing in tune, and the experimental group (EG) consisted of 20 non-musicians (10 females and 10 males) who sang off tune. For the purposes of the present study, professional musicians were defined as individuals with musical experience who earned their living from music, while non-musicians were individuals without musical experience whose work was not related to music. To be included, all subjects had to have: (i) air conduction thresholds below 20 dB HL for octave frequencies from 0.25 to 8 kHz and bone conduction thresholds below 15 dB HL for octave frequencies from 0.5 to 4 kHz; (ii) a type A tympanogram, with compliance between 0.3 and 1.3 mmho and pressure between -100 daPa and +200 daPa, together with ipsilateral and contralateral acoustic reflexes present in both ears; (iii) a click auditory brainstem response (ABR) with waves I, III, and V present and interpeak intervals I-III, III-V, and I-V within normal limits in both ears; (iv) no syndromic hearing impairment; and (v) no current or prior psychiatric disorder. The hearing and pitch-matching assessments were performed at the institute for voice treatment.

In addition, the CG comprised professional musicians whose accurate tuning was confirmed by a pitch-matching test, while the EG comprised individuals without musical ability whose tuning errors were likewise confirmed by a pitch-matching test.



Procedures

Audiological Evaluation

  • Audiometric evaluation was performed via air conduction at 0.25 to 8 kHz and bone conduction at 0.5 to 4 kHz. Auditory thresholds were considered normal if they were up to 15 dB HL for bone conduction and up to 20 dB HL for air conduction, according to the classification of Davis and Silverman.[15] Testing was performed using an Interacoustics AC 40 audiometer (Grason-Stadler, Eden Prairie, USA).

  • Speech recognition threshold (SRT). A list of disyllabic words was used, and the final result was the lowest intensity at which the participant correctly repeated 50% of the words presented.

  • Speech recognition index (SRI) was tested at 40 dB above the mean pure-tone threshold at 0.5, 1, and 2 kHz, using a list of monosyllabic words; it was considered normal if the percentage of correct answers was between 88 and 100% (see the sketch after this list).

  • Immittance measurements (tympanometry and acoustic reflexes). Tympanometry was performed with a 226 Hz probe tone. Ipsilateral and contralateral acoustic reflexes were probed at frequencies of 0.5 to 4 kHz. Normal subjects presented a maximum compliance peak at atmospheric pressure (0 daPa) and an equivalent volume of 0.3 to 1.3 mL, according to the proposal of Jerger (1970).[16] Immittance testing was performed using an Interacoustics AT 235h impedance audiometer (Grason-Stadler, Eden Prairie, USA). All equipment was calibrated according to the ISO 389 and IEC 645 standards. Subjects who had normal responses in the basic audiological evaluation then underwent auditory electrophysiological testing.
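
As a simple illustration of how the SRI presentation level follows from the audiogram, the sketch below computes the three-frequency pure-tone average and adds the 40 dB sensation level described above. The function and variable names are ours, for illustration only, and are not part of any clinical software.

```python
def sri_presentation_level(thr_500: float, thr_1k: float, thr_2k: float) -> float:
    """SRI is tested 40 dB above the mean pure-tone threshold at 0.5, 1, and 2 kHz."""
    pta = (thr_500 + thr_1k + thr_2k) / 3.0  # three-frequency pure-tone average (dB HL)
    return pta + 40.0

# Example: thresholds of 10, 5, and 15 dB HL give a PTA of 10 dB HL,
# so the monosyllabic word list would be presented at 50 dB HL.
print(sri_presentation_level(10, 5, 15))  # 50.0
```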



Pitch-matching Tests

A pitch-matching test was administered individually to the participants in a quiet environment, with sound stimuli presented under free-field conditions at a self-selected normal loudness. In task 1, the individual had to listen to an isolated musical tone and then immediately repeat it vocally, a task repeated with five different tones. In task 2, the individual had to listen to a 3-tone sequence and then immediately repeat the sequence vocally, a task again repeated using five different sequences. The vocal reproductions were directly captured into a portable computer by means of a head-mounted microphone that had a flat frequency response; it was placed at 45° and 5 cm away from the mouth of the participant. The samples were recorded using Sound Forge software version 4.5c and imported into Vocalgrama 1.8i (CTS Informática, Pato Branco, PR, Brazil).



Acoustic Analysis of the Pitch-matching Tests

All voice samples were subjected to computerized acoustic analysis using the Vocalgrama software (CTS Informática, Pato Branco, PR, Brazil). Vocalgrama uses autocorrelation to determine the fundamental frequency (F0); a filter available in the software was used to reduce artifacts. The frequency of an individual's vocal imitation was compared with the frequency of the original tone. A match was considered correct when the reproduction had the same fundamental frequency as the original to within one semitone ([Fig. 1a]), in which case the individual was considered to have accurate pitch-matching. When the vocal imitation and the original tone had different frequencies, the match was considered wrong ([Fig. 1b]). Participants who were able to sing the tone sequences with 100% accuracy were considered able to sing in tune, whereas participants who were unable to correctly repeat the sequences were considered to sing off tune.[17] Fundamental frequency extraction was performed offline.
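
To make the matching criterion concrete, the sketch below shows how a within-one-semitone comparison of two extracted F0 values can be computed. The function names and the tolerance handling are illustrative only; they are not part of the Vocalgrama software.

```python
import math

def semitone_distance(f0_sung: float, f0_target: float) -> float:
    """Signed distance in semitones between two frequencies (12-tone equal temperament)."""
    return 12 * math.log2(f0_sung / f0_target)

def is_accurate_match(f0_sung: float, f0_target: float, tol_semitones: float = 1.0) -> bool:
    """A reproduction counts as correct if it lies within one semitone of the target."""
    return abs(semitone_distance(f0_sung, f0_target)) <= tol_semitones

# A 440 Hz target sung back at 452 Hz (~0.47 semitones sharp) is a match;
# sung back at 480 Hz (~1.51 semitones sharp) it is not.
print(is_accurate_match(452.0, 440.0))  # True
print(is_accurate_match(480.0, 440.0))  # False
```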

Fig. 1 Example of the computerized acoustic evaluation. (A) Correct tuning in the pitch-matching test and (B) incorrect tuning in the pitch-matching test (Vocalgrama 1.8i – CTS Informática).


Electrophysiological Evaluation

Electrophysiological evaluation was conducted using Biologic Navigator Pro equipment (Natus, Middleton, USA) in an acoustically treated, soundproof, and electrically shielded room. Subjects were seated comfortably in a reclining chair. The scalp was cleaned with abrasive paste before the electrodes were fixed in place with conductive paste and adhesive tape. Electrode impedance was kept below 3 kΩ, and interelectrode impedance was below 2 kΩ. The electrodes were positioned according to the 10-20 system: active electrode at the vertex (Cz), reference electrode on the ipsilateral mastoid, and ground electrode on the contralateral mastoid.[18] The right and left ears were assessed separately. The acquisition parameters were as follows: (a) click-ABR: click stimulus, 0.1 ms duration, rarefaction polarity, 80 dB nHL intensity, rate of 19.3/s, 2,000 sweeps, two replications, 10 to 1,500 Hz filter, 10.66 ms recording window, ER-3A insert earphones; and (b) FFR: speech stimulus, 40 ms duration, alternating polarity, 80 dB SPL intensity, rate of 10.9/s, 3,000 sweeps, two replications, 10 to 200 Hz filter, 85.33 ms recording window, ER-3A insert earphones. During testing, subjects were instructed to keep their eyes closed to avoid artifacts. If necessary, the position of the subject was adjusted to ensure stable recording conditions. Runs containing more than 10% artifacts were repeated.

All analyses were performed offline, and the response waveforms were visually identified and manually marked by an audiologist who was blinded to each participant's age, gender, and group (CG or EG).

The ABR responses were recorded for the right and left ears separately at 80 dB nHL. Two waveforms were collected to verify reproducibility. The presence and absolute latencies of waves I, III, and V at 80 dB nHL were analyzed, as well as the interpeak intervals I-III, III-V, and I-V, according to the normality criteria of the Biologic Navigator Pro system (Natus, Middleton, USA).

The analysis was performed in the time domain. Latency and amplitude values of the seven waves elicited by the syllable /da/ (V, A, C, D, E, F, and O) were based on the analysis criteria of previously published studies.[14] [19] If a wave was not detected, it was described as absent and the data for that wave were not analyzed. In addition, the VA complex was analyzed in terms of: (i) the slope of the VA complex (μV/ms), which is related to the temporal synchronization of the response generators; and (ii) the area of the VA complex (μV × ms), which is related to the amount of neural activity contributing to generation of the wave.[19]
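
The slope and area of the VA complex follow directly from the marked latencies and amplitudes. The sketch below shows one plausible computation, assuming that wave V is marked as a positive peak and wave A as a negative trough, and that the waveform segment between them is integrated relative to the trough; the function names and the baseline choice are ours for illustration, not the exact criteria of Skoe and Kraus.[19]

```python
import numpy as np

def va_slope(lat_v: float, amp_v: float, lat_a: float, amp_a: float) -> float:
    """Slope of the VA complex (uV/ms): amplitude drop from peak V to trough A,
    divided by the V-to-A latency difference."""
    return (amp_v - amp_a) / (lat_a - lat_v)

def va_area(time_ms: np.ndarray, wave_uv: np.ndarray,
            lat_v: float, lat_a: float) -> float:
    """Area of the VA complex (uV x ms): trapezoidal integration of the waveform
    between the V and A markers, shifted so the deepest point sits at zero."""
    mask = (time_ms >= lat_v) & (time_ms <= lat_a)
    t, y = time_ms[mask], wave_uv[mask]
    y = y - y.min()  # measure the area relative to the trough
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t)))

# Using the in-tune group's right-ear means from Table 2 (V at 6.56 ms, A at 7.54 ms)
# and assuming amplitudes of +0.10 uV (peak V) and -0.16 uV (trough A), the slope is
# ~0.27 uV/ms, close to the reported mean slope of 0.28 uV/ms.
print(va_slope(lat_v=6.56, amp_v=0.10, lat_a=7.54, amp_a=-0.16))
```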



Statistical Analysis

To compare the two groups (in-tune and off-tune) for each wave, an analysis of variance (ANOVA) was used, testing the group, gender, and age variables as well as their interactions; each variable was treated as a fixed factor with two levels. The ANOVA F-test (based on the Fisher-Snedecor distribution) was used to determine whether there was a significant difference among groups or their interactions. Since the FFR response presents seven wave peaks, which are also related to each other, the p-values from the ANOVA were adjusted for multiple comparisons using the false discovery rate (FDR) correction. To test the homogeneity of the sample, the Pearson chi-squared test was applied. The level of significance was set at 5% (p ≤ 0.05). The statistical analyses were conducted in R (www.r-project.org).
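
As an illustration of this pipeline, the sketch below fits a per-wave ANOVA and applies the Benjamini-Hochberg FDR correction across the seven peaks. It is written in Python with statsmodels, since the original R scripts are not published; the file name, column names, and the exact interaction terms are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multitest import multipletests

# Hypothetical long-format table: one row per participant, with columns
# 'group' (in_tune/off_tune), 'gender', 'age_group', and one latency column per wave.
df = pd.read_csv("ffr_latencies_right_ear.csv")

waves = ["V", "A", "C", "D", "E", "F", "O"]
raw_p = []
for w in waves:
    model = ols(f"lat_{w} ~ C(group) + C(gender) + C(age_group)"
                f" + C(group):C(gender)", data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)   # F-tests (Fisher-Snedecor distribution)
    raw_p.append(table.loc["C(group)", "PR(>F)"])

# Benjamini-Hochberg FDR adjustment across the seven related wave peaks
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
for w, p, r in zip(waves, p_adj, reject):
    print(f"wave {w}: FDR-adjusted p = {p:.3f}{' *' if r else ''}")
```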



Results

[Table 1] shows a statistical description of the demographic data based on the variables age and gender of individuals who could sing in tune and those who could not. There was homogeneity between the two groups regarding age and gender.

Table 1

Statistical description of demographic data based on the variables age and gender between groups

Group       Gender             N     Age (years)
            Male    Female           Mean     SD
Off-tune    10      10         20    30.10    11.00
In-tune      7      10         17    35.41     5.66
p-value     0.84                     0.07

Abbreviation: SD, standard deviation.

Key: Fisher exact test for count data.


In the analyses of latency and amplitude responses between the right and left ears, there was no statistically significant difference, but useful information emerged from the distribution of the waves. [Fig. 2] shows the distribution of the latency values of waves V, A, C, D, E, F, and O between the right and left ears in the studied groups. The scatter plot of latency compares the right and left ears: it shows a fairly homogeneous distribution of all waves in the group of individuals who could sing in tune, whereas in the group who sang off tune there was greater dispersion in the latency of all FFR waves. In the latter group, there was also a gradual increase in the standard deviation for waves with longer latencies.

Fig. 2 Comparison of latency values between ears and groups.

[Table 2] displays a comparison between the CG and EG in terms of FFR latency (ms) measured in the right ear. There was a statistically significant difference in the latencies of waves A (p = 0.045), C (p = 0.002), D (p = 0.030), and F (p = 0.046) between the groups.

Table 2

Comparison between in-tune and off-tune individuals in terms of frequency-following response latency (ms) measured in the right ear

Wave     In-tune                   Off-tune                  Diff in     p-value
         Mean    Median   SD       Mean    Median   SD       the mean
V         6.56    6.62    0.28      6.67    6.70    0.30      -0.11      0.311
A         7.54    7.62    0.31      7.79    7.87    0.38      -0.25      0.045*
C        18.06   18.12    0.51     18.77   18.62    0.72      -0.72      0.002*
D        22.21   22.20    0.63     22.94   22.58    1.25      -0.74      0.030*
E        30.80   30.70    0.30     31.83   31.03    2.27      -1.03      0.065
F        39.15   39.20    0.30     40.22   39.36    2.19      -1.07      0.046*
O        47.94   47.95    0.58     49.33   48.20    3.32      -1.39      0.080
Slope     0.28    0.26    0.07      0.23    0.24    0.10       0.04      0.133
Area      0.26    0.25    0.09      0.27    0.26    0.10       0.00      0.830

Abbreviation: SD, standard deviation.

Note: Slope in μV/ms; area in μV × ms. p-values from the ANOVA F-test. *p-value < 0.05.



[Table 3] displays the comparison between the CG and EG in terms of FFR latency (ms) measured in the left ear. There was a statistically significant difference between the groups in the latencies of waves C (p = 0.04), D (p = 0.02), E (p = 0.017), F (p = 0.01), and O (p = 0.018).

Table 3

Comparison between in-tune and off-tune individuals in terms of frequency-following response latency (ms) measured in the left ear

Wave     In-tune                   Off-tune                  Diff in     p-value
         Mean    Median   SD       Mean    Median   SD       the mean
V         6.66    6.70    0.33      6.99    6.78    1.11      -0.33      0.267
A         7.74    7.78    0.36      8.03    7.83    1.22      -0.29      0.382
C        18.44   18.37    0.53     19.13   18.87    1.15      -0.69      0.04*
D        22.60   22.45    0.58     23.46   23.33    1.31      -0.86      0.02*
E        30.98   30.95    0.44     32.40   31.95    2.33      -1.42      0.017*
F        39.37   39.36    0.43     41.75   39.53    3.43      -2.37      0.01*
O        48.28   48.36    0.39     50.39   48.45    3.49      -2.11      0.018*

Abbreviation: SD, standard deviation.

Note: p-values from the ANOVA F-test. *p-value < 0.05.



[Fig. 3] shows the left-right distribution of amplitude values of waves V, A, C, D, E, F, and O. The scatter plot shows similar responses in the lower left quadrant, although there is more scatter in the EG.

Fig. 3 Comparison of amplitude values between groups and ears.

[Table 4] compares the CG and EG regarding FFR amplitude (µV) in the right ear. There was no statistically significant difference between the groups.

Table 4

Comparison between in-tune and off-tune individuals in terms of frequency-following response amplitude (μV) measured in the right ear

Wave     In-tune                   Off-tune                  Diff in     p-value
         Mean    Median   SD       Mean    Median   SD       the mean
V         0.10    0.09    0.04      0.08    0.08    0.06       0.02      0.198
A         0.16    0.18    0.05      0.16    0.16    0.05       0.00      0.981
C         0.09    0.07    0.09      0.08    0.05    0.09       0.01      0.830
D         0.07    0.06    0.04      0.11    0.08    0.08      -0.04      0.087
E         0.17    0.18    0.05      0.18    0.17    0.07      -0.01      0.754
F         0.18    0.16    0.08      0.14    0.12    0.11       0.03      0.391
O         0.11    0.10    0.06      0.17    0.12    0.23      -0.06      0.334

Abbreviation: SD, standard deviation.

Note: p-values from the ANOVA F-test.



[Table 5] compares the CG and EG regarding FFR amplitude (µV) in the left ear. There was a statistically significant difference in amplitude values only for wave O (p = 0.047).

Table 5

Comparison between in-tune and off-tune individuals in terms of frequency-following response amplitude (μV) measured in the left ear

Wave     In-tune                   Off-tune                  Diff in     p-value
         Mean    Median   SD       Mean    Median   SD       the mean
V         0.08    0.06    0.04      0.08    0.07    0.04       0.00      0.934
A         0.16    0.16    0.04      0.13    0.14    0.05       0.03      0.071
C         0.05    0.03    0.05      0.08    0.06    0.09      -0.03      0.174
D         0.10    0.10    0.07      0.10    0.09    0.08       0.00      0.844
E         0.15    0.14    0.05      0.17    0.15    0.10      -0.02      0.389
F         0.12    0.13    0.07      0.16    0.15    0.11      -0.04      0.219
O         0.11    0.11    0.05      0.17    0.13    0.10      -0.06      0.047*

Abbreviation: SD, standard deviation.

Note: p-values from the ANOVA F-test. *p-value < 0.05.





Discussion

In the analysis of the sample characterization, considering gender (male and female) and age group, the groups were homogeneous. The CG was composed of 17 participants (male = 7, female = 10; 28–49 years), and the EG was composed of 20 participants (male = 10, female = 10; 20–57 years).

The scatter plot for latency showed that the CG had more homogeneous FFR wave latencies when responses from the right and left ears were compared. In the EG, a greater number of individuals showed wide variations in latency values between ears, mainly for waves C (which represents the transition between the consonant and the vowel) and O (which represents the end of the vowel).

These data corroborate our group's hypothesis, based on previous studies, that there could be a difference in FFR responses between the ears, with better performance in the right ear of individuals who could sing in tune compared with those who could not. An earlier work found better neuronal responses in the right hemispheres of musicians compared with those of nonmusicians when complex musical stimuli were used to elicit FFRs.[20] This relates to the greater sensitivity of musicians in recognizing and discriminating the frequency of tones, due to the greater number of specialized neurons involved in tonotopic organization.[21] In the present study, we also used a complex stimulus, but this time a verbal one (speech). This type of stimulus carries an important linguistic load that is preferentially processed in the left hemisphere.

When comparing latencies in the right ear between groups, there was a statistically significant difference in the values of waves A, C, D, and F between the CG and the EG. Waves A, C, and F are considered the most stable peaks in FFR responses.[22] [23] The EG, therefore, had impaired sound-processing speed throughout all waves of the FFR compared with the CG; this might be explained by an impaired perception of rapid changes in the time domain, in addition to a more limited neural representation of harmonics.

In the left ear, a statistically significant difference was observed in the latency of waves C, D, E, F, and O in the EG compared with the CG. The vowel coding region, also called the sustained portion, reflects encoding of the fundamental frequency and the harmonic structure of complex stimuli and has midbrain origins. Research has shown that musicians have a greater subcortical representation of speech syllables than non-musicians, giving musicians a neural advantage in distinguishing complex sounds, including speech, even under adverse conditions.[24] In the present study, we found a correlation between detuning and the processing of auditory information, showing that individuals who sing off tune, in addition to having problems in vocal production, also have problems processing speech, presumably through inefficient neural processing in subcortical and cortical regions.

It is interesting to note that the vast majority of studies have evaluated the FFR with monaural stimulation of the right ear only. This can be explained by the advantage of the right ear, and therefore of the left hemisphere, in the processing of verbal sounds.[14] [19] [20] [25] However, some researchers have analyzed FFR assessments considering the monaural responses of the right and left ears separately. They reported that the monaural responses of the right ear seem to be similar to those obtained in the left ear, but they point out that those studies were carried out with individuals considered typical, that is, without speech, language, and/or communication complaints.[26] [27] [28] The present study analyzed the responses of the right and left ears (monaural stimulation), showing that the responses obtained in the left ear seem to be important in differentiating individuals who sing in tune from those who do not. Our study therefore draws attention to the care needed when carrying out and analyzing FFR evaluations in individuals with pathologies; that is, in certain conditions the evaluation should be performed with monaural stimulation of both the right and the left ear. FFR evaluation may allow us to understand how speech sounds are encoded in the different pathological conditions that affect the communicative process.

The brain is divided into two hemispheres, right and left, each responsible for different functions, and there is an important relationship between brain hemisphere and auditory processing. In the vast majority of people, the left hemisphere/right ear (LH/RE) pathway is responsible for understanding speech and language: information received in the right ear is transferred directly to the left hemisphere (the language-dominant hemisphere). The right hemisphere/left ear (RH/LE) pathway, however, has an important function in processing specific aspects, such as melody, pitch, prosody, and intonation, which are essential for understanding speech. Information arriving at the left ear is initially processed in the right hemisphere and is then forwarded, via the corpus callosum, to the left hemisphere.[29] The present study therefore suggests that musicians exhibit more efficient and robust processing in the right ear due to their musical experience. This probably decreases the neural transmission time of auditory information, since decoding the meaning of speech elements is a complex task that involves multiple stages of neural processing before reaching the auditory cortex.

Regarding the amplitude values, a statistically significant difference was observed only for wave O in the left ear. The amplitude parameters seem to vary widely and have little accuracy in distinguishing individuals who can sing in tune from those who cannot. A recent study observed FFR responses only in the right ear of individuals who sing off tune; it also showed that amplitude values do not seem to be effective in identifying poor vocal tuning.[14] These findings corroborate previous FFR studies, which also highlighted that amplitude measures are not very reliable in distinguishing between normal and pathological individuals.[30] [31]

Only one study was found in the literature associating the FFR with tuning ability.[14] That study revealed a difference in neural processing, measured by the FFR to the syllable /da/, between individuals who can sing in tune and those who cannot. However, the analysis was restricted to the responses of the right ear. The authors suggested that an individual with good knowledge and experience of music is more likely to have developed efficient language processing. Another interesting point was that the brainstem seemed to have an active role in the neural decoding of sounds; moreover, musical experience and sound stimulation throughout life could improve skills along the entire auditory pathway.[14]

The frequency-following response is a type of neurophysiological evaluation that allows the coding of speech sounds in the brainstem and in subcortical and cortical regions to be investigated and monitored. The present study demonstrates that individuals who sing off tune have a deficit in the processing of sound information, which may be one reason that their vocal tuning is also negatively affected. These individuals seem to have a weaker neural network for the perception of speech sounds compared with individuals who are able to sing in tune.



Limitation and Future Research

In the present study, there was a predominance of females in the CG. Further studies should include equal numbers of males and females, as well as larger numbers of individuals in both groups. Furthermore, future research should continue to evaluate the FFR as a useful measure for assessing and monitoring individuals who sing off tune.



Conclusion

The present study showed that monaural stimulation (right or left) in an FFR test is useful for demonstrating impairment of speech perception in individuals who sing off tune. Alterations were observed (i) in the right ear, where the latencies of waves A, C, D, and F were altered despite normal amplitude values; and (ii) in the left ear, where alterations in the latencies of waves C, D, E, F, and O were accompanied by an altered amplitude of wave O. The response of the left ear appears to be a subtler and more reliable indicator of how speech sounds are encoded in individuals who sing off tune.



Conflict of Interests

The authors declare that there is no conflict of interests.

  • References

  • 1 Zuk J, Benjamin C, Kenyon A, Gaab N. Behavioral and neural correlates of executive functioning in musicians and non-musicians. PLoS One 2014; 9 (06) e99868
  • 2 Strait DL, Slater J, O'Connell S, Kraus N. Music training relates to the development of neural mechanisms of selective auditory attention. Dev Cogn Neurosci 2015; 12: 94-104
  • 3 Strait DL, Kraus N. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning. Hear Res 2014; 308: 109-121
  • 4 Strait DL, Parbery-Clark A, O'Connell S, Kraus N. Biological impact of preschool music classes on processing speech in noise. Dev Cogn Neurosci 2013; 6: 51-60
  • 5 Houaiss A. Dicionário da língua portuguesa. Rio de Janeiro: Objetiva; 2001
  • 6 Sobreira S. Desafinação vocal. Rio de Janeiro: Musimed; 2003
  • 7 Moura Fd. Análise do processamento auditivo em cantores afinados e desafinados. São Paulo: Centro Universitário das Faculdades Metropolitanas Unidas; 2008
  • 8 Mawhinney T. Tone-deafness and low musical abilities - an investigation of prevalence, characteristics and tractability. Kingston: Queen's University; 1986
  • 9 Heresniak M. The care and training of adult bluebirds: teaching the singing impaired. J Singing 2004; 61 (01) 9-25
  • 10 Ishii C, Arashiro PM, Desgualdo L. Ordering and temporal resolution in professional singers and in well tuned and out of tune amateur singers. Pró-Fono 2006; 18 (03) 285-292. DOI: 10.1590/S0104-56872006000300008
  • 11 Song JH, Nicol T, Kraus N. Test-retest reliability of the speech-evoked auditory brainstem response. Clin Neurophysiol 2011; 122 (02) 346-355
  • 12 Song JH, Nicol T, Kraus N. Reply to Test-retest reliability of the speech-evoked ABR is supported by tests of covariance. Clin Neurophysiol 2011; 122 (09) 1893-1895
  • 13 Kimura D. Cerebral dominance and the perception of verbal stimuli. Can J Psychol 1961; 15: 166-171. DOI: 10.1037/h0083219
  • 14 Sanfins MD, Gielow I, Madazio G, Honorio F, Bordin T, Skarzynska MB, Behlau M. Frequency following response in adults who can or cannot sing in tune. J Hear Sci 2020; 10 (03) 58-67. DOI: 10.17430/JHS.2020.10.3.6
  • 15 Davis H, Silverman RS. Hearing and Deafness. New York, NY: Rinehart & Winston; 1970
  • 16 Jerger J. Clinical experience with impedance audiometry. Arch Otolaryngol 1970; 92 (04) 311-324
  • 17 Moreti F, Pereira LD, Gielow I. Pitch-matching scanning: comparison of musicians and non-musicians' performance. J Soc Bras Fonoaudiol 2012; 24 (04) 368-373
  • 18 Jasper HH. The ten-twenty system of the International Federation. Electroencephalogr Clin Neurophysiol 1958; 10: 371-375
  • 19 Skoe E, Kraus N. Auditory brain stem response to complex sounds: a tutorial. Ear Hear 2010; 31 (03) 302-324
  • 20 Siedenberg R, Goodin DS, Aminoff MJ, Rowley HA, Roberts TP. Comparison of late components in simultaneously recorded event-related electrical potentials and event-related magnetic fields. Electroencephalogr Clin Neurophysiol 1996; 99 (02) 191-197
  • 21 Woods DL, Alho K, Algazi A. Intermodal selective attention: evidence for processing in tonotopic auditory fields. Psychophysiology 1993; 30 (03) 287-295
  • 22 Russo N, Nicol T, Musacchia G, Kraus N. Brainstem responses to speech syllables. Clin Neurophysiol 2004; 115 (09) 2021-2030
  • 23 Russo NM, Nicol TG, Zecker SG, Hayes EA, Kraus N. Auditory training improves neural timing in the human brainstem. Behav Brain Res 2005; 156 (01) 95-103
  • 24 Parbery-Clark A, Anderson S, Hittner E, Kraus N. Musical experience strengthens the neural representation of sounds important for communication in middle-aged adults. Front Aging Neurosci 2012; 4: 30. DOI: 10.3389/fnagi.2012.00030
  • 25 Sanfins MD, Hatzopoulos S, Donadon C. et al. An analysis of the parameters used in speech ABR assessment protocols. J Int Adv Otol 2018; 14 (01) 100-105 DOI: 10.5152/IAO/2018.3574.
  • 26 Vander Werff KR, Burns KS. Brain stem responses to speech in younger and older adults. Ear Hear 2011; 32 (02) 168-180
  • 27 Akhoun I, Moulin A, Jeanvoine A. et al. Speech auditory brainstem response (speech ABR) characteristics depending on recording conditions, and hearing status: an experimental parametric study. J Neurosci Methods 2008; 175 (02) 196-205
  • 28 Sanfins MD, Borges LR, Ubiali T. et al. Speech-evoked brainstem response in normal adolescent and children speakers of Brazilian Portuguese. Int J Pediatr Otorhinolaryngol 2016; 90: 12-19
  • 29 Knecht S, Dräger B, Deppe M. et al. Handedness and hemispheric language dominance in healthy humans. Brain 2000; 123 (Pt 12): 2512-2518
  • 30 Sanfins MD, Borges LR, Donadon C, Hatzopoulos S, Skarzynski PH, Colella-Santos MF. Electrophysiological responses to speech stimuli in children with otitis media. J Hear Sci 2017; 7 (04) 9-19
  • 31 Colella-Santos MF, Donadon C, Sanfins MD, Borges LR. Otitis media: long-term effect on central auditory nervous system. BioMed Res Int 2019; 2019: 8930904. DOI: 10.1155/2019/8930904

Address for correspondence

Milaine Dominici Sanfins, Post doc
Department of Audiology, Albert Einstein Instituto Israelita de Ensino e Pesquisa
São Paulo, SP, Brazil, CEP: 05432-010

Publication History

Received: 03 December 2020

Accepted: 20 May 2021

Article published online:
08 February 2023

© 2023. Fundação Otorrinolaringologia. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Thieme Revinter Publicações Ltda.
Rua do Matoso 170, Rio de Janeiro, RJ, CEP 20270-135, Brazil
