J Am Acad Audiol 2018; 29(04): 279-291
DOI: 10.3766/jaaa.16114

Using a Digital Language Processor to Quantify the Auditory Environment and the Effect of Hearing Aids for Adults with Hearing Loss

Kelsey E. Klein, Yu-Hsiang Wu, Elizabeth Stangl, Ruth A. Bentler

Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA

Corresponding author

Kelsey E. Klein
Department of Communication Sciences and Disorders, University of Iowa
Iowa City, IA 52242

Publication History

Publication Date:
29 May 2020 (online)

 

Abstract

Background:

Auditory environments can influence the communication function of individuals with hearing loss and the effects of hearing aids. Therefore, a tool that can objectively characterize a patient’s real-world auditory environments is needed.

Purpose:

To use the Language Environment Analysis (LENA) system to quantify the auditory environments of adults with hearing loss, to examine if the use of hearing aids changes a user’s auditory environment, and to determine the association between LENA variables and self-report hearing aid outcome measures.

Research Design:

This study used a crossover design.

Study Sample:

Participants included 22 adults with mild-to-moderate hearing loss, age 64–82 yr.

Intervention:

Participants were fitted with bilateral behind-the-ear hearing aids from a major manufacturer.

Data Collection and Analysis:

The LENA system consists of a digital language processor (DLP) that is worn by an individual and records up to 16 hr of the individual’s auditory environment. The recording is then automatically categorized according to time spent in different types of auditory environments (e.g., meaningful speech and TV/electronic sound) by the LENA algorithms. The LENA system also characterizes the user’s auditory environment by providing the sound levels of different auditory categories. Participants in the present study wore a LENA DLP in an unaided condition and an aided condition, each of which lasted six to eight days. Participants wore bilateral hearing aids in the aided condition. Percentage of time spent in each auditory environment, as well as median levels of TV/electronic sound and speech, was compared between subjects’ unaided and aided conditions using paired sample t tests. LENA data were also compared to self-report measures of hearing disability and hearing aid benefit using Pearson correlations.

Results:

Overall, participants spent the greatest percentage of time in silence (∼40%), relative to other auditory environments. Participants spent ∼12% and 26% of their time in meaningful speech and TV/electronic sound environments, respectively. No significant differences were found between mean percentage of time spent in each auditory environment in the unaided and aided conditions. Median TV/electronic sound levels were on average 2.4 dB lower in the aided condition than in the unaided condition; speech levels were not significantly different between the two conditions. TV/electronic sound and speech levels did not significantly correlate with self-report data.

Conclusions:

The LENA system can provide rich data to characterize the everyday auditory environments of older adults with hearing loss. Although TV/electronic sound level was significantly lower in the aided than unaided condition, the use of hearing aids seemed not to substantially change users’ auditory environments. Because there was no significant association between objective LENA variables and self-report questionnaire outcomes, these two types of measures may assess different aspects of communication function. The feasibility of using LENA in clinical settings is discussed.



INTRODUCTION

Understanding adults’ experiences with hearing aids is essential to optimizing hearing aid outcomes. Without understanding the effect of hearing aids on patients’ lives, it is difficult to know whether a patient’s communication needs are being met and, if they are not, how to improve these outcomes. Specifically, to fully meet the communication needs of a particular patient, the clinician must have information about the characteristics of auditory environments in which the patient is likely to be using the hearing aids (note that other terms that have been used to describe auditory environments include listening environments, auditory lifestyle, and auditory ecology; [Cox et al, 2000]). The subsequent change in the experienced auditory environment of the patient following hearing aid use can then provide information on the effect of hearing aid use on the way a patient interacts with his or her environment. For example, [Gatehouse et al (2003)] showed that speech understanding benefit provided by hearing aids differs depending on the type of listening environment. They found that hearing aid users obtained increased benefit from their hearing aids when the noise was at a low level and included temporal modulations.

Traditionally, users’ experiences with hearing aids in different listening environments have been assessed in the clinic/laboratory using behavioral measures such as speech perception tests, or in the real world using self-report measures such as standardized questionnaires ([Saunders et al, 2005]). Specifically, speech perception measured in the clinic is a commonly used hearing aid outcome within the field of audiology. Some speech perception tasks include competing background noise (e.g., [Kalikow et al, 1977]; [Nilsson et al, 1994]; [Killion et al, 2004]), varying amounts of semantic context (e.g., [Kalikow et al, 1977]), visual cues (e.g., [Cox et al, 1989]), or fluctuating stimulus presentation levels ([Boyle et al, 2013]) to better approximate real-world listening environments. However, it is not feasible in a clinical setting to represent the broad range of complex listening environments encountered on a daily basis outside the clinic. The contrived nature of speech recognition tasks performed in a clinical setting has been criticized for poor ecological validity ([Rønne et al, 2013]) and inconsistent relationship with hearing aid use and self-report measures of benefit ([Cox and Alexander, 1992]; [Bentler et al, 1993]; [Saunders et al, 2005]).

In contrast with clinical speech perception measures, standardized questionnaires directly reflect real-world environments. While these self-report measures provide information about patients’ experiences with hearing aids and listening environments in their everyday lives, the information afforded by self-report measures is limited. Questionnaire responses can be affected by factors such as personality ([Cox et al, 1999]; [2007]), memory ([Bradburn et al, 1987]; [Shiffman et al, 2008]), questionnaire structure ([Yamada et al, 2012]), patient expectations ([Vestergaard, 2006]), and patient and clinician biases ([Bentler et al, 2003]). Although standardized self-report measures offer valuable information about patients’ perceptions of their experiences, they often provide little information about the types of auditory environments patients experience or the acoustic characteristics of these environments. Additionally, patients often cannot accurately recall actual environments and levels of communication difficulty encountered at the time of completing the questionnaire (see [Bradburn et al, 1987]).

Although both in-clinic measures and standardized questionnaires provide valuable and distinct perspectives on users’ experiences with hearing aids, neither provides large amounts of detailed, quantitative information about the patient’s everyday auditory environments or the effect of hearing aids on these auditory environments. Understanding the characteristics of auditory environments encountered by the patient and the amount of time spent in varying noise levels, for example, could be important for the audiologist trying to optimize hearing aid programming and aural rehabilitation planning for the specific needs of the patient. Understanding the effects of hearing aids on auditory environments could also be useful, as this information could help to demonstrate the potential real-world benefit of hearing aids to both patients and audiologists. Therefore, a tool that can objectively and automatically quantify a patient’s real-world listening environments and detect the effect of hearing aids on these environments could provide valuable information in the clinic.

One such tool is the data logging feature of modern hearing aids. Data logging can provide information about users’ listening environments by presenting average amount of device use, percentage of time spent in different auditory environments, time spent using different hearing aid programs, and overall distribution of sound levels in the environment. Although such information can be useful for hearing aid fine tuning and patient consultation, data logging has several limitations. First, it cannot assess unaided listening environments, unless the hearing aids are programmed to be acoustically transparent and fit to patients. Second, its assessment of auditory categories has not been validated in peer-reviewed research, and thus the accuracy with which auditory environments are identified remains unclear. Third, and most importantly, current data logging only provides summary data of sound levels and time spent in different auditory environments. It does not provide fine-grained, time-stamped data with which clinicians and researchers could conduct more in-depth analysis of patients’ environments.

Another tool that may hold the potential to objectively quantify real-world auditory environments and the effects of hearing aids on these environments is the Language Environment Analysis (LENA) system. The LENA system consists of a digital language processor (DLP) that records up to 16 hr of the auditory environment of the DLP wearer. The LENA algorithms then automatically label recording segments offline according to different auditory categories: meaningful speech, distant speech, TV/electronic sound, noise, and silence. The LENA system can provide information about auditory characteristics such as percentage of time spent in different types of listening environments and average sound levels of speech and electronic media. More importantly, the LENA system provides access to detailed second-by-second data of sound levels and auditory environment classification, which allows for in-depth data mining. LENA was designed for and has been used extensively to study the language-learning environments of children (e.g., [VanDam et al, 2012]; [Thiemann-Bourque et al, 2014]; [Gilkerson et al, 2015]; [Sosa, 2016]), but it has also been used with adults ([Li et al, 2014]). Specifically, Li et al asked 37 older adults (aged 64–91 yr) residing in a retirement community to wear a LENA DLP for one day. The results showed that time spent in speech and TV environments varied widely between individuals and indicated that it is feasible to use the LENA system to quantify the auditory and social environments of adults. However, because Li et al did not specify if their research participants had hearing loss, it is unknown if their results can generalize to hearing-impaired adults, with or without hearing aids. Furthermore, it is unknown if hearing aids can change the characteristics of auditory environments and if LENA can detect these changes.

The first goal of the present study was to characterize the auditory environment of older adults with hearing impairment, with and without hearing aids, using the LENA system. The second goal was to examine if hearing aids can change the characteristics of users’ auditory environments measured by the LENA system, such as percentage of time spent in different auditory environments, average TV levels, and average speech levels. We hypothesized that participants’ auditory environments would be different in the unaided and aided conditions. Specifically, we predicted that participants would spend a higher percentage of time in meaningful speech environments and be exposed to a higher number of words while in the aided listening condition relative to the unaided listening condition, as hearing aid use is associated with increases in perceived social participation ([Malinoff and Weinstein, 1989]; [Abrams et al, 1992]; [Humes et al, 2001]; [Chisolm et al, 2007]; [Pronk et al, 2013]). Along this line, we hypothesized that participants would show a lower percentage of time in silence in the aided condition, to reflect decreased social isolation. We also hypothesized that average levels of TV/electronic sounds and other adults’ speech would be lower in the aided condition than the unaided condition, as increased audibility in the aided condition may provide users with access to TV audio at lower sound levels and allow other adults to speak at a lower level and still be understood by the user. The third goal was to examine the relationship between the data from the LENA system and self-reported data obtained from several commonly used hearing-related questionnaires. We hypothesized that the differences in TV and speech levels between the unaided and aided conditions would show significant correlations with hearing aid benefit measured by established self-report questionnaires, since both the LENA and self-report methodologies gather data relating to the real-world auditory environment. However, the correlation would be weak because the LENA data are objective in nature while questionnaires provide self-reported data. To achieve these goals, adults with hearing impairment were recruited and fitted with hearing aids. The participants wore a LENA DLP in two conditions: without hearing aids (unaided condition) and while wearing hearing aids (aided condition). Participants’ experiences with hearing in the unaided and aided conditions were measured using several standardized questionnaires.



METHODS

Participants

Participants included 22 adults (9 females and 13 males) aged 64–82 yr (M = 72.4, standard deviation [SD] = 5.3) with bilateral sensorineural hearing loss. Individuals were eligible to participate if their hearing loss met the following criteria: (a) postlingual, bilateral, sensorineural hearing loss (air–bone gap <10 dB); (b) pure-tone average across 0.5, 1, 2, and 4 kHz between 25 and 60 dB HL ([ANSI, 2010]); and (c) hearing symmetry within 20 dB for all test frequencies. The study focused on mild-to-moderate hearing loss because of the high prevalence of this hearing loss ([Lin et al, 2011]). Pure-tone thresholds averaged across both ears and across participants at 0.5, 1, 2, 4, and 8 kHz were 25.8 (SD = 10.3), 31.4 (11.3), 45.2 (12.6), 53.6 (12.7), and 60.9 (16.9) dB HL, respectively. At the start of the study, 5 participants had previous experience wearing hearing aids (M = 2.7 yr, SD = 1.5), and the other 17 participants had no previous hearing aid experience. A participant was considered an experienced user if he or she had ≥1 yr of prior hearing aid experience. Experienced users expressed understanding that they would be asked not to use their hearing aids for a period of four weeks during the study. Although motivation for participating in the study was not recorded, it is possible that some of these experienced users were dissatisfied with the fit or functioning of their previous hearing aids. It is also possible that some were interested in how they would manage without the use of hearing aids, or simply wished to contribute to the pursuit of scientific knowledge. All participants were compensated for their involvement in the study.



Hearing Aids and Fitting

All participants were fit with a pair of commercially available entry-level behind-the-ear hearing aids. The instrument features included wide dynamic range compression/automatic gain control, volume control, automatic directional microphones, noise reduction, adaptive feedback suppression, and low-level expansion. The devices were fitted on participants bilaterally using slim tubes coupled to canal ear molds with clinically appropriate vent sizes. The manufacturer’s software was used to program the hearing instruments to meet targets specified by the National Acoustic Laboratory-nonlinear 2 (NAL-NL2) prescriptive formula. In situ responses were measured using a probe microphone hearing aid analyzer (Audioscan Verifit, Dorchester, ON, Canada) with a 65 dB SPL speech signal (the “carrot passage”) presented from the listener’s 0° azimuth. The hearing aid output was adjusted to produce real-ear aided responses equivalent (±3 dB) to the NAL-NL2 targets. Gain adjustments were made to the hearing aids within the first two weeks after fitting, in accordance with common hearing aid follow-up procedures. All features remained active at default settings. All hearing aids included one automatically switching program and one manual program for use in noisy environments. Participants were encouraged to wear the hearing aids ≥4 hr per day, and compliance was measured via self-report. The data logging feature of the hearing aids was turned on, but the logged information was not recorded in the study.



LENA

The LENA system consists of the DLP and LENA Pro computer software. The DLP records and stores up to 16 hr of the wearer’s auditory environment. The LENA Pro software (Boulder, CO) automatically categorizes each recording segment offline according to the type of auditory environment: meaningful speech, distant speech, TV/electronic sound, noise, and silence. LENA categorizations are based on statistical models for each category that were derived from human transcription and categorization through machine learning during LENA software development ([Ford et al, 2008]; [Xu, Yapanel, Gray, and Baer, 2009]). [Xu et al (2008)] and [Oller et al (2010)] provide information about the exact acoustic features on which these categorizations are based. Meaningful speech is defined as speech sounds originating within a 6-ft radius of the DLP wearer that match well with the expected statistical model for speech. Distant speech includes speech originating from >6 ft from the DLP wearer, speech that does not match closely with the LENA model for speech, and overlapping sounds. The category of TV/electronic sound encompasses media from a variety of electronic sources, such as the TV, radio, or computer. Furthermore, the adult word count (AWC) in the wearer’s environment is estimated based on the acoustic features of speech segments ([Xu, Yapanel, Gray, and Baer, 2009]). LENA algorithms have been shown to have good accuracy in segment categorization and AWC estimation relative to human raters in recordings of children’s language environments ([Xu, Yapanel, and Gray, 2009]). An additional aspect of the LENA Pro software, the Advanced Data Extractor (ADEX), allows for more in-depth data mining. With ADEX, it is possible to view the average sound level for each segment of the recording. A segment can range in length from 0.6 sec to several minutes and is defined as a length of time in the recording during which only one type of auditory environment is identified by the LENA algorithms.
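For readers who wish to work with the ADEX segment-level output directly, the sketch below illustrates one way such an export could be summarized. The column names ("category", "duration_sec") and the file name are illustrative assumptions, not the actual ADEX field names.

```python
# Minimal sketch: summarize a LENA/ADEX-style segment export by category.
# Column names and the CSV path are hypothetical, not actual ADEX fields.
import pandas as pd

def hours_by_category(csv_path: str) -> pd.Series:
    """Return total hours recorded in each LENA auditory category."""
    segments = pd.read_csv(csv_path)                     # one row per segment
    seconds = segments.groupby("category")["duration_sec"].sum()
    return (seconds / 3600.0).sort_values(ascending=False)

if __name__ == "__main__":
    print(hours_by_category("adex_export_day1.csv"))     # hypothetical file
```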

In both the unaided and aided conditions, participants wore the DLP around their neck in a pouch, so that the DLP rested at chest height ([Figure 1]). The pouch had a hole over the DLP microphone that was covered with acoustically transparent mesh. Because each DLP could store a maximum of 16 hr of audio information, participants used a different DLP for each day of recording. Participants were instructed to turn the DLP on when they got up in the morning and to turn it off when they went to bed. Although participants were told not to turn the DLP off during the day, they were allowed to take the DLP off whenever they did not want to be recorded. Each participant kept a log of the times he or she was not wearing the DLP, and these time spans were excluded from data analysis.

Figure 1 A LENA DLP with its carrying bag. The mesh opening of the carrying bag allowed environmental sounds to reach the microphone port of the DLP.


Self-Report Measures

In order to compare the LENA data to self-reported data, three standardized questionnaires were used. The Hearing Handicap Inventory for the Elderly (HHIE; [Ventry and Weinstein, 1982]) is a 25-item inventory designed to assess the extent to which the social and emotional well-being of a patient is affected by hearing problems, such as frustration and embarrassment during conversations and difficulty communicating at other social events. The patient responds to each question with either “Yes,” “Sometimes,” or “No.” These responses correspond to scores of 4, 2, and 0, respectively. The Total Score is the sum of the scores on all 25 items; a lower score indicates a lower degree of handicap.
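As a concrete illustration of this scoring scheme, a minimal sketch follows; the function name and response strings are illustrative and not part of the published HHIE materials.

```python
# Minimal sketch of HHIE Total Score computation as described above:
# "Yes" = 4, "Sometimes" = 2, "No" = 0, summed over the 25 items.
HHIE_POINTS = {"Yes": 4, "Sometimes": 2, "No": 0}

def hhie_total(responses):
    """responses: list of 25 strings, each 'Yes', 'Sometimes', or 'No'."""
    if len(responses) != 25:
        raise ValueError("The HHIE has 25 items")
    return sum(HHIE_POINTS[r] for r in responses)

# Example: answering 'Sometimes' to every item yields a Total Score of 50.
print(hhie_total(["Sometimes"] * 25))  # 50
```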

The Speech, Spatial, and Qualities of Hearing Scale (SSQ; [Gatehouse and Noble, 2004]) version 5.6 includes 49 items that measure listening or auditory abilities. Each item is answered on a continuum from 0 to 10. Larger scores represent better listening and auditory abilities and smaller scores represent poorer listening and auditory abilities. Although the SSQ provides three subscale scores (Speech, Spatial, and Qualities), only the Speech subscale was examined in the present study due to its predicted association with speech levels. The Speech subscale assesses the patient’s ability to understand speech in a variety of listening contexts, such as when communicating with one person in a quiet room, one person in the presence of background noise, and five people in a busy restaurant, without the use of visual cues. Additionally, two items in the SSQ—Speech subscale items 1 and 10—specifically inquire about the patient’s communication abilities while a TV is on in the same room. Scores on these two items were summed to create a TV Composite score in the present study.

The Abbreviated Profile of Hearing Aid Benefit (APHAB; [Cox and Alexander, 1995]) is a 24-item questionnaire that quantifies disability due to hearing loss and the benefit associated with hearing aid use. The APHAB includes four subscales consisting of six items each: the Ease of Communication subscale measures communication effort in relatively easy listening environments, the Background Noise subscale measures speech understanding in the presence of competing noise, the Reverberation subscale measures speech understanding in reverberant rooms, and the Aversiveness subscale measures negative reactions to environmental sounds. Scores on the three communication subscales (Ease of Communication, Background Noise, and Reverberation) can be summed to produce a Global Score. Patients rate how frequently each item’s statement is true for them, on a scale from Never (1% of the time) to Always (99% of the time). Higher subscale scores represent a greater degree of communication difficulty.

The above three questionnaires were used in the present study for two reasons. First, they are sensitive to the effects of hearing aid amplification ([Cox and Alexander, 1995]; [Perez et al, 2014]; [Dawes et al, 2015]). Second, they contain questions that were expected to be relevant to the LENA results. For example, the SSQ includes questions related to communication while the TV is on, which could be related to change in TV levels. Similarly, the HHIE measures participation restriction, which may be reflected in the relative amount of time spent in silence and speech environments. Thus, these questionnaires provide a valid source of comparison for the LENA-collected objective measures of the effects of hearing aids in everyday life.



Procedures

The study was approved by the institutional review board of the University of Iowa. After participants agreed to take part in the study and signed the consent form, their pure-tone thresholds were measured. If the participant met all of the inclusion criteria, hearing aids were fitted. A training session about the LENA DLP was then provided. Specifically, the DLP and carrying bag were demonstrated to participants. Special emphasis was placed on the orientation of the carrying bag (e.g., always keep the side with the microphone facing outward and do not wear the carrying bag under clothing). After confirming that they fully understood all the related tasks, participants were sent home with three DLPs and completed a practice session of wearing a DLP before the actual study began. The purpose of this practice session was to familiarize the participants with wearing a DLP and the process of turning it on and off. Data from this practice week were recorded but not analyzed.

After the practice session, participants returned the three DLPs to the laboratory and the two experimental conditions (unaided and aided) began. The order of the two conditions was counterbalanced. Each condition lasted for five weeks. In the unaided condition, subjects did not wear hearing aids; the experienced hearing aid users did not wear their hearing aids for the four weeks preceding the unaided recording week. In the first four weeks of each condition, the participants did not wear a DLP. This duration was chosen because standardized questionnaires show a significant effect of hearing aids on social participation after four weeks of hearing aid use ([Malinoff and Weinstein, 1989]; [Humes et al, 2001]). The participants returned to the laboratory at the end of week 4. At this time participants were provided with seven DLPs, and the week of LENA recording began. Each DLP was labeled with a different day of the week to ensure that participants used a different DLP each day. The participants were asked to maintain their regular daily activities and schedules. At the conclusion of each of the unaided and aided recording weeks, participants returned the seven DLPs to the laboratory and the questionnaires were administered.



Data Analysis

Benefit scores for the questionnaires were calculated by subtracting the aided score from the unaided score (HHIE, APHAB) or the unaided score from the aided score (SSQ). For all the questionnaires, a higher benefit score indicates a greater degree of improvement from the unaided to the aided condition.
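A minimal sketch of these sign conventions is shown below; the example values are illustrative and not drawn from the study data.

```python
# Minimal sketch of the benefit-score conventions described above.
# HHIE and APHAB: lower scores are better, so benefit = unaided - aided.
# SSQ: higher scores are better, so benefit = aided - unaided.
def benefit(unaided: float, aided: float, questionnaire: str) -> float:
    if questionnaire in ("HHIE", "APHAB"):
        return unaided - aided
    if questionnaire == "SSQ":
        return aided - unaided
    raise ValueError(f"Unknown questionnaire: {questionnaire}")

# Illustrative values only:
print(benefit(40.0, 24.0, "HHIE"))  # 16.0 -> improvement with hearing aids
print(benefit(5.2, 6.8, "SSQ"))     # 1.6  -> improvement with hearing aids
```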

The percentage of time each subject spent in each of the five LENA-labeled auditory environments (meaningful speech, distant speech, TV/electronic sound, noise, and silence) was calculated by dividing each participant’s total amount of time spent in each of these environments by the participant’s total length of LENA recordings. These percentages were calculated separately for the unaided and aided conditions. Percentage of time spent in the five auditory environments was compared between the unaided and aided conditions using paired two-tailed t tests. The AWC was also obtained from the LENA software for each participant’s unaided and aided condition. In order to control for differences in total recording times between participants’ unaided and aided conditions, the mean AWC per hour was used during data analysis instead of the raw AWC. Paired two-tailed t tests were used to compare AWC per hour in the unaided and aided condition.
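The sketch below illustrates these two summaries under the assumption that each condition’s LENA output is available as (category, duration) pairs plus a total adult word count; the data structures and example values are illustrative.

```python
# Minimal sketch: percentage of recording time per auditory category and
# AWC normalized per hour of recording. Input formats are assumptions.
from collections import defaultdict

def percent_time_by_category(segments):
    """segments: iterable of (category, duration_sec) pairs -> {category: %}."""
    totals = defaultdict(float)
    for category, duration in segments:
        totals[category] += duration
    grand_total = sum(totals.values())
    return {cat: 100.0 * dur / grand_total for cat, dur in totals.items()}

def awc_per_hour(total_awc: float, total_recording_sec: float) -> float:
    """Normalize the raw adult word count by total recording time."""
    return total_awc / (total_recording_sec / 3600.0)

# Illustrative values only:
segs = [("silence", 14000), ("meaningful_speech", 4000), ("tv_electronic", 9000)]
print(percent_time_by_category(segs))
print(awc_per_hour(total_awc=18000, total_recording_sec=12 * 3600))  # 1500.0
```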

Sound levels for specific types of auditory environments were obtained using the ADEX data mining tool of the LENA Pro software. In each condition, the median levels of sound segments categorized as “TV/electronic” (“TVN” [“Television-Near”] in ADEX) and “meaningful speech” (“FAN” [“Female Adult-Near”] or “MAN” [“Male Adult-Near”] in ADEX) were calculated. ADEX also provides “Far” versions of each of these categories; these are intended to capture the same sound sources as the “Near” versions, but their acoustic properties are statistically much closer to those of the Silence category ([Xu, Yapanel, Gray, and Baer, 2009]). “Near” labels, on the other hand, match well with the statistical model for the specified category. Because of the relatively low likelihood that segments labeled as “Far” actually contain TV/electronic or speech sounds, these segments were excluded from analysis. For each segment analyzed in this study, the mean sound level was obtained from ADEX. For a given subject and a given type of segment, the median of these mean segment sound levels was calculated; each segment mean was weighted by the duration of the segment. Speech segments labeled as originating from male adult and female adult speakers were analyzed separately. This is because we were primarily interested in measuring the potential change between conditions in the speech levels of adults in the DLP wearer’s environment, rather than the change in the speech levels of the DLP wearer him- or herself. When an adult rather than a child is wearing the DLP, the LENA algorithms do not distinguish between the speech of the adult DLP wearer and that of other adults in the environment. In order to tease apart the speech of the DLP wearer and other adults, we used the speech of adults of the opposite sex from the DLP wearer (hereafter referred to as “opposite sex speech”) as a proxy for the speech of all other adults in the environment. In this way, we could be confident that the speech we analyzed excluded the speech of the DLP wearer. In other words, for female participants, we compared the speech levels in the unaided and aided conditions of segments that were labeled as coming from male speakers, and vice versa for male participants. By focusing only on opposite sex speech, we excluded from analysis the speech of other adults in the environment of the same sex as the DLP wearer (hereafter referred to as “same sex speech”). While not an optimal approach to examining the variable in question, this method ensured that the data we did analyze were valid. Paired two-tailed t tests were used to compare the TV/electronic and speech levels in the unaided and aided conditions.
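The duration-weighted median described above can be sketched as follows. The ADEX label strings “TVN,” “FAN,” and “MAN” follow the text; the “Far” label (“TVF”), the tuple format, and the example values are assumptions for illustration.

```python
# Minimal sketch: duration-weighted median of segment mean levels, keeping only
# "Near" labels and, for speech, only the sex opposite to the DLP wearer.
import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    values = np.asarray(values, dtype=float)[order]
    weights = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(weights)
    return float(values[np.searchsorted(cum, 0.5 * cum[-1])])

def median_level(segments, wanted_labels):
    """segments: (label, duration_sec, mean_db) tuples; keep wanted labels only."""
    kept = [(db, dur) for label, dur, db in segments if label in wanted_labels]
    levels, durations = zip(*kept)
    return weighted_median(levels, durations)

# Illustrative segments for a female DLP wearer: opposite sex speech = "MAN".
segs = [("TVN", 600, 58.0), ("MAN", 120, 66.5), ("FAN", 90, 71.0), ("TVF", 300, 45.0)]
print(median_level(segs, {"TVN"}))   # median TV/electronic sound level
print(median_level(segs, {"MAN"}))   # median opposite sex speech level
```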

Pearson product-moment correlations were calculated to compare participants’ change in TV and opposite sex speech levels from the unaided to aided condition with scores on self-report assessments of hearing aid outcomes. The specific self-report scores examined in the correlation analyses were HHIE Total Score benefit, SSQ Speech subscale benefit, SSQ TV Composite benefit, and APHAB Global Score benefit. For the SSQ TV Composite benefit, only the correlation with change in TV levels was examined. For the other three scores, the correlations with change in both TV and opposite sex speech levels were examined.
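A minimal sketch of this correlation analysis, using scipy.stats.pearsonr, is given below; the arrays are random placeholders standing in for one value per participant, not study data.

```python
# Minimal sketch: Pearson correlation between change in TV level and a
# questionnaire benefit score. Placeholder data, one value per participant.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
change_in_tv_level = rng.normal(-2.4, 3.0, size=22)    # aided minus unaided, dB
aphab_global_benefit = rng.normal(10.0, 8.0, size=22)  # unaided minus aided

r, p = pearsonr(change_in_tv_level, aphab_global_benefit)
print(f"r = {r:.3f}, p = {p:.3f}")
```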



RESULTS AND DISCUSSION

Hearing aid compliance in the aided condition was generally good. The majority of participants (13/22) reported 8–16 hr of daily hearing aid use, and seven participants reported 4–8 hr of daily use. Two participants reported 1–4 hr of daily use. These two participants had mild hearing losses (better-ear pure-tone averages of 25 and 31.7 dB HL), and thus the reported amount of hearing aid use may have been appropriate for their lifestyles and degrees of hearing loss.

The number of days recorded by the DLP in each condition for each participant ranged from six to eight; the mean number of days recorded was 6.91 in the unaided condition (SD = 0.53) and 7.09 in the aided condition (SD = 0.53). The mean daily recording time was 11.48 hr in the unaided condition (SD = 1.23) and 11.08 hr in the aided condition (SD = 1.51). The percentage of time that participants reported not wearing the DLP did not differ significantly between the unaided (M = 2.3%, SD = 5.9) and aided (M = 2.6%, SD = 4.8) conditions (p > 0.05).

In total, 3,461.2 hr of recording were analyzed.

Percentage of Time in Each Auditory Environment

The amount of time spent in each auditory environment is shown in [Figure 2]. In both the unaided and aided conditions, the greatest amount of time was spent in silence compared to the other auditory environments (unaided M = 38.1%, SD = 12.4; aided M = 42.5%, SD = 15.7). On average, approximately one-quarter of participants’ time was spent in TV/electronic auditory environments (unaided M = 25.7%, SD = 15.4; aided M = 25.6%, SD = 18.0). Note that the time spent in silence and TV/electronic sound environments varied substantially between individuals. In general, participants spent slightly more time in distant speech environments (unaided M = 17.7%, SD = 6.7; aided M = 15.6%, SD = 6.4) than meaningful speech environments (unaided M = 12.5%, SD = 4.8; aided M = 11.1%, SD = 5.0). Overall, participants spent the least amount of time in noise, which also showed the least individual variation (unaided M = 6.0%, SD = 2.4; aided M = 5.2%, SD = 1.8). Paired two-tailed t tests were used to compare the mean percentage of time spent in the five LENA-labeled auditory environments in the unaided and aided conditions. No significant differences between the unaided and aided conditions were found for the percentage of time spent in any of the auditory environments (p > 0.05). Effect size (d) and observed power for the comparisons for each of the auditory environments were as follows: meaningful speech: d = 0.35, power = 0.35; distant speech: d = 0.32, power = 0.30; TV/electronic sound: d = 0.01, power = 0.05; noise: d = 0.28, power = 0.24; and silence: d = 0.38, power = 0.39.

Figure 2 Percentage of time spent in each of the LENA auditory environments in unaided and aided conditions. Horizontal bars represent median values. Vertical bars represent values within the first and third quartiles ± the interquartile range × 1.5. Dots represent outliers.
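The paired comparisons and effect sizes reported above can be sketched as follows, assuming Cohen’s d for paired samples was computed as the mean of the within-participant differences divided by their standard deviation; the data below are simulated placeholders, not study values.

```python
# Minimal sketch: paired two-tailed t test and paired-samples Cohen's d for the
# percentage of time spent in one auditory category (placeholder data, n = 22).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
unaided = rng.normal(38.0, 12.0, size=22)          # % time in silence, unaided
aided = unaided + rng.normal(4.0, 10.0, size=22)   # % time in silence, aided

t_stat, p_value = ttest_rel(aided, unaided)        # df = n - 1 = 21
diff = aided - unaided
cohens_d = diff.mean() / diff.std(ddof=1)
print(f"t(21) = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```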

The percentage of time participants spent in different auditory environments was comparable to the findings of previous studies of the auditory environments of participants with hearing loss. Our finding that participants spent ∼26% of their time in TV/electronic sound environments agrees well with previous studies, which showed that older adults spent 24% ([Wu and Bentler, 2012]) and 26.7% ([Li et al, 2014]) of their time listening to TV and other media. With regard to time spent in speech environments, [Wu and Bentler (2012)] reported that participants spent 61.2% of their time in speech environments, while [Wagener et al (2008)] found that 50.7% of recordings from participants contained speech, including speech produced by the TV and other electronic media. By summing the percentage of time spent in meaningful speech, distant speech, and TV/electronic sound in the present study, we find percentages that are comparable to previous reports of percentage of time spent in speech environments: 55.9% in the unaided condition and 52.3% in the aided condition. Although the methodology and specific categories of auditory environments differed between the present study, [Wu and Bentler (2012)], and [Wagener et al (2008)], the similarity of the estimates of time spent in speech environments in the three studies supports the validity of the present findings. Using the LENA DLP, [Li et al (2014)] found that the average percentages of time spent in meaningful speech and distant speech environments were 19.3% and 22.4%, respectively, which are somewhat higher than the findings of the present study. Li et al also reported that the average percentage of time spent in silence was 26.9%, which is lower than the ∼40% of the time spent in silence found in the present study. The differences observed between the present study and Li et al may be attributed to the different lengths of recording times in the two studies (one day versus two weeks) and the differing demographics of the study participants—Li et al included mostly low-income African American women living in a retirement community. The social nature of the community living environment of the participants in the study of Li et al may explain the relatively high percentage of speech time and low percentage of silence observed in that study.



AWC per Hour

The AWC per hour for the two conditions is shown in [Figure 3]. Mean AWC per hour was 1,660 in the unaided condition (SD = 667) and 1,467 in the aided condition (SD = 679); these values did not differ significantly between the unaided and aided conditions (p > 0.05, d = 0.34, observed power = 0.33). Note that variation was high between individuals: AWC per hour ranged from 339 for one participant to 2,927 for another participant in the unaided condition. The mean AWC per hour found in the unaided and aided conditions of the present study was substantially lower than the mean AWC per hour found by [Li et al (2014)], which was 2,508 words (note that because Li et al only reported AWC for the total length of participants’ recordings, average AWC per hour was calculated by dividing the mean AWC of 33,141 by the mean total recording time, 13 hr and 13 min).

Figure 3 Average AWC per hour in unaided and aided conditions. Horizontal bars represent median values. Vertical bars represent values within the first and third quartiles ± the interquartile range × 1.5.


TV/Electronic Sound and Speech Levels

Paired two-tailed t tests were used to compare the TV/electronic level, opposite sex speech level, and same sex speech level between the unaided and aided conditions. TV/electronic sound level across all participants in the unaided condition (M = 59.7 dB SPL, SD = 4.21) was significantly higher than that in the aided condition (M = 57.3 dB SPL, SD = 4.80; t(21) = 2.42, p = 0.024, d = 0.52, observed power = 0.64; [Figure 4]). Note that these levels are very similar to those reported by [Smeds et al (2015)], who reported mean TV/radio sound levels of 57.5 and 58.4 dB SPL at ears with better and worse signal-to-noise ratios, respectively, with experienced bilateral hearing aid users. No significant differences were found between the unaided and aided conditions for opposite sex speech level (unaided M = 66.8 dB SPL, SD = 2.45; aided M = 67.2 dB SPL, SD = 2.31; d = 0.19, observed power = 0.14) or same sex speech level (unaided M = 71.5 dB SPL, SD = 2.31; aided M = 71.8 dB SPL, SD = 2.55; d = 0.17, observed power = 0.12; [Figure 4]). These speech levels fall within the range of speech levels (∼60–77 dB SPL) found in different conversation situations by [Jensen and Nielsen (2005)]. The fact that the LENA-measured levels of both TV/electronic sound and speech were similar to the levels found by previous studies in comparable auditory environments supports the validity of using the LENA system to measure the sound levels of specific auditory environments.

Figure 4 TV/electronic sound, same sex speech, and opposite sex speech levels in unaided and aided conditions. Horizontal bars represent median values. Vertical bars represent values within the first and third quartiles ± the interquartile range × 1.5. Dots represent outliers.

In order to validate the expectation that the close proximity of the DLP to the wearer’s mouth caused same sex speech levels to be artificially elevated, same sex speech levels were compared to opposite sex speech levels for each participant’s unaided and aided conditions. If same sex speech levels are consistently higher than opposite sex speech levels, it is likely that analyzing only opposite sex speech levels effectively removes the effects of the DLP wearer’s own voice. A paired t test indicated that this was the case (p < 0.0001; [Figure 4]). For all 13 male participants and 8 of the 9 female participants (i.e., 95.5% of participants), same sex speech level was higher than opposite sex speech level in both the unaided and aided conditions. This result supports the notion that the DLP wearer’s own voice affects only same sex speech, and it validates the approach of selectively analyzing opposite sex speech levels to remove the effects of the DLP wearer’s own voice.

It was predicted that because decreased TV and opposite sex speech levels may reflect improved listening and communication abilities in the aided condition, changes between unaided and aided median TV and opposite sex speech levels would correlate with self-report measures of hearing aid benefit. Contrary to our prediction, none of the measured correlations between change in median TV and opposite sex speech levels and self-report measures of hearing aid benefit were significant (p > 0.05; [Table 1]). This indicates that individual changes in the auditory environment due to hearing aid use may not have a direct relationship with self-perceived benefit from hearing aids. Note that correlations with HHIE Total Score included only 21 participants because one participant omitted responses to several items on this questionnaire.

Table 1 Pearson Correlations between LENA-Measured Changes in Sound Levels and Questionnaire Benefit Scores

                                      HHIE Benefit (n = 21)  SSQ Benefit (n = 22)                     APHAB Benefit (n = 22)
                                      Total Score            Speech Subscale      TV Composite        Global Score
Change in median TV level             −0.034 (0.887)         0.052 (0.812)        −0.001 (0.994)      0.079 (0.714)
Change in median opposite sex speech  0.129 (0.570)          0.288 (0.197)        not examined        0.118 (0.593)

Note: Values are shown as correlation coefficient (p value). The SSQ TV Composite correlation was examined only for the change in median TV level.




GENERAL DISCUSSION

The goals of the present study were to use the LENA system to quantify the auditory environments of adults with hearing loss, examine if hearing aid use changes users’ auditory environments, and determine the association between LENA variables and self-report hearing aid outcome measures. The results of the current study are similar to, but not identical to, those of [Li et al (2014)], as discussed earlier. The results show that participants did not spend a significantly different percentage of time in any of the five auditory environments (meaningful speech, distant speech, noise, TV/electronic, and silence) between the unaided and aided conditions. This finding goes against our prediction that participants would spend a greater percentage of time in meaningful speech environments and a lower percentage of time in silence in the aided condition. Additionally, TV/electronic sound levels were on average 2.4 dB lower in the aided condition than the unaided condition, but speech levels did not differ between conditions. These findings only partially support our prediction that both TV/electronic sound and speech levels would be lower in the aided condition. Finally, no significant correlations were found between the change in TV/electronic sound and opposite sex speech levels between conditions and benefit measured by the specified questionnaires. We did not predict a strong association between these measures, but we expected to see weak correlations between the LENA-collected data and the self-report measures of benefit.

Speech Environments and Social Participation

The finding that participants did not spend more time in speech environments in the aided condition was somewhat surprising, given that past studies have shown that hearing aid use is associated with increases in perceived social participation ([Malinoff and Weinstein, 1989]; [Abrams et al, 1992]; [Humes et al, 2001]; [Chisolm et al, 2007]; [Pronk et al, 2013]). These studies relied on self-report questionnaires measuring perceived social participation and restrictions. On the other hand, [Dawes et al (2015)] measured social engagement via self-estimated number of hours per week spent in solitary activities and found that hearing aid use did not affect social engagement. Similarly, [Vestergaard (2006)] found that the self-reported auditory lifestyles of older adults did not differ when measured before and three months after hearing aid fitting. It is possible that hearing aids cause older adults to perceive themselves as more capable of effective social interactions, but this does not lead older adults to change the amount of time they spend in social environments. A potential explanation for this finding lies in the socio-emotional selectivity theory ([Carstensen et al, 1999]). This theory states that individuals who perceive that they are nearing the ends of their lives tend to focus more energy on maximizing the emotional content of present social interactions, rather than creating new social bonds that might be beneficial in the future. Elderly adults may reflect this tendency by preferring to engage in a limited number of social routines, rather than frequently trying out new social environments or building new relationships. The finding in the present study that hearing aids did not affect patterns of time spent in different auditory environments may be a reflection of the tendency for older adults to be satisfied with established patterns of social engagement, even if hearing aids could provide benefit in new social environments. It is also possible that older adults are unlikely to change their established routines due to mobility limitations and other impairments to activities of daily living ([Gopinath et al, 2012]). Alternatively, four weeks of hearing aid use may not have been enough time for participants to adjust to wearing hearing aids and subsequently change their lifestyles.



TV/Electronic Sound Levels

TV/electronic sound levels were significantly lower in the aided condition than the unaided condition. Patients—or their partners or family members—seen in audiology clinics commonly complain of needing to set the TV at an excessively loud volume due to the patient’s hearing loss ([Ranganathan et al, 2011]). A straightforward method of assessing the extent to which hearing aids address this concern is to measure whether the patient sets the TV at a lower volume while wearing hearing aids. The LENA data in this study showed that TV/electronic sound levels were in fact lower when participants were wearing hearing aids, thus adding face validity to using LENA TV levels as a measure of the positive effect of hearing aids. A decrease in TV levels offers objective evidence to patients of the effectiveness of their hearing aids, as well as useful feedback for audiologists regarding the real-world effects of intervention. It should be noted that in this study, many participants likely had the TV on while other people, such as a spouse, were present and also watching the TV. It is thus impossible to determine whether the measured TV/electronic sound levels reflected the DLP wearer’s preferred volume settings, or if the measured levels were affected by the listening preferences of other TV watchers. It is possible that the presence of other people may have limited the observed difference in TV/electronic sound levels between the unaided and aided conditions, leading to an underestimation of the actual effect of hearing aids on preferred TV levels. Furthermore, it is likely that many participants had the TV on while completing other household tasks, without actively watching the TV. This may account for the somewhat low TV/electronic sound levels measured in both unaided and aided conditions, and these passive listening levels may be differentially affected by hearing aid use compared to active listening levels. Although the change in TV/electronic sound levels observed in the present study was small (2.4 dB) and thus holds potentially little clinical significance by itself, the fact that a difference was observed supports a more nuanced investigation of TV levels in relation to individual listeners’ TV habits, which may provide a clearer understanding of the effects of hearing aids on individuals’ home listening environments.



Speech Levels

No change in opposite sex speech levels was found between the unaided and aided conditions. This may suggest that the LENA data have limited sensitivity to the changes in speaking patterns of the DLP wearer’s conversation partners, who would presumably lower their speaking levels in response to the improved audibility of the hearing aid user. However, opposite sex speech levels are an imperfect proxy for the speech levels of other adults in the DLP wearer’s environment because this measure does not include those people who are of the same sex as the DLP wearer. The LENA algorithms are able to distinguish between the speech of a child DLP wearer and that of other children in the environment, but currently the LENA algorithms do not isolate the speech of an adult DLP wearer. Singling out the speech of the adult DLP wearer from other adults’ speech would provide more complete data about the DLP wearer’s communication environment. This would also allow for the measurement of the number of conversational turns between the DLP wearer and other adults and the amount of speech produced by the DLP wearer and other adults in a conversation, which would help to assess relative social participation. These variables have provided important insights into children’s patterns of verbal behavior in natural settings (e.g., [VanDam et al, 2012]; [Thiemann-Bourque et al, 2014]; [Gilkerson et al, 2015]; [Sosa, 2016]). Optimizing the LENA algorithms for use with adult DLP wearers could provide researchers and clinicians with a better understanding of the auditory environments and communication patterns of adults, as well as the real-world effects of intervention on the lives of adults.

The lack of any correlations between change in the LENA-collected variables (i.e., TV/electronic sound levels and opposite sex speech levels) and questionnaire benefit scores was somewhat unexpected. Both LENA and the examined questionnaires assess changes that take place in the hearing aid user’s real-world environments, so it was predicted that the change measured by these two approaches would show an association. It is possible that the lack of association between these measurement approaches is the result of the substantially different methodologies used to collect the information. Self-report data can be strongly biased by imperfect recall of past events ([Bradburn et al, 1987]; [Shiffman et al, 2008]) and are based primarily on the patient’s recall of particularly emotionally salient events, rather than a careful consideration of all relevant experiences ([Shiffman et al, 2008]). Thus, the questionnaire data reported in this study may better serve as a reflection of the patient’s perceptions of a limited number of experiences than as a summary of the patient’s total aggregate experiences. In contrast, directly quantifying the patient’s auditory environments removes any recall bias and provides an accurate picture of the characteristics of the patient’s environments within a specified time frame. It is possible, however, that hearing aid benefit in more specific listening situations would correlate with changes in amount of time spent in these specific environments. For example, a user who shows self-report hearing aid benefit in noisy environments might spend more time in noise, as measured by LENA. Future research should examine the possible association between change in LENA variables and self-report benefit in specific listening situations.



Limitations

The present study was affected by a number of limitations. The first limiting factor was that the study only included older adults, who may be less likely to change their lifestyles than younger adults. Research with younger adults who receive hearing aids may reveal a greater effect of hearing aids on auditory environments because younger patients may be more likely to seek out new environments and social situations when the opportunities arise ([Carstensen et al, 1999]).

Another potential limitation of the present study is that the amount of time between hearing aid fitting and assessment of the aided environment may have been too short for participants to adjust their auditory environments according to the capabilities of the hearing aids. Measurement of a hearing aid user’s aided auditory environments after several months or years of hearing aid use may provide a better understanding of the long-term effects of hearing aids on a patient’s interactions with different auditory environments.

Additionally, because the DLP was positioned at chest height rather than at the wearer’s ears, the sound levels recorded by the DLP likely differed from the levels that reached the wearer’s ears. This discrepancy may be compounded by the fact that the DLP does not account for hearing aid features, such as directional microphones and noise reduction, that may increase or decrease the sound output experienced by the wearer. Thus, the DLP may only provide a gross estimate of the overall sound levels in the environment, rather than information that is specific to the sound levels in the user’s ear canals.

Another limitation of this study was that only state measure questionnaires were included, rather than change measures. State measures assess hearing aid benefit by comparing the patient’s aided responses to unaided responses. Conversely, change measures directly assess hearing aid benefit and require only one questionnaire administration. Change measures have been shown to be more sensitive than state measures ([Gatehouse, 1999]). Thus, the observed results may have been different if change measures had been used.

The sample size of the present study may also be considered a limitation because although it provided sufficient power to detect medium effect sizes, it was too small to detect small effects. However, small effects of hearing aids on the auditory environment likely would hold little clinical significance, so the sample size of this study was deemed appropriate based on the research questions.

It is possible that the auditory environments of new and experienced hearing aid users are affected by hearing aids differently. Because we did not aim to explicitly compare these two groups, relatively few experienced users participated in this study, and thus a statistical comparison of the auditory environments of these two groups was not appropriate. An examination of the trends in the percentage of time the five experienced users spent in the five LENA environments, however, did not show a markedly different pattern from the new hearing aid users.

Finally, characterizing a DLP wearer’s environments based on average levels and average percentage of time spent in different environments may not take full advantage of the great amount of detailed data provided by the LENA system. Instead of analyzing changes in median TV level, for example, it may be more informative to examine the amount of time the TV produces a range of levels. [Figure 5] shows the distribution of TV levels for a single participant’s unaided and aided conditions; it is clear that the two conditions show different patterns of TV level distribution, in addition to different median TV levels. This type of pattern-based approach to analyzing LENA data may help account for additional factors in the DLP wearer’s environment and may make better use of the LENA system’s vast capabilities.

Figure 5 Histograms showing the distribution of TV/electronic sound levels in the unaided condition (upper panel) and aided condition (lower panel) for one participant.
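A sketch of this distribution-based view is given below: rather than a single median, it tallies how many hours the TV/electronic category spends in each level bin, with each segment weighted by its duration. The segment levels and durations are illustrative placeholders, not study data.

```python
# Minimal sketch: duration-weighted histogram of TV/electronic sound levels,
# analogous to Figure 5. Placeholder data, not study values.
import numpy as np
import matplotlib.pyplot as plt

segment_levels_db = np.array([52.0, 55.5, 58.0, 58.5, 61.0, 64.5])  # mean level per segment
segment_durations_s = np.array([300, 900, 1200, 600, 450, 150])     # segment durations

bins = np.arange(40, 81, 2)  # 2-dB bins from 40 to 80 dB SPL
hours_per_bin, edges = np.histogram(segment_levels_db, bins=bins,
                                    weights=segment_durations_s / 3600.0)

plt.bar(edges[:-1], hours_per_bin, width=np.diff(edges), align="edge")
plt.xlabel("TV/electronic sound level (dB SPL)")
plt.ylabel("Hours")
plt.show()
```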


Clinical Implications

The current study demonstrated the feasibility of using the LENA system to objectively characterize the real-world auditory environments and the effect of hearing aids for older adults with hearing loss. The rich data provided by the LENA system could be clinically informative and could be used as a counseling tool with patients. For example, audiologists may find patient data from unaided listening environments to be valuable when planning which specific hearing aids and hearing aid features to recommend to patients. Furthermore, if patients are presented with quantitative and complete information about their own listening environments, it may be easier to identify how exactly hearing aids can be integrated into their auditory lives. Finally, audiologists may employ LENA data alongside standard methods of assessment in order to gain a more complete understanding of individual auditory needs and experiences in everyday life.



Abbreviations

ADEX: Advanced Data Extractor
APHAB: Abbreviated Profile of Hearing Aid Benefit
AWC: adult word count
DLP: digital language processor
HHIE: Hearing Handicap Inventory for the Elderly
LENA: Language Environment Analysis
SD: standard deviation
SSQ: Speech, Spatial, and Qualities of Hearing Scale



No conflict of interest has been declared by the author(s).

This work was funded by NIH/NIDCD (R03DC012551).


Portions of this work were presented at the 42nd Annual Scientific and Technology Conference of the American Auditory Society in Scottsdale, AZ, March 5–7, 2015, and at the American Speech-Language-Hearing Association Annual Convention in Denver, CO, November 11–14, 2015.





Figure 1 A LENA DLP with its carrying bag. The mesh opening of the carrying bag allowed environmental sounds to reach the microphone port of the DLP.
Figure 2 Percentage of time spent in each of the LENA auditory environments in unaided and aided conditions. Horizontal bars represent median values. Vertical bars represent values within the first and third quartiles ± the interquartile range × 1.5. Dots represent outliers.
Figure 3 Average AWC per hour in unaided and aided conditions. Horizontal bars represent median values. Vertical bars represent values within the first and third quartiles ± the interquartile range × 1.5.
Figure 4 TV/electronic sound, same-sex speech, and opposite-sex speech levels in unaided and aided conditions. Horizontal bars represent median values. Vertical bars represent values within the first and third quartiles ± the interquartile range × 1.5. Dots represent outliers.