Matching Items (12)
Description
Two groups of cochlear implant (CI) listeners were tested for sound source localization and for speech recognition in complex listening environments. One group (n=11) wore bilateral CIs and, potentially, had access to interaural level difference (ILD) cues but not interaural timing difference (ITD) cues. The second group (n=12) wore a single CI and had low-frequency acoustic hearing both in the ear contralateral to the CI and in the implanted ear. These "hearing preservation" listeners, potentially, had access to ITD cues but not to ILD cues. At issue in this dissertation was the value of the two types of information about sound sources, ITDs and ILDs, for localization and for speech perception when speech and noise sources were separated in space. For Experiment 1, normal-hearing (NH) listeners and the two groups of CI listeners were tested for sound source localization using a 13-loudspeaker array. The mean RMS localization error was 7 degrees for the NH listeners, 20 degrees for the bilateral CI listeners, and 23 degrees for the hearing preservation listeners. The scores for the two CI groups did not differ significantly; thus, both CI groups showed equivalent, but poorer than normal, localization. This outcome, obtained using filtered noise bands for the NH listeners, suggests that ILD and ITD cues can support equivalent levels of localization. For Experiment 2, the two groups of CI listeners were tested for speech recognition in noise when the noise sources and targets were spatially separated in a simulated "restaurant" environment and in two versions of a "cocktail party" environment. At issue was whether either CI group would show a benefit from binaural hearing, i.e., better performance when the noise and targets were separated in space. Neither CI group showed spatial release from masking. However, both groups showed a significant binaural advantage (a combination of squelch and summation) when the separation of target and noise was maintained, indicating the presence of some binaural processing, or "unmasking," of speech in noise. Finally, localization ability in Experiment 1 was not correlated with binaural advantage in Experiment 2.
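As context for the localization metric reported above, the following is a minimal sketch of how RMS error is typically computed for a loudspeaker-identification task; the 15-degree spacing and the trial data are illustrative assumptions, not the dissertation's actual configuration.

```python
import numpy as np

# Assume 13 loudspeakers evenly spanning -90 to +90 degrees azimuth
# (a 15-degree spacing; an illustrative assumption).
speaker_azimuths = np.linspace(-90, 90, 13)

def rms_localization_error(target_idx, response_idx):
    """RMS error, in degrees, between target and response loudspeakers."""
    targets = speaker_azimuths[np.asarray(target_idx)]
    responses = speaker_azimuths[np.asarray(response_idx)]
    return np.sqrt(np.mean((responses - targets) ** 2))

# Hypothetical trials: a listener who is often off by one loudspeaker.
targets = [6, 3, 9, 0, 12]
responses = [7, 3, 8, 1, 12]
print(f"RMS error: {rms_localization_error(targets, responses):.1f} degrees")
```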
ContributorsLoiselle, Louise (Author) / Dorman, Michael F. (Thesis advisor) / Yost, William A. (Thesis advisor) / Azuma, Tamiko (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created2013
Description
The present study describes audiovisual sentence recognition in normal hearing listeners, bimodal cochlear implant (CI) listeners and bilateral CI listeners. This study explores a new set of sentences (the AzAV sentences) that were created to have equal auditory intelligibility and equal gain from visual information.

The aims of Experiment I were to (i) compare the lip-reading difficulty of the AzAV sentences to that of other sentence materials, (ii) compare the speech-reading ability of CI listeners to that of normal-hearing listeners and (iii) assess the gain in speech understanding when listeners have both auditory and visual information from easy-to-lip-read and difficult-to-lip-read sentences. In addition, the sentence lists were subjected to a multi-level text analysis to determine the factors that make sentences easy or difficult to speech-read.

The results of Experiment I showed that (i) the AzAV sentences were relatively difficult to lip read, (ii) CI listeners and normal-hearing listeners did not differ in lip-reading ability and (iii) sentences with low lip-reading intelligibility (10-15% correct) provide about a 30 percentage point improvement in speech understanding when added to the acoustic stimulus, while sentences with high lip-reading intelligibility (30-60% correct) provide about a 50 percentage point improvement in the same comparison. The multi-level text analyses showed that the familiarity of phrases in the sentences was the primary factor driving lip-reading difficulty.
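The "percentage point improvement" comparison above is easy to misread as a relative gain; a minimal sketch (with illustrative scores, not study data) makes the arithmetic explicit.

```python
def visual_gain(auditory_only_pct: float, audiovisual_pct: float) -> float:
    """Visual benefit in percentage points (a difference, not a ratio)."""
    return audiovisual_pct - auditory_only_pct

# Illustrative values only: hard-to-lip-read vs. easy-to-lip-read lists.
print(visual_gain(auditory_only_pct=40.0, audiovisual_pct=70.0))  # ~30 points
print(visual_gain(auditory_only_pct=40.0, audiovisual_pct=90.0))  # ~50 points
```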

The aim of Experiment II was to investigate the value of bimodal hearing and bilateral cochlear implants when visual information is present. The results showed that when visual information is present, low-frequency acoustic hearing can be of value to speech understanding for patients fit with a single CI. However, when visual information was available, no gain was seen from the provision of a second CI, i.e., from bilateral CIs. As was the case in Experiment I, visual information provided about a 30 percentage point improvement in speech understanding.
ContributorsWang, Shuai (Author) / Dorman, Michael (Thesis advisor) / Berisha, Visar (Committee member) / Liss, Julie (Committee member) / Arizona State University (Publisher)
Created2015
Description
A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and to interact more fully with their social environment. There has been a clinical shift toward bilateral placement of implants in both ears and toward bimodal placement of a hearing aid in the contralateral ear when residual hearing is present. However, there is potentially more to speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal-hearing listeners, vision plays a role in speech perception; Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how, exactly, vision provides benefit to bilateral and bimodal users. Eight bilateral and five bimodal participants heard randomized experimental phrases, previously generated by Liss et al. (1998), in auditory and audiovisual conditions, and recorded their perception of each phrase. The data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision improved speech perception for both bilateral and bimodal cochlear implant participants: each group showed a significant increase in percent words correct when visual input was added. With vision, bilateral participants made fewer consonant place errors and made greater use of syllabic stress cues in lexical segmentation. These results suggest that vision may benefit bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. Vision did not, however, give the bimodal participants significantly greater access to place and stress cues, so the exact mechanism by which bimodal implant users improved speech perception with the addition of vision remains unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research on the benefit vision provides to bilateral and bimodal cochlear implant users.
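As a companion to the scoring described above, here is a minimal sketch of a percent-words-correct calculation; the matching rule and the example strings are simplifying assumptions (actual transcript scoring, as in Liss et al. (1998), also handles morphological variants and classifies lexical boundary errors).

```python
def percent_words_correct(target: str, response: str) -> float:
    """Score a transcribed response against a target phrase."""
    target_words = target.lower().split()
    pool = response.lower().split()
    hits = 0
    for word in target_words:
        if word in pool:
            pool.remove(word)  # credit each response word at most once
            hits += 1
    return 100.0 * hits / len(target_words)

# Illustrative phrase pair, not an actual Liss et al. (1998) stimulus.
print(percent_words_correct("admit the gear but turn the dial",
                            "admit the fear but turned the dial"))  # ~71.4
```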
ContributorsLudwig, Cimarron (Author) / Liss, Julie (Thesis advisor) / Dorman, Michael (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created2015
Description
In the noise and commotion of daily life, people achieve effective communication partly because spoken messages are replete with redundant information. Listeners exploit available contextual, linguistic, phonemic, and prosodic cues to decipher degraded speech. When other cues are absent or ambiguous, phonemic and prosodic cues are particularly important because they help identify word boundaries, a process known as lexical segmentation. Individuals vary in the degree to which they rely on phonemic or prosodic cues for lexical segmentation in degraded conditions.

Deafened individuals who use a cochlear implant have diminished access to fine frequency information in the speech signal, and show resulting difficulty perceiving phonemic and prosodic cues. Auditory training on phonemic elements improves word recognition for some listeners. Little is known, however, about the potential benefits of prosodic training, or the degree to which individual differences in cue use affect outcomes.

The present study used simulated cochlear implant stimulation to examine the effects of phonemic and prosodic training on lexical segmentation. Participants completed targeted training with either phonemic or prosodic cues, and received passive exposure to the non-targeted cue. Results show that acuity to the targeted cue improved after training. In addition, both targeted attention and passive exposure to prosodic features led to increased use of these cues for lexical segmentation. Individual differences in degree and source of benefit point to the importance of personalizing clinical intervention to increase flexible use of a range of perceptual strategies for understanding speech.
ContributorsHelms Tillery, Augusta Katherine (Author) / Liss, Julie M. (Thesis advisor) / Azuma, Tamiko (Committee member) / Brown, Christopher A. (Committee member) / Dorman, Michael F. (Committee member) / Utianski, Rene L. (Committee member) / Arizona State University (Publisher)
Created2015
Description
Through decades of clinical progress, cochlear implants have brought the world of speech and language to thousands of profoundly deaf patients. However, the technology has many possible areas for improvement, including conveying non-linguistic cues, also called the indexical properties of speech. The field of sensory substitution, which conveys information from one sense through another, offers a potential avenue to further assist those with cochlear implants, in addition to the promise it holds for those without existing aids. A user study with a vibrotactile device was conducted to evaluate the effectiveness of this approach in an auditory gender discrimination task. Additionally, preliminary computational work is included that demonstrates the advantages and limitations encountered when expanding the complexity of future implementations.
ContributorsButts, Austin McRae (Author) / Helms Tillery, Stephen (Thesis advisor) / Berisha, Visar (Committee member) / Buneo, Christopher (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created2015
Description
The ability of cochlear implants (CI) to restore auditory function has advanced significantly in the past decade. Approximately 96,000 people in the United States benefit from these devices, which, by generating and transmitting electrical impulses, enable the brain to perceive sound. But because the cochlear implant market is predominantly Western, current CI characterization focuses primarily on improving the quality of American English. Only recently has research begun to evaluate CI performance with other languages, such as Mandarin Chinese, that rely on distinct spectral characteristics not present in English. Mandarin, a tonal language, utilizes four distinct pitch patterns which, when voiced with a syllable, convey different meanings for the same word. This presents a challenge to hearing research, as spectral (frequency-based) information like pitch is readily acknowledged to be significantly reduced by CI processing algorithms. Thus the present study sought to identify the intelligibility differences for English and Mandarin when processed using current CI strategies. The objective was to pinpoint any notable discrepancies in speech recognition using voice-coded (vocoded) audio that simulates CI-generated stimuli. This approach allowed 12 normal-hearing English listeners and 9 normal-hearing Mandarin listeners to participate in the experiment. The number of available frequency channels and the carrier type of excitation were varied in order to compare their effects on two cases of Mandarin intelligibility: Case 1, word recognition, and Case 2, combined word and tone recognition. The results indicated a statistically significant difference between English and Mandarin intelligibility for Condition 1 (8Ch-Sinewave Carrier, p=0.022) given Case 1, and for Condition 1 (8Ch-Sinewave Carrier, p=0.001) and Condition 3 (16Ch-Sinewave Carrier, p=0.001) given Case 2. The data suggest that the nature of the carrier type does have an effect on tonal language intelligibility and warrants further research as a design consideration for future cochlear implants.
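For readers unfamiliar with vocoder simulations of CI processing, the following is a minimal sketch of a sinewave-carrier channel vocoder of the kind described above, assuming NumPy and SciPy; the filter orders, 50 Hz envelope cutoff, and log-spaced channel edges are illustrative choices, not the study's exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def sine_vocoder(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Replace each band's fine structure with a tone at its center frequency."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    t = np.arange(len(x)) / fs
    env_lp = butter(2, 50.0, btype="low", fs=fs, output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_bp = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(band_bp, x)                      # analysis band
        envelope = sosfiltfilt(env_lp, np.abs(band))    # rectify and smooth
        carrier = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)
        out += np.clip(envelope, 0.0, None) * carrier   # modulated sine carrier
    return out
```

Changing `n_channels` from 8 to 16, or swapping the sine carrier for band-limited noise, would reproduce the kinds of condition contrasts the study manipulated.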
ContributorsSchiltz, Jessica Hammitt (Author) / Berisha, Visar (Thesis director) / Frakes, David (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created2015-05
Description
Working memory and cognitive functions contribute to speech recognition in normal hearing and hearing impaired listeners. In this study, auditory and cognitive functions are measured in young adult normal hearing, elderly normal hearing, and elderly cochlear implant subjects. The effects of age and hearing on the different measures are investigated. The correlations between auditory/cognitive functions and speech/music recognition are examined. The results may demonstrate which factors can better explain the variable performance across elderly cochlear implant users.
ContributorsKolberg, Courtney Elizabeth (Author) / Luo, Xin (Thesis director) / Azuma, Tamiko (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
Cochlear implants are electronic medical devices that create hearing capabilities in those with inner ear damage resulting in total or partial hearing loss. The decision to get a cochlear implant can be difficult and controversial, and cochlear implants have many physical and social impacts on their users. The aim of this study was to evaluate how patient narratives written by people with cochlear implants (or their caregivers) express issues of quality of life and personhood related to the use of this medical device. To answer this question, a content analysis of patient narratives was conducted using grounded theory and the constant comparative method. Two sensitizing concepts, quality of life and personhood, were used and became the two umbrella themes found in the narratives. Under the major theme of quality of life, the sub-themes that emerged were improved hearing, improved communication skills, and assimilation into the hearing world. Under the major theme of personhood, the sub-themes that emerged were confidence, self-image, and technology and the body. Another major theme, the importance of education, also emerged. In general, cochlear implant users and their caregivers expressed in their narratives that cochlear implants have positive effects on users' quality of life: almost all of the narrative writers reported improved hearing, improved communication skills, and better assimilation into the hearing world. In addition, it was found that cochlear implants do not have a significant effect on the personal identity of their users, though they do make users more confident. The majority of cochlear implant users expressed that they view the cochlear implant as an assistive tool rather than as a part of themselves. Lastly, there is a need for more awareness of, or access to, education and therapy for cochlear implant users.
ContributorsResnick, Jessica Helen (Author) / Helms Tillery, Stephen (Thesis director) / Robert, Jason (Committee member) / Piemonte, Nicole (Committee member) / School of International Letters and Cultures (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
When listeners hear sentences presented simultaneously, they are better able to discriminate between speakers when there is a difference in fundamental frequency (F0). This paper explores the use of a pulse train vocoder to simulate cochlear implant listening. A pulse train vocoder, rather than a noise or tonal vocoder, was used so that the fundamental frequency (F0) of speech would be well represented. The results of this experiment showed that listeners are able to use F0 information to aid in speaker segregation. As expected, recognition performance was poorest when there was no difference in F0 between speakers, and listeners performed better as the difference in F0 increased. The types of errors the listeners made were also analyzed: when an error was made in identifying the correct word from the target sentence, the response was usually (~60%) a word that had been uttered in the competing sentence.
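The defining element of the vocoder described above is its carrier: a train of pulses at the talker's F0, which preserves periodicity cues that noise or fixed-tone carriers discard. Here is a minimal sketch of such a carrier, assuming a fixed F0 for simplicity; real use would follow the talker's F0 contour frame by frame.

```python
import numpy as np

def pulse_train(f0_hz: float, duration_s: float, fs: int) -> np.ndarray:
    """Unit impulses spaced at the fundamental period (fs / f0 samples)."""
    train = np.zeros(int(duration_s * fs))
    period = int(round(fs / f0_hz))
    train[::period] = 1.0
    return train

# Illustrative carriers for two talkers separated by 4 semitones of F0.
fs = 16000
carrier_a = pulse_train(120.0, 1.0, fs)                  # talker 1 at 120 Hz
carrier_b = pulse_train(120.0 * 2 ** (4 / 12), 1.0, fs)  # talker 2, +4 semitones
```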
ContributorsStanley, Nicole Ernestine (Author) / Yost, William (Thesis director) / Dorman, Michael (Committee member) / Liss, Julie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Hugh Downs School of Human Communication (Contributor)
Created2013-05
Description
Vocal emotion production is important for social interactions in daily life. Previous studies found that pre-lingually deafened cochlear implant (CI) children without residual acoustic hearing had significant deficits in producing pitch cues for vocal emotions as compared to post-lingually deafened CI adults, normal-hearing (NH) children, and NH adults. In light of the importance of residual acoustic hearing for the development of vocal emotion production, this study tested whether pre-lingually deafened CI children with residual acoustic hearing produce pitch cues for vocal emotions similar to those of the other participant groups. Sixteen pre-lingually deafened CI children with residual acoustic hearing, nine post-lingually deafened CI adults with residual acoustic hearing, twelve NH children, and eleven NH adults were asked to produce ten semantically neutral sentences with a happy or a sad emotion. The results showed no significant group effect for the ratio of mean fundamental frequency (F0) or the ratio of F0 standard deviation between emotions. Instead, CI children showed a significantly greater intensity difference between emotions than CI adults, NH children, and NH adults. In CI children, the aided pure-tone average hearing threshold of the acoustic ear was correlated with the ratio of mean F0 and the ratio of duration between emotions. These results suggest that residual acoustic hearing with low-frequency pitch cues may facilitate the development of vocal emotion production in pre-lingually deafened CI children.
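For clarity on the acoustic measures named above, the following is a minimal sketch of the three between-emotion contrasts (ratio of mean F0, ratio of F0 standard deviation, and intensity difference); the F0 and intensity values are illustrative, not participant data, and would normally come from a pitch tracker.

```python
import numpy as np

def emotion_contrasts(f0_happy, f0_sad, db_happy, db_sad):
    """Between-emotion cue contrasts for one talker's happy/sad productions."""
    return {
        "mean_f0_ratio": np.mean(f0_happy) / np.mean(f0_sad),
        "f0_sd_ratio": np.std(f0_happy) / np.std(f0_sad),
        "intensity_diff_db": np.mean(db_happy) - np.mean(db_sad),
    }

# Illustrative contours (Hz) and levels (dB SPL) for one sentence pair.
print(emotion_contrasts(f0_happy=np.array([250.0, 280.0, 310.0, 270.0]),
                        f0_sad=np.array([210.0, 205.0, 215.0, 200.0]),
                        db_happy=np.array([68.0, 70.0]),
                        db_sad=np.array([63.0, 64.0])))
```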
ContributorsMacdonald, Andrina Elizabeth (Author) / Luo, Xin (Thesis director) / Pittman, Andrea (Committee member) / College of Health Solutions (Contributor, Contributor) / Barrett, The Honors College (Contributor)
Created2021-05