Matching Items (11)

The Impact of Cochlear Implants on the Quality of Life and Personhood of Cochlear Implant Users as Expressed In Patient Narratives

Description

Cochlear implants are electronic medical devices that create hearing capabilities in those with inner ear damage resulting in total or partial hearing loss. The decision to get a cochlear implant can be difficult and controversial. Cochlear implants have many physical and social impacts on their users. The aim of this study was to evaluate how patient narratives written by people with cochlear implants (or their caregivers) express issues of quality of life and personhood related to the use of this medical device. The question was addressed through a content analysis of patient narratives, conducted using grounded theory and the constant comparative method. Two sensitizing concepts, quality of life and personhood, were used and became the large umbrella themes found in the narratives. Under the major theme of quality of life, the sub-themes that emerged were improved hearing, improved communication skills, and assimilation into the hearing world. Under the major theme of personhood, the sub-themes that emerged were confidence, self-image, and technology and the body. Another major theme, the importance of education, also emerged. In general, cochlear implant users and their caregivers expressed in their narratives that cochlear implants have positive effects on quality of life: almost all of the narrative writers reported improved hearing, improved communication skills, and better assimilation into the hearing world. In addition, it was found that cochlear implants do not have a significant effect on the personal identity of their users, though they do make users more confident. The majority of cochlear implant users expressed that they view the device as an assistive tool they use rather than as a part of themselves. Lastly, there is a need for more awareness of, and access to, education and therapy for cochlear implant users.

Date Created
  • 2016-05

Cognitive and Auditory Factors for Speech and Music Perception in Elderly Adult Cochlear Implant Users

Description

Working memory and cognitive functions contribute to speech recognition in normal hearing and hearing impaired listeners. In this study, auditory and cognitive functions are measured in young adult normal hearing, elderly normal hearing, and elderly cochlear implant subjects. The effects of age and hearing on the different measures are investigated. The correlations between auditory/cognitive functions and speech/music recognition are examined. The results may demonstrate which factors can better explain the variable performance across elderly cochlear implant users.

Date Created
  • 2018-05

Two-Sentence Recognition with a Pulse Train Vocoder

Description

When listeners hear sentences presented simultaneously, they are better able to discriminate between speakers when there is a difference in fundamental frequency (F0). This paper explores the use of a pulse train vocoder to simulate cochlear implant listening. A pulse train vocoder, rather than a noise or tonal vocoder, was used so that the fundamental frequency (F0) of speech would be well represented. The results of this experiment showed that listeners are able to use the F0 information to aid in speaker segregation. As expected, recognition performance was poorest when there was no difference in F0 between speakers, and listeners performed better as the difference in F0 increased. The types of errors that listeners made were also analyzed. The results show that when an error was made in identifying the correct word from the target sentence, the response was usually (~60%) a word that was uttered in the competing sentence.
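A pulse-train vocoder of the general kind described above can be sketched as follows. This is a minimal illustration, not the authors' actual processing chain: the channel count, filter order, band edges, and the use of a fixed F0 for the carrier are all assumptions made for brevity (a real implementation would track the talker's time-varying F0).

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def pulse_train_vocode(x, fs, f0, n_channels=8, lo=100.0, hi=7000.0):
    """Rough pulse-train vocoder sketch: split speech into log-spaced
    bands, extract each band's envelope, and re-impose it on a pulse
    train whose rate follows F0 (held constant here for simplicity)."""
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    n = len(x)
    # Periodic pulse-train carrier at the (assumed fixed) fundamental.
    carrier = np.zeros(n)
    period = int(round(fs / f0))
    carrier[::period] = 1.0
    out = np.zeros(n)
    for ch in range(n_channels):
        sos = butter(4, [edges[ch], edges[ch + 1]], btype="band",
                     fs=fs, output="sos")
        band = sosfilt(sos, x)
        env = np.abs(hilbert(band))          # envelope of the analysis band
        out += sosfilt(sos, env * carrier)   # carrier filtered back into band
    return out / (np.max(np.abs(out)) + 1e-12)
```

Because the carrier is periodic at F0, the resynthesized speech preserves a pitch percept that noise-carrier vocoders discard, which is what makes this carrier type suited to a speaker-segregation experiment.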

Date Created
  • 2013-05

Differences in intelligibility of vocoded speech for adult Mandarin and English speakers

Description

The ability of cochlear implants (CI) to restore auditory function has advanced significantly in the past decade. Approximately 96,000 people in the United States benefit from these devices, which, by generating and transmitting electrical impulses, enable the brain to perceive sound. But due to the predominantly Western cochlear implant market, current CI characterization primarily focuses on improving the quality of American English. Only recently has research begun to evaluate CI performance using other languages such as Mandarin Chinese, which rely on distinct spectral characteristics not present in English. Mandarin, a tonal language, utilizes four distinct pitch patterns that, when voiced over a syllable, convey different meanings for the same word. This presents a challenge to hearing research, as spectral (frequency-based) information like pitch is readily acknowledged to be significantly reduced by CI processing algorithms. Thus, the present study sought to identify the intelligibility differences between English and Mandarin when processed using current CI strategies. The objective was to pinpoint any notable discrepancies in speech recognition, using voice-coded (vocoded) audio that simulates CI-generated stimuli. Twelve normal-hearing English speakers and nine normal-hearing Mandarin listeners participated in the experiment. The number of frequency channels available and the carrier type of excitation were varied in order to compare their effects on two cases of Mandarin intelligibility: Case 1) word recognition and Case 2) combined word and tone recognition. The results indicated a statistically significant difference between English and Mandarin intelligibility for Condition 1 (8Ch-Sinewave Carrier, p=0.022) given Case 1, and for Condition 1 (8Ch-Sinewave Carrier, p=0.001) and Condition 3 (16Ch-Sinewave Carrier, p=0.001) given Case 2. The data suggest that the nature of the carrier type does have an effect on tonal language intelligibility and warrants further research as a design consideration for future cochlear implants.
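A channel vocoder with sinewave carriers, of the general kind used to create such vocoded stimuli, can be sketched as follows. This is an illustrative simplification, not the study's actual processing: the band edges, filter order, and the choice of geometric-mean center frequencies for the carriers are assumptions. The `n_channels` parameter is what the study varied (e.g., 8 vs. 16) to manipulate spectral resolution.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def sinewave_vocode(x, fs, n_channels=8, lo=200.0, hi=7000.0):
    """Channel vocoder sketch with sinewave carriers: each band's
    envelope amplitude-modulates a tone at the band's center
    frequency. Channel count sets the available spectral resolution."""
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for ch in range(n_channels):
        sos = butter(4, [edges[ch], edges[ch + 1]], btype="band",
                     fs=fs, output="sos")
        env = np.abs(hilbert(sosfilt(sos, x)))   # band envelope
        fc = np.sqrt(edges[ch] * edges[ch + 1])  # geometric center frequency
        out += env * np.sin(2 * np.pi * fc * t)  # sinewave carrier
    return out / (np.max(np.abs(out)) + 1e-12)
```

With few channels, the pitch contour that carries Mandarin lexical tone is largely lost from the envelopes, which is why tone recognition is the harder of the two cases.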

Date Created
  • 2015-05

Vocal Emotion Production of Pre-lingually Deafened Cochlear Implant Children with Residual Acoustic Hearing

Description

Vocal emotion production is important for social interactions in daily life. Previous studies found that pre-lingually deafened cochlear implant (CI) children without residual acoustic hearing had significant deficits in producing pitch cues for vocal emotions as compared to post-lingually deafened CI adults, normal-hearing (NH) children, and NH adults. In light of the importance of residual acoustic hearing for the development of vocal emotion production, this study tested whether pre-lingually deafened CI children with residual acoustic hearing may produce similar pitch cues for vocal emotions as the other participant groups. Sixteen pre-lingually deafened CI children with residual acoustic hearing, nine post-lingually deafened CI adults with residual acoustic hearing, twelve NH children, and eleven NH adults were asked to produce ten semantically neutral sentences in happy or sad emotion. The results showed that there was no significant group effect for the ratio of mean fundamental frequency (F0) and the ratio of F0 standard deviation between emotions. Instead, CI children showed significantly greater intensity difference between emotions than CI adults, NH children, and NH adults. In CI children, aided pure-tone average hearing threshold of acoustic ear was correlated with the ratio of mean F0 and the ratio of duration between emotions. These results suggest that residual acoustic hearing with low-frequency pitch cues may facilitate the development of vocal emotion production in pre-lingually deafened CI children.
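The ratio-of-mean-F0 measure mentioned above can be illustrated with a rough computation. This is not the study's analysis pipeline; the autocorrelation pitch estimator and frame-based averaging here are hypothetical stand-ins for whatever acoustic analysis the authors actually used.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=75.0, fmax=500.0):
    """Crude autocorrelation pitch estimate for one voiced frame."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])  # best period within the search range
    return fs / lag

def emotion_f0_ratio(happy_frames, sad_frames, fs):
    """Ratio of mean F0 (happy / sad) across voiced frames, one of the
    between-emotion acoustic measures described in the abstract."""
    f0_happy = np.mean([estimate_f0(fr, fs) for fr in happy_frames])
    f0_sad = np.mean([estimate_f0(fr, fs) for fr in sad_frames])
    return f0_happy / f0_sad
```

A ratio near 1 indicates little pitch differentiation between the two emotions, which is the kind of deficit earlier work reported for CI children without residual acoustic hearing.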

Date Created
  • 2021-05

The Effect of Spectral Resolution of Auditory Feedback on Speech Production of Normal-hearing Listeners

Description

Cochlear implants (CI) successfully restore hearing sensation to profoundly deaf patients, but their performance is limited by poor spectral resolution. Acoustic CI simulation has been widely used in normal-hearing (NH) listeners to study the effect of spectral resolution on speech perception, while avoiding patient-related confounds. It is unclear how speech production may change with the degree of spectral degradation of auditory feedback as experienced by CI users. In this study, a real-time sinewave CI simulation was developed to provide NH subjects with auditory feedback of different spectral resolution (1, 2, 4, and 8 channels). NH subjects were asked to produce and identify vowels, as well as recognize sentences, while listening to the real-time CI simulation. The results showed that sentence recognition scores with the real-time CI simulation improved with more channels, similar to those with the traditional off-line CI simulation. Perception of a vowel continuum "HEAD"-"HAD" was near chance with 1, 2, and 4 channels, and greatly improved with 8 channels and full spectrum. The spectral resolution of auditory feedback did not significantly affect any acoustic feature of vowel production (e.g., vowel space area, mean amplitude, or the mean and variability of fundamental and formant frequencies). There was no correlation between vowel production and perception. The lack of effect of auditory feedback spectral resolution on vowel production was likely due to the limited exposure of NH subjects to the CI simulation and the limited frequency ranges covered by its sinewave carriers. Future studies should investigate the effects of various CI processing parameters on speech production using a noise-band CI simulation.

Date Created
  • 2019-05

Enhancing the perception of speech indexical properties of Cochlear implants through sensory substitution

Description

Through decades of clinical progress, cochlear implants have brought the world of speech and language to thousands of profoundly deaf patients. However, the technology has many possible areas for improvement, including conveying non-linguistic cues, also called indexical properties of speech. The field of sensory substitution, which relays information from one sense through another, offers a potential avenue to further assist those with cochlear implants, in addition to the promise it holds for those without existing aids. A user study with a vibrotactile device is evaluated to exhibit the effectiveness of this approach in an auditory gender discrimination task. Additionally, preliminary computational work is included that demonstrates advantages and limitations encountered when expanding the complexity of future implementations.

Date Created
  • 2015

Individual differences in the perceptual learning of degraded speech: implications for cochlear implant aural rehabilitation

Description

In the noise and commotion of daily life, people achieve effective communication partly because spoken messages are replete with redundant information. Listeners exploit available contextual, linguistic, phonemic, and prosodic cues to decipher degraded speech. When other cues are absent or ambiguous, phonemic and prosodic cues are particularly important because they help identify word boundaries, a process known as lexical segmentation. Individuals vary in the degree to which they rely on phonemic or prosodic cues for lexical segmentation in degraded conditions.

Deafened individuals who use a cochlear implant have diminished access to fine frequency information in the speech signal, and show resulting difficulty perceiving phonemic and prosodic cues. Auditory training on phonemic elements improves word recognition for some listeners. Little is known, however, about the potential benefits of prosodic training, or the degree to which individual differences in cue use affect outcomes.

The present study used simulated cochlear implant stimulation to examine the effects of phonemic and prosodic training on lexical segmentation. Participants completed targeted training with either phonemic or prosodic cues, and received passive exposure to the non-targeted cue. Results show that acuity to the targeted cue improved after training. In addition, both targeted attention and passive exposure to prosodic features led to increased use of these cues for lexical segmentation. Individual differences in degree and source of benefit point to the importance of personalizing clinical intervention to increase flexible use of a range of perceptual strategies for understanding speech.

Date Created
  • 2015

The impact of visual input on the ability of bilateral and bimodal cochlear implant users to accurately perceive words and phonemes in experimental phrases

Description

A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and more fully interact socially with their environment. There has been a clinical shift to the bilateral placement of implants in both ears and to the bimodal placement of a hearing aid in the contralateral ear if residual hearing is present. However, there is potentially more to subsequent speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal listeners vision plays a role, and Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how exactly vision provides benefit to bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases previously generated by Liss et al. (1998) in auditory and audiovisual conditions. The participants recorded their perception of the input. Data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision was found to improve speech perception for bilateral and bimodal cochlear implant participants. Each group experienced a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and demonstrated increased use of the syllabic stress cues used in lexical segmentation. These results therefore suggest vision might provide perceptual benefits for bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. On the other hand, vision did not provide the bimodal participants with significantly increased access to place and stress cues; the exact mechanism by which bimodal implant users improved speech perception with the addition of vision is therefore unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research regarding the benefit vision provides to bilateral and bimodal cochlear implant users.

Date Created
  • 2015

The value of two ears for sound source localization and speech understanding in complex listening environments: two cochlear implants vs. two partially hearing ears and one cochlear implant

Description

Two groups of cochlear implant (CI) listeners were tested for sound source localization and for speech recognition in complex listening environments. One group (n=11) wore bilateral CIs and, potentially, had access to interaural level difference (ILD) cues, but not interaural timing difference (ITD) cues. The second group (n=12) wore a single CI and had low-frequency, acoustic hearing in both the ear contralateral to the CI and in the implanted ear. These "hearing preservation" listeners, potentially, had access to ITD cues but not to ILD cues. At issue in this dissertation was the value of the two types of information about sound sources, ITDs and ILDs, for localization and for speech perception when speech and noise sources were separated in space. For Experiment 1, normal hearing (NH) listeners and the two groups of CI listeners were tested for sound source localization using a 13-loudspeaker array. For the NH listeners, the mean RMS error for localization was 7 degrees; for the bilateral CI listeners, 20 degrees; and for the hearing preservation listeners, 23 degrees. The scores for the two CI groups did not differ significantly. Thus, both CI groups showed equivalent, but poorer than normal, localization. This outcome, using the filtered noise bands for the normal hearing listeners, suggests ILD and ITD cues can support equivalent levels of localization. For Experiment 2, the two groups of CI listeners were tested for speech recognition in noise when the noise sources and targets were spatially separated in a simulated "restaurant" environment and in two versions of a "cocktail party" environment. At issue was whether either CI group would show benefits from binaural hearing, i.e., better performance when the noise and targets were separated in space. Neither of the CI groups showed spatial release from masking. However, both groups showed a significant binaural advantage (a combination of squelch and summation) that held when the target and noise were separated, indicating the presence of some binaural processing or "unmasking" of speech in noise. Finally, localization ability in Experiment 1 was not correlated with binaural advantage in Experiment 2.
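The RMS localization error reported for Experiment 1 is a straightforward computation over response and target loudspeaker azimuths; a minimal version, assuming both are given in degrees:

```python
import numpy as np

def rms_localization_error(responses_deg, targets_deg):
    """RMS error (degrees) between response and target azimuths,
    the localization metric used in Experiment 1."""
    r = np.asarray(responses_deg, dtype=float)
    t = np.asarray(targets_deg, dtype=float)
    return float(np.sqrt(np.mean((r - t) ** 2)))
```

Because errors are squared before averaging, occasional large confusions (e.g., front-back or far-lateral mistakes) dominate the score, which makes RMS error a stricter summary than mean absolute error.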

Date Created
  • 2013