Matching Items (8)
136347-Thumbnail Image.png
Description
The ability of cochlear implants (CIs) to restore auditory function has advanced significantly in the past decade. Approximately 96,000 people in the United States benefit from these devices, which, by generating and transmitting electrical impulses, enable the brain to perceive sound. But because the cochlear implant market is predominantly Western, current CI characterization focuses primarily on improving the quality of American English. Only recently has research begun to evaluate CI performance using other languages, such as Mandarin Chinese, that rely on distinct spectral characteristics not present in English. Mandarin, a tonal language, utilizes four distinct pitch patterns that convey different meanings for the same syllable. This presents a challenge for hearing research, as spectral (frequency-based) information such as pitch is widely acknowledged to be significantly reduced by CI processing algorithms. The present study therefore sought to identify intelligibility differences between English and Mandarin when processed using current CI strategies. The objective was to pinpoint notable discrepancies in speech recognition using voice-coded (vocoded) audio that simulates CI-generated stimuli. This approach allowed 12 normal-hearing English listeners and 9 normal-hearing Mandarin listeners to participate in the experiment. The number of frequency channels and the excitation carrier type were varied to compare their effects on two cases of Mandarin intelligibility: Case 1) word recognition and Case 2) combined word and tone recognition. The results indicated a statistically significant difference between English and Mandarin intelligibility for Condition 1 (8Ch-Sinewave Carrier, p=0.022) under Case 1, and for Condition 1 (8Ch-Sinewave Carrier, p=0.001) and Condition 3 (16Ch-Sinewave Carrier, p=0.001) under Case 2.
The data suggest that the carrier type affects tonal-language intelligibility and warrants further research as a design consideration for future cochlear implants.
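The channel-and-carrier manipulation described in this abstract can be illustrated with a minimal sinewave-vocoder sketch. This is a generic illustration, not the study's actual processing chain: the band edges, envelope smoother, and placement of each carrier at its band's geometric center are all assumed values.

```python
import numpy as np

def sinewave_vocode(signal, fs, n_channels, f_lo=200.0, f_hi=7000.0):
    """Crude sinewave vocoder: split the input into log-spaced bands,
    extract each band's amplitude envelope, and use that envelope to
    modulate a sine carrier at the band's geometric center."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    t = np.arange(len(signal)) / fs
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Brick-wall band-pass in the frequency domain (illustrative only)
        band_spec = np.where((freqs >= lo) & (freqs < hi), spectrum, 0)
        band = np.fft.irfft(band_spec, n=len(signal))
        env = np.abs(band)                            # rectify
        k = int(fs * 0.005)                           # ~5 ms moving average
        env = np.convolve(env, np.ones(k) / k, mode="same")
        carrier = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)
        out += env * carrier
    return out
```

A pulse-train or noise carrier could be swapped in for the sine to mimic the other carrier types compared in such studies.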
Contributors: Schiltz, Jessica Hammitt (Author) / Berisha, Visar (Thesis director) / Frakes, David (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2015-05
133858-Thumbnail Image.png
Description
Working memory and cognitive functions contribute to speech recognition in normal hearing and hearing impaired listeners. In this study, auditory and cognitive functions are measured in young adult normal hearing, elderly normal hearing, and elderly cochlear implant subjects. The effects of age and hearing on the different measures are investigated. The correlations between auditory/cognitive functions and speech/music recognition are examined. The results may demonstrate which factors can better explain the variable performance across elderly cochlear implant users.
Contributors: Kolberg, Courtney Elizabeth (Author) / Luo, Xin (Thesis director) / Azuma, Tamiko (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
133028-Thumbnail Image.png
Description
Previous studies have found that the detection of near-threshold stimuli is decreased immediately before movement and throughout movement production. This has been suggested to occur through the internal forward model processing an efference copy of the motor command and creating a prediction that is used to cancel out the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals for motor tasks and contexts related to the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning using light electrical stimulation below the lower lip, comparing perception across mixed speaking and silent reading conditions. Participants were asked to judge whether a constant near-threshold electrical stimulation (subject-specific intensity, 85% detected at rest) was present at different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves without any movement or sound. We found that detection of the stimulus was attenuated during speaking, while remaining at a constant level close to the perceptual threshold throughout silent reading. Attenuation was strongest during speech production and also appeared just prior to speech onset, during the planning period. This demonstrates that the responsiveness of the somatosensory system decreases significantly during speech production, and even in the milliseconds before speech is produced, which has implications for conditions with pronounced somatosensory deficits, such as stuttering and schizophrenia.
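The subject-specific calibration to an 85%-detected stimulation level can be approximated with a weighted up-down staircase. This is a generic psychophysics sketch, not the thesis's documented procedure; the starting level, step size, and trial count are assumptions.

```python
import random

def weighted_staircase(p_detect, target=0.85, start=1.0, down=0.05,
                       n_trials=200, rng=None):
    """Weighted up-down staircase: the up/down step-size ratio equals
    target/(1 - target), so the level converges near the intensity at
    which detection probability equals `target` (here 85%)."""
    rng = rng or random.Random(0)           # seeded for reproducibility
    up = down * target / (1.0 - target)     # larger up-step for high targets
    level = start
    for _ in range(n_trials):
        if rng.random() < p_detect(level):
            level -= down                   # detected: make it harder
        else:
            level += up                     # missed: make it easier
        level = max(level, 0.0)
    return level
```

`p_detect` stands in for the participant: any function mapping stimulus intensity to detection probability.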
Contributors: Mcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
135175-Thumbnail Image.png
Description
Cochlear implants are electronic medical devices that create hearing capabilities in those with inner ear damage resulting in total or partial hearing loss. The decision to get a cochlear implant can be difficult and controversial, and the devices have many physical and social impacts on their users. The aim of this study was to evaluate how patient narratives written by people with cochlear implants (or their caregivers) express issues of quality of life and personhood related to the use of this medical device. To answer this question, a content analysis of patient narratives was conducted using grounded theory and the constant comparative method. Two sensitizing concepts, quality of life and personhood, were used and became the large umbrella themes found in the narratives. Under the major theme of quality of life, the sub-themes that emerged were improved hearing, improved communication skills, and assimilation into the hearing world. Under the major theme of personhood, the sub-themes that emerged were confidence, self-image, and technology and the body. Another major theme, importance of education, also emerged. In general, cochlear implant users and their caregivers expressed in their narratives that cochlear implants have positive effects on users' quality of life: almost all of the narrative writers reported improved hearing, improved communication skills, and better assimilation into the hearing world. In addition, it was found that cochlear implants do not have a significant effect on the personal identity of cochlear implant users, though they do make them more confident. The majority of cochlear implant users expressed that they view the cochlear implant as an assistive tool they use rather than a part of themselves. Lastly, there is a need for more awareness of, and access to, education and therapy for cochlear implant users.
Contributors: Resnick, Jessica Helen (Author) / Helms Tillery, Stephen (Thesis director) / Robert, Jason (Committee member) / Piemonte, Nicole (Committee member) / School of International Letters and Cultures (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
137669-Thumbnail Image.png
Description
When listeners hear sentences presented simultaneously, they are better able to discriminate between speakers when there is a difference in fundamental frequency (F0). This paper explores the use of a pulse-train vocoder to simulate cochlear implant listening. A pulse-train vocoder, rather than a noise or tonal vocoder, was used so that the fundamental frequency (F0) of speech would be well represented. The results of this experiment showed that listeners are able to use the F0 information to aid in speaker segregation. As expected, recognition performance was poorest when there was no difference in F0 between speakers, and listeners performed better as the difference in F0 increased. The types of errors that listeners made were also analyzed. The results show that when an error was made in identifying the correct word from the target sentence, the response was usually (~60%) a word that was uttered in the competing sentence.
Contributors: Stanley, Nicole Ernestine (Author) / Yost, William (Thesis director) / Dorman, Michael (Committee member) / Liss, Julie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Hugh Downs School of Human Communication (Contributor)
Created: 2013-05
148400-Thumbnail Image.png
Description

The brain continuously monitors speech output to detect potential errors between its sensory prediction and its sensory production (Daliri et al., 2020). When the brain encounters an error, it generates a corrective motor response, usually in the opposite direction, to reduce the effect of the error. Previous studies have shown that the type of auditory error received may impact a participant’s corrective response. In this study, we examined whether participants respond differently to categorical or non-categorical errors. We applied two types of perturbation in real-time by shifting the first formant (F1) and second formant (F2) at three different magnitudes. The vowel /ɛ/ was shifted toward the vowel /æ/ in the categorical perturbation condition. In the non-categorical perturbation condition, the vowel /ɛ/ was shifted to a sound outside of the vowel quadrilateral (increasing both F1 and F2). Our results showed that participants responded to the categorical perturbation while they did not respond to the non-categorical perturbation. Additionally, we found that in the categorical perturbation condition, as the magnitude of the perturbation increased, the magnitude of the response increased. Overall, our results suggest that the brain may respond differently to categorical and non-categorical errors, and the brain is highly attuned to errors in speech.
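The two perturbation conditions can be sketched as shift directions in (F1, F2) space. The formant values below are rough illustrative averages, not the study's stimulus parameters, and the shift magnitudes are arbitrary.

```python
import numpy as np

# Rough (F1, F2) targets in Hz, for illustration only; real values vary by speaker.
EH = np.array([580.0, 1800.0])   # /ɛ/ as in "head"
AE = np.array([690.0, 1660.0])   # /æ/ as in "had"

def perturb(formants, condition, magnitude):
    """Shift (F1, F2) either toward /æ/ (categorical: lands on another
    vowel) or along a direction that raises both formants together
    (non-categorical: leaves the vowel quadrilateral)."""
    if condition == "categorical":
        direction = AE - EH                  # toward a real vowel category
    else:
        direction = np.array([1.0, 1.0])     # raise F1 and F2 together
    direction = direction / np.linalg.norm(direction)
    return formants + magnitude * direction
```

In the experiment itself the shift would be applied to the talker's audio in real time; this sketch only captures the geometry of the two error types.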

Contributors: Cincera, Kirsten Michelle (Author) / Daliri, Ayoub (Thesis director) / Azuma, Tamiko (Committee member) / School of Sustainability (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Vocal emotion production is important for social interactions in daily life. Previous studies found that pre-lingually deafened cochlear implant (CI) children without residual acoustic hearing had significant deficits in producing pitch cues for vocal emotions as compared to post-lingually deafened CI adults, normal-hearing (NH) children, and NH adults. In light of the importance of residual acoustic hearing for the development of vocal emotion production, this study tested whether pre-lingually deafened CI children with residual acoustic hearing may produce similar pitch cues for vocal emotions as the other participant groups. Sixteen pre-lingually deafened CI children with residual acoustic hearing, nine post-lingually deafened CI adults with residual acoustic hearing, twelve NH children, and eleven NH adults were asked to produce ten semantically neutral sentences in happy or sad emotion. The results showed that there was no significant group effect for the ratio of mean fundamental frequency (F0) and the ratio of F0 standard deviation between emotions. Instead, CI children showed significantly greater intensity difference between emotions than CI adults, NH children, and NH adults. In CI children, aided pure-tone average hearing threshold of acoustic ear was correlated with the ratio of mean F0 and the ratio of duration between emotions. These results suggest that residual acoustic hearing with low-frequency pitch cues may facilitate the development of vocal emotion production in pre-lingually deafened CI children.
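The pitch-based production measures compared across groups (the ratio of mean F0 and the ratio of F0 standard deviation between emotions) can be computed from per-utterance F0 tracks as follows. The example tracks in the test are made-up numbers for illustration, not data from the study.

```python
import numpy as np

def emotion_f0_ratios(f0_happy, f0_sad):
    """Between-emotion ratios (happy / sad) of mean F0 and of F0
    standard deviation, the two pitch cues described in the abstract.
    Each argument is a sequence of F0 estimates in Hz for one emotion."""
    f0_happy = np.asarray(f0_happy, dtype=float)
    f0_sad = np.asarray(f0_sad, dtype=float)
    ratio_mean = f0_happy.mean() / f0_sad.mean()
    ratio_sd = f0_happy.std() / f0_sad.std()
    return ratio_mean, ratio_sd
```

Ratios above 1 indicate higher and more variable pitch in the happy productions, the pattern typically expected for happy versus sad speech.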

Contributors: Macdonald, Andrina Elizabeth (Author) / Luo, Xin (Thesis director) / Pittman, Andrea (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
132359-Thumbnail Image.png
Description
The cochlear implant (CI) successfully restores hearing sensation to profoundly deaf patients, but its performance is limited by poor spectral resolution. Acoustic CI simulation has been widely used in normal-hearing (NH) listeners to study the effect of spectral resolution on speech perception while avoiding patient-related confounds. It is unclear how speech production may change with the degree of spectral degradation of auditory feedback as experienced by CI users. In this study, a real-time sinewave CI simulation was developed to provide NH subjects with auditory feedback of different spectral resolutions (1, 2, 4, and 8 channels). NH subjects were asked to produce and identify vowels, as well as recognize sentences, while listening to the real-time CI simulation. The results showed that sentence recognition scores with the real-time CI simulation improved with more channels, similar to those with the traditional off-line CI simulation. Perception of a vowel continuum from "HEAD" to "HAD" was near chance with 1, 2, and 4 channels, and greatly improved with 8 channels and full spectrum. The spectral resolution of auditory feedback did not significantly affect any acoustic feature of vowel production (e.g., vowel space area, mean amplitude, or the mean and variability of fundamental and formant frequencies). There was no correlation between vowel production and perception. The lack of effect of auditory feedback spectral resolution on vowel production was likely due to NH subjects' limited exposure to CI simulation and the limited frequency ranges covered by the sinewave carriers of the CI simulation. Future studies should investigate the effects of various CI processing parameters on speech production using a noise-band CI simulation.
Contributors: Perez Lustre, Sarahi (Author) / Luo, Xin (Thesis director) / Daliri, Ayoub (Committee member) / Division of Teacher Preparation (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05