Matching Items (10)
Description
A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and interact more fully with their social environment. There has been a clinical shift toward bilateral placement of implants in both ears and toward bimodal placement of a hearing aid in the contralateral ear when residual hearing is present. However, there is potentially more to speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal-hearing listeners, vision plays a role in perception; Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how exactly vision benefits bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases, previously generated by Liss et al. (1998), in auditory and audiovisual conditions. The participants recorded their perception of the input. Data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision improved speech perception for both bilateral and bimodal cochlear implant participants. Each group showed a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and made greater use of syllabic stress cues in lexical segmentation. These results suggest vision may benefit bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. Vision did not, however, provide the bimodal participants significantly increased access to place and stress cues, so the exact mechanism by which bimodal implant users improved speech perception with the addition of vision remains unknown. These results point to the complexities of audiovisual integration during speech perception and to the need for continued research on the benefit vision provides to bilateral and bimodal cochlear implant users.
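As a rough illustration of the percent-words-correct measure mentioned in this abstract, the following sketch scores a listener transcript against a target phrase. The phrase, the whitespace tokenization, and the one-credit-per-word rule are illustrative assumptions, not the study's actual scoring protocol.

```python
def percent_words_correct(target: str, response: str) -> float:
    """Score a listener transcript against the target phrase.

    Counts how many target words appear in the response, crediting
    each response word at most once, and returns a percentage.
    """
    target_words = target.lower().split()
    response_words = response.lower().split()
    correct = 0
    for word in target_words:
        if word in response_words:
            response_words.remove(word)  # each response word counts once
            correct += 1
    return 100.0 * correct / len(target_words)

# Three of five target words recovered -> 60.0
print(percent_words_correct("address her meeting time drawn",
                            "address a meeting time down"))
```

A real protocol would also need rules for morphological variants and homophones; those details are omitted here.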
Contributors: Ludwig, Cimarron (Author) / Liss, Julie (Thesis advisor) / Dorman, Michael (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
In the noise and commotion of daily life, people achieve effective communication partly because spoken messages are replete with redundant information. Listeners exploit available contextual, linguistic, phonemic, and prosodic cues to decipher degraded speech. When other cues are absent or ambiguous, phonemic and prosodic cues are particularly important because they help identify word boundaries, a process known as lexical segmentation. Individuals vary in the degree to which they rely on phonemic or prosodic cues for lexical segmentation in degraded conditions.

Deafened individuals who use a cochlear implant have diminished access to fine frequency information in the speech signal, and show resulting difficulty perceiving phonemic and prosodic cues. Auditory training on phonemic elements improves word recognition for some listeners. Little is known, however, about the potential benefits of prosodic training, or the degree to which individual differences in cue use affect outcomes.

The present study used simulated cochlear implant stimulation to examine the effects of phonemic and prosodic training on lexical segmentation. Participants completed targeted training with either phonemic or prosodic cues, and received passive exposure to the non-targeted cue. Results show that acuity to the targeted cue improved after training. In addition, both targeted attention and passive exposure to prosodic features led to increased use of these cues for lexical segmentation. Individual differences in degree and source of benefit point to the importance of personalizing clinical intervention to increase flexible use of a range of perceptual strategies for understanding speech.
Contributors: Helms Tillery, Augusta Katherine (Author) / Liss, Julie M. (Thesis advisor) / Azuma, Tamiko (Committee member) / Brown, Christopher A. (Committee member) / Dorman, Michael F. (Committee member) / Utianski, Rene L. (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Distorted vowel production is a hallmark characteristic of dysarthric speech, irrespective of the underlying neurological condition or dysarthria diagnosis. A variety of acoustic metrics have been used to study the nature of vowel production deficits in dysarthria; however, not all demonstrate sensitivity to the exhibited deficits. Less attention has been paid to quantifying the vowel production deficits associated with the specific dysarthrias. Attempts to characterize the relationship between naturally degraded vowel production in dysarthria and overall intelligibility have met with mixed results, leading some to question the nature of this relationship. It has been suggested that aberrant vowel acoustics may be an index of overall severity of the impairment and not an "integral component" of the intelligibility deficit. A limitation of previous work detailing the perceptual consequences of disordered vowel acoustics is that overall intelligibility, not vowel identification accuracy, has been the perceptual measure of interest. A series of three experiments was conducted to address the problems outlined herein. The goals of the first experiment were to identify subsets of vowel metrics that reliably distinguish speakers with dysarthria from non-disordered speakers and differentiate the dysarthria subtypes. Vowel metrics that capture vowel centralization and reduced spectral distinctiveness among vowels differentiated dysarthric from non-disordered speakers. Vowel metrics generally failed to differentiate speakers according to their dysarthria diagnosis. The second and third experiments were conducted to evaluate the relationship between degraded vowel acoustics and the resulting percept. In the second experiment, correlation and regression analyses revealed that vowel metrics capturing vowel centralization, vowel distinctiveness, and movement of the second formant frequency were most predictive of vowel identification accuracy and overall intelligibility.
The third experiment was conducted to evaluate the extent to which the nature of the acoustic degradation predicts the resulting percept. Results suggest distinctive vowel tokens are better identified and, likewise, better-identified tokens are more distinctive. Further, an above-chance level of agreement between vowel misclassification and misidentification errors was demonstrated for all vowels, suggesting degraded vowel acoustics are not merely an index of severity in dysarthria, but rather are an integral component of the resultant intelligibility disorder.
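The vowel centralization metrics referred to above are typically computed from corner-vowel formants; one common instance is the quadrilateral vowel space area. The sketch below uses the shoelace formula, with illustrative formant values rather than the study's data.

```python
def vowel_space_area(corners):
    """Quadrilateral vowel space area via the shoelace formula.

    `corners` is a list of (F1, F2) pairs in Hz for the corner
    vowels, ordered around the perimeter (e.g. /i/, /ae/, /a/, /u/).
    Smaller areas indicate more centralized (less distinct) vowels.
    """
    area = 0.0
    n = len(corners)
    for i in range(n):
        f1_a, f2_a = corners[i]
        f1_b, f2_b = corners[(i + 1) % n]
        area += f1_a * f2_b - f1_b * f2_a
    return abs(area) / 2.0

# Illustrative corner vowels (F1, F2) in Hz for one speaker
corners = [(270, 2290), (660, 1720), (730, 1090), (300, 870)]
print(vowel_space_area(corners))  # area in Hz^2
```

Comparing such areas across talkers would show centralized (dysarthric) speakers clustering at smaller values, which is the pattern the first experiment's metrics are designed to capture.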
Contributors: Lansford, Kaitlin L (Author) / Liss, Julie M (Thesis advisor) / Dorman, Michael F. (Committee member) / Azuma, Tamiko (Committee member) / Lotto, Andrew J (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Otoacoustic emissions (OAEs) are soft sounds generated by the inner ear and can be recorded within the ear canal. Since OAEs can reflect the functional status of the inner ear, OAE measurements have been widely used for hearing loss screening in the clinic. However, there are limitations in current clinical OAE measurements, such as the restricted frequency range, low efficiency and inaccurate calibration. In this dissertation project, a new method of OAE measurement which used a swept tone to evoke the stimulus frequency OAEs (SFOAEs) was developed to overcome the limitations of current methods. In addition, an in-situ calibration was applied to equalize the spectral level of the swept-tone stimulus at the tympanic membrane (TM). With this method, SFOAEs could be recorded with high resolution over a wide frequency range within one or two minutes. Two experiments were conducted to verify the accuracy of the in-situ calibration and to test the performance of the swept-tone SFOAEs. In experiment I, the calibration of the TM sound pressure was verified in both acoustic cavities and real ears by using a second probe microphone. In addition, the benefits of the in-situ calibration were investigated by measuring OAEs under different calibration conditions. Results showed that the TM pressure could be predicted correctly, and the in-situ calibration provided the most reliable results in OAE measurements. In experiment II, a three-interval paradigm with a tracking-filter technique was used to record the swept-tone SFOAEs in 20 normal-hearing subjects. The test-retest reliability of the swept-tone SFOAEs was examined using a repeated-measure design under various stimulus levels and durations. The accuracy of the swept-tone method was evaluated by comparisons with a standard method using discrete pure tones. Results showed that SFOAEs could be reliably and accurately measured with the swept-tone method. 
Compared with the pure-tone approach, the swept-tone method showed significantly improved efficiency. The swept-tone SFOAEs with in-situ calibration may be an alternative to current clinical OAE measurements, allowing more detailed evaluation of inner ear function and more accurate diagnosis.
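The swept-tone stimulus described here can be generated as a logarithmic chirp, which spends equal time per octave. A minimal sketch using SciPy, with illustrative sample rate, sweep range, and duration rather than the study's actual parameters:

```python
import numpy as np
from scipy.signal import chirp

fs = 44100          # sample rate in Hz (illustrative)
duration = 60.0     # a one-minute sweep
t = np.arange(int(fs * duration)) / fs

# Logarithmic sweep from 0.5 to 8 kHz; in a real SFOAE measurement
# this waveform would then be shaped by the in-situ calibration so
# the level at the tympanic membrane stays flat across frequency.
stimulus = chirp(t, f0=500, f1=8000, t1=duration, method="logarithmic")
```

The tracking-filter analysis mentioned in the abstract would then follow the instantaneous frequency of this sweep to extract the SFOAE at each moment; that step is not shown here.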
Contributors: Chen, Shixiong (Author) / Bian, Lin (Thesis advisor) / Yost, William (Committee member) / Azuma, Tamiko (Committee member) / Dorman, Michael (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Working memory and cognitive functions contribute to speech recognition in normal hearing and hearing impaired listeners. In this study, auditory and cognitive functions are measured in young adult normal hearing, elderly normal hearing, and elderly cochlear implant subjects. The effects of age and hearing on the different measures are investigated. The correlations between auditory/cognitive functions and speech/music recognition are examined. The results may demonstrate which factors can better explain the variable performance across elderly cochlear implant users.
Contributors: Kolberg, Courtney Elizabeth (Author) / Luo, Xin (Thesis director) / Azuma, Tamiko (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
In this study, the Bark transform and the Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. The computer classification accuracy of these normalized data was then compared to the accuracy of human perceptual classification of the same vowels. The results were analyzed to determine whether the two normalization techniques correlated with the human data.
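Both normalization procedures named in this abstract have standard closed forms: the Bark transform warps Hz onto an auditory scale (Traunmüller's formula is used below), and the Lobanov method z-scores each formant within a speaker. A minimal sketch:

```python
import numpy as np

def bark(f_hz):
    """Hz-to-Bark transform (Traunmueller, 1990)."""
    f = np.asarray(f_hz, dtype=float)
    return 26.81 * f / (1960.0 + f) - 0.53

def lobanov(formants):
    """Lobanov normalization: z-score each formant within one speaker.

    `formants` is an (n_tokens, n_formants) array for a single
    speaker; each column (F1, F2, ...) is standardized to mean 0
    and standard deviation 1, removing speaker-specific vocal
    tract differences before classification.
    """
    f = np.asarray(formants, dtype=float)
    return (f - f.mean(axis=0)) / f.std(axis=0)
```

After either transform, the normalized (F1, F2) tokens can be fed to any classifier for comparison against listeners' vowel identifications.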
Contributors: Jones, Hanna Vanessa (Author) / Liss, Julie (Thesis director) / Dorman, Michael (Committee member) / Borrie, Stephanie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Department of English (Contributor) / Speech and Hearing Science (Contributor)
Created: 2013-05
Description
Prosodic features such as fundamental frequency (F0), intensity, and duration convey important information about speech intonation (e.g., whether an utterance is a statement or a question). Because cochlear implants (CIs) do not adequately encode pitch-related F0 cues, pre-lingually deaf pediatric CI users have poorer speech intonation perception and production than normal-hearing (NH) children. In contrast, post-lingually deaf adult CI users developed their speech production skills via normal hearing before deafness and implantation. Further, combined electric hearing (via CI) and acoustic hearing (via hearing aid, HA) may improve CI users' perception of pitch cues in speech intonation. Therefore, this study tested (1) whether post-lingually deaf adult CI users have speech intonation production similar to that of NH adults and (2) whether their speech intonation production improves with auditory feedback via CI+HA (i.e., bimodal hearing). Eight post-lingually deaf adult bimodal CI users and nine NH adults participated in this study. Ten question-and-answer dialogues with an experimenter were used to elicit ten pairs of syntactically matched questions and statements from each participant. Bimodal CI users were tested under four hearing conditions: no-device (ND), HA, CI, and CI+HA. F0 change, intensity change, and duration ratio between the last two syllables of each utterance were analyzed to evaluate the quality of speech intonation production. The results showed no significant differences between CI and NH participants in any of the acoustic features of questions and statements. For CI participants, the CI+HA condition led to significantly greater F0 decreases in statements than the ND condition, while the ND condition led to significantly greater duration ratios for questions and statements.
These results suggest that bimodal CI users change their use of prosodic cues for speech intonation production in different hearing conditions, and that access to auditory feedback via CI+HA may improve their voice pitch control, producing more salient statement intonation contours.
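The three utterance-final measures used in this study can be written down directly. A hedged sketch, assuming per-syllable F0 (Hz), intensity (dB), and duration (s) have already been extracted for the last two syllables of an utterance; the values below are illustrative:

```python
def intonation_measures(penult, final):
    """Prosodic change across the last two syllables of an utterance.

    Each argument is a dict with 'f0' (Hz), 'intensity' (dB), and
    'duration' (s) for one syllable. A rising F0 change suggests a
    question contour; a falling F0 change suggests a statement.
    """
    return {
        "f0_change": final["f0"] - penult["f0"],
        "intensity_change": final["intensity"] - penult["intensity"],
        "duration_ratio": final["duration"] / penult["duration"],
    }

# Illustrative statement-final fall: F0 and intensity drop,
# and the final syllable lengthens.
m = intonation_measures(
    {"f0": 180.0, "intensity": 68.0, "duration": 0.20},
    {"f0": 150.0, "intensity": 64.0, "duration": 0.30},
)
```

Comparing these measures across the ND, HA, CI, and CI+HA conditions is then a matter of repeating the extraction per utterance and testing for condition effects.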
Contributors: Ai, Chang (Author) / Luo, Xin (Thesis advisor) / Daliri, Ayoub (Committee member) / Davidson, Lisa (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Speech and music are traditionally thought to be supported primarily by different hemispheres. A growing body of evidence suggests that speech and music often rely on shared resources in bilateral brain networks, though the right and left hemispheres exhibit some domain-specific specialization. While there is ample research investigating speech deficits in individuals with right hemisphere lesions and amusia, fewer studies investigate amusia in individuals with left hemisphere lesions and aphasia. Many of the fronto-temporo-parietal regions in the left hemisphere commonly associated with speech processing and production are also implicated in bilateral music processing networks. The current study investigates the relationship between damage to specific regions of interest within these networks and an individual's ability to successfully match the pitch and rhythm of a presented melody. Twenty-seven participants with chronic stroke lesions were given a melody repetition task in which they hummed short novel piano melodies. Participants underwent structural MRI acquisition and were administered an extensive speech and cognitive battery. Pitch and rhythm scores were calculated by correlating participant responses with the target piano notes. Production errors were calculated by counting trials with responses that do not match the target melody's note count. Overall, performance varied widely, and pitch and rhythm scores were significantly correlated. Working memory scores were significantly correlated with rhythm scores and production errors, but not pitch scores. Broca's area lesions were not associated with significant differences in any of the melody repetition measures, while left Heschl's gyrus lesions were associated with worse performance on pitch, rhythm, and production errors. Lower rhythm scores were also associated with lesions encompassing both the left anterior and posterior superior temporal gyrus, and with damage to the left planum temporale.
The other regions of interest were not consistently associated with poorer pitch scores or production errors. Although it has limitations, the present study suggests that lesions to left hemisphere regions thought to affect only speech also affect musical pitch and rhythm processing. Therefore, amusia should not be characterized solely as a right hemisphere disorder. Instead, the musical abilities of individuals with left hemisphere stroke and aphasia should be characterized to better understand their deficits and mechanisms of impairment.
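The pitch-score and production-error measures described above can be sketched as follows, assuming both the target melody and the pitch-tracked hummed response are available as MIDI note sequences; this correlation-based scoring is an illustrative reading of the abstract, not the study's exact pipeline:

```python
import numpy as np

def melody_scores(target_notes, response_notes):
    """Score a hummed response against a target melody.

    Both inputs are sequences of MIDI note numbers. A response with
    a different note count is a production error and gets no pitch
    score; otherwise the pitch score is the Pearson correlation
    between the target and response pitch contours.
    """
    if len(target_notes) != len(response_notes):
        return {"production_error": True, "pitch_score": None}
    r = np.corrcoef(target_notes, response_notes)[0, 1]
    return {"production_error": False, "pitch_score": float(r)}
```

One useful property of correlation scoring is transposition invariance: a participant who hums the contour accurately but in a different key (every note shifted by the same interval) still receives a pitch score of 1.0.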
Contributors: Wojtaszek, Mallory (Author) / Rogalsky, Corianne (Thesis advisor) / Daliri, Ayoub (Committee member) / Patten, Kristopher (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
This study focuses on the properties of binaural beats (BBs) compared to monaural beats (MBs) and their steady-state response at the level of the Superior Olivary Complex (SOC). An auditory nerve simulator was used to model the response of the SOC. The simulator was fed either BB or MB stimuli to compare the SOC responses, at beat frequencies of twenty, forty, and sixty hertz. The SOC response envelopes for both types of beats were correlated with the waveform resulting from adding the two tones together. The highest correlation for BBs was found at forty hertz, and for MBs at sixty hertz. A Fast Fourier Transform (FFT) was also computed on the stimulus envelopes and the SOC response envelopes. For the BB presentation, the FFT showed that the envelopes of the original stimuli contained no difference frequency; however, the difference frequency was present in the binaural SOC response envelope. For the MBs, the difference frequency was present in both the stimulus and the monaural SOC response envelope.
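The stimulus-envelope result reported here follows directly from the signals themselves: mixing two tones in one channel produces an amplitude envelope at their difference frequency, while each single tone of a binaural pair has a flat envelope. A minimal sketch (tone frequencies are illustrative, not the study's stimuli):

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000
t = np.arange(fs) / fs  # 1 second

# Monaural beat: both tones mixed in the same channel (40 Hz difference)
mono = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 540 * t)
# Binaural beat: each ear receives only ONE of the two tones
left = np.sin(2 * np.pi * 500 * t)

def envelope_peak_hz(x, fs):
    """Dominant non-DC frequency of a signal's amplitude envelope."""
    env = np.abs(hilbert(x))            # Hilbert envelope
    spectrum = np.abs(np.fft.rfft(env - env.mean()))
    return np.fft.rfftfreq(len(env), 1 / fs)[np.argmax(spectrum)]

print(envelope_peak_hz(mono, fs))  # ~40 Hz: the difference frequency
# The single-tone envelope is flat, so any difference frequency in the
# binaural SOC response must arise from binaural interaction.
```

This is why a difference frequency in the binaural SOC response envelope, absent from either ear's stimulus envelope, is evidence of central (binaural) processing.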
Contributors: Crawford, Taylor Janay (Author) / Brewer, Gene (Thesis advisor) / Zhou, Yi (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Past studies have shown that auditory feedback plays an important role in maintaining the speech production system. Typically, speakers compensate for auditory feedback alterations when the alterations persist over time (auditory-motor adaptation). Our study focused on how to increase the rate of adaptation by using different auditory feedback conditions. For the present study, we recruited a total of 30 participants. We examined auditory-motor adaptation after participants completed three conditions: normal speaking, noise-masked speaking, and silent reading. The normal condition was used as a control. In the noise-masked condition, noise was added to the auditory feedback to completely mask speech output. In the silent reading condition, participants were instructed to silently read target words in their heads, then read the words out loud. We found that the learning rate in the noise-masked condition was lower than in the normal condition. In contrast, participants adapted at a faster rate after they experienced the silent reading condition. Overall, this study demonstrated that the adaptation rate can be modified by pre-exposing participants to different types of auditory-motor manipulations.
Contributors: Navarrete, Karina (Author) / Daliri, Ayoub (Thesis director) / Peter, Beate (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05