Matching Items (7)
Description
In this study, the Bark transform and the Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. Computer classification accuracy for the normalized data was then compared with human perceptual classification accuracy for the same vowels, to determine whether the results of these techniques correlated with the human data.
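The two normalization techniques named above have standard closed forms: the Bark transform maps frequency in Hz onto the auditory Bark scale (the Traunmüller 1990 approximation is shown here), and the Lobanov method z-scores each speaker's formant values. A minimal sketch, with function names that are mine rather than the thesis author's:

```python
import statistics

def bark_transform(hz):
    """Traunmueller (1990) approximation of the Bark scale."""
    return 26.81 * hz / (1960.0 + hz) - 0.53

def lobanov_normalize(formants):
    """Z-score one speaker's formant values (e.g., all of their F1 measurements)."""
    mean = statistics.mean(formants)
    sd = statistics.pstdev(formants)  # population SD; sample SD is also common
    return [(f - mean) / sd for f in formants]
```

Lobanov normalization is speaker-intrinsic (each talker is scaled by their own mean and spread), which is why it is often compared against scale transforms like Bark.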
Contributors: Jones, Hanna Vanessa (Author) / Liss, Julie (Thesis director) / Dorman, Michael (Committee member) / Borrie, Stephanie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Department of English (Contributor) / Speech and Hearing Science (Contributor)
Created: 2013-05
Description

The distinctions between the neural resources supporting speech and music comprehension have long been studied using contexts like aphasia and amusia, and neuroimaging in control subjects. While many models have emerged to describe the different networks uniquely recruited in response to speech and music stimuli, many questions remain, especially about left-hemispheric strokes that disrupt typical speech-processing brain networks and about how musical training might affect the brain networks recruited for speech after a stroke. Thus, our study aims to explore some questions related to these topics. We collected task-based functional MRI data from 12 subjects who had previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning so that we could examine differences in brain activations in response to speech and music. We hypothesized that, as a result of functional changes following the left-hemispheric stroke, particularly the loss of function in the left temporal lobe, speech stimuli would activate right frontal regions and music stimuli would activate the right superior temporal regions more than speech (neither finding has been reported in previous studies of control subjects). We also hypothesized that music stimuli would produce stronger activation in the right temporal cortex in participants with musical training than in those without. Our results indicate that speech stimuli compared to rest activated the anterior superior temporal gyrus bilaterally and the right inferior frontal lobe. Music stimuli compared to rest did not activate the brain bilaterally, but rather activated only the right middle temporal gyrus.
When the group analysis was performed with music experience as a covariate, we found that musical training did not affect activations to music stimuli specifically, but there was greater right hemisphere activation in several regions in response to speech stimuli as a function of more years of musical training. The results of the study agree with our hypotheses regarding the functional changes in the brain, but they conflict with our hypothesis about musical expertise. Overall, the study has generated interesting starting points for further explorations of how musical neural resources may be recruited for speech processing after damage to typical language networks.

Contributors: Karthigeyan, Vishnu R (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Harrington Bioengineering Program (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Prosodic features such as fundamental frequency (F0), intensity, and duration convey important information about speech intonation (e.g., is it a statement or a question?). Because cochlear implants (CIs) do not adequately encode pitch-related F0 cues, pre-lingually deaf pediatric CI users have poorer speech intonation perception and production than normal-hearing (NH) children. In contrast, post-lingually deaf adult CI users developed speech production skills via normal hearing before deafness and implantation. Further, combined electric hearing (via CI) and acoustic hearing (via hearing aid, HA) may improve CI users’ perception of pitch cues in speech intonation. Therefore, this study tested (1) whether post-lingually deaf adult CI users have speech intonation production similar to that of NH adults and (2) whether their speech intonation production improves with auditory feedback via CI+HA (i.e., bimodal hearing). Eight post-lingually deaf adult bimodal CI users and nine NH adults participated in this study. Ten question-and-answer dialogues with an experimenter were used to elicit 10 pairs of syntactically matched questions and statements from each participant. Bimodal CI users were tested under four hearing conditions: no-device (ND), HA, CI, and CI+HA. F0 change, intensity change, and duration ratio between the last two syllables of each utterance were analyzed to evaluate the quality of speech intonation production. The results showed no significant differences between CI and NH participants in any of the acoustic features of questions and statements. For CI participants, the CI+HA condition led to significantly greater F0 decreases for statements than the ND condition, while the ND condition led to significantly greater duration ratios for questions and statements.
These results suggest that bimodal CI users change their use of prosodic cues for speech intonation production in different hearing conditions, and that access to auditory feedback via CI+HA may improve their voice pitch control to produce more salient statement intonation contours.
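The three prosodic cues analyzed above are simple functions of per-syllable measurements of the final two syllables of each utterance. As an illustrative sketch (the data layout and function name are mine, not the study's analysis code):

```python
def intonation_cues(syllables):
    """Compute prosodic cues from the last two syllables of an utterance.

    `syllables` is a list of per-syllable measurements, each a dict:
    {"f0": mean F0 in Hz, "intensity": dB SPL, "duration": seconds}.
    """
    penult, final = syllables[-2], syllables[-1]
    return {
        # F0 falls toward the end of statements and rises for questions
        "f0_change": final["f0"] - penult["f0"],
        "intensity_change": final["intensity"] - penult["intensity"],
        "duration_ratio": final["duration"] / penult["duration"],
    }
```

A falling `f0_change` with a large final `duration_ratio` would mark a canonical statement contour; a rising `f0_change` marks a question.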
Contributors: Ai, Chang (Author) / Luo, Xin (Thesis advisor) / Daliri, Ayoub (Committee member) / Davidson, Lisa (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Speech and music are traditionally thought to be supported primarily by different hemispheres. A growing body of evidence suggests that speech and music often rely on shared resources in bilateral brain networks, though the right and left hemispheres exhibit some domain-specific specialization. While there is ample research investigating speech deficits in individuals with right-hemisphere lesions and amusia, fewer studies investigate amusia in individuals with left-hemisphere lesions and aphasia. Many of the fronto-temporo-parietal regions in the left hemisphere commonly associated with speech processing and production are also implicated in bilateral music-processing networks. The current study investigates the relationship between damage to specific regions of interest within these networks and an individual’s ability to match the pitch and rhythm of a presented melody. Twenty-seven participants with chronic stroke lesions completed a melody repetition task in which they hummed short novel piano melodies. Participants underwent structural MRI acquisition and were administered an extensive speech and cognitive battery. Pitch and rhythm scores were calculated by correlating participant responses with the target piano notes. Production errors were counted as trials in which the response did not match the target melody’s note count. Overall, performance varied widely, and pitch and rhythm scores were significantly correlated with each other. Working memory scores were significantly correlated with rhythm scores and production errors, but not with pitch scores. Broca’s area lesions were not associated with significant differences on any of the melody repetition measures, while left Heschl’s gyrus lesions were associated with worse performance on pitch, rhythm, and production errors. Lower rhythm scores were associated with lesions encompassing both the left anterior and posterior superior temporal gyrus, and with damage to the left planum temporale.
The other regions of interest were not consistently associated with poorer pitch scores or production errors. Although the present study has limitations, it suggests that lesions to left-hemisphere regions thought to affect only speech also affect musical pitch and rhythm processing. Therefore, amusia should not be characterized solely as a right-hemisphere disorder. Instead, the musical abilities of individuals with left-hemisphere stroke and aphasia should be characterized to better understand their deficits and mechanisms of impairment.
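The scoring procedure described above, correlating a participant's hummed pitches and durations with the target melody and counting note-count mismatches as production errors, can be sketched as follows (the names and data layout are illustrative assumptions, not the study's actual pipeline):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def score_trial(response, target):
    """Correlate a hummed response with the target melody.

    A note-count mismatch is flagged as a production error and left unscored.
    `response` and `target` are dicts with "pitches" (semitones) and
    "durations" (seconds), one entry per note.
    """
    if len(response["pitches"]) != len(target["pitches"]):
        return {"error": True, "pitch": None, "rhythm": None}
    return {
        "error": False,
        "pitch": pearson(response["pitches"], target["pitches"]),
        "rhythm": pearson(response["durations"], target["durations"]),
    }
```

Because correlation is invariant to shift and scale, a response hummed in a different octave or at a different tempo can still score perfectly, which matches the intent of separating contour accuracy from absolute accuracy.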
Contributors: Wojtaszek, Mallory (Author) / Rogalsky, Corianne (Thesis advisor) / Daliri, Ayoub (Committee member) / Patten, Kristopher (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Stroke is the leading cause of long-term disability in the U.S., with up to 60% of strokes causing speech loss. Individuals with severe stroke, who require the most frequent and intense speech therapy, often cannot adhere to treatments due to high cost and low success rates. Therefore, the ability to make functionally significant changes in individuals with severe post-stroke aphasia remains a key challenge for the rehabilitation community. This dissertation aimed to evaluate the efficacy of Startle Adjuvant Rehabilitation Therapy (START), a tele-enabled, low-cost treatment, to improve quality of life and speech in individuals with severe-to-moderate stroke. START is the exposure to startling acoustic stimuli during practice of motor tasks in individuals with stroke. START increases the speed and intensity of practice in severely impaired post-stroke reaching, eliciting muscle activity 2-3 times higher than maximum voluntary contraction. Voluntary reaching distance, onset, and final accuracy increased after a session of START, suggesting a rehabilitative effect. However, START has not been evaluated during impaired speech. The objective of this study was to determine whether impaired speech can be elicited by startling acoustic stimuli, and whether three days of START training can enhance clinical measures of moderate-to-severe post-stroke aphasia and apraxia of speech. This dissertation evaluates START in 42 individuals with post-stroke speech impairment via telehealth in a Phase 0 clinical trial. Results suggest that impaired speech can be elicited by startling acoustic stimuli and that START benefits individuals with severe-to-moderate post-stroke impairments in both linguistic and motor speech domains. This fills an important gap in aphasia care, as many speech therapies remain ineffective and financially inaccessible for patients with severe deficits.
START is effective, remotely delivered, and may serve as an affordable adjuvant to traditional therapy for those who have poor access to quality care.
Contributors: Swann, Zoe Elisabeth (Author) / Honeycutt, Claire F (Thesis advisor) / Daliri, Ayoub (Committee member) / Rogalsky, Corianne (Committee member) / Liss, Julie (Committee member) / Schaefer, Sydney (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Past studies have shown that auditory feedback plays an important role in maintaining the speech production system. Typically, speakers compensate for auditory feedback alterations when the alterations persist over time (auditory-motor adaptation). Our study focused on how to increase the rate of adaptation by using different auditory feedback conditions. For the present study, we recruited a total of 30 participants. We examined auditory-motor adaptation after participants completed three conditions: normal speaking, noise-masked speaking, and silent reading. The normal condition was used as a control condition. In the noise-masked condition, noise was added to the auditory feedback to completely mask speech outputs. In the silent reading condition, participants were instructed to silently read target words in their heads and then read the words out loud. We found that the learning rate in the noise-masked condition was lower than that in the normal condition. In contrast, participants adapted at a faster rate after they experienced the silent reading condition. Overall, this study demonstrated that adaptation rate can be modified by pre-exposing participants to different types of auditory-motor manipulations.
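One common way to quantify a learning (adaptation) rate like the one compared above, assuming per-trial compensation values are available, is the least-squares slope of compensation across perturbation trials. This is an illustrative sketch under that assumption, not the study's analysis code:

```python
def adaptation_rate(compensation):
    """Least-squares slope of compensation values (e.g., cents of F1 shift)
    across consecutive perturbation trials; higher slope = faster adaptation."""
    n = len(compensation)
    xs = range(n)                      # trial indices 0..n-1
    mx = (n - 1) / 2.0                 # mean of 0..n-1
    my = sum(compensation) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, compensation))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Comparing this slope between conditions (normal, noise-masked, silent reading) would capture the "faster rate" and "lower rate" contrasts reported above.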
Contributors: Navarrete, Karina (Author) / Daliri, Ayoub (Thesis director) / Peter, Beate (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

Aphasia is an acquired speech-language disorder caused by post-stroke damage to the left hemisphere of the brain. Treatment for individuals with these speech production impairments can be challenging for clinicians because there is high variability in language recovery after stroke and lesion size does not predict language outcome (Lazar et al., 2008). It is also important to note that adequate integration between the sensory and motor systems is critical for many aspects of fluent speech and for correcting speech errors. The present study investigates how delayed auditory feedback paradigms, which alter the time scale of sensorimotor interactions in speech, might be useful in characterizing the speech production impairments of individuals with aphasia. To this end, six individuals with aphasia and nine age-matched control subjects were exposed to delayed auditory feedback at four different intervals during a sentence reading task. Our study found that the aphasia group generated more errors in three of the four linguistic categories measured across all delay lengths, but that there was no significant main effect of delay or interaction between group and delay. Acoustic analyses revealed variability among scores within the control and aphasia groups on all phoneme types. For example, the acoustic analyses highlighted how the individual with conduction aphasia showed significantly larger amplitudes at all delays and significantly longer durations at no delay, but that significance diminished as delay periods increased. Overall, this study suggests that the effects of delayed auditory feedback vary across individuals with aphasia and provides a base of research to be built on by future testing of individuals with varying aphasia types and levels of severity.
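A delayed-auditory-feedback manipulation plays the speaker's own voice back after a fixed lag. As an illustrative sketch (not the study's implementation), the delay can be modeled offline as a zero-padded sample shift of the recorded signal:

```python
import numpy as np

def delay_feedback(signal, delay_ms, sr=44100):
    """Return `signal` delayed by `delay_ms` milliseconds at sample rate `sr`,
    as it would be heard in the speaker's headphones: silence for the delay
    interval, then the original signal."""
    shift = int(sr * delay_ms / 1000.0)
    return np.concatenate([np.zeros(shift, dtype=signal.dtype), signal])
```

In a real-time paradigm the same lag is produced with a ring buffer between the microphone and headphone streams; the offline version above is enough to see how the four delay intervals map to sample offsets.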

Contributors: Pettijohn, Madilyn (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Barrett, The Honors College (Contributor) / College of Health Solutions (Contributor)
Created: 2022-05