Matching Items (7)

Description
In this study, the Bark transform and the Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. The computer classification accuracy of these normalized data was then compared to the human perceptual classification accuracy for the actual vowels. These results were then analyzed to determine whether the normalization techniques correlated with the human data.
Contributors: Jones, Hanna Vanessa (Author) / Liss, Julie (Thesis director) / Dorman, Michael (Committee member) / Borrie, Stephanie (Committee member) / Barrett, The Honors College (Contributor) / Department of Speech and Hearing Science (Contributor) / Department of English (Contributor) / Speech and Hearing Science (Contributor)
Created: 2013-05
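The two normalization techniques named in this abstract are standard and easy to sketch: the Bark transform warps frequency in Hz onto an auditory scale, and the Lobanov method z-scores each speaker's formants so that speakers can be compared on a common scale. A minimal sketch in Python, using Traunmüller's Bark approximation and hypothetical formant values (the study's actual data and classifier are not reproduced here):

```python
import numpy as np

def bark_transform(f_hz):
    # Traunmüller's approximation of the Bark scale.
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def lobanov_normalize(formants):
    # Z-score one speaker's formant measurements per formant
    # (Lobanov normalization): subtract the speaker's mean and
    # divide by the speaker's standard deviation.
    formants = np.asarray(formants, dtype=float)
    return (formants - formants.mean(axis=0)) / formants.std(axis=0)

# Hypothetical F1/F2 measurements (Hz) for one speaker's vowel tokens.
tokens = np.array([[600.0, 1200.0],
                   [500.0, 1500.0],
                   [700.0, 1100.0]])
z = lobanov_normalize(tokens)   # dimensionless z-scores
b = bark_transform(tokens)      # same shape, in Bark
```

Either representation could then be fed to a vowel classifier; the thesis compares such machine classification against human listeners.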
Description
During speech, the brain is constantly processing and monitoring speech output through the auditory feedback loop to ensure correct and accurate speech. If the speech signal is experimentally altered/perturbed while speaking, the brain compensates for the perturbations by changing speech output in the opposite direction of the perturbations. In this study, we designed an experiment that examined compensatory responses to unexpected vowel perturbations during speech. We applied two types of perturbations. In one condition, the vowel /ɛ/ was perturbed toward the vowel /æ/ by simultaneously shifting both the first formant (F1) and the second formant (F2) at 3 different levels (0.5 = small, 1 = medium, and 1.5 = large shifts). In another condition, the vowel /ɛ/ was perturbed by shifting F1 at 3 different levels (small, medium, and large shifts). Our results showed that there was a significant perturbation-type effect, with participants compensating more for the perturbation that shifted /ɛ/ toward /æ/. In addition, we found that there was a significant level effect, with the compensatory responses to level 0.5 being significantly smaller than the compensatory responses to levels 1 and 1.5, regardless of the perturbation pathway. We also found that responses to shift level 1 and shift level 1.5 did not differ. Overall, our results highlighted the importance of the auditory feedback loop during speech production and how the brain is more sensitive to auditory errors that change a vowel category (e.g., /ɛ/ to /æ/).
Contributors: Fitzgerald, Lacee (Author) / Daliri, Ayoub (Thesis director) / Rogalsky, Corianne (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
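The perturbation arithmetic described above is simple to illustrate. In practice the shift is applied to the speech signal in real time with dedicated software, but the target formants at each level can be sketched as interpolation along the /ɛ/→/æ/ direction. The formant values below are hypothetical, and treating level 1 as the full /ɛ/–/æ/ distance is an assumption:

```python
# Hypothetical mean formants in Hz; real values vary by speaker.
EH = {"F1": 580.0, "F2": 1800.0}   # /ɛ/
AE = {"F1": 700.0, "F2": 1700.0}   # /æ/

def perturb_toward(source, target, level):
    # Move each formant `level` of the way along the source-to-target
    # vector: 0.5 = small, 1.0 = medium, 1.5 = large (overshoot).
    return {f: source[f] + level * (target[f] - source[f]) for f in source}

for level in (0.5, 1.0, 1.5):
    shifted = perturb_toward(EH, AE, level)
```

Compensation would then be measured as the speaker's produced-formant change in the direction opposite to `shifted`.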
Description
Speech perception and production are bidirectionally related, and they influence each other. The purpose of this study was to better understand the relationship between speech perception and speech production. It is known that applying auditory perturbations during speech production causes subjects to alter their productions (e.g., change their formant frequencies). In other words, previous studies have examined the effects of altered speech perception on speech production. However, in this study, we examined potential effects of speech production on speech perception. Subjects completed a block of a categorical perception task, followed by a block of a speaking or a listening task, followed by another block of the categorical perception task. Subjects completed three blocks of the speaking task and three blocks of the listening task. In the three blocks of a given task (speaking or listening), auditory feedback was 1) normal, 2) altered to be less variable, or 3) altered to be more variable. Unlike previous studies, we used each subject’s own speech samples to generate speech stimuli for the perception task. For each categorical perception block, we calculated the subject’s psychometric function and determined the subject’s categorical boundary. The results showed that subjects’ perceptual boundaries remained stable in all conditions and all blocks. Overall, our results did not provide evidence for effects of speech production on speech perception.
Contributors: Daugherty, Allison (Author) / Daliri, Ayoub (Thesis director) / Rogalsky, Corianne (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
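A categorical boundary like the one measured here is typically estimated by fitting a sigmoid (psychometric function) to the proportion of one-category responses along a stimulus continuum and reading off its 50% point. A minimal sketch using a brute-force logistic fit; the continuum steps and response proportions below are hypothetical, not the study's data:

```python
import numpy as np

def fit_boundary(stimulus_steps, prop_responses):
    # Fit a two-parameter logistic psychometric function by brute-force
    # least squares and return the categorical boundary (its 50% point).
    x = np.asarray(stimulus_steps, dtype=float)
    y = np.asarray(prop_responses, dtype=float)
    best_err, best_x0 = np.inf, None
    for x0 in np.linspace(x.min(), x.max(), 201):   # candidate boundaries
        for k in np.linspace(0.1, 10.0, 100):       # candidate slopes
            pred = 1.0 / (1.0 + np.exp(-k * (x - x0)))
            err = np.sum((y - pred) ** 2)
            if err < best_err:
                best_err, best_x0 = err, x0
    return best_x0

# Hypothetical 7-step continuum and proportion of one-category responses.
steps = [1, 2, 3, 4, 5, 6, 7]
props = [0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.99]
boundary = fit_boundary(steps, props)
```

A shift in `boundary` between the pre- and post-task perception blocks would indicate a perceptual change; the study found the boundary stayed stable.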
Description

The brain continuously monitors speech output to detect potential errors between its sensory prediction and its sensory production (Daliri et al., 2020). When the brain encounters an error, it generates a corrective motor response, usually in the opposite direction, to reduce the effect of the error. Previous studies have shown that the type of auditory error received may impact a participant’s corrective response. In this study, we examined whether participants respond differently to categorical or non-categorical errors. We applied two types of perturbation in real-time by shifting the first formant (F1) and second formant (F2) at three different magnitudes. The vowel /ɛ/ was shifted toward the vowel /æ/ in the categorical perturbation condition. In the non-categorical perturbation condition, the vowel /ɛ/ was shifted to a sound outside of the vowel quadrilateral (increasing both F1 and F2). Our results showed that participants responded to the categorical perturbation while they did not respond to the non-categorical perturbation. Additionally, we found that in the categorical perturbation condition, as the magnitude of the perturbation increased, the magnitude of the response increased. Overall, our results suggest that the brain may respond differently to categorical and non-categorical errors, and the brain is highly attuned to errors in speech.

Contributors: Cincera, Kirsten Michelle (Author) / Daliri, Ayoub (Thesis director) / Azuma, Tamiko (Committee member) / School of Sustainability (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Prosodic features such as fundamental frequency (F0), intensity, and duration convey important information about speech intonation (i.e., is it a statement or a question?). Because cochlear implants (CIs) do not adequately encode pitch-related F0 cues, pre-lingually deaf pediatric CI users have poorer speech intonation perception and production than normal-hearing (NH) children. In contrast, post-lingually deaf adult CI users have developed speech production skills via normal hearing before deafness and implantation. Further, combined electric hearing (via CI) and acoustic hearing (via hearing aid, HA) may improve CI users’ perception of pitch cues in speech intonation. Therefore, this study tested (1) whether post-lingually deaf adult CI users have similar speech intonation production to NH adults and (2) whether their speech intonation production improves with auditory feedback via CI+HA (i.e., bimodal hearing). Eight post-lingually deaf adult bimodal CI users and nine NH adults participated in this study. Ten question-and-answer dialogues with an experimenter were used to elicit 10 pairs of syntactically matched questions and statements from each participant. Bimodal CI users were tested under four hearing conditions: no-device (ND), HA, CI, and CI+HA. F0 change, intensity change, and duration ratio between the last two syllables of each utterance were analyzed to evaluate the quality of speech intonation production. The results showed no significant differences between CI and NH participants in any of the acoustic features of questions and statements. For CI participants, the CI+HA condition led to significantly greater F0 decreases of statements than the ND condition, while the ND condition led to significantly greater duration ratios of questions and statements.
These results suggest that bimodal CI users change their use of prosodic cues for speech intonation production in different hearing conditions, and that access to auditory feedback via CI+HA may improve their voice pitch control to produce more salient statement intonation contours.
Contributors: Ai, Chang (Author) / Luo, Xin (Thesis advisor) / Daliri, Ayoub (Committee member) / Davidson, Lisa (Committee member) / Arizona State University (Publisher)
Created: 2022
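The three intonation measures used in this thesis reduce to simple arithmetic once each utterance's last two syllables have been segmented and their F0, intensity, and duration extracted (typically with a tool such as Praat). A sketch with hypothetical values; the segmentation and acoustic-tracking steps are not shown:

```python
def intonation_features(penult, final):
    # Each argument is a dict of pre-extracted values for one syllable:
    # mean F0 in Hz, mean intensity in dB, duration in seconds.
    return {
        "f0_change_hz": final["f0"] - penult["f0"],
        "intensity_change_db": final["intensity"] - penult["intensity"],
        "duration_ratio": final["duration"] / penult["duration"],
    }

# A question typically ends with rising F0; a statement with falling F0.
question = intonation_features(
    {"f0": 200.0, "intensity": 65.0, "duration": 0.20},
    {"f0": 240.0, "intensity": 63.0, "duration": 0.30},
)
```

Comparing these features across the ND, HA, CI, and CI+HA conditions is how the study evaluated intonation production quality.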
Description
Speech and music are traditionally thought to be primarily supported by different hemispheres. A growing body of evidence suggests that speech and music often rely on shared resources in bilateral brain networks, though the right and left hemispheres exhibit some domain-specific specialization. While there is ample research investigating speech deficits in individuals with right hemisphere lesions and amusia, fewer studies investigate amusia in individuals with left hemisphere lesions and aphasia. Many of the fronto-temporal-parietal regions in the left hemisphere commonly associated with speech processing and production are also implicated in bilateral music processing networks. The current study investigates the relationship between damage to specific regions of interest within these networks and an individual’s ability to successfully match the pitch and rhythm of a presented melody. Twenty-seven participants with chronic-stroke lesions were given a melody repetition task in which they hummed short novel piano melodies. Participants underwent structural MRI acquisition and were administered an extensive speech and cognitive battery. Pitch and rhythm scores were calculated by correlating participant responses with the target piano notes. Production errors were calculated by counting trials with responses that do not match the target melody’s note count. Overall, performance varied widely, and pitch and rhythm scores were significantly correlated with each other. Working memory scores were significantly correlated with rhythm scores and production errors, but not pitch scores. Broca’s area lesions were not associated with significant differences in any of the melody repetition measures, while left Heschl’s gyrus lesions were associated with worse performance on pitch, rhythm, and production errors. Lower rhythm scores were associated with lesions including both the left anterior and posterior superior temporal gyrus, and in participants with damage to the left planum temporale.
The other regions of interest were not consistently associated with poorer pitch scores or production errors. Although the present study has limitations, it suggests that lesions to left hemisphere regions thought to affect only speech also affect musical pitch and rhythm processing. Therefore, amusia should not be characterized solely as a right hemisphere disorder. Instead, the musical abilities of individuals with left hemisphere stroke and aphasia should be characterized to better understand their deficits and mechanisms of impairment.
Contributors: Wojtaszek, Mallory (Author) / Rogalsky, Corianne (Thesis advisor) / Daliri, Ayoub (Committee member) / Patten, Kristopher (Committee member) / Arizona State University (Publisher)
Created: 2022
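The scoring described in this abstract can be sketched directly: pitch and rhythm scores as correlations between produced and target notes, and a production error when the note counts disagree. The melodies and the representation below (MIDI note numbers and inter-onset intervals) are hypothetical illustrations, not the study's stimuli:

```python
import numpy as np

def melody_scores(target_pitches, resp_pitches, target_iois, resp_iois):
    # Correlate the hummed response with the target melody. A response
    # whose note count differs from the target is a production error.
    if len(target_pitches) != len(resp_pitches):
        return {"pitch": None, "rhythm": None, "production_error": True}
    pitch_r = float(np.corrcoef(target_pitches, resp_pitches)[0, 1])
    rhythm_r = float(np.corrcoef(target_iois, resp_iois)[0, 1])
    return {"pitch": pitch_r, "rhythm": rhythm_r, "production_error": False}

target_p = [60, 62, 64, 62, 60]   # target melody (MIDI note numbers)
resp_p   = [60, 63, 64, 61, 60]   # hypothetical hummed response
target_i = [0.5, 0.5, 1.0, 0.5]   # inter-onset intervals (s)
resp_i   = [0.45, 0.55, 0.9, 0.5]
scores = melody_scores(target_p, resp_p, target_i, resp_i)
```

Higher correlations indicate closer matches to the target's pitch contour and rhythm.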
Description
Past studies have shown that auditory feedback plays an important role in maintaining the speech production system. Typically, speakers compensate for auditory feedback alterations when the alterations persist over time (auditory motor adaptation). Our study focused on how to increase the rate of adaptation by using different auditory feedback conditions. For the present study, we recruited a total of 30 participants. We examined auditory motor adaptation after participants completed three conditions: normal speaking, noise-masked speaking, and silent reading. The normal condition was used as a control condition. In the noise-masked condition, noise was added to the auditory feedback to completely mask speech outputs. In the silent reading condition, participants were instructed to silently read target words in their heads, then read the words out loud. We found that the learning rate in the noise-masked condition was lower than that in the normal condition. In contrast, participants adapted at a faster rate after they experienced the silent reading condition. Overall, this study demonstrated that adaptation rate can be modified by pre-exposing participants to different types of auditory-motor manipulations.
Contributors: Navarrete, Karina (Author) / Daliri, Ayoub (Thesis director) / Peter, Beate (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
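One common way to operationalize the "learning rate" compared across conditions here is the slope of per-trial compensation during the early ramp of adaptation. Both this operationalization and the traces below are assumptions for illustration only; the thesis's actual analysis is not reproduced:

```python
import numpy as np

def adaptation_rate(compensation):
    # Slope of a straight line fit to per-trial compensation
    # (e.g., F1 change in Hz opposing the perturbation).
    trials = np.arange(len(compensation))
    slope, _intercept = np.polyfit(trials, compensation, 1)
    return float(slope)

# Hypothetical early-adaptation traces (Hz) over 10 perturbed trials.
after_normal = [0, 2, 5, 7, 10, 12, 15, 16, 18, 20]
after_masked = [0, 1, 2, 4, 5, 6, 7, 9, 10, 11]
```

A steeper slope indicates faster adaptation, mirroring the study's finding that the noise-masked condition slowed learning relative to normal speaking.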