
Description

Adapting to one novel condition of a motor task has been shown to generalize to other naïve conditions (i.e., motor generalization). In contrast, motor transfer occurs when learning one task affects proficiency in another, altogether different task. Much more is known about motor generalization than about motor transfer, despite decades of behavioral evidence for the latter. Moreover, motor generalization is studied as a probe for understanding how movements in novel situations are affected by previous experience. Thus, one could assume that the mechanisms underlying transfer from trained to untrained tasks are the same as those known to underlie motor generalization. However, a direct relationship between transfer and generalization has not yet been shown, which limits the assumption that the two rely on the same mechanisms. The purpose of this study was to test whether there is a relationship between motor generalization and motor transfer. To date, ten healthy young adult subjects have been scored on their motor generalization ability and motor transfer ability on various upper extremity tasks. Although our current sample size is too small to clearly identify whether there is a relationship between generalization and transfer, Pearson product-moment correlation results and an a priori power analysis suggest that a significant relationship would be observed if the sample size were increased by 30%. If so, this would suggest that the mechanisms of transfer may be similar to those of motor generalization.
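The a priori power analysis mentioned above can be sketched with the standard Fisher z approximation for a two-tailed Pearson correlation test. The abstract does not report the assumed effect size or software, so the parameters below are illustrative, not the study's actual values:

```python
import math
from statistics import NormalDist

def required_n(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """A priori sample size for a two-tailed test of a Pearson correlation,
    via the Fisher z approximation: n = ((z_alpha + z_beta) / C)^2 + 3,
    where C = atanh(r) is the Fisher-transformed effect size."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    c = math.atanh(r)
    return math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)

# e.g. a hypothesized r of 0.6 at alpha = .05, power = .80:
n = required_n(0.6)
```

Smaller hypothesized correlations require sharply larger samples, which is why a small pilot sample can show a promising but non-significant trend.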
ContributorsSohani, Priyanka (Author) / Schaefer, Sydney (Thesis advisor) / Daliri, Ayoub (Committee member) / Honeycutt, Claire (Committee member) / Arizona State University (Publisher)
Created2018
Description

Previous studies have shown that experimentally implemented formant perturbations result in compensatory responses in the direction opposite the perturbations. In this study, we investigated how participants adapt to (a) auditory perturbations that shift formants to a specific point in the vowel space and hence remove formant variability (focused perturbations), and (b) auditory perturbations that preserve the natural variability of formants (uniform perturbations). We examined whether the degree of adaptation differed between the two perturbation types. We found that the adaptation magnitude of the first formant (F1) was smaller in response to focused perturbations. Moreover, F1 adaptation initially moved in the same direction as the perturbation and only after several trials changed course toward the opposite direction. We also found that adaptation of the second formant (F2) was smaller in response to focused perturbations than to uniform perturbations. Overall, these results suggest that formant variability is an important component of speech, and that the central nervous system takes such variability into account to produce more accurate speech output.
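The key difference between the two perturbation types can be sketched in a few lines. The formant values and offset below are hypothetical; the point is only that a uniform shift preserves trial-to-trial F1 variability while a focused shift collapses it:

```python
import statistics

def uniform_perturbation(f1_produced, offset_hz):
    """Uniform: add a constant offset to every production,
    preserving the talker's natural F1 variability."""
    return [f1 + offset_hz for f1 in f1_produced]

def focused_perturbation(f1_produced, target_hz):
    """Focused: map every production to one fixed point in vowel space,
    removing formant variability from the auditory feedback."""
    return [target_hz for _ in f1_produced]

trials = [580, 610, 595, 620, 600]        # hypothetical produced F1 values (Hz)
uni = uniform_perturbation(trials, 100)   # shifted, variability intact
foc = focused_perturbation(trials, 700)   # shifted, variability removed
```

A quick check of the variances makes the contrast explicit: the uniform condition has the same variance as the original productions, while the focused condition has zero.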
ContributorsDittman, Jonathan William (Author) / Daliri, Ayoub (Thesis director) / Berisha, Visar (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description

The objective of this study was to analyze the auditory feedback system and the pitch-shift reflex in relation to vibrato. Eleven subjects (8 female, 3 male) without speech, hearing, or neurological disorders participated. Compensation magnitude, adaptation magnitude, relative response phase, and passive and active perception were recorded while subjects received auditory feedback perturbed by phasic amplitude and F0 modulation, or "vibrato." "Tremolo," phasic amplitude modulation alone, served as a control. The ability to perceive vibrato and tremolo in active trials correlated significantly with the ability to perceive them in passive trials (p = 0.01). Passive perception thresholds were lower (more sensitive) than active thresholds (p < 0.01). Vibrato adaptation trials showed significant modulation magnitude (p = 0.031), while tremolo trials did not, and the two conditions differed significantly (p < 0.01). There was significant phase change for both tremolo and vibrato, but the vibrato phase change was greater, nearly 180° (p < 0.01). In the compensation trials, the modulation change from control to vibrato trials was significantly greater than the change from control to tremolo (p = 0.01). Vibrato and tremolo also differed significantly in average phase change (p < 0.01). It can be concluded that the auditory feedback system attempts to cancel dynamic pitch perturbations by responding out of phase with them. Similar systems appear to be used to adapt and to compensate to vibrato. Despite the auditory feedback system's online monitoring, passive perception was still better than active perception, possibly because it required only one task (perceiving) rather than two (perceiving and producing). The pitch-shift reflex reflects the sensitivity of the auditory feedback system, as shown by the greater perception of vibrato relative to tremolo.
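The vibrato/tremolo distinction in the stimuli can be sketched as follows. The modulation rate, pitch excursion, and amplitude depth are illustrative placeholders, not the study's parameters: vibrato modulates instantaneous F0 at a fixed level, while tremolo modulates level at a fixed F0:

```python
import math

def modulated_tone(dur=1.0, sr=16000, f0=220.0, rate=6.0, kind="vibrato"):
    """Generate a sinusoidal carrier with phasic modulation.
    vibrato: F0 modulation (here +/- 2%), constant amplitude;
    tremolo: amplitude modulation (here +/- 30%), constant F0."""
    samples, phase = [], 0.0
    for n in range(int(dur * sr)):
        mod = math.sin(2 * math.pi * rate * (n / sr))
        if kind == "vibrato":
            inst_f0, amp = f0 * (1 + 0.02 * mod), 1.0  # pitch varies, level fixed
        else:
            inst_f0, amp = f0, 1 + 0.3 * mod           # level varies, pitch fixed
        phase += 2 * math.pi * inst_f0 / sr            # integrate instantaneous frequency
        samples.append(amp * math.sin(phase))
    return samples

vib = modulated_tone(kind="vibrato")
trem = modulated_tone(kind="tremolo")
```

Integrating the instantaneous frequency (rather than multiplying time by a changing F0) keeps the vibrato waveform phase-continuous, which matters for perturbed-feedback playback.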
ContributorsHiggins, Alexis Brittany (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Luo, Xin (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description

During speech, the brain constantly processes and monitors speech output through the auditory feedback loop to ensure correct and accurate speech. If the speech signal is experimentally altered/perturbed while speaking, the brain compensates for the perturbations by changing speech output in the direction opposite the perturbations. In this study, we designed an experiment that examined compensatory responses to unexpected vowel perturbations during speech. We applied two types of perturbations. In one condition, the vowel /ɛ/ was perturbed toward the vowel /æ/ by simultaneously shifting both the first formant (F1) and the second formant (F2) at three different levels (0.5 = small, 1 = medium, 1.5 = large). In the other condition, the vowel /ɛ/ was perturbed by shifting F1 alone at the same three levels. Our results showed a significant perturbation-type effect, with participants compensating more in response to the perturbation that shifted /ɛ/ toward /æ/. In addition, we found a significant level effect, with compensatory responses to level 0.5 being significantly smaller than those to levels 1 and 1.5, regardless of the perturbation pathway; responses to levels 1 and 1.5 did not differ. Overall, our results highlight the importance of the auditory feedback loop during speech production and show that the brain is more sensitive to auditory errors that change a vowel category (e.g., /ɛ/ to /æ/).
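The two perturbation pathways can be sketched as vectors in (F1, F2) space scaled by the level factor. The formant values and the F1-only step size below are hypothetical averages; actual shifts would be computed from each speaker's own vowels:

```python
# Hypothetical average formant values (Hz); real targets are speaker-specific.
EH = (580.0, 1800.0)   # /ɛ/ as (F1, F2)
AE = (700.0, 1700.0)   # /æ/ as (F1, F2)

def perturb_toward_ae(produced, level):
    """Shift a produced (F1, F2) along the /ɛ/ -> /æ/ direction.
    level 0.5 = small, 1.0 = medium (the full /ɛ/-/æ/ distance), 1.5 = large."""
    d_f1, d_f2 = AE[0] - EH[0], AE[1] - EH[1]
    return (produced[0] + level * d_f1, produced[1] + level * d_f2)

def perturb_f1_only(produced, level, step_hz=120.0):
    """Second condition: shift F1 alone by a fixed step per level (step is illustrative)."""
    return (produced[0] + level * step_hz, produced[1])
```

At level 1.0 the first function lands a canonical /ɛ/ exactly on /æ/, which is what makes that perturbation category-changing while the F1-only shift need not be.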
ContributorsFitzgerald, Lacee (Author) / Daliri, Ayoub (Thesis director) / Rogalsky, Corianne (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description

Previous studies have found that the detection of near-threshold stimuli is decreased immediately before movement and throughout movement production. This is thought to occur through an internal forward model that processes an efference copy of the motor command and creates a prediction used to cancel out the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals for motor tasks and contexts related to the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning by delivering light electrical stimulation below the lower lip and comparing perception during mixed speaking and silent-reading conditions. Participants judged whether a constant near-threshold electrical stimulation (subject-specific intensity, 85% detected at rest) was present at different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves without any movement or sound. We found that detection of the stimulus was attenuated during the speaking condition while remaining at a constant level close to the perceptual threshold throughout the silent-reading condition. Perceptual modulation was strongest during speech production, with some attenuation already present during the planning period just prior to speech. This demonstrates that the responsiveness of the somatosensory system decreases significantly during speech production, and even milliseconds before speech is produced, which has implications for disorders with pronounced somatosensory deficits, such as stuttering and schizophrenia.
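The analysis logic reduces to comparing per-window detection rates against the resting baseline. The detection sequences below are fabricated for illustration only, not the study's data; the baseline of 0.85 follows the "85% detected at rest" calibration described above:

```python
def detection_rate(responses):
    """Proportion of near-threshold stimuli detected in a time window
    (responses: 1 = detected, 0 = missed)."""
    return sum(responses) / len(responses)

def attenuation(window_rate, baseline_rate=0.85):
    """Drop in detection relative to the resting baseline (85% detected at rest)."""
    return baseline_rate - window_rate

# Hypothetical trial outcomes for three time windows:
planning = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # just before speech onset
speaking = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # during production
reading  = [1, 1, 1, 0, 1, 1, 1, 1, 1, 0]   # silent-reading control
```

With these made-up numbers, attenuation is largest while speaking, intermediate while planning, and near zero in the control, mirroring the pattern of results reported above.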
ContributorsMcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description

Speech perception and production are bidirectionally related and influence each other. The purpose of this study was to better understand the relationship between them. It is known that applying auditory perturbations during speech production causes subjects to alter their productions (e.g., change their formant frequencies); in other words, previous studies have examined the effects of altered speech perception on speech production. In this study, we instead examined potential effects of speech production on speech perception. Subjects completed a block of a categorical perception task, then a block of a speaking or a listening task, then another block of the categorical perception task. Subjects completed three blocks of the speaking task and three blocks of the listening task. In the three blocks of a given task (speaking or listening), auditory feedback was (1) normal, (2) altered to be less variable, or (3) altered to be more variable. Unlike previous studies, we used subjects' own speech samples to generate stimuli for the perception task. For each categorical perception block, we calculated each subject's psychometric function and determined their categorical boundary. The results showed that subjects' perceptual boundaries remained stable across all conditions and blocks. Overall, our results did not provide evidence for effects of speech production on speech perception.
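The categorical boundary is the point on the stimulus continuum where identification crosses 50%. A minimal sketch, using linear interpolation between the flanking steps as a simple stand-in for a full logistic psychometric fit, with fabricated identification proportions:

```python
def categorical_boundary(stimulus_steps, prop_identified):
    """Estimate the categorical boundary: the stimulus step at which the
    identification function crosses 50%. Linear interpolation between the
    two steps flanking 0.5 (a stand-in for fitting a logistic curve)."""
    points = list(zip(stimulus_steps, prop_identified))
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("identification function never crosses 50%")

# Hypothetical 7-step continuum: proportion of trials identified as the
# second category at each step (illustrative, not the study's data).
steps = [1, 2, 3, 4, 5, 6, 7]
props = [0.02, 0.05, 0.20, 0.50, 0.85, 0.95, 0.99]
boundary = categorical_boundary(steps, props)
```

Comparing this boundary before and after each speaking or listening block is what the stability analysis above amounts to: a shifted boundary would have indicated an effect of production on perception.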
ContributorsDaugherty, Allison (Author) / Daliri, Ayoub (Thesis director) / Rogalsky, Corianne (Committee member) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description


The distinctions between the neural resources supporting speech and music comprehension have long been studied using contexts like aphasia and amusia, as well as neuroimaging in control subjects. While many models have emerged to describe the different networks uniquely recruited in response to speech and music stimuli, many questions remain, especially regarding left-hemispheric strokes that disrupt typical speech-processing brain networks and how musical training might affect the brain networks recruited for speech after a stroke. Our study explores several of these questions. We collected task-based functional MRI data from 12 subjects who had previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning so we could examine differences in brain activation in response to speech and music. We hypothesized that speech stimuli would activate right frontal regions, and that music stimuli would activate the right superior temporal regions more than speech (neither finding seen in previous studies of control subjects), as a result of functional changes following the left-hemispheric stroke, particularly the loss of functionality in the left temporal lobe. We also hypothesized that music stimuli would activate the right temporal cortex more strongly in participants with musical training than in those without. Our results indicate that speech stimuli, compared to rest, activated the anterior superior temporal gyrus bilaterally as well as the right inferior frontal lobe. Music stimuli, compared to rest, did not produce bilateral activation, but rather activated only the right middle temporal gyrus.
When the group analysis was performed with music experience as a covariate, we found that musical training did not affect activations to music stimuli specifically, but there was greater right hemisphere activation in several regions in response to speech stimuli as a function of more years of musical training. The results of the study agree with our hypotheses regarding the functional changes in the brain, but they conflict with our hypothesis about musical expertise. Overall, the study has generated interesting starting points for further explorations of how musical neural resources may be recruited for speech processing after damage to typical language networks.
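The block design described above can be sketched as condition regressors and a contrast. This omits the HRF convolution and nuisance regressors a real GLM would include, and the onsets and durations (in scan units) are hypothetical:

```python
def boxcar(n_scans, onsets, duration):
    """0/1 condition regressor for a block design, in scan units.
    (A real fMRI GLM would convolve this with a hemodynamic response function.)"""
    reg = [0] * n_scans
    for onset in onsets:
        for t in range(onset, min(onset + duration, n_scans)):
            reg[t] = 1
    return reg

n_scans = 60
speech = boxcar(n_scans, onsets=[0, 20, 40], duration=5)   # spoken-sentence blocks
music  = boxcar(n_scans, onsets=[10, 30, 50], duration=5)  # piano-melody blocks

# A speech-vs-music comparison weights the two condition regressors +1 / -1:
contrast = [s - m for s, m in zip(speech, music)]
```

Alternating non-overlapping blocks of the two conditions, with rest in between, is what allows both condition-vs-rest and condition-vs-condition contrasts to be estimated from the same run.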

ContributorsKarthigeyan, Vishnu R (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Harrington Bioengineering Program (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description


The brain continuously monitors speech output to detect potential errors between its sensory prediction and the actual sensory feedback (Daliri et al., 2020). When the brain encounters an error, it generates a corrective motor response, usually in the opposite direction, to reduce the effect of the error. Previous studies have shown that the type of auditory error received may impact a participant's corrective response. In this study, we examined whether participants respond differently to categorical and non-categorical errors. We applied two types of perturbation in real time by shifting the first formant (F1) and second formant (F2) at three different magnitudes. In the categorical perturbation condition, the vowel /ɛ/ was shifted toward the vowel /æ/. In the non-categorical perturbation condition, the vowel /ɛ/ was shifted to a sound outside the vowel quadrilateral (increasing both F1 and F2). Our results showed that participants responded to the categorical perturbation but not to the non-categorical perturbation. Additionally, in the categorical condition, the magnitude of the response increased as the magnitude of the perturbation increased. Overall, our results suggest that the brain may respond differently to categorical and non-categorical errors and is highly attuned to errors in speech.
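A corrective response "in the opposite direction" is commonly quantified as the component of the response vector that opposes the perturbation vector in (F1, F2) space. A minimal sketch with hypothetical shift and response magnitudes:

```python
import math

def opposing_component(perturbation, response):
    """Signed size (Hz) of the response along the direction opposite the
    perturbation in (F1, F2) space: positive = opposing (corrective),
    negative = following the perturbation."""
    px, py = perturbation
    rx, ry = response
    return -(rx * px + ry * py) / math.hypot(px, py)

# Hypothetical shifts (Hz): categorical raises F1 and lowers F2 (toward /æ/);
# non-categorical raises both F1 and F2 (out of the vowel quadrilateral).
categorical_shift = (120.0, -100.0)
noncategorical_shift = (120.0, 100.0)

# A hypothetical corrective production change opposing the categorical shift:
opposing = opposing_component(categorical_shift, (-30.0, 25.0))
```

Under this measure, the finding above corresponds to a reliably positive opposing component for categorical shifts and one near zero for non-categorical shifts.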

ContributorsCincera, Kirsten Michelle (Author) / Daliri, Ayoub (Thesis director) / Azuma, Tamiko (Committee member) / School of Sustainability (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05