Matching Items (4)
Description
Previous studies have found that the detection of near-threshold stimuli is reduced immediately before movement and throughout movement production. This is thought to occur because an internal forward model processes an efference copy of the motor command and generates a prediction that is used to cancel out the resulting sensory feedback. There are currently no published accounts of the perception of tactile signals for motor tasks and contexts related to the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning by applying light electrical stimulation below the lower lip and comparing perception across mixed speaking and silent reading conditions. Participants judged whether a constant near-threshold electrical stimulation (subject-specific intensity, 85% detected at rest) was present at different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves without any movement or sound. We found that detection of the stimulus was attenuated in the speaking condition while remaining at a constant level close to the perceptual threshold throughout the silent reading condition. Perceptual attenuation was strongest during speech production and was already present, to a lesser degree, during the planning period just before speech. This demonstrates a significant decrease in the responsiveness of the somatosensory system during speech production, and even milliseconds before speech is produced, which has implications for disorders with pronounced somatosensory deficits, such as stuttering and schizophrenia.
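As a rough illustration of the detection analysis described above, the sketch below tabulates detection rates per condition and time window. The trial records, window labels, and column layout are hypothetical, invented for illustration rather than taken from the study's actual data:

```python
# Minimal sketch: summarize near-threshold detection rates by condition and
# probe window. Conditions, window names, and sample trials are assumptions.
from collections import defaultdict

# Each trial: (condition, probe_window, detected)
trials = [
    ("speaking", "baseline", True),
    ("speaking", "planning", False),
    ("speaking", "production", False),
    ("reading", "baseline", True),
    ("reading", "planning", True),
    ("reading", "production", True),
]

counts = defaultdict(lambda: [0, 0])  # (condition, window) -> [hits, total]
for condition, window, detected in trials:
    counts[(condition, window)][0] += int(detected)
    counts[(condition, window)][1] += 1

for (condition, window), (hits, total) in sorted(counts.items()):
    print(f"{condition:>8} / {window:<10} detection rate: {hits / total:.2f}")
```

With real data, attenuation would appear as lower detection rates in the speaking condition's planning and production windows relative to the reading condition.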
Contributors: Mcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Previous research has shown that a loud acoustic stimulus can trigger an individual's prepared movement plan. This movement response is referred to as a startle-evoked movement (SEM). SEM has been observed in stroke survivors, where results have shown that SEM enhances single-joint movements that are usually performed with difficulty. While the presence of SEM in the stroke survivor population advances scientific understanding of movement capabilities following a stroke, published studies of the SEM phenomenon have examined only one joint. The ability of SEM to generate multi-jointed movements is understudied and consequently limits SEM as a potential therapy tool. To apply SEM as a therapy tool, however, the biomechanics of the arm in multi-jointed movement planning and execution must be better understood. Thus, the objective of our study was to evaluate whether SEM could elicit multi-joint reaching movements that were accurate in an unrestrained, two-dimensional workspace. Data were collected from ten subjects with no previous neck, arm, or brain injury. Each subject performed a reaching task to five targets that were equally spaced in a semicircle to create a two-dimensional workspace. The subject reached to each target following a sequence of two non-startling acoustic cues: "Get Ready" and "Go". A loud acoustic stimulus was randomly substituted for the "Go" cue. We hypothesized that SEM is accessible and accurate for unrestricted multi-jointed reaching tasks in a functional workspace and is therefore independent of movement direction. Our results show that SEM is possible in all five target directions, and that the probability of evoking SEM and the movement kinematics (i.e., total movement time, linear deviation, average velocity) to each target are not statistically different. Thus, we conclude that SEM is possible in a functional workspace and does not depend on where arm stability is maximized. Moreover, coordinated preparation and storage of a multi-jointed movement is indeed possible.
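The three kinematic measures named above (total movement time, linear deviation, average velocity) can be computed from a reach trajectory roughly as follows. This is a minimal sketch with made-up 2-D samples, assuming "linear deviation" means the maximum perpendicular distance of the hand path from the straight start-to-target line:

```python
# Sketch of reach kinematics from a hypothetical 2-D trajectory.
import math

# (t, x, y) samples of one reach, in seconds and metres (illustrative values)
trajectory = [(0.00, 0.00, 0.00), (0.05, 0.02, 0.05), (0.10, 0.06, 0.12),
              (0.15, 0.10, 0.20), (0.20, 0.12, 0.25)]

t0, x0, y0 = trajectory[0]
t1, x1, y1 = trajectory[-1]

total_movement_time = t1 - t0

# Path length along the sampled trajectory, for average velocity
path_length = sum(
    math.hypot(bx - ax, by - ay)
    for (_, ax, ay), (_, bx, by) in zip(trajectory, trajectory[1:])
)
average_velocity = path_length / total_movement_time

# Max perpendicular distance of any sample from the straight start-end line
chord = math.hypot(x1 - x0, y1 - y0)
linear_deviation = max(
    abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / chord
    for _, x, y in trajectory
)

print(f"movement time:    {total_movement_time:.3f} s")
print(f"average velocity: {average_velocity:.3f} m/s")
print(f"linear deviation: {linear_deviation:.4f} m")
```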
Contributors: Ossanna, Meilin Ryan (Author) / Honeycutt, Claire (Thesis director) / Schaefer, Sydney (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
Prosodic features such as fundamental frequency (F0), intensity, and duration convey important information about speech intonation (i.e., is it a statement or a question?). Because cochlear implants (CIs) do not adequately encode pitch-related F0 cues, pre-lingually deaf pediatric CI users have poorer speech intonation perception and production than normal-hearing (NH) children. In contrast, post-lingually deaf adult CI users developed speech production skills via normal hearing before deafness and implantation. Further, combined electric hearing (via CI) and acoustic hearing (via hearing aid, HA) may improve CI users' perception of pitch cues in speech intonation. Therefore, this study tested (1) whether post-lingually deaf adult CI users have similar speech intonation production to NH adults and (2) whether their speech intonation production improves with auditory feedback via CI+HA (i.e., bimodal hearing). Eight post-lingually deaf adult bimodal CI users and nine NH adults participated in this study. Ten question-and-answer dialogues with an experimenter were used to elicit 10 pairs of syntactically matched questions and statements from each participant. Bimodal CI users were tested under four hearing conditions: no-device (ND), HA, CI, and CI+HA. F0 change, intensity change, and duration ratio between the last two syllables of each utterance were analyzed to evaluate the quality of speech intonation production. The results showed no significant differences between CI and NH participants in any of the acoustic features of questions and statements. For CI participants, the CI+HA condition led to significantly greater F0 decreases of statements than the ND condition, while the ND condition led to significantly greater duration ratios of questions and statements. These results suggest that bimodal CI users change the use of prosodic cues for speech intonation production in different hearing conditions and that access to auditory feedback via CI+HA may improve their voice pitch control to produce more salient statement intonation contours.
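The three per-utterance measures defined above (F0 change, intensity change, and duration ratio between the last two syllables) reduce to simple arithmetic once syllable-level values are available. The sketch below uses invented syllable values; in practice these would be extracted from recordings with a tool such as Praat:

```python
# Sketch of the intonation measures; all numeric values are hypothetical.

def intonation_measures(penultimate, final):
    """Each syllable is a dict with mean 'f0' (Hz), 'intensity' (dB),
    and 'duration' (s); measures compare the final syllable to the
    penultimate one."""
    return {
        "f0_change_hz": final["f0"] - penultimate["f0"],
        "intensity_change_db": final["intensity"] - penultimate["intensity"],
        "duration_ratio": final["duration"] / penultimate["duration"],
    }

# A statement typically ends with falling F0; a question with rising F0.
statement = intonation_measures(
    {"f0": 180.0, "intensity": 65.0, "duration": 0.18},
    {"f0": 140.0, "intensity": 60.0, "duration": 0.22},
)
question = intonation_measures(
    {"f0": 180.0, "intensity": 65.0, "duration": 0.18},
    {"f0": 230.0, "intensity": 63.0, "duration": 0.25},
)
print("statement:", statement)
print("question: ", question)
```

Under this scheme, a more salient statement contour shows up as a more negative f0_change_hz, which is the direction of the CI+HA effect reported above.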
Contributors: Ai, Chang (Author) / Luo, Xin (Thesis advisor) / Daliri, Ayoub (Committee member) / Davidson, Lisa (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Speech and music are traditionally thought to be supported primarily by different hemispheres. A growing body of evidence suggests that speech and music often rely on shared resources in bilateral brain networks, though the right and left hemispheres exhibit some domain-specific specialization. While there is ample research investigating speech deficits in individuals with right hemisphere lesions and amusia, fewer studies investigate amusia in individuals with left hemisphere lesions and aphasia. Many of the fronto-temporo-parietal regions in the left hemisphere commonly associated with speech processing and production are also implicated in bilateral music processing networks. The current study investigates the relationship between damage to specific regions of interest within these networks and an individual's ability to successfully match the pitch and rhythm of a presented melody. Twenty-seven participants with chronic stroke lesions completed a melody repetition task in which they hummed short novel piano melodies. Participants underwent structural MRI acquisition and were administered an extensive speech and cognitive battery. Pitch and rhythm scores were calculated by correlating participant responses with the target piano notes. Production errors were calculated by counting trials whose responses did not match the target melody's note count. Overall, performance varied widely, and pitch and rhythm scores were significantly correlated with each other. Working memory scores were significantly correlated with rhythm scores and production errors, but not with pitch scores. Broca's area lesions were not associated with significant differences in any of the melody repetition measures, while left Heschl's gyrus lesions were associated with worse performance on pitch, rhythm, and production errors. Lower rhythm scores were associated with lesions that included both the left anterior and posterior superior temporal gyrus, and with damage to the left planum temporale. The other regions of interest were not consistently associated with poorer pitch scores or production errors. Although the present study has limitations, it suggests that lesions to left hemisphere regions thought to affect only speech also affect musical pitch and rhythm processing. Therefore, amusia should not be characterized solely as a right hemisphere disorder. Instead, the musical abilities of individuals with left hemisphere stroke and aphasia should be characterized to better understand their deficits and mechanisms of impairment.
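A minimal sketch of the scoring described above, assuming pitch is compared as note-number sequences and rhythm as note-duration sequences; the melodies, the MIDI encoding, and the exclusion rule for mismatched note counts are illustrative assumptions, not the study's exact procedure:

```python
# Sketch: score one melody-repetition trial by correlating response with
# target (requires Python 3.10+ for statistics.correlation).
from statistics import correlation

target_pitches = [60, 62, 64, 62, 60]         # MIDI note numbers (invented)
target_durations = [0.5, 0.5, 1.0, 0.5, 1.5]  # note durations in seconds

response_pitches = [61, 62, 65, 61, 59]       # hummed notes, pitch-tracked
response_durations = [0.6, 0.4, 1.1, 0.5, 1.4]

# A production error: the response note count differs from the target's
production_error = len(response_pitches) != len(target_pitches)

if production_error:
    print("production error: note counts differ, trial excluded from scoring")
else:
    pitch_score = correlation(target_pitches, response_pitches)
    rhythm_score = correlation(target_durations, response_durations)
    print(f"pitch score:  {pitch_score:.2f}")
    print(f"rhythm score: {rhythm_score:.2f}")
```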
Contributors: Wojtaszek, Mallory (Author) / Rogalsky, Corianne (Thesis advisor) / Daliri, Ayoub (Committee member) / Patten, Kristopher (Committee member) / Arizona State University (Publisher)
Created: 2022