Matching Items (6)
Description
The activation of the primary motor cortex (M1) is common in speech perception tasks that involve difficult listening conditions. Although the challenge of recognizing and discriminating non-native speech sounds appears to be an instantiation of listening under difficult circumstances, it is still unknown if M1 recruitment is facilitatory of second language speech perception. The purpose of this study was to investigate the role of M1 associated with speech motor centers in processing acoustic inputs in the native (L1) and second language (L2), using repetitive Transcranial Magnetic Stimulation (rTMS) to selectively alter neural activity in M1. Thirty-six healthy English/Spanish bilingual subjects participated in the experiment. The performance on a listening word-to-picture matching task was measured before and after real- and sham-rTMS to the orbicularis oris (lip muscle) associated M1. Vowel Space Area (VSA) obtained from recordings of participants reading a passage in L2 before and after real-rTMS, was calculated to determine its utility as an rTMS aftereffect measure. There was high variability in the aftereffect of the rTMS protocol to the lip muscle among the participants. Approximately 50% of participants showed an inhibitory effect of rTMS, evidenced by smaller motor evoked potentials (MEPs) area, whereas the other 50% had a facilitatory effect, with larger MEPs. This suggests that rTMS has a complex influence on M1 excitability, and relying on grand-average results can obscure important individual differences in rTMS physiological and functional outcomes. Evidence of motor support to word recognition in the L2 was found. Participants showing an inhibitory aftereffect of rTMS on M1 produced slower and less accurate responses in the L2 task, whereas those showing a facilitatory aftereffect of rTMS on M1 produced more accurate responses in L2. 
In contrast, no effect of rTMS was found on the L1, where accuracy and speed were very similar after sham- and real-rTMS. The L2 VSA measure was indicative of the aftereffect of rTMS to M1 associated with speech production, supporting its utility as an rTMS aftereffect measure. This result revealed an interesting and novel relation between cerebral motor cortex activation and speech measures.
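The Vowel Space Area measure used above is conventionally computed as the area of the polygon formed by corner-vowel formants in F1-F2 space. A minimal sketch of that calculation via the shoelace formula follows; the formant values are illustrative placeholders, not data from this study.

```python
# Hypothetical sketch: Vowel Space Area (VSA) from corner-vowel formants.
# The shoelace formula gives the area of the polygon whose vertices are
# (F1, F2) pairs ordered around the perimeter.

def vowel_space_area(formants):
    """Area (Hz^2) of the F1-F2 polygon, vertices in perimeter order."""
    n = len(formants)
    total = 0.0
    for i in range(n):
        f1_a, f2_a = formants[i]
        f1_b, f2_b = formants[(i + 1) % n]  # wrap to close the polygon
        total += f1_a * f2_b - f1_b * f2_a
    return abs(total) / 2.0

# Illustrative corner vowels /i/, /ae/, /a/, /u/ as (F1, F2) in Hz:
corners = [(300, 2300), (700, 1800), (750, 1100), (350, 900)]
print(vowel_space_area(corners))  # 412500.0
```

A shrinking or expanding polygon across sessions would then serve as a compact index of production change, consistent with VSA's use here as an rTMS aftereffect measure.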
Contributors: Barragan, Beatriz (Author) / Liss, Julie (Thesis advisor) / Berisha, Visar (Committee member) / Rogalsky, Corianne (Committee member) / Restrepo, Adelaida (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
The primary motor cortex (M1) plays a vital role in motor planning and execution, as well as in motor learning. Baseline corticospinal excitability (CSE) in M1 is known to increase as a result of motor learning, but less is understood about the modulation of CSE at the pre-execution planning stage due to learning. This question was addressed using single-pulse transcranial magnetic stimulation (TMS) to measure the modulation of both baseline and planning CSE due to learning a reach-to-grasp task. It was hypothesized that baseline CSE would increase and planning CSE decrease as a function of trial; an increase in baseline CSE would replicate established findings in the literature, while a decrease in planning CSE would be a novel finding. Eight right-handed subjects were visually cued to exert a precise grip force, with the goal of producing that force accurately and consistently. Subjects effectively learned the task in the first 10 trials, but no significant trends were found in the modulation of baseline or planning CSE. The lack of significant results may be due to the very quick learning phase or the lower intensity of training as compared to past studies. The findings presented here suggest that planning and baseline CSE may be modulated along different time courses as learning occurs, and they point to some important considerations for future studies addressing this question.
Contributors: Moore, Dalton Dale (Author) / Santello, Marco (Thesis director) / Kleim, Jeff (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2015-05
Description
Working memory and cognitive functions contribute to speech recognition in normal-hearing and hearing-impaired listeners. In this study, auditory and cognitive functions were measured in young adult normal-hearing, elderly normal-hearing, and elderly cochlear implant subjects. The effects of age and hearing status on the different measures were investigated, and the correlations between auditory/cognitive functions and speech/music recognition were examined. The results may demonstrate which factors best explain the variable performance across elderly cochlear implant users.
Contributors: Kolberg, Courtney Elizabeth (Author) / Luo, Xin (Thesis director) / Azuma, Tamiko (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
A previous study demonstrated that learning to lift an object is context-based and that, in the presence of both memory and visual cues, the sensorimotor memory acquired while manipulating an object in one context interferes with performance of the same task when visual information about a different context is presented (Fu et al., 2012).
The purpose of this study was to determine whether the primary motor cortex (M1) plays a role in sensorimotor memory. It was hypothesized that temporary disruption of M1 after learning to minimize tilt of an 'L'-shaped object would negatively affect the retention of sensorimotor memory, and thus reduce interference between the memory acquired in one context and the visual cues used to perform the same task in a different context.
Significant learning was found in blocks 1, 2, and 4. In block 3, subjects displayed a nonsignificant amount of learning, although full interference in block 3 cannot be concluded from this alone. Three effects were therefore examined in the statistical analysis: the main effect of block, the main effect of trial, and the block-by-trial interaction. The main effect of block (p = 0.001) and the main effect of trial (p < 0.001) both indicate that learning occurred. The block-by-trial interaction was also significant (p = 0.002 < 0.05), indicating interaction between sensorimotor memories. Based on these results, interference is present in all blocks, but only partially reduced relative to the control experiment, which is not enough to justify the use of TMS to reduce interference. The time delay between context switches may be a contributing factor; reducing the delay between blocks 2 and 3 from 10 minutes to 5 minutes may reveal significant learning from the first trial to the second trial.
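The study's analysis is a blocks-by-trials repeated-measures design; as a much simpler illustration of testing whether learning occurred within a single block, one can compare each subject's tilt error on the first versus the last trial with a paired t-test. The sketch below uses made-up tilt values, not the study's data.

```python
# Hypothetical sketch: within-block learning as a first-vs-last-trial
# paired comparison of object tilt (degrees). Illustrative data only;
# the study itself used a blocks x trials repeated-measures ANOVA.
from scipy import stats

first_trial_tilt = [8.1, 7.4, 9.0, 6.8, 7.9, 8.5, 7.2, 8.8]
last_trial_tilt  = [2.3, 3.1, 2.8, 2.0, 3.4, 2.6, 2.9, 3.0]

t, p = stats.ttest_rel(first_trial_tilt, last_trial_tilt)
print(f"t = {t:.2f}, p = {p:.4f}")  # small p indicates tilt decreased, i.e. learning
```

A significant decrease in tilt from first to last trial within a block is the pattern the abstract describes as learning; a flat difference in a block would be consistent with interference from a previously acquired sensorimotor memory.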
Contributors: Hasan, Salman Bashir (Author) / Santello, Marco (Thesis director) / Kleim, Jeffrey (Committee member) / Helms Tillery, Stephen (Committee member) / Barrett, The Honors College (Contributor) / W. P. Carey School of Business (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2014-05
Description
The purpose of the present study was to determine whether an automated speech perception task yields results equivalent to a word recognition test used in audiometric evaluations. This was done by testing 51 normal-hearing adults using a traditional word recognition task (NU-6) and an automated Non-Word Detection task. Stimuli for each task were presented in quiet as well as at six signal-to-noise ratios (SNRs) increasing in 3 dB increments (+0 dB, +3 dB, +6 dB, +9 dB, +12 dB, +15 dB). A two one-sided test (TOST) procedure was used to determine equivalence of the two tests. This approach required the performance on both tasks to be arcsine-transformed and converted to z-scores in order to calculate the difference in scores across listening conditions; these values were then compared to a predetermined criterion to establish whether equivalence exists. It was expected that the TOST procedure would reveal equivalence between the traditional word recognition task and the automated Non-Word Detection task. The results confirmed that the two tasks differed by no more than 2 test items in any of the listening conditions. Overall, the results indicate that the automated Non-Word Detection task could be used in addition to, or in place of, traditional word recognition tests. In addition, the features of an automated test such as the Non-Word Detection task offer further benefits, including rapid administration, accurate scoring, and supplemental performance data (e.g., error analyses) beyond those obtained in traditional speech perception measures.
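The equivalence logic described above can be sketched in a few lines: arcsine-transform the proportion-correct scores, take paired differences, and conclude equivalence when the two one-sided tests both reject, i.e. when the (1 - 2α) confidence interval for the mean difference lies inside a predetermined bound. The scores, the normal approximation, and the bound below are illustrative assumptions, not the study's actual criterion.

```python
# Hypothetical sketch of a TOST equivalence check on arcsine-transformed
# proportion-correct scores. Data and equivalence bound are illustrative.
import math
from statistics import mean, stdev

def arcsine_transform(p):
    """Arcsine-square-root transform of a proportion (stabilizes variance)."""
    return 2.0 * math.asin(math.sqrt(p))

def tost_equivalent(diffs, bound, z=1.6449):
    """Normal-approximation TOST: equivalence holds when the 90% CI for
    the mean difference lies entirely within [-bound, +bound]."""
    m = mean(diffs)
    se = stdev(diffs) / math.sqrt(len(diffs))
    return (m - z * se > -bound) and (m + z * se < bound)

# Paired proportion-correct scores on the two tasks in one condition:
word_rec = [0.92, 0.88, 0.90, 0.94, 0.86, 0.91]
nonword  = [0.90, 0.89, 0.91, 0.92, 0.88, 0.90]
diffs = [arcsine_transform(a) - arcsine_transform(b)
         for a, b in zip(word_rec, nonword)]
print(tost_equivalent(diffs, bound=0.2))
```

Unlike a conventional t-test, where a nonsignificant result only fails to show a difference, the TOST actively supports the claim that the two tasks perform the same within the chosen margin, which is why it suits this validation study.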
Contributors: Stahl, Amy Nicole (Author) / Pittman, Andrea (Thesis director) / Boothroyd, Arthur (Committee member) / McBride, Ingrid (Committee member) / School of Human Evolution and Social Change (Contributor) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
Orofacial Myofunctional Disorder (OMD) is defined by ASHA (2023) as "abnormal movement patterns of the face and mouth." OMD leads to anterior carriage of the tongue, open-mouth posture, mouth breathing, and tongue-thrust swallow. Dentalization errors on /s/ and /z/ are also known to be caused by a low, forward tongue position (Wadsworth, Maui, & Stevens, 1998). This study used the OMES-E protocol to identify 10 of 40 participants with OMD. A cut-off below 80% accuracy for the production of /s/ and /z/ classified 6 of 40 participants with speech errors. A correlation between speech score and OMD classification was then computed; it was not significant. This raises the question: why do some people with OMD have moderate-to-severe speech errors on /s/ and /z/ while others with OMD do not? This study aims to explore this question beyond the motor modality. Using an auditory perception paradigm, the first and second formants of the vowel /ɛ/ were shifted to approximate /æ/, and participants' responses and compensations to these shifts were recorded in real time. Results of this perceptual test suggest that perceptual or compensatory differences may explain why some people in the OMD population have speech errors and some do not.
Contributors: DeOrio, Sophia (Author) / Weinhold, Juliet (Thesis director) / Bruce, Laurel (Committee member) / Barrett, The Honors College (Contributor) / School of Public Affairs (Contributor) / College of Health Solutions (Contributor) / Sanford School of Social and Family Dynamics (Contributor)
Created: 2023-12