Matching Items (6)

Description
Previous studies have shown that experimentally implemented formant perturbations result in production of compensatory responses in the opposite direction of the perturbations. In this study, we investigated how participants adapt to a) auditory perturbations that shift formants to a specific point in the vowel space and hence remove formant variability (focused perturbations), and b) auditory perturbations that preserve the natural variability of formants (uniform perturbations). We examined whether the degree of adaptation to focused perturbations differed from adaptation to uniform perturbations. We found that the adaptation magnitude of the first formant (F1) was smaller in response to focused perturbations than to uniform perturbations. However, the F1 response initially moved in the same direction as the perturbation and, after several trials, changed course toward the opposite direction of the perturbation. Adaptation of the second formant (F2) was also smaller in response to focused perturbations than to uniform perturbations. Overall, these results suggest that formant variability is an important component of speech, and that the central nervous system takes such variability into account to produce more accurate speech output.
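
As an illustration of the difference between the two perturbation types described above, the sketch below applies each rule to a few hypothetical (F1, F2) productions; the target point, shift vector, and values are invented for illustration and are not the study's actual perturbation parameters.

```python
import numpy as np

# Hypothetical produced formants (Hz) for a few trials; columns are (F1, F2).
produced = np.array([[550.0, 1750.0],
                     [580.0, 1690.0],
                     [530.0, 1810.0]])

# Focused perturbation (illustrative): auditory feedback is shifted to one
# fixed point in the vowel space, so trial-to-trial variability is removed.
focused_target = np.array([650.0, 1600.0])        # hypothetical target point
focused_feedback = np.tile(focused_target, (len(produced), 1))

# Uniform perturbation (illustrative): every trial is shifted by the same
# vector, so the natural variability of the produced formants is preserved.
uniform_shift = np.array([100.0, -150.0])          # hypothetical shift (Hz)
uniform_feedback = produced + uniform_shift

print(focused_feedback.std(axis=0))   # ~0: variability removed
print(uniform_feedback.std(axis=0))   # same spread as the produced formants
```
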
Contributors: Dittman, Jonathan William (Author) / Daliri, Ayoub (Thesis director) / Berisha, Visar (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Previous studies have found that the detection of near-threshold stimuli is decreased immediately before movement and throughout movement production. This attenuation has been attributed to an internal forward model that processes an efference copy of the motor command and generates a prediction used to cancel the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals for motor tasks and contexts related to the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning by applying light electrical stimulation below the lower lip and comparing perception across mixed speaking and silent-reading conditions. Participants judged whether a constant near-threshold electrical stimulus (subject-specific intensity, detected 85% of the time at rest) was present at different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently, without any movement or sound. We found that detection of the stimulus was attenuated in the speaking condition, while it remained at a constant level close to the perceptual threshold throughout the silent-reading condition. The attenuation was strongest during speech production and was already present, to a smaller degree, during the planning period just before speech. These results demonstrate that the responsiveness of the somatosensory system decreases significantly during speech production, and even milliseconds before speech begins, which has implications for disorders such as stuttering and schizophrenia that involve pronounced somatosensory deficits.
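
As a rough illustration of the detection analysis described above, the sketch below tabulates detection rates by condition and time point from hypothetical trial records; the epoch labels and outcomes are invented and do not represent the study's data or analysis code.

```python
from collections import defaultdict

# Hypothetical trial records: (condition, time point relative to the visual
# cue, stimulus detected?). Values are invented for illustration only.
trials = [
    ("speaking", "planning", False), ("speaking", "production", False),
    ("speaking", "planning", True),  ("speaking", "production", False),
    ("reading",  "planning", True),  ("reading",  "production", True),
    ("reading",  "planning", True),  ("reading",  "production", False),
]

# Detection rate per (condition, time point); lower rates indicate stronger
# somatosensory attenuation at that point in the task.
counts = defaultdict(lambda: [0, 0])        # [detected, total]
for condition, epoch, detected in trials:
    counts[(condition, epoch)][0] += int(detected)
    counts[(condition, epoch)][1] += 1

for (condition, epoch), (hits, total) in sorted(counts.items()):
    print(f"{condition:>8} / {epoch:<10}: {hits / total:.2f}")
```
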
Contributors: Mcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Previous research has shown that a loud acoustic stimulus can trigger an individual's prepared movement plan. This movement response is referred to as a startle-evoked movement (SEM). SEM has been observed in stroke survivors, where results have shown that it enhances single-joint movements that are usually performed with difficulty. While the presence of SEM in the stroke-survivor population advances scientific understanding of movement capabilities following a stroke, published studies of the SEM phenomenon have examined only one joint. The ability of SEM to generate multi-joint movements remains understudied, which limits SEM as a potential therapy tool. To apply SEM as a therapy tool, however, the biomechanics of the arm in multi-joint movement planning and execution must be better understood. Thus, the objective of our study was to evaluate whether SEM could elicit multi-joint reaching movements that were accurate in an unrestrained, two-dimensional workspace. Data were collected from ten subjects with no previous neck, arm, or brain injury. Each subject performed a reaching task to five targets that were equally spaced in a semicircle to create a two-dimensional workspace. The subject reached to each target following a sequence of two non-startling acoustic cues ("Get Ready" and "Go"); a loud acoustic stimulus was randomly substituted for the "Go" cue. We hypothesized that SEM is accessible and accurate for unrestricted multi-joint reaching tasks in a functional workspace and is therefore independent of movement direction. Our results showed that SEM is possible in all five target directions, and that the probability of evoking SEM and the movement kinematics (total movement time, linear deviation, and average velocity) did not differ statistically across targets. Thus, we conclude that SEM is possible in a functional workspace and does not depend on where arm stability is maximized. Moreover, coordinated preparation and storage of a multi-joint movement is indeed possible.
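
The sketch below shows one way the reported kinematic measures could be computed from a sampled two-dimensional reach trajectory: total movement time, maximum linear deviation from the straight start-to-end path, and average velocity. The sampling rate, trajectory, and exact metric definitions are assumptions for illustration, not the study's analysis code.

```python
import numpy as np

def reach_kinematics(xy, fs):
    """Kinematic summaries for a 2D reach sampled at fs Hz.

    xy : (n_samples, 2) array of hand positions from movement onset to offset.
    Returns total movement time (s), maximum linear deviation from the
    straight start-to-end path, and average velocity.
    """
    xy = np.asarray(xy, dtype=float)
    movement_time = (len(xy) - 1) / fs

    # Perpendicular distance of each sample from the straight start-to-end line.
    start, end = xy[0], xy[-1]
    path_vec = end - start
    path_len = np.linalg.norm(path_vec)
    rel = xy - start
    cross = np.abs(rel[:, 0] * path_vec[1] - rel[:, 1] * path_vec[0])
    linear_deviation = np.max(cross / path_len)

    # Average speed along the sampled trajectory.
    step_lengths = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    average_velocity = step_lengths.sum() / movement_time
    return movement_time, linear_deviation, average_velocity

# Hypothetical 1-second reach sampled at 100 Hz with a slight curve.
t = np.linspace(0.0, 1.0, 101)
trajectory = np.column_stack([20.0 * t, 5.0 * np.sin(np.pi * t)])
print(reach_kinematics(trajectory, fs=100))
```
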
Contributors: Ossanna, Meilin Ryan (Author) / Honeycutt, Claire (Thesis director) / Schaefer, Sydney (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
The purpose of this study was to explore the relationship between acoustic indicators in speech and the presence of orofacial myofunctional disorder (OMD). This study analyzed the first and second formant frequencies (F1 and F2) of the four corner vowels (/i/, /u/, /æ/, and /ɑ/) found in the spontaneous speech of thirty participants. It was predicted that speakers with OMD would have raised F1 and F2 values because of habitual low and anterior tongue positioning. The study found no statistically significant differences in the formant frequencies. Further inspection of the total vowel space area, however, suggested that OMD speakers had a smaller, more centralized vowel space. We concluded that further study of the total vowel space area in OMD speakers is warranted.
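
One common way to quantify the total vowel space area mentioned above is as the area of the quadrilateral whose vertices are the mean (F1, F2) values of the four corner vowels. The sketch below applies the shoelace formula to made-up formant values; it illustrates the computation only and is not the study's analysis.

```python
# Hypothetical mean (F1, F2) values in Hz for the four corner vowels,
# ordered around the vowel quadrilateral: /i/, /ae/, /a/, /u/.
corner_vowels = {
    "i": (300.0, 2300.0),
    "ae": (700.0, 1800.0),
    "a": (750.0, 1100.0),
    "u": (350.0, 900.0),
}

def vowel_space_area(points):
    """Area of the polygon defined by (F1, F2) vertices via the shoelace formula."""
    n = len(points)
    total = 0.0
    for i in range(n):
        f1_a, f2_a = points[i]
        f1_b, f2_b = points[(i + 1) % n]
        total += f1_a * f2_b - f1_b * f2_a
    return abs(total) / 2.0

area = vowel_space_area(list(corner_vowels.values()))
print(f"Vowel space area: {area:.0f} Hz^2")  # smaller area -> more centralized vowels
```
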
Contributors: Wasson, Sarah Alicia (Co-author) / Weinhold, Juliet (Thesis director) / Daliri, Ayoub (Committee member) / College of Health Solutions (Contributor) / Hugh Downs School of Human Communication (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

Transcranial magnetic stimulation (TMS) is a non-invasive brain stimulation technique used in a variety of research settings, including speech neuroscience studies. However, one of the difficulties in using TMS for speech studies is the time that it takes to localize the lip motor cortex representation on the scalp. For my project, I used MATLAB to create a software package that facilitates the localization of the ‘hotspot’ for TMS studies in a systematic, reliable manner. The software sends TMS pulses at certain locations, collects electromyography (EMG) data, and extracts motor-evoked potentials (MEPs) to help users visualize the resulting muscle activation. In this way, users can systematically find the subject’s hotspot for TMS stimulation of the motor cortex. The hotspot detection software was found to be an effective and efficient improvement on previous localization methods.
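
The thesis software itself is written in MATLAB; as a simplified illustration of the MEP-extraction step described above, the sketch below computes a peak-to-peak MEP amplitude from a single simulated EMG trace within a fixed post-pulse window. The sampling rate, window boundaries, and simulated signal are assumptions, not the thesis implementation.

```python
import numpy as np

def mep_peak_to_peak(emg, fs, pulse_sample, window=(0.015, 0.040)):
    """Peak-to-peak MEP amplitude in a post-pulse window.

    emg          : 1-D EMG trace (e.g., from a lip muscle), in mV.
    fs           : sampling rate in Hz.
    pulse_sample : sample index at which the TMS pulse was delivered.
    window       : (start, end) of the MEP search window, in seconds after the pulse.
    """
    start = pulse_sample + int(window[0] * fs)
    end = pulse_sample + int(window[1] * fs)
    segment = emg[start:end]
    return float(segment.max() - segment.min())

# Hypothetical single trial: 1 s of noisy EMG at 5 kHz with a simulated MEP
# about 25 ms after a pulse delivered at 0.5 s.
fs = 5000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
emg = 0.02 * rng.standard_normal(t.size)
pulse_sample = int(0.5 * fs)
mep_start = pulse_sample + int(0.025 * fs)
emg[mep_start:mep_start + 50] += 0.5 * np.sin(np.linspace(0, 2 * np.pi, 50))

print(f"MEP amplitude: {mep_peak_to_peak(emg, fs, pulse_sample):.2f} mV")
```
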

Contributors: Kshatriya, Nyah (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Barrett, The Honors College (Contributor) / Business (Minor) (Contributor)
Created: 2022-05
Description
Speech sound disorders (SSDs) are the most prevalent type of communication disorder in children. Clinically, speech-language pathologists (SLPs) rely on behavioral methods for assessing and treating SSDs. Though clients typically experience improved speech outcomes as a result of therapy, there is evidence that underlying deficits may persist even in individuals who have completed treatment for surface-level speech behaviors. Advances in the field of genetics have created the opportunity to investigate the contribution of genes to human communication. Due to the heterogeneity of many communication disorders, the manner in which specific genetic changes influence neural mechanisms, and thereby behavioral phenotypes, remains largely unknown. The purpose of this study was to identify genotype-phenotype associations, along with perceptual and motor-related biomarkers, within families displaying SSDs. Five parent-child trios participated in genetic testing, and five families participated in a combination of genetic and behavioral testing to help elucidate biomarkers related to SSDs. All of the affected individuals had a history of childhood apraxia of speech (CAS), except in one family, whose affected members displayed a phonological disorder. Genetic investigation yielded several genes of interest relevant to an SSD phenotype: CNTNAP2, CYFIP1, GPR56, HERC1, KIAA0556, LAMA5, LAMB1, MDGA2, MECP2, NBEA, SHANK3, TENM3, and ZNF142. All of these genes showed at least some expression in the developing brain. Gene ontology analysis yielded terms supporting a genetic influence on central nervous system development. Behavioral testing revealed evidence of a sequential processing biomarker in all individuals with CAS, many of whom showed deficits in sequential motor skills in addition to speech deficits. In some families, participants also showed evidence of a co-occurring perceptual processing biomarker. The family displaying a phonological phenotype showed milder sequential processing deficits than the CAS families. Overall, this study supports the presence of a sequential processing biomarker for CAS and shows that relevant genes of interest may influence a CAS phenotype via sequential processing. Knowledge of these biomarkers can help strengthen the precision of clinical assessment and motivate the development of novel interventions for individuals with SSDs.
Contributors: Bruce, Laurel (Author) / Peter, Beate (Thesis advisor) / Daliri, Ayoub (Committee member) / Liu, Li (Committee member) / Scherer, Nancy (Committee member) / Weinhold, Juliet (Committee member) / Arizona State University (Publisher)
Created: 2020