Matching Items (7)
Description

Previous studies have found that the detection of near-threshold stimuli is decreased immediately before movement and throughout movement production. This is thought to occur because the internal forward model processes an efference copy of the motor command and generates a prediction that is used to cancel out the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals for motor tasks and contexts related to the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning by delivering light electrical stimulation below the lower lip and comparing perception across mixed speaking and silent-reading conditions. Participants judged whether a constant near-threshold electrical stimulus (subject-specific intensity, 85% detected at rest) was present at different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves without any movement or sound. We found that detection of the stimulus was attenuated during speaking while remaining at a constant level close to the perceptual threshold throughout silent reading. Perceptual modulation was strongest during speech production, with some attenuation already present during the planning period just before speech. This demonstrates a significant decrease in the responsiveness of the somatosensory system during speech production, and even milliseconds before speech begins, which has implications for disorders with pronounced somatosensory deficits, such as stuttering and schizophrenia.
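As a rough illustration of the analysis this design implies, the sketch below computes the proportion of detected stimuli per time window relative to the cue. The function name, window labels, and trial values are hypothetical illustrations, not the study's actual data or pipeline.

```python
import numpy as np

def detection_rate(detected, window_labels):
    """Proportion of near-threshold stimuli reported as detected,
    grouped by time window relative to the visual cue."""
    detected = np.asarray(detected, dtype=float)
    window_labels = np.asarray(window_labels)
    return {w: detected[window_labels == w].mean()
            for w in np.unique(window_labels)}

# Hypothetical trials: 1 = stimulus detected, 0 = missed
detected = [1, 1, 0, 1, 0, 0, 1, 0]
windows = ["rest", "rest", "planning", "planning",
           "speaking", "speaking", "rest", "speaking"]
rates = detection_rate(detected, windows)
# In this made-up example, detection drops from rest (1.0)
# to planning (0.5) to speaking (0.0), mirroring the reported pattern.
```

A real analysis would additionally correct for response bias (e.g., with signal detection measures), but the grouping logic is the same.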
Contributors: Mcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

Previous research has shown that a loud acoustic stimulus can trigger an individual's prepared movement plan. This movement response is referred to as a startle-evoked movement (SEM). SEM has been observed in stroke survivors, where results have shown that it enhances single-joint movements that are usually performed with difficulty. While the presence of SEM in the stroke survivor population advances scientific understanding of movement capabilities following a stroke, published studies of the SEM phenomenon have examined only one joint. The ability of SEM to generate multi-joint movements is understudied, which limits SEM as a potential therapy tool. To apply SEM as a therapy tool, however, the biomechanics of the arm in multi-joint movement planning and execution must be better understood. Thus, the objective of our study was to evaluate whether SEM could elicit multi-joint reaching movements that were accurate in an unrestrained, two-dimensional workspace. Data were collected from ten subjects with no previous neck, arm, or brain injury. Each subject performed a reaching task to five targets that were equally spaced in a semicircle to create a two-dimensional workspace. The subject reached to each target following a sequence of two non-startling acoustic cues: "Get Ready" and "Go". A loud acoustic stimulus was randomly substituted for the "Go" cue. We hypothesized that SEM is accessible and accurate for unrestricted multi-joint reaching tasks in a functional workspace and is therefore independent of movement direction. Our results showed that SEM is possible in all five target directions. The probability of evoking SEM and the movement kinematics (i.e., total movement time, linear deviation, average velocity) to each target were not statistically different. Thus, we conclude that SEM is possible in a functional workspace and does not depend on where arm stability is maximized. Moreover, coordinated preparation and storage of a multi-joint movement is indeed possible.
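The "linear deviation" kinematic measure mentioned above is commonly computed as the maximum perpendicular distance of the hand path from the straight start-to-target line. The sketch below uses made-up coordinates and our own function name; it is an illustration of the measure, not the study's analysis code.

```python
import numpy as np

def linear_deviation(path, start, target):
    """Maximum perpendicular distance of a 2-D reach path from the
    straight line connecting the start and target positions."""
    path = np.asarray(path, dtype=float)
    start = np.asarray(start, dtype=float)
    target = np.asarray(target, dtype=float)
    line = target - start
    line_len = np.linalg.norm(line)
    d = path - start
    # 2-D cross-product magnitude / line length = point-to-line distance
    dists = np.abs(d[:, 0] * line[1] - d[:, 1] * line[0]) / line_len
    return dists.max()

# Hypothetical reach that bows 2 cm off a 20 cm straight-line movement
path = [(0, 0), (5, 1), (10, 2), (15, 1), (20, 0)]
dev = linear_deviation(path, (0, 0), (20, 0))  # -> 2.0
```

Comparing this value across targets (along with movement time and average velocity) is one way to test whether reach accuracy depends on direction.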
Contributors: Ossanna, Meilin Ryan (Author) / Honeycutt, Claire (Thesis director) / Schaefer, Sydney (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description

Speech motor learning is important for learning to speak during childhood and for maintaining the speech system throughout adulthood. Motor and auditory cortical regions play crucial roles in speech motor learning. This experiment aimed to use transcranial alternating current stimulation, a neurostimulation technique, to influence auditory and motor cortical activity. In this study, we used an auditory-motor adaptation task as an experimental model of speech motor learning. Subjects repeated words while receiving formant shifts, which made the subjects' speech sound different from their actual production. During the adaptation task, subjects received beta (20 Hz), alpha (10 Hz), or sham stimulation. We applied the stimulation to the ventral motor cortex, which is involved in planning speech movements. We found that the stimulation did not influence the magnitude of adaptation. We suggest that some limitations of the study may have contributed to these negative results.
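One common way to quantify the "magnitude of adaptation" in formant-perturbation tasks is to compare produced formant values late in the perturbation phase against a pre-shift baseline. The sketch below uses hypothetical F1 values and trial counts; it illustrates the general measure, not this study's pipeline.

```python
import numpy as np

def adaptation_magnitude(f1_produced, baseline_trials=10):
    """Adaptation as the change in produced F1 (Hz) at the end of the
    perturbation phase relative to the pre-shift baseline."""
    f1 = np.asarray(f1_produced, dtype=float)
    baseline = f1[:baseline_trials].mean()
    late = f1[-baseline_trials:].mean()
    return late - baseline

# Hypothetical F1 trajectory: 10 baseline trials at 500 Hz, then the
# speaker gradually lowers F1 to oppose an upward formant shift
f1 = [500.0] * 10 + list(np.linspace(500, 470, 30)) + [470.0] * 10
mag = adaptation_magnitude(f1)  # -> -30.0 (compensatory lowering)
```

Comparing this magnitude between real and sham stimulation groups is the kind of contrast the experiment describes.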

Contributors: Mannan, Arhum (Author) / Daliri, Ayoub (Thesis director) / Luo, Xin (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Previous research has shown that auditory modulation may be affected by pure-tone stimuli played prior to the onset of speech production. In this experiment, we examined the specificity of the auditory stimulus by implementing congruent and incongruent speech sounds in addition to a non-speech sound. Electroencephalography (EEG) data were recorded for eleven adult subjects in both speaking (speech planning) and silent reading (no speech planning) conditions. Data were analyzed both manually and with a MATLAB script written to combine data sets and calculate auditory modulation (suppression). P200 modulation was larger for incongruent stimuli than for congruent stimuli; however, this was not the case for N100 modulation. The data for the pure tone could not be analyzed because the intensity of this stimulus was substantially lower than that of the speech stimuli. Overall, the results indicated that the P200 component plays a significant role in processing stimuli and determining their relevance; this is consistent with the role of the P200 component in high-level analysis of speech and perceptual processing. This experiment is ongoing, and we hope to obtain data from more subjects to support the current findings.
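Speaking-induced modulation of the kind computed here is often expressed as the difference in ERP component amplitude between the speaking and silent-reading conditions. The sketch below uses hypothetical peak amplitudes and a function of our own devising, not the study's MATLAB code.

```python
import numpy as np

def modulation(speaking_amps, reading_amps):
    """Speaking-induced modulation of an ERP component: difference in
    mean absolute peak amplitude (speaking minus reading).
    Negative values indicate suppression during speech planning."""
    return float(np.mean(np.abs(speaking_amps)) - np.mean(np.abs(reading_amps)))

# Hypothetical per-subject peak amplitudes in microvolts
# (N100 is a negative-going component, P200 positive-going)
n100_speaking, n100_reading = [-3.0, -2.5, -3.5], [-5.0, -4.5, -5.5]
p200_speaking, p200_reading = [4.0, 3.5, 4.5], [6.0, 5.5, 6.5]
n100_mod = modulation(n100_speaking, n100_reading)  # -> -2.0
p200_mod = modulation(p200_speaking, p200_reading)  # -> -2.0
```

Comparing these modulation values across congruent, incongruent, and non-speech stimuli is the contrast the experiment describes.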
Contributors: Taylor, Megan Kathleen (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / School of Life Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

In the past, researchers have studied the elements of speech and how they work together in the human brain. Auditory feedback, an important aid in speech production, provides information to speakers and allows them to gauge whether the prediction of their speech matches their production. The speech motor system uses auditory goals to determine errors in its auditory output during vowel production, and we learn from discrepancies between our prediction and auditory feedback. In this study, we examined error assessment processes by systematically manipulating the correspondence between speech motor outputs and their auditory consequences during speech production. We conducted a study (n = 14 adults) in which participants' auditory feedback was perturbed to test their learning rate in two conditions. During the trials, participants repeated CVC words and were instructed to prolong the vowel each time. The adaptation trials were used to examine reliance on auditory feedback versus speech prediction by systematically changing the weight of auditory feedback. Participants heard their perturbed feedback through insert earphones in real time. Each speaker's auditory feedback was perturbed according to task-relevant and task-irrelevant errors, and these perturbations were introduced either gradually or suddenly. We found that adaptation was less extensive with task-irrelevant errors, that adaptation did not saturate significantly in the sudden condition, and that adaptation in the task-relevant condition, which was expected to be more extensive and faster, was instead close to the rate of adaptation under the task-irrelevant perturbation. Though adjustments are necessary, we found an efficient way for speakers to rely on auditory feedback more than on their prediction. Furthermore, this research opens the door to future investigations of adaptation in speech and has implications for clinical applications (e.g., speech therapy).
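A simple way to compare learning rates across gradual and sudden perturbation schedules is to fit a line to the early compensation time course. The sketch below uses invented compensation values and a deliberately simple least-squares fit; it illustrates the comparison, not the study's actual model.

```python
import numpy as np

def learning_rate(compensation):
    """Initial learning rate: slope (per trial) of a least-squares line
    fit to the compensation time course over the early trials."""
    y = np.asarray(compensation, dtype=float)
    x = np.arange(len(y))
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Hypothetical compensation (% of perturbation) over 10 early trials
gradual = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]   # steady ramp
sudden = [0, 5, 9, 12, 14, 15, 15, 15, 15, 15]  # fast rise, plateau
r_gradual = learning_rate(gradual)  # -> 2.0 (% per trial)
r_sudden = learning_rate(sudden)
```

With richer data, a state-space model with separate learning and retention terms would be a more standard choice, but the slope comparison captures the basic contrast.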
Contributors: Lukowiak, Ariana (Author) / Daliri, Ayoub (Thesis director) / Rogalsky, Corianne (Committee member) / Sanford School of Social and Family Dynamics (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

Transcranial current stimulation (TCS) is a long-established method of modulating neuronal activity in the brain. One type of this stimulation, transcranial alternating current stimulation (tACS), can entrain endogenous oscillations and produce behavioral change. In the present study, we used five stimulation conditions: tACS at three different frequencies (6 Hz, 12 Hz, and 22 Hz), transcranial random noise stimulation (tRNS), and a no-stimulation sham condition. In all stimulation conditions, we recorded electroencephalographic data to investigate the link between different tACS frequencies and their effects on brain oscillations. We recruited 12 healthy participants, and each completed 30 trials of the stimulation conditions. In a given trial, we recorded brain activity for 10 seconds, stimulated for 12 seconds, and recorded an additional 10 seconds of brain activity. The difference between the average oscillation power before and after stimulation indicated the change in oscillation amplitude due to the stimulation. Our results showed that the stimulation conditions entrained brain activity in a subgroup of participants.
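The pre- versus post-stimulation comparison described here can be sketched as a difference in spectral power within a band around the stimulation frequency. The signal parameters and data below are invented for illustration; a real EEG pipeline would also involve artifact rejection and multi-channel averaging.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power in the [lo, hi] Hz band from a one-sided FFT spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

# Hypothetical 10 s pre/post recordings at 250 Hz; the post-stimulation
# segment has a stronger 12 Hz component, mimicking entrainment by tACS
fs = 250
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(0)
pre = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
post = 2 * np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)

# Positive difference = increased 12 Hz power after stimulation
entrainment = band_power(post, fs, 11, 13) - band_power(pre, fs, 11, 13)
```

Averaging this difference across trials per participant is one way to identify the responder subgroup the abstract mentions.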
Contributors: Chernicky, Jacob Garrett (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

When we produce speech movements, we expect a specific auditory consequence; an error occurs when the predicted outcome does not match the actual speech outcome. The brain notes these discrepancies, learns from the errors, and works to reduce them. Previous studies have shown a relationship between speech motor learning and auditory targets: subjects with smaller auditory targets were more sensitive to errors, estimated larger perturbations, and generated larger responses, although these responses were often ineffective and the changes were usually minimal. The current study examined whether subjects' auditory targets can be manipulated in an experimental setting. We recruited 10 healthy young adults to complete a perceptual vowel categorization task. We developed a novel procedure in which subjects heard different auditory stimuli and reported each stimulus by locating it relative to adjacent vowels. We found that when stimuli were closer to the vowel boundary, subjects were less accurate. Importantly, when we provided visual feedback, subjects improved their accuracy in locating the stimuli. These results indicate that we may be able to improve subjects' auditory targets and thus improve their speech motor learning ability.
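The boundary-distance effect reported here can be illustrated by binning categorization accuracy by each stimulus's acoustic distance from the vowel boundary. The trial data, distance values, and bin edge below are hypothetical, chosen only to show the expected pattern.

```python
import numpy as np

def accuracy_by_distance(correct, boundary_distance, edges):
    """Categorization accuracy binned by each stimulus's acoustic
    distance (e.g., in Hz) from the vowel boundary."""
    correct = np.asarray(correct, dtype=float)
    dist = np.asarray(boundary_distance, dtype=float)
    bins = np.digitize(dist, edges)  # bin 0 = near boundary, 1 = far
    return {b: correct[bins == b].mean() for b in np.unique(bins)}

# Hypothetical trials: 1 = stimulus located correctly, 0 = error
correct = [0, 1, 0, 1, 1, 1, 1, 1]
distance = [10, 20, 15, 90, 80, 85, 95, 88]  # Hz from the boundary
acc = accuracy_by_distance(correct, distance, edges=[50])
# Near-boundary stimuli (bin 0) are less accurate than far ones (bin 1)
```

Tracking how the near-boundary bin improves across feedback blocks would quantify the training effect the study describes.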

Contributors: Gurrala, SreeLakshmi (Author) / Daliri, Ayoub (Thesis director) / Chao, Saraching (Committee member) / Barrett, The Honors College (Contributor) / School of Life Sciences (Contributor) / School of Art (Contributor)
Created: 2022-05