Matching Items (9)

Description
Brain-computer interface (BCI) technology establishes communication between the brain and a computer, allowing users to control devices, machines, or virtual objects using their thoughts. This study investigates optimal conditions to facilitate learning to operate this interface. It compares two biofeedback methods, which dictate the relationship between brain activity and the movement of a virtual ball in a target-hitting task. Preliminary results indicate that a method in which the position of the virtual object directly relates to the amplitude of brain signals is most conducive to success. In addition, this research explores learning in the context of neural signals during training with a BCI task. Specifically, it investigates whether subjects can adapt to parameters of the interface without guidance. This experiment prompts subjects to modulate brain signals spectrally, spatially, and temporally, as well as differentially to discriminate between two different targets. However, subjects are not given knowledge regarding these desired changes, nor are they given instruction on how to move the virtual ball. Preliminary analysis of signal trends suggests that some successful participants are able to adapt brain-wave activity in certain pre-specified locations and frequency bands over time in order to achieve control. Future studies will further explore these phenomena, and the resulting methods will inform future BCI projects, giving insight into the creation of more intuitive and reliable BCI technology.
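As a rough illustration of the feedback mapping this abstract describes (virtual-object position tracking the amplitude of a brain signal), here is a minimal Python sketch; the sampling rate, control band, and scaling limits are assumptions made for illustration, not the study's actual parameters.

```python
# Minimal sketch (not the study's implementation): estimate band power from one
# EEG window and map it linearly onto a normalized virtual-ball position.
import numpy as np
from scipy.signal import welch

FS = 256            # sampling rate in Hz (assumed)
BAND = (8.0, 12.0)  # control band in Hz (assumed)

def band_power(eeg_window: np.ndarray) -> float:
    """Average power of the window within the control band."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=min(len(eeg_window), FS))
    mask = (freqs >= BAND[0]) & (freqs <= BAND[1])
    return float(psd[mask].mean())

def ball_position(power: float, p_min: float, p_max: float) -> float:
    """Map band power linearly onto a vertical position in [0, 1]."""
    return float(np.clip((power - p_min) / (p_max - p_min), 0.0, 1.0))

# Example with synthetic data: one second of noise standing in for real EEG.
rng = np.random.default_rng(0)
window = rng.standard_normal(FS)
print(ball_position(band_power(window), p_min=0.0, p_max=0.05))
```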
Contributors: Lancaster, Jenessa Mae (Co-author) / Appavu, Brian (Co-author) / Wahnoun, Remy (Co-author, Committee member) / Helms Tillery, Stephen (Thesis director) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor) / Department of Psychology (Contributor)
Created: 2014-05
Description
Previous studies have found that the detection of near-threshold stimuli is decreased immediately before movement and throughout movement production. This has been attributed to the internal forward model processing an efference copy of the motor command and creating a prediction that is used to cancel out the resulting sensory feedback. Currently, there are no published accounts of the perception of tactile signals for motor tasks and contexts related to the lips during both speech planning and production. In this study, we measured the responsiveness of the somatosensory system during speech planning using light electrical stimulation below the lower lip, comparing perception during mixed speaking and silent reading conditions. Participants were asked to judge whether a constant near-threshold electrical stimulation (subject-specific intensity, 85% detected at rest) was present during different time points relative to an initial visual cue. In the speaking condition, participants overtly produced target words shown on a computer monitor. In the reading condition, participants read the same target words silently to themselves without any movement or sound. We found that detection of the stimulus was attenuated during the speaking condition while remaining at a constant level close to the perceptual threshold throughout the silent reading condition. Perceptual modulation was most intense during speech production and showed some attenuation just prior to speech production, during the planning period of speech. This demonstrates that the responsiveness of the somatosensory system decreases significantly during speech production, and even milliseconds before speech is produced, which has implications for disorders such as stuttering and schizophrenia that involve pronounced deficits in the somatosensory system.
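The core comparison in this abstract is a detection rate computed per condition and probe time point; a hedged sketch of that tabulation is below. The column names and example values are hypothetical, not the study's data.

```python
# Hedged sketch: tabulate detection rates of a near-threshold lip stimulus by
# condition and probe time. Lower rates in the speaking rows than in the
# reading rows would indicate somatosensory attenuation.
import pandas as pd

# Hypothetical trial-level data (1 = stimulus reported as felt).
trials = pd.DataFrame({
    "condition": ["speaking", "speaking", "reading", "reading"] * 3,
    "probe_time_ms": [0, 300, 0, 300] * 3,   # time relative to the visual cue
    "detected": [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1],
})

rates = trials.groupby(["condition", "probe_time_ms"])["detected"].mean()
print(rates)
```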
Contributors: Mcguffin, Brianna Jean (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / Department of Psychology (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Previous research has shown that a loud acoustic stimulus can trigger an individual's prepared movement plan. This movement response is referred to as a startle-evoked movement (SEM). SEM has been observed in the stroke survivor population, where results have shown that SEM enhances single-joint movements that are usually performed with difficulty. While the presence of SEM in the stroke survivor population advances scientific understanding of movement capabilities following a stroke, published studies using the SEM phenomenon have examined only one joint. The ability of SEM to generate multi-jointed movements is understudied and consequently limits SEM as a potential therapy tool. To apply SEM as a therapy tool, however, the biomechanics of the arm in multi-jointed movement planning and execution must be better understood. Thus, the objective of our study was to evaluate whether SEM could elicit multi-joint reaching movements that were accurate in an unrestrained, two-dimensional workspace. Data were collected from ten subjects with no previous neck, arm, or brain injury. Each subject performed a reaching task to five targets that were equally spaced in a semicircle to create a two-dimensional workspace. The subject reached to each target following two non-startling acoustic cues: "Get Ready" and "Go". A loud acoustic stimulus was randomly substituted for the "Go" cue. We hypothesized that SEM is accessible and accurate for unrestricted multi-jointed reaching tasks in a functional workspace and is therefore independent of movement direction. Our results showed that SEM is possible in all five target directions. The probability of evoking SEM and the movement kinematics (i.e., total movement time, linear deviation, average velocity) to each target were not statistically different. Thus, we conclude that SEM is possible in a functional workspace and does not depend on where arm stability is maximized. Moreover, coordinated preparation and storage of a multi-jointed movement is indeed possible.
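The kinematic measures named in this abstract (total movement time, linear deviation, average velocity) can be computed from a single 2-D reach trajectory roughly as in the sketch below; the sampling rate and the synthetic trajectory are assumptions, and the study's actual processing may differ.

```python
# Sketch: three common reach kinematics computed from one 2-D trajectory.
import numpy as np

FS = 200  # motion-capture sampling rate in Hz (assumed)

def reach_kinematics(xy: np.ndarray) -> dict:
    """xy: (n_samples, 2) hand positions from movement onset to target contact."""
    n = len(xy)
    movement_time = (n - 1) / FS                   # seconds
    steps = np.diff(xy, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()
    avg_velocity = path_length / movement_time
    # Linear deviation: largest perpendicular distance from the straight
    # start-to-end line, a common measure of reach straightness.
    start, end = xy[0], xy[-1]
    line = end - start
    line /= np.linalg.norm(line)
    offsets = xy - start
    perp = offsets - np.outer(offsets @ line, line)
    linear_deviation = np.linalg.norm(perp, axis=1).max()
    return {"movement_time_s": movement_time,
            "avg_velocity": avg_velocity,
            "linear_deviation": linear_deviation}

# Synthetic curved reach from the origin toward a target 20 cm away.
t = np.linspace(0, 1, 120)[:, None]
trajectory = np.hstack([20 * t, 3 * np.sin(np.pi * t)])   # cm
print(reach_kinematics(trajectory))
```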
Contributors: Ossanna, Meilin Ryan (Author) / Honeycutt, Claire (Thesis director) / Schaefer, Sydney (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
The premise of the embodied cognition hypothesis is that cognitive processes require emotion, sensory, and motor systems in the brain, rather than using arbitrary symbols divorced from sensorimotor systems. The hypothesis explains many of the mechanisms of mental simulation or imagination and how they facilitate comprehension of concepts. Some forms of embodied processing can be measured using electroencephalography (EEG), in a particular waveform known as the mu rhythm (8-13 Hz) over the sensorimotor cortex of the brain. Power in the mu band is suppressed (or desynchronized) when an individual performs an action, as well as when the individual imagines performing the action; thus, mu suppression measures embodied imagination. An important question, however, is whether the sensorimotor cortex involvement while reading, as measured by mu suppression, is part of the comprehension of what is read or whether it arises after comprehension has taken place. To answer this question, participants first took the Gates-MacGinitie reading comprehension test. Then, mu suppression was measured while participants read experimental materials. The degree of mu suppression while reading verbs correlated 0.45 with the score on the Gates-MacGinitie test. This correlation strongly suggests that sensorimotor system involvement while reading action sentences is part of the comprehension process rather than an aftereffect.
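A minimal sketch of the two quantities this abstract relates, a mu-suppression index (8-13 Hz power during reading relative to a rest baseline) and its Pearson correlation with reading scores, is shown below. The sampling rate, baseline definition, and synthetic data are assumptions, not the study's pipeline.

```python
# Sketch: mu-band suppression index per participant and its correlation with
# reading comprehension scores. All data here are synthetic placeholders.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

FS = 500  # EEG sampling rate in Hz (assumed)

def mu_power(signal: np.ndarray) -> float:
    freqs, psd = welch(signal, fs=FS, nperseg=FS)
    band = (freqs >= 8) & (freqs <= 13)
    return float(psd[band].mean())

def mu_suppression(reading_eeg: np.ndarray, baseline_eeg: np.ndarray) -> float:
    # Negative values = suppression (less mu power while reading than at rest).
    return float(np.log(mu_power(reading_eeg) / mu_power(baseline_eeg)))

rng = np.random.default_rng(1)
suppression = np.array([mu_suppression(rng.standard_normal(FS * 4),
                                       rng.standard_normal(FS * 4))
                        for _ in range(20)])
reading_scores = rng.normal(500, 40, size=20)   # hypothetical test scores
r, p = pearsonr(suppression, reading_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
```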
Contributors: Marino, Annette Webb (Author) / Glenberg, Arthur (Thesis director) / Presson, Clark (Committee member) / Blais, Chris (Committee member) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The cocktail party effect describes the brain’s natural ability to attend to a specific voice or audio source in a crowded room. Researchers have recently attempted to recreate this ability in hearing aid design using brain signals from invasive electrocorticography electrodes. The present study aims to find neural signatures of auditory attention to achieve this same goal with noninvasive electroencephalographic (EEG) methods. Five human participants completed an auditory attention task. Participants listened to a series of four syllables followed by a fifth syllable (probe syllable). Participants were instructed to indicate whether or not the probe syllable was one of the four syllables played immediately before it. Trials of this task were separated into conditions in which the syllables were played in silence (Signal) and in background noise (Signal With Noise), and both behavioral and EEG data were recorded. EEG signals were analyzed with event-related potential and time-frequency analysis methods. The behavioral data indicated that participants performed better on the task during the “Signal” condition, which aligns with the challenges demonstrated in the cocktail party effect. The EEG analysis showed that the alpha band’s (9-13 Hz) inter-trial coherence could potentially indicate characteristics of the attended speech signal. These preliminary results suggest that EEG time-frequency analysis has the potential to reveal the neural signatures of auditory attention, which may allow for the design of a noninvasive, EEG-based hearing aid.
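Inter-trial coherence (ITC), the measure highlighted above, captures how consistent oscillatory phase is across trials; a hedged FFT-based sketch is shown below. The sampling rate, epoch length, and synthetic trials are assumptions, and the study's actual time-frequency pipeline may differ.

```python
# Sketch: inter-trial coherence (ITC) averaged over the alpha band (9-13 Hz),
# computed from simple per-trial FFT spectra of synthetic epochs.
import numpy as np

FS = 250        # sampling rate in Hz (assumed)
N_TRIALS = 40   # number of epochs (assumed)

rng = np.random.default_rng(2)
t = np.arange(FS) / FS                       # 1-second epochs
# Synthetic data: a 10 Hz component with partially consistent phase plus noise.
phases = rng.normal(0, 0.8, size=N_TRIALS)
epochs = np.array([np.cos(2 * np.pi * 10 * t + ph) + rng.standard_normal(FS)
                   for ph in phases])        # shape (trials, samples)

spectra = np.fft.rfft(epochs, axis=1)        # complex spectrum per trial
freqs = np.fft.rfftfreq(FS, d=1 / FS)
# ITC per frequency: magnitude of the mean unit-phase vector across trials.
itc = np.abs(np.mean(spectra / np.abs(spectra), axis=0))
alpha = (freqs >= 9) & (freqs <= 13)
print(f"alpha-band ITC: {itc[alpha].mean():.2f}")   # 1 = identical phase every trial
```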

Contributors: LaBine, Alyssa (Author) / Daliri, Ayoub (Thesis director) / Chao, Saraching (Committee member) / Barrett, The Honors College (Contributor) / College of Health Solutions (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2023-05
Description
Previous research has shown that auditory modulation may be affected by pure-tone stimuli played prior to the onset of speech production. In this experiment, we are examining the specificity of the auditory stimulus by implementing congruent and incongruent speech sounds in addition to a non-speech sound. Electroencephalography (EEG) data were recorded for eleven adult subjects in both speaking (speech planning) and silent reading (no speech planning) conditions. Data were analyzed manually as well as with MATLAB code written to combine data sets and calculate auditory modulation (suppression). Results showed that P200 modulation was larger for incongruent stimuli than for congruent stimuli; however, this was not the case for N100 modulation. The data for the pure tone could not be analyzed because the intensity of this stimulus was substantially lower than that of the speech stimuli. Overall, the results indicated that the P200 component plays a significant role in processing stimuli and determining their relevance, which is consistent with the role of the P200 component in high-level analysis of speech and perceptual processing. This experiment is ongoing, and we hope to obtain data from more subjects to support the current findings.
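The abstract's auditory modulation (suppression) measure is the difference in ERP component amplitude between the speaking and silent-reading conditions; the original analysis used MATLAB, but a hedged Python sketch of the comparison is below. The sampling rate, component latency windows, and synthetic ERPs are illustrative assumptions.

```python
# Sketch: N100 / P200 modulation as the speaking-minus-reading difference in
# mean ERP amplitude within an assumed latency window.
import numpy as np

FS = 1000                       # EEG sampling rate in Hz (assumed)
N100_WIN = (0.080, 0.120)       # seconds after stimulus onset (assumed)
P200_WIN = (0.160, 0.240)

def component_amplitude(erp: np.ndarray, window: tuple) -> float:
    """Mean ERP amplitude inside a latency window (erp starts at stimulus onset)."""
    i0, i1 = int(window[0] * FS), int(window[1] * FS)
    return float(erp[i0:i1].mean())

def modulation(erp_speaking: np.ndarray, erp_reading: np.ndarray, window: tuple) -> float:
    """Speaking-minus-reading amplitude for one component window."""
    return component_amplitude(erp_speaking, window) - component_amplitude(erp_reading, window)

# Synthetic averaged ERPs (300 ms, arbitrary microvolt units) standing in for real data.
rng = np.random.default_rng(3)
erp_read = rng.normal(0, 0.5, 300)
erp_speak = rng.normal(0, 0.5, 300)
print("N100 modulation:", modulation(erp_speak, erp_read, N100_WIN))
print("P200 modulation:", modulation(erp_speak, erp_read, P200_WIN))
```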
Contributors: Taylor, Megan Kathleen (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / School of Life Sciences (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Previous research demonstrated the overall efficacy of an embodied language intervention (EMBRACE) that taught pre-school children how to simulate (imagine) language in a heard narrative. However, EMBRACE was not effective for every child. To try to explain this variable response to the intervention, the video recordings made during the four-day intervention sessions were assessed and emotion was coded. Each session was coded for child emotions and for child-researcher emotions. The child-specific emotions were 1) engagement in the task, including level of participation in the activity; 2) motivation/attention to persist, complete the task, and stay focused; and 3) positive affect throughout the session. The child-researcher emotions were 1) engagement with each other, that is, how the child interacted with the researcher and in what context; and 2) the researcher’s positive affect, that is, how enthusiastic and encouraging the researcher was throughout the session. It was hypothesized that the effectiveness of the intervention would be directly correlated with the degree to which the child displayed positive emotions during the intervention. Thus, the analysis of these emotions should highlight differences between the control and EMBRACE groups and help to explain variability in the effectiveness of the intervention. The results did indicate that children in the EMBRACE group generally had significantly higher positive affect than the control group, but this did not predict the children's ability to effectively recall or to modulate the EEG variables in the post-test. The results also showed that children who interacted more with the researcher tended to be in the EMBRACE group, whereas children who interacted with the researcher less frequently were in the control group, showing that the EMBRACE intervention ended up being a more collaborative task.
Contributors: Ott, Lauren Ruth (Author) / Glenberg, Arthur (Thesis director) / Presson, Clark (Committee member) / Kupfer, Anne (Committee member) / School of Life Sciences (Contributor) / Sanford School of Social and Family Dynamics (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
The purpose of this study was to determine the effects of EEG neurofeedback training and vagus nerve stimulation on archery performance in elite recurve bow archers. Archers were assessed using performance measures including quality of feel, target ring score, heart rate, and electroencephalographic (EEG) measures. Results showed significant changes in quality ratings, heart rate, and brain activity. Though there was not enough evidence to show a significant change in target ring scores, the results indicated physiological changes that could lead to performance score changes with consistent use.
Contributors: Rodriguez, Eleanor Marie (Author) / Hansen, Whitney (Thesis director) / Crews, Debbie (Committee member) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12
Description
Transcranial current stimulation (TCS) is a long-established method of modulating neuronal activity in the brain. One type of this stimulation, transcranial alternating current stimulation (tACS), can entrain endogenous oscillations and produce behavioral change. In the present study, we used five stimulation conditions: tACS at three different frequencies (6 Hz, 12 Hz, and 22 Hz), transcranial random noise stimulation (tRNS), and a no-stimulation sham condition. In all stimulation conditions, we recorded electroencephalographic data to investigate the link between different frequencies of tACS and their effects on brain oscillations. We recruited 12 healthy participants. Each participant completed 30 trials of the stimulation conditions. In a given trial, we recorded brain activity for 10 seconds, stimulated for 12 seconds, and recorded an additional 10 seconds of brain activity. The difference between the average oscillation power before and after stimulation indicated the change in oscillation amplitude due to the stimulation. Our results showed that the stimulation conditions entrained brain activity in a subgroup of participants.
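The outcome measure described here, the change in oscillation power between the 10 s recorded before and after each stimulation period, might be computed roughly as in the sketch below; the sampling rate, spectral settings, and synthetic signals are assumptions made for illustration.

```python
# Sketch: power change at the stimulation frequency between pre- and
# post-stimulation EEG segments (positive = possible entrainment effect).
import numpy as np
from scipy.signal import welch

FS = 500  # EEG sampling rate in Hz (assumed)

def power_at(signal: np.ndarray, freq: float, half_width: float = 1.0) -> float:
    """Average spectral power within +/- half_width Hz of the target frequency."""
    freqs, psd = welch(signal, fs=FS, nperseg=2 * FS)
    band = (freqs >= freq - half_width) & (freqs <= freq + half_width)
    return float(psd[band].mean())

def entrainment_effect(pre: np.ndarray, post: np.ndarray, stim_freq: float) -> float:
    """Post-minus-pre power at the stimulation frequency."""
    return power_at(post, stim_freq) - power_at(pre, stim_freq)

# Synthetic example for one 12 Hz trial: the post segment carries a slightly
# larger 12 Hz component than the pre segment.
rng = np.random.default_rng(4)
t = np.arange(10 * FS) / FS
pre = 0.5 * np.sin(2 * np.pi * 12 * t) + rng.standard_normal(t.size)
post = 0.8 * np.sin(2 * np.pi * 12 * t) + rng.standard_normal(t.size)
print(f"12 Hz power change: {entrainment_effect(pre, post, 12.0):.4f}")
```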
Contributors: Chernicky, Jacob Garrett (Author) / Daliri, Ayoub (Thesis director) / Liss, Julie (Committee member) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05