Theses and Dissertations
The distinctions between the neural resources supporting speech and music comprehension have long been studied in contexts such as aphasia and amusia, and with neuroimaging in control subjects. While many models describe the networks uniquely recruited by speech and music stimuli, open questions remain, especially regarding left-hemispheric strokes that disrupt typical speech-processing networks, and regarding how musical training might affect the brain networks recruited for speech after a stroke. Our study addresses these questions. We collected task-based functional MRI data from 12 subjects who had previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning so that we could examine differences in brain activation in response to speech and music. We hypothesized that speech stimuli would activate right frontal regions, and that music stimuli would activate right superior temporal regions more strongly than speech (neither finding has been reported in previous studies of control subjects), reflecting functional changes in the brain following the left-hemispheric stroke, particularly the loss of function in the left temporal lobe. We also hypothesized that music stimuli would produce stronger activation in right temporal cortex in participants with musical training than in those without. Our results indicate that speech stimuli, compared to rest, activated the anterior superior temporal gyrus bilaterally as well as the right inferior frontal lobe. Music stimuli, compared to rest, produced no bilateral activation, activating only the right middle temporal gyrus.
When the group analysis was performed with musical experience as a covariate, we found that musical training did not affect activations to music stimuli specifically; however, speech stimuli elicited greater right-hemisphere activation in several regions as a function of years of musical training. These results agree with our hypotheses about functional changes in the brain but conflict with our hypothesis about musical expertise. Overall, the study provides promising starting points for further exploration of how musical neural resources may be recruited for speech processing after damage to typical language networks.
This thesis is a tutorial for EEGLAB, a MATLAB toolbox with a graphical user interface for EEG analysis. Cognitive and neural correlates of analytical and insight processes in the CRAT were evaluated and analyzed using EEG. It was hypothesized that different EEG signals would be measured for analytical versus insight problem solving, observed primarily in gamma-wave production. The data were interpreted using EEGLAB, which allows psychological processes to be quantified from physiological responses. I have written a tutorial showing how to process the EEG signal through filtering, epoch extraction, artifact detection, independent component analysis, and the production of a time–frequency plot. This project combined my interest in psychology with my knowledge of engineering and expanded my knowledge of bioinstrumentation.
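EEGLAB itself is MATLAB software, so its workflow is not reproduced here; still, the core steps named in this tutorial (epoch extraction, artifact detection, and band power as the basis of a time–frequency view) can be sketched in plain Python. Everything below — the function names, the amplitude threshold, and the naive DFT standing in for a proper time–frequency transform — is an illustrative assumption, not EEGLAB code; independent component analysis is omitted because it requires more linear algebra than a short sketch allows.

```python
import math

def extract_epochs(signal, event_samples, pre, post):
    """Cut fixed-length epochs [event - pre, event + post) around each event index."""
    epochs = []
    for ev in event_samples:
        if ev - pre >= 0 and ev + post <= len(signal):
            epochs.append(signal[ev - pre:ev + post])
    return epochs

def reject_artifacts(epochs, threshold):
    """Drop epochs whose peak absolute amplitude exceeds the threshold
    (a simple amplitude-based artifact-detection rule)."""
    return [ep for ep in epochs if max(abs(x) for x in ep) <= threshold]

def band_power(epoch, fs, f_lo, f_hi):
    """Mean power in the [f_lo, f_hi] Hz band via a naive DFT.
    fs is the sampling rate in Hz."""
    n = len(epoch)
    powers = []
    for k in range(1, n // 2):
        f = k * fs / n  # frequency of DFT bin k
        if f_lo <= f <= f_hi:
            re = sum(epoch[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(epoch[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            powers.append((re * re + im * im) / n)
    return sum(powers) / len(powers) if powers else 0.0
```

Applied to a sinusoid in the gamma range, `band_power` concentrates its result in the matching band, which is the basic quantity a time–frequency plot shows slice by slice.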
The cocktail party effect describes the brain’s natural ability to attend to a specific voice or audio source in a crowded room. Researchers have recently attempted to recreate this ability in hearing aid design using brain signals from invasive electrocorticography electrodes. The present study aims to find neural signatures of auditory attention in order to achieve the same goal with noninvasive electroencephalographic (EEG) methods. Five human participants completed an auditory attention task: they listened to a series of four syllables followed by a fifth syllable (the probe syllable) and indicated whether the probe syllable was one of the four syllables played immediately before it. Trials were divided between conditions in which the syllables were played in silence (Signal) and in background noise (Signal With Noise), and both behavioral and EEG data were recorded. EEG signals were analyzed with event-related potential and time-frequency analysis methods. The behavioral data indicated that participants performed better during the Signal condition, consistent with the challenges demonstrated by the cocktail party effect. The EEG analysis showed that inter-trial coherence in the alpha band (9-13 Hz) could potentially indicate characteristics of the attended speech signal. These preliminary results suggest that EEG time-frequency analysis has the potential to reveal neural signatures of auditory attention, which may allow the design of a noninvasive, EEG-based hearing aid.
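The alpha-band inter-trial coherence mentioned above is, at any one time-frequency point, the magnitude of the average unit phase vector across trials: it ranges from about 0 when phase is random across trials to 1 when trials are perfectly phase-locked. A minimal sketch, assuming the per-trial phase angles have already been extracted (e.g., by a wavelet or short-time Fourier transform); the function name is hypothetical:

```python
import cmath

def inter_trial_coherence(phases):
    """Inter-trial coherence (ITC) across trials at one time-frequency point.

    phases: list of phase angles in radians, one per trial.
    Returns the magnitude of the mean unit phase vector, in [0, 1]:
    ~1 when trials are phase-locked to the stimulus, ~0 when phase is random.
    """
    mean_vector = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(mean_vector)
```

Because each trial contributes only its phase (a unit vector), ITC is insensitive to amplitude differences across trials, which is what makes it a useful complement to band power.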
Although millions of people live with a condition as debilitating as migraines, there is no way to diagnose a migraine before it occurs. In this study, a nitroglycerin-induced migraine model was used in rats to study awake brain activity during the migraine state. Searching for a biomarker of the migraine state, we found multiple deviations in EEG activity across frequency bands. First, there was a clear decrease in power in the delta, beta, alpha, and theta bands. There was also a slight increase in power in the gamma and high-frequency bands, which is consistent with other pain-related studies [12]. Additionally, we examined whether this deviation was accompanied by a decreased pain threshold, but concluded that more data analysis is needed to eliminate the multiple potential noise sources in each dataset. Overall, this study found a clear change in brain activity, and a more detailed analysis will narrow down what this change could mean and how it relates to the migraine state.
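The further analysis called for above could include, for instance, a nonparametric check of whether band power genuinely differs between baseline and migraine-state epochs. The permutation test below is a generic illustration under assumed inputs (per-epoch band-power values for two conditions), not the analysis performed in the study:

```python
import random

def permutation_test(group_a, group_b, n_perm=5000, seed=0):
    """Two-sided permutation test for a difference in group means.

    group_a, group_b: per-epoch band-power values for two conditions
    (e.g., baseline vs. migraine state). Returns an approximate p-value:
    the fraction of random label shuffles whose absolute mean difference
    is at least as large as the observed one.
    """
    rng = random.Random(seed)
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # reassign condition labels at random
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / len(group_b))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0
```

A permutation test makes no distributional assumptions, which suits small, noisy samples of EEG band power better than a t-test whose normality assumption may not hold.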