This collection includes both ASU Theses and Dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Displaying 1 - 9 of 9

Description
Biofeedback music is the integration of physiological signals with audible sound for aesthetic purposes, in which an individual's mental state corresponds to musical output. This project examines how sounds can be drawn from the meditative and attentive states of the brain using the MindWave Mobile EEG biosensor from NeuroSky. With the MindWave and an Arduino microcontroller, sonic output is attained by feeding the data collected by the MindWave into code that, in real time, translates it into user-constructed sound output. The input is scaled from 0 to 100, measuring the 'attentive' state of the mind by observing alpha waves, and this information is distributed to the microcontroller. Sound output is produced by routing this signal to the Musical Instrument Shield and varying the musical tonality with different chords and note delays. The manipulation of alpha states highlights the performer's control, or lack thereof, and touches on the question of how much control over the output there really is, much as the experimentalist Alvin Lucier demonstrated with his concepts in brainwave music.
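The scaling step described above — a 0-100 attention reading binned onto a set of notes — can be sketched in Python. The note set, bin logic, and function name here are illustrative assumptions for exposition; the project itself runs compiled code on the Arduino.

```python
def attention_to_midi(attention, notes=(60, 64, 67, 72)):
    """Map a 0-100 attention reading onto one note of a chord.

    `attention` is the MindWave's scaled 0-100 value; the C-major
    note set and equal-width bins are illustrative, not taken from
    the original project.
    """
    if not 0 <= attention <= 100:
        raise ValueError("attention must be in [0, 100]")
    # Divide the 0-100 range into as many equal bins as there are notes.
    index = attention * len(notes) // 101
    return notes[index]
```

A calmer (lower-attention) state selects a lower note; a fully attentive state selects the top of the chord.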
ContributorsQuach, Andrew Duc (Author) / Helms Tillery, Stephen (Thesis director) / Feisst, Sabine (Committee member) / Barrett, The Honors College (Contributor) / Herberger Institute for Design and the Arts (Contributor) / Harrington Bioengineering Program (Contributor)
Created2014-05
Description
Brain-computer interface technology establishes communication between the brain and a computer, allowing users to control devices, machines, or virtual objects using their thoughts. This study investigates optimal conditions to facilitate learning to operate this interface. It compares two biofeedback methods, which dictate the relationship between brain activity and the movement of a virtual ball in a target-hitting task. Preliminary results indicate that a method in which the position of the virtual object directly relates to the amplitude of brain signals is most conducive to success. In addition, this research explores learning in the context of neural signals during training with a BCI task. Specifically, it investigates whether subjects can adapt to parameters of the interface without guidance. This experiment prompts subjects to modulate brain signals spectrally, spatially, and temporally, as well as differentially to discriminate between two different targets. However, subjects are not given knowledge regarding these desired changes, nor are they given instruction on how to move the virtual ball. Preliminary analysis of signal trends suggests that some successful participants are able to adapt brain wave activity in certain pre-specified locations and frequency bands over time in order to achieve control. Future studies will further explore these phenomena, and future BCI projects will be advised by these methods, which will give insight into the creation of more intuitive and reliable BCI technology.
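The contrast between biofeedback methods can be sketched as two update rules for the virtual ball: a direct position mapping (the method the abstract reports as most conducive to success) versus an integrative velocity mapping. Both functions, their parameters, and the baseline term are illustrative assumptions, not the study's actual control laws.

```python
def position_control(amplitude, gain=1.0):
    """Direct mapping: ball position tracks instantaneous signal amplitude."""
    return gain * amplitude

def velocity_control(position, amplitude, baseline, gain=1.0, dt=0.05):
    """Integrative mapping: amplitude above a baseline moves the ball;
    amplitude at baseline leaves it where it is."""
    return position + gain * (amplitude - baseline) * dt
```

Under position control the ball snaps to wherever the signal is; under velocity control the user must sustain a deviation from baseline to drift the ball toward a target.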
ContributorsLancaster, Jenessa Mae (Co-author) / Appavu, Brian (Co-author) / Wahnoun, Remy (Co-author, Committee member) / Helms Tillery, Stephen (Thesis director) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor) / Department of Psychology (Contributor)
Created2014-05
Description

The distinctions between the neural resources supporting speech and music comprehension have long been studied using contexts like aphasia and amusia, and neuroimaging in control subjects. While many models have emerged to describe the different networks uniquely recruited in response to speech and music stimuli, there are still many questions, especially regarding left-hemispheric strokes that disrupt typical speech-processing brain networks, and how musical training might affect the brain networks recruited for speech after a stroke. Thus, our study aims to explore some questions related to the above topics. We collected task-based functional MRI data from 12 subjects who previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning to examine the differences in brain activations in response to speech and music. We hypothesized that speech stimuli would activate right frontal regions, and that music stimuli would activate the right superior temporal regions more than speech (both findings not seen in previous studies of control subjects), as a result of functional changes in the brain following the left-hemispheric stroke, particularly the loss of functionality in the left temporal lobe. We also hypothesized that the music stimuli would cause a stronger activation in right temporal cortex for participants who have had musical training than for those who have not. Our results indicate that speech stimuli compared to rest activated the anterior superior temporal gyrus bilaterally and activated the right inferior frontal lobe. Music stimuli compared to rest did not activate the brain bilaterally, but rather only activated the right middle temporal gyrus.
When the group analysis was performed with music experience as a covariate, we found that musical training did not affect activations to music stimuli specifically, but there was greater right hemisphere activation in several regions in response to speech stimuli as a function of more years of musical training. The results of the study agree with our hypotheses regarding the functional changes in the brain, but they conflict with our hypothesis about musical expertise. Overall, the study has generated interesting starting points for further explorations of how musical neural resources may be recruited for speech processing after damage to typical language networks.

ContributorsKarthigeyan, Vishnu R (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Harrington Bioengineering Program (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created2021-05
Description
Human potential is characterized by our ability to think flexibly and develop novel solutions to problems. In cognitive neuroscience, problem solving is studied using various tasks. For example, IQ can be tested using the RAVEN, which measures abstract reasoning; analytical problem solving can be tested using algebra; and insight can be tested using a nine-dot test. Our class of problem-solving tasks blends analytical and insight processes, which can be captured by measuring multiply-constrained problem solving (MCPS). MCPS occurs when each individual problem has several solutions, but only one correct solution satisfies a group of simultaneous problems. The most common test for MCPS is known as the CRAT, or compound remote associate task. For example, given the three target words "water, skate, and cream," there are many compound associates that can be paired with each of the target words individually (e.g., salt-water, roller-skate, whipped-cream), but only one that works with all three (ice-water, ice-skate, ice-cream).
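The multiply-constrained structure of a CRAT item can be sketched as a small search: a candidate solves the item only if it forms a valid compound with all three targets. The tiny lexicon and function names below are illustrative assumptions, not an actual CRAT stimulus set.

```python
# A tiny illustrative lexicon of valid compounds (not a real CRAT word list).
VALID_COMPOUNDS = {
    "saltwater", "rollerskate", "whippedcream",
    "icewater", "iceskate", "icecream",
}

def solve_crat(targets, candidates, compounds=VALID_COMPOUNDS):
    """Return every candidate that forms a compound with ALL target words.

    Each candidate may pair with some targets individually, but the
    multiply-constrained answer must satisfy all three at once.
    """
    return [c for c in candidates
            if all(c + t in compounds for t in targets)]
```

For the "water, skate, cream" item, "salt", "roller", and "whipped" each satisfy only one constraint, while "ice" satisfies all three.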
This thesis is a tutorial for a MATLAB user interface known as EEGLAB. Cognitive and neural correlates of analytical and insight processes in the CRAT were evaluated and analyzed using EEG. It was hypothesized that different EEG signals would be measured for analytical versus insight problem solving, primarily observed in gamma wave production. The data were interpreted using EEGLAB, which allows psychological processes to be quantified based on physiological response. I have written a tutorial showing how to process the EEG signal through filtering, epoch extraction, artifact detection, independent component analysis, and the production of a time–frequency plot. This project combined my interest in psychology with my knowledge of engineering and expanded my knowledge of bioinstrumentation.
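The pipeline steps named above (filtering, epoch extraction, artifact detection, time-frequency analysis) can be sketched outside EEGLAB with NumPy/SciPy. This is a minimal single-channel analogue under assumed defaults — the 1-40 Hz band, 100 µV rejection threshold, and function names are illustrative, and the ICA step is omitted for brevity.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs, events, epoch_len, reject_uv=100.0):
    """Band-pass filter, epoch, and artifact-reject a 1-D EEG trace (µV).

    `events` holds sample indices of epoch onsets; epochs whose peak
    amplitude exceeds `reject_uv` are discarded as artifacts.
    """
    b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, eeg)  # zero-phase filtering
    epochs = [filtered[s:s + epoch_len] for s in events
              if s + epoch_len <= len(filtered)]
    return [e for e in epochs if np.max(np.abs(e)) < reject_uv]

def morlet_power(epoch, fs, freq, n_cycles=7):
    """Time course of power at one frequency, via a complex Morlet wavelet.

    Output is proportional to power (the wavelet is not unit-normalized);
    a time-frequency plot stacks these rows over a range of frequencies.
    """
    t = np.arange(-0.5, 0.5, 1 / fs)
    sigma = n_cycles / (2 * np.pi * freq)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    return np.abs(np.convolve(epoch, wavelet, mode="same")) ** 2
```

Stacking `morlet_power` over, say, 30-80 Hz would give the gamma-band portion of the time-frequency plot the tutorial produces in EEGLAB.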
ContributorsCobban, Morgan Elizabeth (Author) / Brewer, Gene (Thesis director) / Ellis, Derek (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
Prior expectations can bias evaluative judgments of sensory information. We show that information about a performer's status can bias the evaluation of musical stimuli, reflected by differential activity of the ventromedial prefrontal cortex (vmPFC). Moreover, we demonstrate that decreased susceptibility to this confirmation bias is (a) accompanied by the recruitment of and (b) correlated with the white-matter structure of the executive control network, particularly related to the dorsolateral prefrontal cortex (dlPFC). By using long-duration musical stimuli, we were able to track the initial biasing, subsequent perception, and ultimate evaluation of the stimuli, examining the full evolution of these biases over time. Our findings confirm the persistence of confirmation bias effects even when ample opportunity exists to gather information about true stimulus quality, and underline the importance of executive control in reducing bias.
ContributorsAydogan, Goekhan (Co-author, Committee member) / Flaig, Nicole (Co-author) / Larg, Edward W. (Co-author) / Margulis, Elizabeth Hellmuth (Co-author) / McClure, Samuel (Co-author, Thesis director) / Nagishetty Ravi, Srekar Krishna (Co-author) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2018-05
Description
The purpose of this study is to analyze the stereotypes surrounding four wind instruments (flutes, oboes, clarinets, and saxophones) and the ways in which those stereotypes propagate through various levels of musical professionalism in Western culture. To determine what these stereotypes might entail, several thousand social media and blog posts were analyzed, and direct quotations detailing the perceived stereotypical personality profiles for each of the four instruments were collected. From these, the three most commonly mentioned characteristics were isolated for each instrument group as follows: female gender, femininity, and giggliness for flutists; intelligence, studiousness, and demographics (specifically being an Asian male) for clarinetists; quirkiness, eccentricity, and being seen as a misfit for oboists; and overconfidence, attention-seeking behavior, and coolness for saxophonists. From these traits, a survey was drafted that asked participating college-aged musicians multiple-choice, opinion-scale, and short-answer questions gauging how much they agreed or disagreed with each trait describing the instrument from which it was derived. Their responses were then analyzed to determine how much correlation existed between the researched characteristics and the opinions of modern musicians. From these results, it was determined that 75% of the traits isolated for a particular instrument were, in fact, recognized as true in the survey data, demonstrating that the stereotypes do exist and seem to be widely recognizable across many age groups, locations, and levels of musical skill. Further, 89% of participants admitted that the instrument they play has a certain stereotype associated with it, but only 38% identified with that profile.
Overall, it was concluded that these stereotypes, which are overwhelmingly negative and gendered by nature, are indeed propagated, but musicians do not appear to want to identify with them, and the stereotypes reflect a more archaic and immature sensibility that does not correspond to the trends observed in modern, professional music.
ContributorsAllison, Lauren Nicole (Author) / Bhattacharjya, Nilanjana (Thesis director) / Ankeny, Casey (Committee member) / School of Life Sciences (Contributor) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
It is unknown which regions of the brain are most or least active for golfers during a peak performance state (Flow State or "The Zone") on the putting green. To address this issue, electroencephalographic (EEG) recordings were taken from 10 elite golfers while they performed a putting drill consisting of nine putts, each five feet from the hole, spaced uniformly around it. Data were collected at three time periods: before, during, and after the putt. Galvanic Skin Response (GSR) measurements were also recorded for each subject. Three of the subjects performed a visualization of the same putting drill, and their brain waves and GSR were recorded and then compared with their actual performance of the drill. EEG data in the theta (4–7 Hz) and alpha (7–13 Hz) bandwidths at 11 different locations across the head were analyzed. Relative power spectrum was used to quantify the data. From the results, it was found that there is a higher magnitude of power in both the theta and alpha bandwidths for a missed putt in comparison to a made putt (p < 0.05). It was also found that there is a higher average power in the right hemisphere for made putts. There was not a higher power in the occipital region of the brain, nor was there a lower power level in the frontal cortical region, during made putts. The hypothesis that there would be a difference between the mean power levels in performance compared to visualization was also supported.
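The relative power measure used above — band power expressed as a fraction of total power — can be sketched with a Welch spectral estimate. The band edges follow the abstract; the function name, 2-second segments, and 1-30 Hz total band are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def relative_band_power(signal, fs, band, total=(1.0, 30.0)):
    """Fraction of total spectral power falling inside `band` (Hz).

    Uses Welch's method with 2-second segments; `total` defines the
    denominator band over which power is normalized.
    """
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)

    def band_sum(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return psd[mask].sum()

    return band_sum(*band) / band_sum(*total)
```

Comparing this quantity between made and missed putts, per electrode and per band (theta 4-7 Hz, alpha 7-13 Hz), mirrors the analysis the abstract describes.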
ContributorsCarpenter, Andrea (Co-author) / Hool, Nicholas (Co-author) / Muthuswamy, Jitendran (Thesis director) / Crews, Debbie (Committee member) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description

The cocktail party effect describes the brain’s natural ability to attend to a specific voice or audio source in a crowded room. Researchers have recently attempted to recreate this ability in hearing aid design using brain signals from invasive electrocorticography electrodes. The present study aims to find neural signatures of

The cocktail party effect describes the brain’s natural ability to attend to a specific voice or audio source in a crowded room. Researchers have recently attempted to recreate this ability in hearing aid design using brain signals from invasive electrocorticography electrodes. The present study aims to find neural signatures of auditory attention to achieve this same goal with noninvasive electroencephalographic (EEG) methods. Five human participants completed an auditory attention task. Participants listened to a series of four syllables followed by a fifth syllable (probe syllable). Participants were instructed to indicate whether or not the probe syllable was one of the four syllables played immediately before it. Trials of this task were separated into conditions playing the syllables in silence (Signal) and in background noise (Signal With Noise), and both behavioral and EEG data were recorded. EEG signals were analyzed with event-related potential and time-frequency analysis methods. The behavioral data indicated that participants performed better on the task during the Signal condition, which aligns with the challenges demonstrated in the cocktail party effect. The EEG analysis showed that the alpha band’s (9-13 Hz) inter-trial coherence could potentially indicate characteristics of the attended speech signal. These preliminary results suggest that EEG time-frequency analysis has the potential to reveal the neural signatures of auditory attention, which may allow for the design of a noninvasive, EEG-based hearing aid.
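Inter-trial coherence, the measure highlighted above, is the length of the mean unit phase vector across trials: near 1 when trials are phase-locked to the stimulus, near 0 when phase is random. The simplified sketch below takes phase from a single FFT bin rather than a full time-frequency decomposition; the function name and that simplification are assumptions, not the study's method.

```python
import numpy as np

def inter_trial_coherence(trials, fs, freq):
    """ITC at one frequency across trials of shape (n_trials, n_samples).

    Phase is read from the FFT bin nearest `freq`; ITC is the magnitude
    of the average unit phase vector over trials (0 = random, 1 = locked).
    """
    trials = np.asarray(trials, dtype=float)
    spectra = np.fft.rfft(trials, axis=1)
    bin_idx = int(round(freq * trials.shape[1] / fs))
    phases = np.angle(spectra[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))
```

Computing this in the 9-13 Hz alpha band, per condition, would yield the kind of coherence contrast the abstract reports.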

ContributorsLaBine, Alyssa (Author) / Daliri, Ayoub (Thesis director) / Chao, Saraching (Committee member) / Barrett, The Honors College (Contributor) / College of Health Solutions (Contributor) / Harrington Bioengineering Program (Contributor)
Created2023-05
Description

Although millions of people live with a disease as debilitating as migraine, there is no way to diagnose attacks before they occur. In this study, a nitroglycerin-induced migraine model in rats is used to study awake brain activity during the migraine state. In searching for a biomarker of the migraine state, we found multiple deviations in EEG brain activity across different bands. First, there was a clear decrease in power in the delta, beta, alpha, and theta bands. A slight increase in power in the gamma and high-frequency bands was also found, which is consistent with other pain-related studies [12]. Additionally, we looked for a decreased pain threshold associated with this deviation, and concluded that more data analysis is needed to eliminate the multiple potential sources of noise throughout each dataset. This study did find a clear change in brain activity, but a more detailed analysis will narrow down what this change could mean and how it impacts the migraine state.

ContributorsStrambi, McKenna (Author) / Muthuswamy, Jitendran (Thesis director) / Greger, Bradley (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor)
Created2023-05