Description
The purpose of this study is to analyze the stereotypes surrounding four wind instruments (flutes, oboes, clarinets, and saxophones) and the ways in which those stereotypes propagate through various levels of musical professionalism in Western culture. To determine what these stereotypes might entail, several thousand social media and blog posts were analyzed, and direct quotations detailing the perceived stereotypical personality profiles for each of the four instruments were collected. From these, the three most commonly mentioned characteristics were isolated for each instrument group as follows: female gender, femininity, and giggliness for flutists; intelligence, studiousness, and a particular demographic profile (specifically, being an Asian male) for clarinetists; quirkiness, eccentricity, and being seen as a misfit for oboists; and overconfidence, attention-seeking behavior, and coolness for saxophonists. From these traits, a survey was drafted that asked participating college-aged musicians multiple-choice, opinion-scale, and short-answer questions gauging how strongly they agreed or disagreed with each trait as a description of the instrument from which it was derived. Their responses were then analyzed to determine how much correlation existed between the researched characteristics and the opinions of modern musicians. From these results, it was determined that 75% of the traits isolated for a particular instrument were, in fact, recognized as true in the survey data, demonstrating that the stereotypes do exist and appear to be widely recognizable across many age groups, locations, and levels of musical skill. Further, 89% of participants acknowledged that the instrument they play has a certain stereotype associated with it, but only 38% identified with that profile.
Overall, it was concluded that these stereotypes, which are overwhelmingly negative and gendered in nature, are indeed propagated, but musicians do not appear to want to identify with them; the stereotypes reflect an archaic and immature outlook that does not correspond to the trends observed in modern professional music.
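The agreement analysis described in the abstract can be sketched as a simple tally: a trait counts as "recognized" when its mean rating clears the scale midpoint. The trait names, response values, and threshold below are illustrative assumptions, not the study's actual data or scoring rule.

```python
from statistics import mean

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree)
# for one researched trait per instrument; all values are illustrative only.
responses = {
    "flute: giggliness":         [4, 5, 3, 4, 2],
    "clarinet: studiousness":    [5, 4, 4, 5, 3],
    "oboe: quirkiness":          [3, 4, 5, 4, 4],
    "saxophone: overconfidence": [2, 2, 3, 1, 2],
}

# A trait counts as "recognized" when its mean rating exceeds the scale midpoint (3).
recognized = {trait: mean(scores) > 3 for trait, scores in responses.items()}
share = sum(recognized.values()) / len(recognized)
print(f"{share:.0%} of traits recognized")  # 3 of 4 traits -> 75%
```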
Contributors: Allison, Lauren Nicole (Author) / Bhattacharjya, Nilanjana (Thesis director) / Ankeny, Casey (Committee member) / School of Life Sciences (Contributor) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Prior expectations can bias evaluative judgments of sensory information. We show that information about a performer's status can bias the evaluation of musical stimuli, reflected by differential activity of the ventromedial prefrontal cortex (vmPFC). Moreover, we demonstrate that decreased susceptibility to this confirmation bias is (a) accompanied by the recruitment of and (b) correlated with the white-matter structure of the executive control network, particularly related to the dorsolateral prefrontal cortex (dlPFC). By using long-duration musical stimuli, we were able to track the initial biasing, subsequent perception, and ultimate evaluation of the stimuli, examining the full evolution of these biases over time. Our findings confirm the persistence of confirmation bias effects even when ample opportunity exists to gather information about true stimulus quality, and underline the importance of executive control in reducing bias.
Contributors: Aydogan, Goekhan (Co-author, Committee member) / Flaig, Nicole (Co-author) / Larg, Edward W. (Co-author) / Margulis, Elizabeth Hellmuth (Co-author) / McClure, Samuel (Co-author, Thesis director) / Nagishetty Ravi, Srekar Krishna (Co-author) / Harrington Bioengineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
Biofeedback music is the integration of physiological signals with audible sound for aesthetic purposes, in which an individual’s mental state corresponds to musical output. This project looks into how sounds can be drawn from the meditative and attentive states of the brain using the MindWave Mobile EEG biosensor from NeuroSky. With the MindWave and an Arduino microcontroller, sonic output is attained by feeding the data collected by the MindWave into code that, in real time, translates it into user-constructed sound. The input, scaled from 0 to 100, measures the ‘attentive’ state of the mind by observing alpha waves, and this information is passed to the microcontroller. The sound output comes from routing this signal to the Musical Instrument Shield and varying the musical tonality with different chords and note delays. The manipulation of alpha states highlights the performer’s control, or lack thereof, and touches on the question of how much control over the output there really is, much like the experimentalist Alvin Lucier explored with his brainwave music.
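The attention-to-sound mapping described above can be sketched in Python; the actual project ran on an Arduino with the Musical Instrument Shield, so the thresholds, chord names, and delay formula below are illustrative assumptions, not the project's code.

```python
# A minimal sketch of mapping a MindWave attention value (0-100) to sound
# parameters. Chord choices and the delay formula are illustrative only.
CHORDS = ["A minor", "C major", "G major", "E major"]  # calm -> alert

def map_attention(attention: int) -> tuple[str, int]:
    """Translate an attention reading into a chord and a note delay in ms
    (higher attention -> brighter chord and faster notes)."""
    attention = max(0, min(100, attention))            # clamp to sensor range
    chord = CHORDS[min(attention // 25, len(CHORDS) - 1)]
    delay_ms = 500 - 4 * attention                     # 500 ms at rest, 100 ms at peak
    return chord, delay_ms

print(map_attention(80))  # ('E major', 180)
```

In the hardware version, the delay value would pace note-on messages sent to the Musical Instrument Shield rather than being returned from a function.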
Contributors: Quach, Andrew Duc (Author) / Helms Tillery, Stephen (Thesis director) / Feisst, Sabine (Committee member) / Barrett, The Honors College (Contributor) / Herberger Institute for Design and the Arts (Contributor) / Harrington Bioengineering Program (Contributor)
Created: 2014-05
Description

The distinctions between the neural resources supporting speech and music comprehension have long been studied through contexts like aphasia and amusia, as well as neuroimaging in control subjects. While many models have emerged to describe the different networks uniquely recruited in response to speech and music stimuli, many questions remain, especially regarding left-hemispheric strokes that disrupt typical speech-processing brain networks and how musical training might affect the brain networks recruited for speech after a stroke. Thus, our study aims to explore some of these questions. We collected task-based functional MRI data from 12 subjects who had previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning to examine the differences in brain activations in response to speech and music. We hypothesized that speech stimuli would activate right frontal regions, and that music stimuli would activate the right superior temporal regions more than speech (neither finding seen in previous studies of control subjects), as a result of functional changes in the brain following the left-hemispheric stroke and particularly the loss of functionality in the left temporal lobe. We also hypothesized that the music stimuli would cause stronger activation in the right temporal cortex for participants who had musical training than for those who had not. Our results indicate that speech stimuli, compared to rest, activated the anterior superior temporal gyrus bilaterally as well as the right inferior frontal lobe. Music stimuli, compared to rest, did not activate the brain bilaterally, but rather activated only the right middle temporal gyrus.
When the group analysis was performed with music experience as a covariate, we found that musical training did not affect activations to music stimuli specifically, but there was greater right-hemisphere activation in several regions in response to speech stimuli as a function of more years of musical training. These results agree with our hypotheses regarding functional changes in the brain but conflict with our hypothesis about musical expertise. Overall, the study has generated interesting starting points for further exploration of how musical neural resources may be recruited for speech processing after damage to typical language networks.
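The covariate analysis described above can be sketched as a simple regression of a region's activation on years of musical training across subjects. Real group analyses fit whole-brain fMRI models; the subject values below are made up for illustration only.

```python
import numpy as np

# Illustrative: regress right-hemisphere activation (beta values) on
# years of musical training across six hypothetical subjects.
years_training = np.array([0, 2, 5, 8, 10, 12], dtype=float)
activation = np.array([0.1, 0.3, 0.5, 0.9, 1.1, 1.2])  # made-up betas

# Design matrix: intercept column plus the training covariate.
X = np.column_stack([np.ones_like(years_training), years_training])
betas, *_ = np.linalg.lstsq(X, activation, rcond=None)
print(f"slope: {betas[1]:.3f} activation units per year of training")
```

A positive slope for the training covariate would correspond to the reported pattern of greater right-hemisphere activation with more years of musical training.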

Contributors: Karthigeyan, Vishnu R (Author) / Rogalsky, Corianne (Thesis director) / Daliri, Ayoub (Committee member) / Harrington Bioengineering Program (Contributor) / School of Life Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05