Matching Items (8)

Description
Waltz, is a collection of poems written to play along the boundaries between sound, language, and meaning. As a vehicle for exploration, the poems in Waltz, commandeer themes of nostalgia, love, loss, and abstraction, all of which build up and break each other down to create something of a nonlinear narrative, and concomitant sketch of the poet.
Contributors: Aieta, Joseph (Author) / Ball, Sally (Thesis director) / Liston, Chelsea (Committee member) / Department of English (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
This paper outlines the development of a software application that explores the plausibility and potential of interacting with three-dimensional sound sources within a virtual environment. The intention of the software application is to allow a user to become engaged with a collection of sound sources that can be perceived both graphically and audibly within a spatial, three-dimensional context. The three-dimensional sound perception is driven primarily by a binaural implementation of a higher order ambisonics framework while graphics and other data are processed by openFrameworks, an interactive media framework for C++. Within the application, sound sources have been given behavioral functions such as flocking or orbit patterns, animating their positions within the environment. The author will summarize the design process and rationale for creating such a system and the chosen approach to implement the software application. The paper will also provide background approaches to spatial audio, gesture and virtual reality embodiment, and future possibilities for the existing project.
Contributors: Burnett, Garrett (Author) / Paine, Garth (Thesis director) / Pavlic, Theodore (Committee member) / School of Humanities, Arts, and Cultural Studies (Contributor) / School of Arts, Media and Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
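The flocking behavior this abstract attributes to its sound sources is not detailed in the listing; as a purely hypothetical illustration of what a boids-style position update for virtual sound sources might look like, here is a minimal sketch in plain C++ (independent of openFrameworks and of the thesis code; all names and weights are illustrative assumptions):

```cpp
#include <vector>

// Hypothetical sketch of a flocking ("boids") position update for virtual
// sound sources. The cohesion and separation weights are invented for
// illustration, not taken from the thesis described above.
struct Vec3 {
    double x = 0, y = 0, z = 0;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

struct Source { Vec3 pos, vel; };

// One simulation step: each source steers toward the flock centroid
// (cohesion) and away from very close neighbours (separation).
void flockStep(std::vector<Source>& flock, double dt) {
    Vec3 centroid;
    for (const auto& s : flock) centroid = centroid + s.pos;
    centroid = centroid * (1.0 / flock.size());

    for (auto& s : flock) {
        Vec3 cohesion = (centroid - s.pos) * 0.5;  // pull toward the group
        Vec3 separation;
        for (const auto& o : flock) {
            Vec3 d = s.pos - o.pos;
            double dist2 = d.x * d.x + d.y * d.y + d.z * d.z;
            if (dist2 > 0 && dist2 < 1.0)          // too close: repel
                separation = separation + d * (1.0 / dist2);
        }
        s.vel = s.vel + (cohesion + separation) * dt;
        s.pos = s.pos + s.vel * dt;
    }
}
```

In a system like the one described, the resulting positions would then be handed to the spatial renderer (e.g. a binaural higher-order-ambisonics encoder) each frame.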
Description
Human perceptual dimensions of sound are not necessarily simple representations of the actual physical dimensions that make up sensory input. In particular, research on the perception of interactions between acoustic frequency and intensity has shown that people exhibit a bias to expect the perception of pitch and loudness to change together. Researchers have proposed that this perceptual bias occurs because sound sources tend to follow a natural regularity of a correlation between changes in intensity and frequency of sound. They postulate that the auditory system has adapted to expect this naturally occurring relationship to facilitate auditory scene analysis, the tracking and parsing of sound sources as listeners analyze their auditory environments. However, this correlation has only been tested with human speech and musical sounds. The current study explores whether animal sounds also exhibit the same natural correlation between intensity and frequency and tests whether people exhibit a perceptual bias to assume this correlation when listening to animal calls. Our principal hypotheses are that animal sounds will tend to exhibit a positive correlation between intensity and frequency and that, when hearing such sounds change in intensity, listeners will perceive them to also change in frequency and vice versa. Our tests with 21 animal calls and 8 control stimuli, along with our experiment with participants responding to these stimuli, supported these hypotheses. This research provides a further example of the coupling of perceptual biases with natural regularities in the auditory domain, and provides a framework for understanding perceptual biases as functional adaptations that help perceivers more accurately anticipate and utilize reliable natural patterns to enhance scene analyses in real-world environments.
Contributors: Wilkinson, Zachary David (Author) / McBeath, Michael (Thesis director) / Glenberg, Arthur (Committee member) / Rutowski, Ronald (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor)
Created: 2014-05
Description
This paper introduces MisophoniAPP, a new website for managing misophonia. It will briefly discuss the nature of this chronic syndrome, which is the experience of reacting strongly to certain everyday sounds, or “triggers”. Various forms of Cognitive Behavioral Therapy and the Neural Repatterning Technique are currently used to treat misophonia, but they are not guaranteed to work for every patient. Few apps exist to help patients with their therapy, so this paper describes the design and creation of a new website that combines exposure therapy, relaxation, and gamification to help patients alleviate their misophonic reflexes.
Contributors: Noziglia, Rachel Elisabeth (Author) / McDaniel, Troy (Thesis director) / Anderson, Derrick (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Hearing and vision are two senses that most individuals use on a daily basis. The simultaneous presentation of competing visual and auditory stimuli often affects our sensory perception. It is often believed that vision is the more dominant sense over audition in spatial localization tasks. Recent work suggests that visual information can influence auditory localization when the sound is emanating from a physical location or from a phantom location generated through stereophony (the so-called "summing localization"). The present study investigates the role of cross-modal fusion in an auditory localization task. The focuses of the experiments are two-fold: (1) reveal the extent of fusion between auditory and visual stimuli and (2) investigate how fusion is correlated with the amount of visual bias a subject experiences. We found that fusion often occurs when light flash and "summing localization" stimuli were presented from the same hemifield. However, little correlation was observed between the magnitude of visual bias and the extent of perceived fusion between light and sound stimuli. In some cases, subjects reported distinct locations for light and sound and still experienced visual capture.
Contributors: Balderas, Leslie Ann (Author) / Zhou, Yi (Thesis director) / Yost, William (Committee member) / Department of Speech and Hearing Science (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The aim of this project was to create an original sound design and score for the ASU School of Music, Dance and Theatre production of HEDDATRON by Elizabeth Meriwether. Composition and sound design were done primarily with a modular synthesizer. All audio editing was done in Reaper, and the cues were programmed in QLab.

Contributors: Jansen, Troy Sherk (Author) / Bernstein, Max (Thesis director) / Gharavi, Lance (Committee member) / School of Music, Dance and Theatre (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

This work studies the influence of music and sound on visual media. It takes two visual media clips and pairs them with several musical compositions. Each piece of music differs in genre and tone, changing the audience's perception of the media. It also studies how different genres appeal to different demographics and how this can be used to reach those audiences.

Contributors: Tanabe, Arion (Author) / Bolanos, Gabriel (Thesis director) / Temple, Alex (Committee member) / Barrett, The Honors College (Contributor)
Created: 2023-05
Description

In this paper, I propose that taking an embodied approach to music performance can allow for better gestural control over the live sound produced and greater connection between the performer and their audience. I examine the many possibilities of live electronic manipulation of the voice such as those employed by past and current vocalists who specialize in live electronic sound manipulation and improvisation. Through extensive research and instrument design, I have sought to produce something that will benefit me in my performances as a vocalist and help me step out from the boundaries of traditional music performance. I will discuss the techniques used for the creation of my gestural instrument through the lens of my experiences as a performer using these tools. I believe that, through use of movement and gesture in the creation and control of sound, it is more than possible to step away from conventional ideas of live vocal performance and create something new and unique, especially through the inclusion of improvisation.

Contributors: Estes, Isabel (Author) / Hayes, Lauren (Thesis director) / Thorn, Seth (Committee member) / Barrett, The Honors College (Contributor) / School of Arts, Media and Engineering (Contributor)
Created: 2021-12