Matching Items (6)
Description
Advances in computational processing have made big data analysis possible in fields like music information retrieval (MIR). Through MIR techniques, researchers have been able to study information on a song, its musical parameters, the metadata generated by the song's listeners, and contextual data regarding the artists and listeners (Schedl, 2014). MIR research techniques have been applied within music and emotions research to analyze the correlations between musical information and emotional response. By pairing methods from music and emotions research with the analysis of musical features extracted through MIR, researchers have developed predictive models for the emotions within a musical piece. This research has increased our understanding of how certain musical features, such as pitch, timbre, rhythm, dynamics, and mel-frequency cepstral coefficients (MFCCs), correlate with the emotions evoked by music (Lartillot, 2008; Schedl, 2014). This understanding has enabled researchers to generate predictive models of emotion within music based on listeners' emotional responses. However, robust models that account for a user's individualized emotional experience and the semantic nuances of emotional categorization have eluded the research community (London, 2001). To address these two main issues, more advanced analytical methods have been employed. This article examines two of these methods, machine learning algorithms and deep learning techniques, and discusses the effect they have had on music and emotions research (Murthy, 2018). Current trends within MIR research, namely the application of support vector machines and neural networks, are also assessed to explain how these methods help address the two main issues in music and emotion research.
Finally, future research in machine and deep learning is postulated to show how individualized models may be developed from a user's, or a pool of users', listening libraries, and how semi-supervised classification models that assess categorization by cluster rather than by nominal data may help address the nuances of emotional categorization.
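The pipeline the abstract describes (extract MIR features, then map them to an emotion estimate) can be sketched minimally as a linear model. This is purely illustrative: the feature names, weights, and bias below are hypothetical stand-ins, not values from the research cited above; in practice the features would come from an MIR toolbox and the weights from a fitted regression or SVM.

```python
# Hypothetical feature vector extracted from one track
# (in practice produced by an MIR toolbox such as MIRtoolbox or librosa).
features = {
    "tempo_bpm": 128.0,      # rhythm
    "mean_mfcc_1": -4.2,     # timbre (first mel-frequency cepstral coefficient)
    "rms_energy": 0.31,      # dynamics
    "mean_pitch_hz": 440.0,  # pitch
}

# Hypothetical weights, standing in for a fitted predictive model.
weights = {
    "tempo_bpm": 0.004,
    "mean_mfcc_1": -0.02,
    "rms_energy": 1.5,
    "mean_pitch_hz": 0.0005,
}
bias = -0.5

def predict_valence(feats, w, b):
    """Linear model: weighted sum of features plus bias, clipped to [0, 1]."""
    score = b + sum(w[name] * value for name, value in feats.items())
    return max(0.0, min(1.0, score))

valence = predict_valence(features, weights, bias)
```

A real model would be trained on listeners' emotional responses; the point here is only the shape of the feature-to-emotion mapping.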
Contributors: Mcgeehon, Timothy Makoto (Author) / Middleton, James (Thesis director) / Knowles, Kristina (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-12
Description
Physical activity is something that everyone engages in at varying levels. It has been linked to improved general wellbeing, as well as to preparing the mind and body to learn new skills. However, the significance of physical activity remains under-explored in some areas. The purpose of this study was to determine the relationship between physical activity levels and emotional intelligence, navigation and planning skills, motor skills, memory capacity, and one’s perception of the ‘value’ of an object or an experience. During sessions, participants were equipped with two physiological sensors: the EEG B-Alert X10 or X24 headset and the Shimmer GSR3. In addition, two external sensors were used: a web camera for recording and evaluating facial expressions, and the Tobii X2-30, X2-60, or Tobii T60XL eye-tracking system, used to monitor visual attention. These sensors collected data while participants completed a series of tasks: the Self-Report of Emotional Intelligence Test, the Tower of London Test, the Motor Speed Test, the Working Memory Capacity Battery, watching product-centered videos, and watching experience-centered videos. Multiple surveys were also conducted, including a demographic survey, a nutritional and health survey, and a sports preference survey. Using these metrics, this study found that those who exercise more experience and express higher levels of emotion, including joy, sadness, contempt, disgust, confusion, frustration, surprise, anger, and fear. This implies a difference in emotional response modulation between those who exercise more and those who exercise less, which in turn implies a difference in perception between the two groups. There were no significant findings related to navigation and planning skills, motor skills, or memory capacity.
Contributors: Falls, Tarryn (Author) / Atkinson, Robert (Thesis director) / Chavez-Echeagaray, Maria Elena (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Affective video games are still a relatively new field of research and entertainment. Even so, as a form of entertainment media, emotion plays a large role in video games as a whole. This project seeks to gain an understanding of which emotions are most prominent during gameplay. From there, a system will be created wherein the game records the player’s facial expressions and interprets those expressions as emotions, allowing the game to adjust its difficulty to create a more tailored experience.

The first portion of this project, understanding the relationship between emotions and games, was done by recording myself as I played three games of different genres for thirty minutes each. The same system later used in the game I created to evaluate emotions was used to evaluate these recordings.

After the data was interpreted, I created three versions of the same game, based on a template created by Stan’s Assets, which was a version of the arcade game Stacker. The three versions included one where no changes were made to the gameplay experience and the game simply recorded the player’s face and extrapolated emotions from that recording; one where the speed increased in an attempt to maintain a certain level of positive emotions; and a third where, in addition to increasing the speed of the game, it also decreased the speed in an attempt to minimize negative emotions.

Together, these tests show that the emotional experience of a player is heavily dependent on how tailored the game is toward that particular emotion. Additionally, in creating a system meant to interact with these emotions, it is easier to create a one-dimensional system that focuses on one emotion (or range of emotions) than a more complex one, as the more complex system begins to become unstable and can lead to undesirable gameplay effects.
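The one-dimensional adjustment idea described here can be sketched as a simple rule: speed up while positive emotion is low, slow down when negative emotion rises. The emotion scores, thresholds, and step sizes below are hypothetical stand-ins for the output of a facial-expression classifier, not values from the project itself.

```python
def adjust_speed(speed, positive, negative,
                 pos_target=0.6, neg_limit=0.4, step=0.1,
                 min_speed=0.5, max_speed=3.0):
    """Return a new game speed based on the player's emotion estimates.

    positive/negative are scores in [0, 1], assumed to come from a
    facial-expression classifier run on the webcam feed.
    """
    if negative > neg_limit:
        speed -= step          # player seems frustrated: ease off
    elif positive < pos_target:
        speed += step          # player seems disengaged: raise the challenge
    return max(min_speed, min(max_speed, speed))
```

Tracking a single axis like this stays stable; coupling several such rules to different emotions at once is where the instability the author describes tends to appear.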

Contributors: Fotias, Demos James (Author) / Selgrad, Justin (Thesis director) / Lahey, Byron (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
This project analyzes the use of fear appeals in transmitting a moral of self-realization in the drama Oedipus Rex and its adaptations into painting and film. It draws upon earlier work in media ecology, adaptation, and studies of emotions in media. It proposes that what distinguishes media from one another is the unique way that each medium stimulates the reader to draw from their own experiences with life and literature. Alternatively, what unites media is the cross-platform assimilation of author and reader reality. More specifically, it asserts that print stimulates the reader via immersion, that painting achieves this same effect by acting as a proxy for the reader to embody the image before them, and that film stimulates the viewer as a result of emotive focus. Collectively, it concludes that when it comes to Oedipus and its many forms, the plays utilize fear to communicate the moral through both surface and dense texts, while the painting adaptations focus on dense texts and the filmic adaptations emphasize their surface equivalent. The project’s significance rests in its challenge to Marshall McLuhan’s technological determinism. By exposing the effects that a reader’s varied mindset can have on a medium’s ability to communicate its message, the project highlights that the relationship between humankind and media is not so deterministic and is more complex than McLuhan would have us believe.
Contributors: Herrera, Yoslin (Author) / Mack, Robert (Thesis director) / O'Neill, Joseph (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Although Spotify’s extensive library of songs is often seen broken up by “Top 100” lists and main lyrical genres, these categories are based primarily on popularity, artist, and general mood alone. If users wanted to create a playlist based on specific or situationally specific qualifiers from their own downloaded library, they would have to hand-pick songs that fit the mold and create a new playlist. This is a time-consuming process that may not produce the most efficient result due to human error. The objective of this project, therefore, was to develop an application to streamline this process, optimize efficiency, and fill this user need.

Song Sift is an application built using Angular that allows users to filter and sort their song library to create specific playlists using the Spotify Web API. Utilizing the audio feature data that Spotify attaches to every song in their library, users can filter their downloaded Spotify songs based on four main attributes: (1) energy (how energetic a song sounds), (2) danceability (how danceable a song is), (3) valence (how happy a song sounds), and (4) loudness (average volume of a song). Once the user has created a playlist that fits their desired genre, he/she can easily export it to their Spotify account with the click of a button.
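The filtering step described above can be sketched as a range check over per-track audio features. The track data and attribute ranges below are hypothetical; in Song Sift itself these values would come from the Spotify Web API's audio-features data for each saved track, and the app is built in Angular rather than Python.

```python
# Hypothetical library with the four attributes the app filters on.
tracks = [
    {"name": "Track A", "energy": 0.9, "danceability": 0.8, "valence": 0.7, "loudness": -5.0},
    {"name": "Track B", "energy": 0.3, "danceability": 0.4, "valence": 0.2, "loudness": -12.0},
    {"name": "Track C", "energy": 0.7, "danceability": 0.9, "valence": 0.6, "loudness": -6.5},
]

def filter_tracks(library, **ranges):
    """Keep tracks whose attributes all fall inside their (lo, hi) range."""
    return [
        t for t in library
        if all(lo <= t[attr] <= hi for attr, (lo, hi) in ranges.items())
    ]

# A hypothetical "upbeat" playlist: high energy and high danceability.
playlist = filter_tracks(tracks, energy=(0.6, 1.0), danceability=(0.7, 1.0))
```

The resulting list is what would then be exported back to the user's Spotify account as a new playlist.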
Contributors: DiMuro, Louis (Author) / Balasooriya, Janaka (Thesis director) / Chen, Yinong (Committee member) / Arts, Media and Engineering Sch T (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Contributors: Haagen, Jordan (Author) / Turaga, Pavan (Thesis director) / Drummond Otten, Caitlin (Committee member) / Barrett, The Honors College (Contributor) / Arts, Media and Engineering Sch T (Contributor)
Created: 2022-05