Matching Items (6)
Description
This project investigates the gleam-glum effect, a well-replicated phonetic emotion association in which words with the [i] vowel-sound (as in “gleam”) are judged more emotionally positive than words with the [ʌ] vowel-sound (as in “glum”). The effect is observed across different modalities and languages and is moderated by mouth movements relevant to word production. This research presents and tests an articulatory explanation for this association in three experiments. Experiment 1 supported the articulatory explanation by comparing recordings of 71 participants completing an emotional recall task and a word read-aloud task, showing that oral movements were more similar between positive emotional expressions and [i] articulation, and between negative emotional expressions and [ʌ] articulation. Experiment 2 partially supported the explanation with 98 YouTube recordings of natural speech. In Experiment 3, 149 participants judged emotions expressed by a speaker during [i] and [ʌ] articulation. Contradicting the robust phonetic emotion association, participants more frequently judged the speaker’s [ʌ] articulatory movements to be positive emotional expressions and the [i] articulatory movements to be negative emotional expressions. This is likely due to other visual emotional cues not related to oral movements and to the order of the word lists read by the speaker. Overall, the findings from the current project support an articulatory explanation for the gleam-glum effect, which has major implications for language and communication.
ContributorsYu, Shin-Phing (Author) / McBeath, Michael K (Thesis advisor) / Glenberg, Arthur M (Committee member) / Stone, Greg O (Committee member) / Coza, Aurel (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created2023
Description
Fatigue in radiology is a widely studied area, and machine learning concepts applied to the identification of fatigue are also readily available. However, the intersection of the two areas remains relatively unexplored. This study explores that intersection by analyzing temporal trends in multivariate time series data. A novel methodological approach using support vector machines to observe temporal trends in time-based aggregations of time series data is proposed. The data used in the study are captured in a real-world, unconstrained radiology setting where gaze and facial metrics are recorded from radiologists performing live image reviews. The captured data are formatted into classes whose labels represent a window of time during the radiologist’s review. Using the labeled classes, the decision function and accuracy of trained, linear support vector machine models are evaluated to produce a visualization of temporal trends and critical inflection points, as well as of the contribution of individual features. The study finds potential justification for the suggested methods and offers a prospective use of maximum-margin classification to trace the influence of an abstract phenomenon such as fatigue on temporal data. Potential applications are envisioned that could improve workload distribution in medical practice.
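The approach described above can be illustrated in miniature. The following is a hypothetical sketch, not the study's actual pipeline: synthetic "gaze metric" windows stand in for the real features, labels distinguish early-session from late-session windows, and a linear SVM is trained with hinge-loss subgradient descent so its weight vector exposes each feature's contribution.

```python
import random

# Hypothetical sketch: linear SVM on time-windowed features. Class -1 =
# early-session windows, class +1 = late-session windows; the late windows'
# feature means drift to mimic fatigue-related change. All values synthetic.

def make_windows(n, drift, seed):
    rng = random.Random(seed)
    return [[rng.gauss(drift, 1.0), rng.gauss(-drift, 1.0)] for _ in range(n)]

X = make_windows(50, 0.0, 1) + make_windows(50, 1.5, 2)   # early, then late
y = [-1] * 50 + [1] * 50                                   # SVM labels in {-1, +1}

w, b, lr, lam = [0.0, 0.0], 0.0, 0.01, 0.01
for epoch in range(200):
    for xi, yi in zip(X, y):
        margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
        if margin < 1:                       # inside margin: hinge-loss gradient
            w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
            b += lr * yi
        else:                                # correctly classified: only regularize
            w = [wj - lr * lam * wj for wj in w]

preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else -1 for xi in X]
acc = sum(p == t for p, t in zip(preds, y)) / len(y)
print(f"weights={w}, accuracy={acc:.2f}")
```

Inspecting `w` after training plays the role the decision function plays in the study: each component's sign and magnitude indicate how that feature separates early from late windows.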
ContributorsHayes, Matthew (Author) / McDaniel, Troy (Thesis advisor) / Coza, Aurel (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created2022
Description
This project seeks to mitigate the reduced video quality that results from data compression under bandwidth limits, which hinders the transmission of emotional information. The project applies selective compression to a prerecorded video to produce a modified video that compresses the background while preserving important emotional information. The effect of this selective compression was assessed by collecting data on users’ emotional and visual responses. The final goal was to publish a paper summarizing the conclusions drawn from all of the lab data collected.
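The core idea of selective compression can be sketched as region-dependent quantization. The following toy example is an assumption-laden illustration, not the project's actual method: real pipelines vary codec quantization per macroblock, whereas this version coarsely quantizes a grayscale frame (a 2D list) everywhere outside a face region-of-interest.

```python
# Hypothetical sketch of selective compression: quantize background pixels
# coarsely while leaving a face region-of-interest (ROI) untouched, so the
# emotionally informative region keeps full fidelity.

def selective_compress(frame, roi, step=32):
    """Coarsely quantize every pixel outside the ROI (r0, c0, r1, c1)."""
    r0, c0, r1, c1 = roi
    out = []
    for r, row in enumerate(frame):
        new_row = []
        for c, px in enumerate(row):
            inside = r0 <= r < r1 and c0 <= c < c1
            new_row.append(px if inside else (px // step) * step)
        out.append(new_row)
    return out

frame = [[i * 10 + j for j in range(8)] for i in range(8)]
result = selective_compress(frame, roi=(2, 2, 6, 6))
print(result[7][7], result[3][3])  # background quantized, ROI preserved
```

Coarser quantization of the background reduces the number of distinct values to encode, which is where the bandwidth saving would come from in a real codec.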
ContributorsSterling, Marcy (Author) / Hakkal, Rachel (Co-author) / Coza, Aurel (Thesis director) / Caviedes, Jorge (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created2022-12
Description

This project seeks to mitigate the reduced video quality that results from data compression under bandwidth limits, which hinders the transmission of emotional information. The project applies selective compression to a prerecorded video to produce a modified video that compresses the background while preserving important emotional information. The effect of this selective compression was assessed by collecting data on users’ emotional and visual responses. The final goal was to publish a paper summarizing the conclusions drawn from all of the lab data collected.

ContributorsHakkal, Rachel (Author) / Sterling, Marcy (Co-author) / Coza, Aurel (Thesis director) / Caviedes, Jorge (Committee member) / Barrett, The Honors College (Contributor) / Electrical Engineering Program (Contributor)
Created2022-12
Description

The importance of nonverbal communication has been well established through several theories, including Albert Mehrabian's 7-38-55 rule, which proposes the respective importance of semantics, tonality, and facial expressions in communication. Although several studies have examined how emotions are expressed and perceived in communication, there is limited research investigating the relationship between how emotions are expressed through semantics and through facial expressions. Using facial expression analysis software to deconstruct facial expressions into features and a K-Nearest-Neighbor (KNN) machine learning classifier, we explored whether facial expressions can be clustered based on semantics. Our findings indicate that facial expressions can be clustered based on semantics and that there is an inherent congruence between facial expressions and semantics. These results are novel and significant in the context of nonverbal communication and are applicable to several areas of research, including the vast field of emotion AI and machine emotional communication.
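The KNN step can be sketched as follows. This is a hypothetical illustration under stated assumptions, not the study's actual classifier or features: the feature names (lip-corner pull, brow furrow) and all values are synthetic stand-ins for whatever the facial expression analysis software extracts.

```python
import math
import random

# Hypothetical sketch: K-Nearest-Neighbor classification of facial-expression
# feature vectors by the semantic valence of the utterance. Features here are
# invented stand-ins: [lip-corner pull, brow furrow], both in [0, 1].

def knn_predict(train, query, k=3):
    """Majority vote over the k training pairs nearest to the query vector."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

rng = random.Random(0)
# Positive-semantics utterances cluster at high lip-corner pull / low brow
# furrow; negative-semantics utterances at the opposite corner.
train = [([rng.gauss(0.8, 0.1), rng.gauss(0.2, 0.1)], "positive") for _ in range(20)] \
      + [([rng.gauss(0.2, 0.1), rng.gauss(0.8, 0.1)], "negative") for _ in range(20)]

print(knn_predict(train, [0.75, 0.25]))  # query lands among positive examples
```

If expressions classified this way agree with the semantic labels well above chance, that is the kind of congruence between facial expressions and semantics the abstract describes.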

ContributorsEverett, Lauren (Author) / Coza, Aurel (Thesis director) / Santello, Marco (Committee member) / Barrett, The Honors College (Contributor) / Harrington Bioengineering Program (Contributor) / Dean, W.P. Carey School of Business (Contributor)
Created2022-05
Description
In an ever-faster world, products designed to enhance the speed of a given task are being developed in rapid iterations by adding or modifying features that affect the energetics, kinematics, and kinetics of the product. Given the ubiquity of such changes and the need to market these products in a very crowded marketplace, it is imperative for the products to communicate the ‘speed’ of the added features. It has therefore been hypothesized that a few simple changes to the visual representation of a product, or to the context in which it is presented, could enhance the perception of the product’s dynamics at a cognitive or emotional level. The present work aims to determine the impact of visual elements such as shapes, colors, and textures on the perception of speed. Three hundred and twenty subjects participated in a discrimination task and a reaction task measuring the impact of various patterns, textures, and colors on the perception of speed. Throughout both tasks, subjects were exposed to visual patterns or colors presented as a static background or as a recognizable object for a set amount of time. Based on the subjects’ performance, we identified and quantified the impact of specific visual design patterns and colors on the perception of speed. Preliminary results provide promising evidence that certain fundamental visual elements of shape, color, and texture, when presented as a static background or object design, can induce subtle changes in visual perception that alter the overall perception of movement dynamics.
ContributorsBaldwin, Brooke (Author) / Coza, Aurel (Thesis advisor) / Becker, David (Thesis advisor) / Gray, Rob (Committee member) / Arizona State University (Publisher)
Created2021