Matching Items (6)
Description
Neuroscientific research has verified that humans have specialized brain areas used in the production and perception of language. It is speculated that these brain areas may also be involved in the perception and expression of emotions. A recent study supports the idea of an auditory equivalent to visually recognizable emotions, finding that words containing the phoneme /i:/ (as in “beat”) were rated more positively and those containing the phoneme /ʌ/ (as in “but”) were rated more negatively. It was theorized that the same facial musculature used in producing visually recognizable expressions also favors specific phonemic sounds. The present study replicates this prior research using a new methodology in which participants matched verbalized monosyllabic nonsense pseudo-words to positive or negative cartoon pictures. We hypothesized that pseudo-words containing the sound /i:/ would be matched with pictures that are more emotionally positive and that ones containing the sound /ʌ/ would be matched with pictures that are more emotionally negative. Data collected from 119 undergraduate student volunteers at a Southwestern public university confirmed our hypotheses and exhibited the same pattern found in previous research, supporting the idea that specific vowel phonemes are matched with emotional valence. Our findings are the first to confirm this phoneme-emotion relationship with verbalized sounds and pictures. The results support the idea that the musculature associated with positive and negative facial expressions also favors production of specific phonemic sounds that listeners recognize and associate with specific emotions.
Contributors: Barnes, Heather Lee (Author) / Benitez, Viridiana (Thesis director) / Corbin, William (Thesis director) / McBeath, Michael K. (Thesis director) / Yu, Christine S.P. (Committee member) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12
Description
Recent findings support that facial musculature accounts for a form of phonetic sound symbolism. Yu, McBeath, and Glenberg (2019) found that, in both English words and Mandarin pinyin, words with the middle phoneme /i:/ (as in “gleam”) were rated as more positive than their paired words containing the phoneme /ʌ/ (as in “glum”). The present study tested whether a second, largely orthogonal dimension of vowel phoneme production (represented by the phonemes /æ/ vs. /u/) is related to a second dimension perpendicular to emotional valence: arousal. Arousal was chosen because it is the second dimension of the Russell Circumplex Model of Affect. In phonetic similarity mappings, this second dimension is typically characterized by oral aperture size and larynx position, but it also appears to follow the continuum of consonance/dissonance. Our findings supported the hypothesis that one-syllable words with the center vowel phoneme /æ/ were reliably rated as more rousing, and less calming, than matched words with the center vowel phoneme /u/. These results extend the Yu et al. findings regarding the potential contribution of facial musculature to sounds associated with the emotional dimension of arousal, and further confirm a model of sound symbolism related to emotional expression. These findings support that phonemes are not neutral basic units but rather illustrate an innate relationship between embodied emotional expression and speech production.
Contributors: Greenstein, Ely Conrad (Author) / McBeath, Michael (Thesis director) / Glenberg, Arthur (Committee member) / Patten, Kristopher (Committee member) / Historical, Philosophical & Religious Studies (Contributor) / Department of Psychology (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
This study expands on findings by Yu, McBeath, and Glenberg (2019), which demonstrated a relationship between the pronunciation of English vowel phonemes and emotional valence due to embodied cognition. That study found that single-syllable words containing the phoneme /i:/ (as in “gleam”) were reliably rated as more positive than matched words containing the phoneme /ʌ/ (as in “glum”). The findings are consistent with the idea that the facial musculature used when smiling is more conducive to making the /i:/ sound, while that of frowning or grimacing is more conducive to making the /ʌ/ sound. That study compared only the phonemes /i:/ and /ʌ/, which are opposite extremes of phoneme similarity (second formant frequency). The present study expands on this finding by testing the relative emotional valence ratings of matched single-syllable words containing /i:/ vs. /ʌ/ plus two intermediate phonemes, /ɪ/ (as in “bit”) and /ɔ/ (as in “bought”). The new findings replicate the Gleam-Glum effect and provide support for a weak ordering hypothesis for the intermediate phonemes, but not a strong ordering. The weak ordering hypothesis is that single-syllable words containing a middle vowel phoneme that is intermediate to /i:/ and /ʌ/ in musculature and acoustic features are also generally rated as intermediate in emotional valence. The strong ordering hypothesis is that the intermediate phonemes are each differentially rated in emotional valence in precisely the same order as determined acoustically. The pattern of results found is consistent with the Russell Circumplex Model of emotion at a cursory level, but individual emotions do not fully conform to a simple 2-D model that generalizes to similarity judgments of phonemes. Nevertheless, the work supports that facial musculature associated with visually discernible emotions generally relates to a phonetic acoustic continuum.
Contributors: Lobato, Theresa Annette (Author) / McBeath, Michael K. (Thesis director) / Glenberg, Arthur M. (Committee member) / School of International Letters and Cultures (Contributor) / Department of Psychology (Contributor) / College of Health Solutions (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Sound symbolism—the association between word sounds and meaning—has been shown to be an effective communication tool that promotes language comprehension and word learning. Much of the literature is constrained to investigating sound as it relates to physical characteristics (e.g., size or shape), and research has predominantly studied the phenomenon in adults. The current study examined the sound-symbolic wham-womb effect, which postulates that words with the /æ/ phoneme are associated with increased arousal while words with the /u/ phoneme are associated with little to no arousal. The effect was tested in both adults and children aged 5–7 years using a word-to-scene matching task. Participants were presented with two pseudowords (differing only by their vowel phoneme: /æ/ or /u/; e.g., smad and smood) and two scenes depicting an animal in either a more arousing or less arousing situation. Participants were then asked to match each pseudoword to the scene that fit it. Results showed that trial-by-trial performance for both adults and children was significantly greater than chance, indicating that the wham-womb effect is exhibited in both adults and children. There was also a significant difference in performance between adults and children, with adults showing a more robust effect. This study provides the first empirical evidence that both children and adults link phonemes to arousal and that this effect may change across development.
Contributors: Kuo, Jillian Elaine (Author) / Benitez, Viridiana (Thesis advisor) / McBeath, Michael (Committee member) / Scherer, Nancy (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
Recent studies indicate that words containing /æ/ and /u/ vowel phonemes can be mapped onto the emotional dimension of arousal. Specifically, the wham-womb effect describes the inclination to associate words with /æ/ vowel sounds (as in “wham”) with high-arousal emotions and words with /u/ vowel sounds (as in “womb”) with low-arousal emotions. The objective of this study was to replicate the wham-womb effect using nonsense pseudowords and to test whether the findings extend to a novel methodology that includes verbal auditory and visual pictorial stimuli, which can eventually be used to test young children. We collected data from 99 undergraduate participants through an online survey. Participants heard pre-recorded pairs of monosyllabic pseudowords containing /æ/ or /u/ vowel phonemes and then matched individual pseudowords to illustrations portraying high- or low-arousal emotions. Two t-tests were conducted to analyze the size of the wham-womb effect across pseudowords and across participants, specifically the likelihood that /æ/ sounds are paired with high-arousal images and /u/ sounds with low-arousal images. Our findings robustly confirmed the wham-womb effect: participants paired /æ/ words with high-arousal emotion pictures and /u/ words with low-arousal ones at a 73.2% rate, with a large effect size. The wham-womb effect supports the idea that verbal acoustic signals tend to be tied to embodied facial musculature that is related to human emotions, which supports the adaptive value of sound symbolism in language evolution and development.
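The across-participants t-test described above can be sketched as follows. This is a hypothetical illustration only: the match rates below are fabricated for demonstration (they are not the study's data), and the analysis shown is a plain one-sample t-test of each participant's /æ/→high-arousal match rate against chance (0.5).

```python
import math
import statistics

def one_sample_t(rates, chance=0.5):
    """Return (t statistic, degrees of freedom) for H0: mean match rate == chance."""
    n = len(rates)
    mean = statistics.mean(rates)
    sd = statistics.stdev(rates)  # sample standard deviation
    return (mean - chance) / (sd / math.sqrt(n)), n - 1

# Fabricated per-participant match rates, for demonstration only.
rates = [0.70, 0.75, 0.65, 0.80, 0.72, 0.68, 0.77, 0.74]
t, df = one_sample_t(rates)
print(f"mean rate = {statistics.mean(rates):.3f}, t({df}) = {t:.2f}")
```

A t statistic well above the critical value for the given degrees of freedom would indicate matching reliably above chance, which is the pattern the abstract reports.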

Contributors: Zapp, Tatum (Author) / McBeath, Michael (Thesis director) / Benitez, Viridiana (Committee member) / Corbin, William (Committee member) / Yu, Shin-Phing (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / School of Life Sciences (Contributor)
Created: 2021-12
Description
A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input that enables users to "hear" and interact more fully with their social environment. There has been a clinical shift to bilateral placement of implants in both ears and to bimodal placement of a hearing aid in the contralateral ear when residual hearing is present. However, there is potentially more to speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal-hearing listeners, vision plays a role in speech perception; Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how exactly vision provides benefit to bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases, previously generated by Liss et al. (1998), in auditory and audiovisual conditions. The participants recorded their perception of the input. Data were then analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision was found to improve speech perception for bilateral and bimodal cochlear implant participants. Each group experienced a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and demonstrated increased use of the syllabic stress cues used in lexical segmentation. These results suggest that vision may provide perceptual benefits for bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. On the other hand, vision did not provide the bimodal participants with significantly increased access to place and stress cues. Therefore, the exact mechanism by which bimodal implant users improved speech perception with the addition of vision is unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research regarding the benefit vision provides to bilateral and bimodal cochlear implant users.
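The "percent words correct" measure mentioned above can be illustrated with a minimal scoring sketch. This is an assumption-laden simplification: real transcription-scoring protocols handle substitutions, homophones, and segmentation errors far more carefully, and the example phrase below is hypothetical.

```python
def percent_words_correct(target: str, response: str) -> float:
    """Score a listener's transcript against the target phrase,
    counting each target word matched at most once in the response."""
    target_words = target.lower().split()
    remaining = response.lower().split()
    correct = 0
    for word in target_words:
        if word in remaining:
            remaining.remove(word)  # consume the match so it can't be reused
            correct += 1
    return 100.0 * correct / len(target_words)

# Hypothetical target phrase and listener response, for illustration only.
print(percent_words_correct("amend estate for a while",
                            "a men the state for while"))  # 3 of 5 words -> 60.0
```

Lexical boundary errors, by contrast, require comparing where the listener placed word boundaries relative to the target's syllabic stress pattern, which this simple word-matching sketch does not capture.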
Contributors: Ludwig, Cimarron (Author) / Liss, Julie (Thesis advisor) / Dorman, Michael (Committee member) / Azuma, Tamiko (Committee member) / Arizona State University (Publisher)
Created: 2015