Oral Movement Similarities between [i] vs. [ʌ] Word Articulation and Emotional Expressions Explain the Gleam-Glum Effect

Description

This project investigates the gleam-glum effect, a well-replicated phonetic emotion association in which words with the [i] vowel sound (as in “gleam”) are judged more emotionally positive than words with the [ʌ] vowel sound (as in “glum”). The effect is observed across different modalities and languages and is moderated by mouth movements relevant to word production. This research presents and tests an articulatory explanation for the association in three experiments. Experiment 1 supported the articulatory explanation by comparing recordings of 71 participants completing an emotional recall task and a word read-aloud task, showing that oral movements were more similar between positive emotional expressions and [i] articulation, and between negative emotional expressions and [ʌ] articulation. Experiment 2 partially supported the explanation with 98 YouTube recordings of natural speech. In Experiment 3, 149 participants judged the emotions expressed by a speaker during [i] and [ʌ] articulation. Contradicting the robust phonetic emotion association, participants more frequently judged the speaker’s [ʌ] articulatory movements to be positive emotional expressions and the [i] articulatory movements to be negative emotional expressions, likely because of visual emotional cues unrelated to oral movements and the order of the word lists read by the speaker. Overall, the findings support an articulatory explanation for the gleam-glum effect, which has major implications for language and communication.
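The similarity logic of Experiment 1 can be illustrated with a toy computation. Below is a minimal sketch, assuming oral movements have already been reduced to simple feature vectors; the three-dimensional vectors and their values are hypothetical placeholders, not the thesis’s actual measurements.

```python
# A minimal sketch of comparing oral-movement similarity between emotional
# expressions and vowel articulations. All vectors are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two oral-movement feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical mean displacements: (lip spread, lip opening, jaw drop)
positive_expression = np.array([0.9, 0.2, 0.1])  # positive emotional recall
negative_expression = np.array([0.1, 0.5, 0.6])  # negative emotional recall
i_articulation      = np.array([0.8, 0.3, 0.1])  # [i] as in "gleam"
uh_articulation     = np.array([0.2, 0.6, 0.5])  # [ʌ] as in "glum"

# The articulatory account predicts the matched pairs are more similar.
print(cosine_similarity(positive_expression, i_articulation))   # high
print(cosine_similarity(negative_expression, uh_articulation))  # high
print(cosine_similarity(positive_expression, uh_articulation))  # lower
```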
Date Created
2023

Top Versus Bottom Saliency Bias in Object and Scene Perception

Description

Research has demonstrated that observers have a generic bias for top saliency in object identification, such that random shapes appear more similar to ones that share the same tops rather than the same bottoms (Chambers et al., 1999). These findings are consistent with the idea that in nature the tops of most important objects and living things tend to be the most informative locations with respect to intentionality and functionality, leading observers to favor attending to the top. Yet such a bias may also imply a generic downward vantage bias, suggesting that, unlike natural objects, the more informative aspects of scenes tend to lie below their horizon midpoints. In two experiments, saliency bias was investigated for objects and scenes with both information-balanced and naturalistic stimuli. Experiment 1 replicates and extends the study of the top-saliency effect for information-balanced objects: 91 participants made 80 similarity judgments between an information-balanced object and two comparison objects that contained either the same top or the same bottom. Participants also made 80 similarity judgments of information-balanced scenes in which the coordinates of the vertices of the random shapes were replaced with small objects to create a scene. Experiment 2 extends Chambers et al. (1999) by examining the top-saliency bias in naturalistic object perception: 91 participants made similarity judgments between a photographed test object and two comparison objects that contained either the same top or the same bottom. Experiment 2 also tests the idea of a downward vantage bias by predicting that naturalistic scenes will be judged more similar when the portions below the horizon are identical than when the portions above are the same. Results of the two experiments confirm that observers tend to assume a downward vantage when viewing pictures of objects and objects within scenes, supporting the idea that saliency varies as a function of the informative aspect of the visually attended component.
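The forced-choice logic of these similarity judgments can be formalized as a binomial test against chance. The sketch below assumes each trial is a two-alternative choice between a same-top and a same-bottom comparison; the counts are hypothetical, not the study’s data.

```python
# A minimal sketch of testing a top-saliency bias, assuming each of the
# 80 trials is a forced choice between same-top and same-bottom comparisons.
from scipy.stats import binomtest

n_trials = 80
same_top_choices = 52  # hypothetical count of "same-top looks more similar"

# Under no bias, same-top choices should occur at chance (p = 0.5).
result = binomtest(same_top_choices, n_trials, p=0.5, alternative="greater")
print(f"proportion = {same_top_choices / n_trials:.2f}, p = {result.pvalue:.4f}")
```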
Date Created
2021

Temporal Color Perception

Description

Color perception has been widely studied and well modeled with respect to combining visible electromagnetic frequencies, yet new technology provides the means to better explore and test novel temporal-frequency characteristics of color perception. Experiment 1 tests how reliably participants categorize static spectral rainbow colors, which can be a useful tool for efficiently identifying those with functional dichromacy, trichromacy, and tetrachromacy. The findings confirm that all individuals discern the four principal opponent-process colors (red, yellow, green, and blue), with normal and potential tetrachromats seeing more distinct colors than color-blind individuals. Experiment 2 tests the moving flicker-fusion rate of the central electromagnetic frequencies within each color category found in Experiment 1 as a test of the Where system, and compares this to the maximum temporal processing rate for discriminating the direction of hue change when colors are displayed serially, as a test of the What system. The findings confirm respective processing thresholds of about 20 Hz for the Where system and 2-7 Hz for the What system. Experiment 3 tests conditions that optimize false colors based on the spinning Benham’s top illusion. Findings indicate that the same four principal colors emerge as in Experiment 1, but at low saturation levels for trichromats that diminish further for dichromats. Taken together, the three experiments provide an overview of the common categorical boundaries and temporal processing limits of human color vision.
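A flicker-fusion threshold of the kind reported here is often estimated with an ascending method-of-limits run. Below is a minimal sketch using a simulated observer whose assumed true fusion rate sits near the ~20 Hz Where-system figure; it is purely illustrative, not the thesis’s actual procedure.

```python
# A minimal sketch of an ascending method-of-limits flicker-fusion run
# with a noisy simulated observer. All parameters are illustrative.
import random

TRUE_FUSION_HZ = 20.0  # assumed true threshold of the simulated observer

def observer_sees_flicker(rate_hz: float) -> bool:
    """Flicker is visible below threshold, with trial-to-trial noise."""
    return rate_hz < TRUE_FUSION_HZ + random.gauss(0, 1.0)

def ascending_run(start_hz: float = 5.0, step_hz: float = 1.0) -> float:
    """Raise the flicker rate until the observer first reports fusion."""
    rate = start_hz
    while observer_sees_flicker(rate):
        rate += step_hz
    return rate

estimates = [ascending_run() for _ in range(10)]
print(f"estimated fusion threshold ~ {sum(estimates) / len(estimates):.1f} Hz")
```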
Date Created
2021

Action, Prediction, or Attention: Does the “Egocentric Temporal Order Bias” Support a Constructive Model of Perception?

Description

Temporal-order judgments can require the integration of self-generated action-events and external sensory information. A previous study found that participants are biased to perceive their own action-events as occurring prior to simultaneous external events. This phenomenon, named the “Egocentric Temporal Order Bias” (ETO bias), was demonstrated as a 67% probability that participants report self-generated events as occurring before simultaneous, externally determined events. These results were interpreted as supporting a feed-forward, constructive model of perception, but the empirical data could support many potential mechanisms. The present study tests whether the ETO bias is driven by attentional differences, feed-forward predictability, or action. The findings show that participants exhibit a bias due to both feed-forward predictability and action, and a Bayesian analysis supports that these effects are quantitatively distinct. The results therefore indicate that the ETO bias is largely driven by one’s own action, over and above feed-forward predictability.
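The 67% figure lends itself to a simple Bayesian treatment: each trial is a Bernoulli outcome (“self-event reported first”), so a Beta prior yields a Beta posterior by conjugacy. The sketch below uses hypothetical trial counts and illustrates the inference pattern, not the study’s actual Bayesian analysis.

```python
# A minimal sketch of a Bayesian estimate of the ETO bias from binary
# temporal-order judgments. Trial counts are hypothetical (~67% self-first).
from scipy.stats import beta

n_trials, self_first = 300, 201

# Beta(1, 1) flat prior -> Beta posterior by conjugacy.
posterior = beta(1 + self_first, 1 + (n_trials - self_first))
lo, hi = posterior.interval(0.95)

print(f"posterior mean = {posterior.mean():.3f}")
print(f"P(bias > 0.5)  = {1 - posterior.cdf(0.5):.4f}")
print(f"95% credible interval = ({lo:.3f}, {hi:.3f})")
```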
Date Created
2020

Natural Correlations of Spectral Envelope and their Contribution to Auditory Scene Analysis

Description

Auditory scene analysis (ASA) is the process through which listeners parse and organize their acoustic environment into relevant auditory objects. ASA functions by exploiting natural regularities in the structure of auditory information. The current study investigates spectral envelope and its contribution to the perception of changes in pitch and loudness. Experiment 1 constructs a perceptual continuum of twelve f0- and intensity-matched vowel phonemes (i.e., a pure timbre manipulation) and reveals spectral envelope as a primary organizational dimension. The extremes of this dimension are [i] (as in “bee”) and [ʌ] (as in “bun”). Experiment 2 measures the strength of the relationship between produced f0 and the previously observed phonetic-pitch continuum at three different levels of phonemic constraint. Scat performances and, to a lesser extent, recorded interviews were found to exhibit changes in accordance with the natural regularity; specifically, f0 changes were correlated with the phoneme pitch-height continuum. The more constrained case of lyrical singing did not exhibit the natural regularity. Experiment 3 investigates participant ratings of pitch and loudness as stimuli vary in f0, intensity, and position on the phonetic-pitch continuum. Psychophysical functions derived from the results reveal that moving from [i] to [ʌ] is equivalent to a .38-semitone decrease in f0 and a .75 dB decrease in intensity. Experiment 4 examines the potentially functional aspect of the relationship among pitch, loudness, and spectral envelope. Detection thresholds for stimuli in which all three dimensions change congruently (f0 increase, intensity increase, [ʌ] to [i]) or incongruently (no f0 change, intensity increase, [i] to [ʌ]) are compared using an objective version of the method of limits. Congruent changes did not provide a detection benefit over incongruent changes; however, when the contribution of the phoneme change was removed, congruent changes did offer a slight detection benefit, as in previous research. While this relationship does not offer a detection benefit at threshold, there is a natural regularity for humans to produce phonemes at higher f0s according to their relative position on the pitch-height continuum. Likewise, humans are biased to detect pitch and loudness changes in phoneme sweeps in accordance with this natural regularity.
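The reported equivalences rest on standard unit conversions: a pitch change in semitones is 12·log2 of the frequency ratio, and a level change in dB is 20·log10 of the amplitude ratio. A minimal sketch, with illustrative values:

```python
# Standard pitch and level conversions behind the .38-semitone and
# .75 dB equivalences. The 220 Hz starting f0 is illustrative.
import math

def semitone_change(f_from: float, f_to: float) -> float:
    """Signed pitch change in semitones between two f0 values (Hz)."""
    return 12 * math.log2(f_to / f_from)

def db_change(a_from: float, a_to: float) -> float:
    """Signed level change in dB between two amplitude values."""
    return 20 * math.log10(a_to / a_from)

f0 = 220.0
print(f0 * 2 ** (-0.38 / 12))               # ~215.2 Hz after a .38-semitone drop
print(semitone_change(220.0, 215.2))        # ~ -0.38 semitones
print(db_change(1.0, 10 ** (-0.75 / 20)))   # ~ -0.75 dB
```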
Date Created
2017

Delineating the "task-irrelevant" perceptual learning paradigm in the context of temporal pairing, auditory pitch, and the reading disabled

Description

Watanabe, Náñez, and Sasaki (2001) introduced a phenomenon they named “task-irrelevant perceptual learning,” in which near-threshold stimuli that are not essential to a given task can be associatively learned when consistently and concurrently paired with the focal task. The present study employs a visual paired-shapes recognition task, using colored polygon targets as salient attended focal stimuli, with the goal of comparing the increases in perceptual sensitivity observed when near-threshold stimuli are temporally paired in varying ways with focal targets. Experiment 1 separated and compared the target-acquisition and target-recognition phases and revealed that sensitivity improved most when the near-threshold motion stimuli were paired with the focal target-acquisition phase. The measures of sensitivity improvement were motion detection, critical flicker fusion threshold (CFFT), and letter-orientation decoding. Experiment 2 tested perceptual learning of near-threshold stimuli when they were offset from the focal stimulus presentation by ±350 ms. Performance improvements in motion detection, CFFT, and decoding were significantly greater for the group in which near-threshold motion was presented after the focal target. Experiment 3 showed that participants with reading difficulties who were exposed to focal target-acquisition training improved in sensitivity on all visual measures. Experiment 4 tested whether near-threshold stimulus learning occurred cross-modally with auditory stimuli and served as an active control for the first three experiments. Here, a tone was paired with all focal stimuli, but the tone was 1 Hz higher or lower when paired with the targeted focal stimuli associated with recognition. In Experiment 4 there was no improvement in visual sensitivity, but there was significant improvement in tone discrimination. Thus, this study as a whole confirms that pairing near-threshold stimuli with focal stimuli can improve performance, whether in tone discrimination alone or in motion detection, CFFT, and letter decoding. The findings further support the thesis that the act of trying to remember a focal target elicits greater associative learning of a correlated near-threshold stimulus than the act of recognizing a target. Finally, these findings support that the visual learning paradigm developed here may potentially mitigate some of the visual deficits often experienced by the reading disabled.
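The “perceptual sensitivity” improvements described above are commonly quantified with signal-detection measures such as d′. Below is a minimal sketch of a d′ computation with a log-linear correction; the hit and false-alarm counts are hypothetical, not the study’s data.

```python
# A minimal sketch of computing sensitivity (d') from detection counts,
# with a 0.5 log-linear correction to avoid infinite z-scores.
from scipy.stats import norm

def d_prime(hits: int, misses: int, false_alarms: int, rejections: int) -> float:
    """d' from corrected hit and false-alarm rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical pre- vs. post-training motion-detection counts:
print(f"pre  d' = {d_prime(30, 20, 18, 32):.2f}")
print(f"post d' = {d_prime(42, 8, 10, 40):.2f}")
```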
Date Created
2016

Psychophysical and neural correlates of auditory attraction and aversion

Description

This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists across 20 varied stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristic of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood-oxygen-level-dependent (BOLD) changes elicited by a subset of five exemplar stimuli chosen from Experiment 1 and evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the two positively and two negatively valenced experimental sounds, compared to one neutral baseline control, elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and in the left dorsomedial prefrontal cortex, the latter being consistent with a frontal decision-making process common in identification tasks. The negatively valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural findings of Experiment 2 support that consonance is an important dimension of sound, processed in a manner that aids auditory parsing and the functional representation of acoustic objects, and it was found to be a principal feature of pleasing auditory stimuli.
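The Experiment 1 analysis amounts to a Pearson correlation between per-stimulus pleasantness ratings and an acoustic consonance measure. The sketch below uses synthetic placeholder data; only the analysis pattern reflects the study.

```python
# A minimal sketch of correlating pleasantness ratings with a
# consonance/dissonance score for 20 stimuli. Data are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
consonance = rng.uniform(0, 1, 20)                        # acoustic measure
pleasantness = 0.8 * consonance + rng.normal(0, 0.2, 20)  # self-rated

r, p = pearsonr(consonance, pleasantness)
print(f"r = {r:.2f}, p = {p:.4f}")  # the thesis reports r = .78
```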
Date Created
2014

Curvilinear impetus bias: a general heuristic to favor natural regularities of motion

Description

When a rolling ball exits a spiral tube, it typically maintains its final inertial state and travels along a straight line, in accordance with Newton’s first law of motion. Yet most people predict that the ball will curve, a “naive physics” misconception called the curvilinear impetus (CI) bias. In the current paper, we explore the ecological hypothesis that the CI bias arises from overgeneralization of the correct motion of biological agents. Previous research has established that humans curve when exiting a spiral maze, and college students believe this motion is the same for balls and humans. The current paper consists of two follow-up experiments. The first experiment tested the exiting behavior of rodents from a spiral rat maze. Though there were weaknesses in the design and procedures of the maze, the findings support that rats do not behave like humans, who exhibit the CI bias when exiting a spiral maze. These results are consistent with the CI bias being an overgeneralization of human motion rather than of generic biological motion. The second experiment tested physics teachers on their conception of how humans and balls behave when exiting a spiral tube. Teachers demonstrated correct knowledge of the straight trajectory of a ball but generalized the ball’s behavior to human motion. Thus, physics teachers exhibit the opposite bias from college students and presume that all motion is like inanimate motion. This evidence supports that this type of naive-physics inertial bias is at least partly due to participants overgeneralizing both inanimate and animate motion to be the same, perhaps in an effort to minimize cognitive reference-memory load. In short, physics training appears not to eliminate the bias, but rather to shift it from the presumption of stereotypical animate behavior to stereotypical inanimate behavior.
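The two competing predictions are easy to state formally: under Newton’s first law the exit path is a straight line, while the CI bias predicts a heading that keeps rotating. A minimal simulation sketch, with an assumed exit speed and an illustrative residual-curvature value:

```python
# A minimal sketch contrasting the Newtonian and curvilinear-impetus
# predictions for a ball exiting a spiral tube. Parameters are illustrative.
import numpy as np

dt, steps = 0.05, 40
speed = 1.0  # assumed speed at the tube exit

# Newton's first law: no force after exit, so heading stays constant.
straight = np.cumsum(np.tile([speed * dt, 0.0], (steps, 1)), axis=0)

# Naive curvilinear impetus: heading keeps rotating, as if the ball
# "remembered" the tube's curvature (0.5 rad/s here, purely illustrative).
angles = 0.5 * dt * np.arange(steps)
step_vectors = np.stack([np.cos(angles), np.sin(angles)], axis=1) * speed * dt
curved = np.cumsum(step_vectors, axis=0)

print(straight[-1], curved[-1])  # the predicted end points diverge
```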
Date Created
2013

Transfer of motor learning from a virtual to real task using EEG signals resulting from embodied and abstract thoughts

Description

This research is focused on two separate but related topics. The first uses an electroencephalographic (EEG) brain-computer interface (BCI) to explore the phenomenon of motor learning transfer. The second takes a closer look at the EEG-BCI itself and tests an alternative way of mapping EEG signals into machine commands. We test whether motor learning transfer is more related to the use of shared neural structures between imagery and motor execution or to more generalized cognitive factors. Using an EEG-BCI, we train one group of participants to control the movements of a cursor using embodied motor imagery. A second group is trained to control the cursor using abstract motor imagery. A third control group practices moving the cursor using an arm and finger on a touch screen. We hypothesized that if motor learning transfer is related to the use of shared neural structures, then the embodied motor imagery group would show more learning transfer than the abstract imagery group. If, on the other hand, motor learning transfer results from more general cognitive processes, then the abstract motor imagery group should also demonstrate transfer to manual performance of the same task. Our findings support that motor learning transfer is due to the use of shared neural structures between imagery and motor execution of a task: the abstract group showed no motor learning transfer despite being better at EEG-BCI control than the embodied group. The fact that more participants were able to learn EEG-BCI control using abstract imagery suggests that abstract imagery may be more suitable for EEG-BCIs for some disabilities, while embodied imagery may be more suitable for others. In Part 2, EEG data collected in the above experiment were used to train an artificial neural network (ANN) to map EEG signals to machine commands. We found that our open-source ANN using spectrograms generated from SFFTs is fundamentally different from, and in some ways superior to, Emotiv’s proprietary method. Our use of novel combinations of existing technologies, along with abstract and embodied imagery, facilitates adaptive customization of EEG-BCI control to meet the needs of individual users.
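The spectrogram front end described in Part 2 amounts to windowed FFTs over the EEG time series. Below is a minimal sketch on synthetic data; the 128 Hz sampling rate and window sizes are assumptions for illustration, not Emotiv’s or the thesis’s actual parameters.

```python
# A minimal sketch of a spectrogram feature pipeline for an EEG-BCI:
# windowed FFTs over one synthetic channel, yielding log-power features
# that an ANN classifier could be trained on.
import numpy as np
from scipy.signal import spectrogram

fs = 128  # Hz; assumed sampling rate, typical of consumer EEG headsets
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz "alpha" + noise

freqs, times, sxx = spectrogram(eeg, fs=fs, nperseg=64, noverlap=32)
features = np.log(sxx + 1e-12)  # log-power time-frequency features
print(features.shape)           # (n_freqs, n_windows), here (33, 15)
```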
Date Created
2013