Continuous emotion dimensions such as arousal and valence are gaining popularity within the research community due to the increasing availability of datasets annotated with these dimensions. Unlike discrete emotions, continuous emotions allow modeling of subtle and complex affect dimensions but are difficult to predict.
Dimension reduction techniques form the core of emotion recognition systems and help create a new feature space that is better suited to predicting emotions. However, these techniques do not necessarily guarantee better predictive capability, especially in regression learning, because most of them are unsupervised. Supervised dimension reduction techniques have received little attention in the emotion recognition literature, and this work provides a solution through probabilistic topic models. Topic models provide a strong probabilistic framework in which new learning paradigms and modalities can be embedded.
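To make the double mixture structure discussed below concrete, the following sketch simulates LDA's generative process in Python: each topic is a distribution over quantized feature "words", and each document (for example, a video segment) is a distribution over topics. The vocabulary size, topic count, and hyperparameter values are hypothetical and purely illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a vocabulary of V quantized feature "words" and K topics.
V, K, n_docs, doc_len = 50, 5, 10, 200
alpha, eta = 0.1, 0.01  # illustrative Dirichlet hyperparameters

# First mixture: each topic is a distribution over the V feature words.
beta = rng.dirichlet(np.full(V, eta), size=K)            # shape (K, V)

docs = []
for _ in range(n_docs):
    # Second mixture: each document is a distribution over the K topics.
    theta = rng.dirichlet(np.full(K, alpha))             # shape (K,)
    z = rng.choice(K, size=doc_len, p=theta)             # topic assignment per token
    w = np.array([rng.choice(V, p=beta[t]) for t in z])  # observed feature words
    docs.append(w)

print(len(docs), docs[0][:10])
```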
In this thesis, the graphical structure of Latent Dirichlet Allocation (LDA) has been explored and new models tuned to emotion recognition and change detection have been built. It has been shown that the double mixture structure of topic models helps 1) to visualize feature patterns and 2) to project features onto a topic simplex that is more predictive of human emotions than popular techniques like PCA and KernelPCA. Traditionally, topic models have been used on quantized features, but in this work a continuous topic model called the Dirichlet Gaussian Mixture Model (DGMM) is proposed. Evaluation of DGMM has shown that, when modeling videos, the performance of LDA models can be replicated even without quantizing the features. Topic models had not previously been explored in a supervised context for video analysis, so a Regularized supervised topic model (RSLDA) that models video and audio features is introduced. The RSLDA learning algorithm performs dimension reduction and regularized linear regression simultaneously, and it has outperformed supervised dimension reduction techniques such as SPCA and correlation-based feature selection algorithms. In a first-of-its-kind contribution, two new topic models, the Adaptive temporal topic model (ATTM) and SLDA for change detection (SLDACD), have been developed for predicting concept drift in time series data. These models do not assume independence of consecutive frames and outperform traditional topic models in detecting local and global changes, respectively.
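The thesis models themselves (DGMM, RSLDA, ATTM, SLDACD) are not available as off-the-shelf libraries, but the underlying idea of projecting features onto a topic simplex and regressing continuous emotion labels against it can be sketched with standard scikit-learn components. The snippet below is an illustrative pipeline only: it uses synthetic counts and placeholder arousal labels, and it substitutes plain unsupervised LDA plus Ridge regression for the supervised RSLDA described above.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation, PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row counts 100 quantized audio-visual feature
# "words" for one video segment; y is a placeholder continuous arousal label.
X = rng.poisson(lam=2.0, size=(300, 100)).astype(float)
y = rng.normal(size=300)

# Project segments onto a 10-dimensional topic simplex (unsupervised LDA).
theta = LatentDirichletAllocation(n_components=10, random_state=0).fit_transform(X)

# Baseline: 10-component PCA of the same counts.
pca_feats = PCA(n_components=10).fit_transform(X)

ridge = Ridge(alpha=1.0)
for name, feats in [("LDA topic proportions", theta), ("PCA", pca_feats)]:
    r2 = cross_val_score(ridge, feats, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```

With real annotated data, the comparison of interest would be whether the topic-simplex features yield higher R^2 than the PCA baseline; with the random placeholders above, both scores are expectedly near zero.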
Recent studies indicate that words containing /æ/ and /u/ vowel phonemes can be mapped onto the emotional dimension of arousal. Specifically, the wham-womb effect describes the inclination to associate words with /æ/ vowel sounds (as in “wham”) with high-arousal emotions and words with /u/ vowel sounds (as in “womb”) with low-arousal emotions. The objective of this study was to replicate the wham-womb effect using nonsense pseudowords and to test whether the findings extend to a novel methodology that combines verbal auditory and visual pictorial stimuli, which can eventually be used to test young children. We collected data from 99 undergraduate participants through an online survey. Participants heard pre-recorded pairs of monosyllabic pseudowords containing /æ/ or /u/ vowel phonemes and then matched individual pseudowords to illustrations portraying high- or low-arousal emotions. Two t-tests were conducted to analyze the size of the wham-womb effect across pseudowords and across participants, specifically the likelihood that /æ/ sounds are paired with high-arousal images and /u/ sounds with low-arousal images. Our findings robustly confirmed the wham-womb effect: participants paired /æ/ words with high-arousal emotion pictures and /u/ words with low-arousal ones at a 73.2% rate, with a large effect size. The wham-womb effect supports the idea that verbal acoustic signals tend to be tied to embodied facial musculature related to human emotions, consistent with the adaptive value of sound symbolism in language evolution and development.
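For illustration, the two t-tests described above, one across participants and one across pseudowords, could be run against the 50% chance level roughly as follows. The match rates and the number of pseudoword pairs below are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated match rates: proportion of trials where /ae/ was paired with the
# high-arousal picture and /u/ with the low-arousal one (placeholder values).
by_participant = rng.beta(7, 3, size=99)  # one rate per participant
by_pseudoword = rng.beta(7, 3, size=20)   # hypothetical number of pseudoword pairs

for label, rates in [("participants", by_participant), ("pseudowords", by_pseudoword)]:
    t, p = stats.ttest_1samp(rates, popmean=0.5)   # one-sample test against chance (50%)
    d = (rates.mean() - 0.5) / rates.std(ddof=1)   # Cohen's d for a one-sample test
    print(f"Across {label}: mean={rates.mean():.3f}, t={t:.2f}, p={p:.3g}, d={d:.2f}")
```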