The expression and perception of emotions vary across speakers and cultures; features and classification methods that generalize well to different conditions are therefore strongly desired. A method based on latent topic models is proposed to learn supra-segmental features from low-level acoustic descriptors. The derived features outperform state-of-the-art approaches on multiple databases, and cross-corpus studies are conducted to determine how well these features generalize across databases. The proposed method is also applied to derive features from facial expressions; multi-modal fusion overcomes the deficiencies of a speech-only approach and further improves recognition performance.
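A minimal sketch of the general idea of deriving supra-segmental features with a latent topic model (not the thesis's exact pipeline; the descriptor dimensions, vocabulary size, and topic count here are illustrative assumptions): frame-level acoustic descriptors are vector-quantized into "acoustic words", each utterance becomes a bag of words, and the per-utterance topic posteriors serve as fixed-length features for a downstream emotion classifier.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Synthetic stand-in for low-level descriptors (e.g. 13-dim MFCC frames):
# 20 utterances, each with a variable number of frames.
utterances = [rng.normal(size=(rng.integers(80, 120), 13)) for _ in range(20)]

# 1. Quantize all frames into a small "acoustic word" vocabulary.
all_frames = np.vstack(utterances)
vocab_size = 32
km = KMeans(n_clusters=vocab_size, n_init=4, random_state=0).fit(all_frames)

# 2. Represent each utterance as a bag of acoustic words (count vector).
def bag_of_words(frames):
    words = km.predict(frames)
    return np.bincount(words, minlength=vocab_size)

counts = np.array([bag_of_words(u) for u in utterances])

# 3. Fit a topic model; the per-utterance topic posteriors are the
#    supra-segmental features (one fixed-length vector per utterance).
lda = LatentDirichletAllocation(n_components=8, random_state=0)
features = lda.fit_transform(counts)   # shape: (n_utterances, n_topics)
```

Because the topic posteriors have fixed length regardless of utterance duration, they can feed any standard classifier, which is what makes this a supra-segmental representation.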
Besides affecting the acoustic properties of speech, emotions strongly influence speech articulation kinematics. A learning approach is proposed that constrains a classifier trained on acoustic descriptors to also model articulatory data. Because articulatory information is required only during training, the method sidesteps the challenges inherent to large-scale articulatory data collection, while still exploiting the correlations between articulation kinematics and acoustic descriptors to improve the accuracy of emotion recognition systems.
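The train-time-only use of articulatory data can be sketched as a generic "privileged information" setup (this is an illustrative toy, not the thesis's actual model): a shared linear encoder is trained jointly to predict emotion labels from acoustic features and to reconstruct articulatory measurements, but only the acoustic branch is needed at test time. All dimensions and the synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_ac, d_ar, d_h = 200, 20, 6, 8

X_ac = rng.normal(size=(n, d_ac))                 # acoustic descriptors
X_ar = rng.normal(size=(n, d_ar))                 # articulatory data (training only)
y = (X_ac[:, 0] + X_ac[:, 1] > 0).astype(float)   # toy binary emotion label

W = rng.normal(scale=0.1, size=(d_ac, d_h))       # shared encoder
w_cls = rng.normal(scale=0.1, size=d_h)           # classification head
W_ar = rng.normal(scale=0.1, size=(d_h, d_ar))    # articulatory decoder (training only)

lr, lam = 0.2, 0.1
for _ in range(500):
    H = X_ac @ W                                  # shared representation
    z = np.clip(H @ w_cls, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-z))                  # logistic prediction
    g_cls = p - y                                 # gradient of BCE wrt logits
    R = H @ W_ar - X_ar                           # articulatory reconstruction residual
    # Joint gradient: classification loss + lam * reconstruction loss.
    gW = X_ac.T @ (np.outer(g_cls, w_cls) + lam * R @ W_ar.T) / n
    gw = H.T @ g_cls / n
    gWar = lam * H.T @ R / n
    W -= lr * gW
    w_cls -= lr * gw
    W_ar -= lr * gWar

# At test time only acoustic features are needed; W_ar is discarded.
pred = (X_ac @ W @ w_cls) > 0
acc = float(np.mean(pred == (y > 0.5)))
```

The articulatory decoder acts as a training-time regularizer on the shared representation; at deployment the model is a purely acoustic classifier, mirroring the constraint described above.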
Identifying context from ambient sounds in a lifelogging scenario requires feature extraction, segmentation, and annotation techniques capable of efficiently handling long-duration audio recordings; a complete framework for such applications is presented. Its performance is evaluated on real-world data and accompanied by a prototype Android-based user interface.
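One common building block of the segmentation step can be sketched as follows (a generic illustration, not the framework's actual algorithm; window size, threshold, and the synthetic features are assumptions): slide a window over frame-level features and mark a boundary wherever the statistics of adjacent windows diverge.

```python
import numpy as np

def segment_boundaries(feats, win=50, threshold=1.5):
    """Return frame indices where the mean features of adjacent windows diverge."""
    bounds = []
    for t in range(win, len(feats) - win, win):
        left = feats[t - win:t].mean(axis=0)    # statistics before candidate boundary
        right = feats[t:t + win].mean(axis=0)   # statistics after candidate boundary
        if np.linalg.norm(left - right) > threshold:
            bounds.append(t)
    return bounds

# Toy recording: two acoustically distinct regions of 300 frames each.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 1.0, size=(300, 13)),
                   rng.normal(3.0, 1.0, size=(300, 13))])
print(segment_boundaries(feats))  # a boundary is detected at the region change
```

Real systems typically replace the mean-difference statistic with a model-based criterion (e.g. a likelihood-ratio test between windows), but the sliding-comparison structure is the same, and it scales linearly with recording length, which matters for day-long lifelogging audio.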
The proposed methods are also assessed in terms of computational and implementation complexity. Software- and field-programmable gate array (FPGA)-based implementations are considered for emotion recognition, while virtual platforms are used to model the complexities of lifelogging. The derived metrics are used to determine the feasibility of these methods for applications requiring real-time operation and low power consumption.
This chapter is not a guide to embodied thinking but a critical call to action. It highlights the deep history of embodied practice within the fields of dance and somatics and outlines the value of embodied thinking within human-computer interaction (HCI) design and, more specifically, wearable technology (WT) design. As a practitioner and scholar grounded in dance and somatics, I argue that a guide to embodiment cannot be written in a book: to fully understand embodied thinking, one must act, move, and do. Terms such as embodiment and embodied thinking are often discussed and analyzed in writing, but if the purpose is to learn how to engage in embodied thinking, the answers will not come from a text. They come from movement-based exploration, active trial and error, and improvisation practices crafted to cultivate physical attunement to one's own body. To this end, my "call to action" is for the reader to move beyond a text-based understanding of embodiment to active engagement in embodied methodologies. Only then, I argue, can one understand how to apply embodied thinking to a design process.