Barrett, The Honors College at Arizona State University, proudly showcases the work of undergraduate honors students by sharing this collection exclusively with the ASU community.

Barrett accepts high-performing, academically engaged undergraduate students and works with them in collaboration with all of the other academic units at Arizona State University. All Barrett students complete a thesis or creative project, which is an opportunity to explore an intellectual interest and produce an original piece of scholarly research. The thesis or creative project is supervised by, and defended before, a faculty committee. Students engage with professors who are nationally recognized in their fields and committed to working with honors students. Completing a Barrett thesis or creative project is an opportunity for undergraduate honors students to contribute to the ASU academic community in a meaningful way.

Description
This paper presents work done to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs) and to test multiple configurations and methods. CNNs are able to extract powerful information about an image using multiple layers of generic feature detectors, and the extracted information can be used to understand the image better by recognizing the different features present within it. Deep CNNs, however, require training sets that can exceed a million images in order to fine-tune their feature detectors, and no facial expression dataset of that size is available. Because of this limited availability of data for training a new CNN, the idea of naïve domain adaptation is explored: instead of creating and training a new CNN specifically to extract features related to FER, a CNN previously trained for another computer vision task is reused. Work for this research involved creating a system that can run a CNN, extract feature vectors from the CNN, and classify these extracted features. Once this system was built, different aspects of it were tested and tuned: the pre-trained CNN that was used, the layer from which features were extracted, the normalization applied to input images, and the training data for the classifier. Once properly tuned, the system returned results more accurate than previous attempts at facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage the advantages of deep CNNs for facial expression recognition.
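As a rough illustration of the naïve domain adaptation approach described above, the sketch below uses a CNN pretrained on another task (ImageNet classification) as a fixed feature extractor and trains a separate classifier on the extracted feature vectors. The specific model (VGG-16), extraction layer, normalization values, and placeholder dataset variables are assumptions for illustration, not the exact configuration evaluated in the thesis.

```python
# Sketch: pretrained CNN as a fixed feature extractor + separate classifier
# (naive domain adaptation). Model, layer, and dataset handling are
# illustrative assumptions, not the thesis's actual configuration.
import torch
from torchvision import models, transforms
from sklearn.svm import LinearSVC

# CNN originally trained for another computer vision task (ImageNet).
cnn = models.vgg16(weights="IMAGENET1K_V1")
cnn.eval()

# Truncate the network at an intermediate fully connected layer so its
# output is a generic feature vector rather than ImageNet class scores.
feature_extractor = torch.nn.Sequential(
    cnn.features,
    cnn.avgpool,
    torch.nn.Flatten(),
    *list(cnn.classifier.children())[:4],   # stop after the second FC layer
)

# Input normalization matching the statistics the CNN was trained with.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(pil_faces):
    """Map a list of PIL face images to fixed-length feature vectors."""
    batch = torch.stack([preprocess(img) for img in pil_faces])
    with torch.no_grad():
        return feature_extractor(batch).numpy()

# train_faces/train_labels and test_faces/test_labels stand in for an
# expression-labeled face dataset (not specified here):
# clf = LinearSVC().fit(extract_features(train_faces), train_labels)
# print(clf.score(extract_features(test_faces), test_labels))
```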
Contributors: Eusebio, Jose Miguel Ang (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This paper presents the design and evaluation of a haptic interface for augmenting human-human interpersonal interactions by delivering the facial expressions of an interaction partner to an individual who is blind, using a visual-to-tactile mapping of facial action units and emotions. Pancake shaftless vibration motors are mounted on the back of a chair to provide vibrotactile stimulation in the context of a dyadic (one-on-one) interaction across a table. This work explores the design of spatiotemporal vibration patterns that can be used to convey the basic building blocks of facial movements according to the Facial Action Coding System. A behavioral study was conducted to explore the factors that influence the naturalness of conveying affect using vibrotactile cues.
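A minimal sketch of the spatiotemporal-pattern idea described above follows: each facial action unit maps to a timed sequence of activations across the motor array on the chair back. The motor indices, timings, and the set_motor() driver are hypothetical placeholders, not the mapping or hardware interface used in the thesis.

```python
# Sketch: spatiotemporal vibrotactile patterns for facial action units.
# Motor indices, timings, and set_motor() are illustrative placeholders.
import time

# Hypothetical mapping: action unit -> list of (motor_index, seconds) steps.
ACTION_UNIT_PATTERNS = {
    "AU12_lip_corner_puller": [(3, 0.15), (4, 0.15), (5, 0.15)],  # outward sweep
    "AU4_brow_lowerer":       [(0, 0.20), (1, 0.20)],             # short upper pulse
}

def set_motor(index, on):
    """Placeholder for the real vibration-motor driver (e.g., GPIO or PWM)."""
    print(f"motor {index} {'on' if on else 'off'}")

def play_pattern(action_unit):
    """Play one spatiotemporal pattern: activate motors one after another."""
    for motor, duration in ACTION_UNIT_PATTERNS[action_unit]:
        set_motor(motor, True)
        time.sleep(duration)
        set_motor(motor, False)

play_pattern("AU12_lip_corner_puller")
```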
Contributors: Bala, Shantanu (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / Department of Psychology (Contributor)
Created: 2014-05
Description
To regain functional use of affected limbs, stroke patients must undergo intense, repetitive, and sustained exercises, so it is common for their recovery to suffer as a result of mental fatigue and boredom. Serious games aimed at reproducing the movements patients practice during rehabilitation sessions therefore present a promising way to mitigate this psychological exhaustion. This paper presents a system developed at the Center for Cognitive Ubiquitous Computing (CUbiC) at Arizona State University that provides a platform for the development of serious games for stroke rehabilitation. The system consists of a network of nodes called Smart Cubes, based on the Raspberry Pi (Model B) computer, which have an array of sensors and actuators as well as communication modules that are used in-game. The Smart Cubes are modular, taking advantage of the Raspberry Pi's General Purpose Input/Output header, and can be augmented with additional sensors or actuators in response to the needs of game developers and stroke rehabilitation therapists. Smart Cubes have advantages over traditional exercises, such as the capacity to provide many different forms of feedback and support for dynamically adapting games. They also have advantages over modern serious gaming platforms in their modularity, the flexibility afforded by their wireless network topology, and their independence from a monitor. Our contribution is a prototype of a Smart Cube network, a programmable computing platform, and a software framework specifically designed for the creation of serious games for stroke rehabilitation.
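For a sense of what a single Smart Cube node might look like, the sketch below reads a sensor through the Raspberry Pi's GPIO header and reports events to a game host over the network. The pin number, host address, and message format are illustrative assumptions rather than the system's actual protocol.

```python
# Sketch: one Smart Cube node reporting sensor events to a game host.
# Pin numbers, the host address, and the message format are placeholders.
import socket
import time
import RPi.GPIO as GPIO  # available on the Raspberry Pi

BUTTON_PIN = 17                      # hypothetical sensor input pin (BCM numbering)
GAME_HOST = ("192.168.1.10", 5005)   # hypothetical address of the game host

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

try:
    while True:
        if GPIO.input(BUTTON_PIN) == GPIO.LOW:        # sensor triggered
            sock.sendto(b"cube1:pressed", GAME_HOST)  # notify the game
            time.sleep(0.2)                           # crude debounce
        time.sleep(0.01)
finally:
    GPIO.cleanup()
```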
Contributors: Fakhri, Bijan (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy L. (Committee member) / Tadayon, Ramin (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2014-05