Barrett, The Honors College at Arizona State University proudly showcases the work of undergraduate honors students by sharing this collection exclusively with the ASU community.

Barrett accepts high-performing, academically engaged undergraduate students and works with them in collaboration with all of the other academic units at Arizona State University. All Barrett students complete a thesis or creative project, which is an opportunity to explore an intellectual interest and produce an original piece of scholarly research. The thesis or creative project is supervised by and defended in front of a faculty committee. Students are able to engage with professors who are nationally recognized in their fields and committed to working with honors students. Completing a Barrett thesis or creative project is an opportunity for undergraduate honors students to contribute to the ASU academic community in a meaningful way.


Description
This paper presents work done to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs) and to test multiple configurations and methods. CNNs can extract powerful information about an image using multiple layers of generic feature detectors, and the extracted information can be used to better understand the image by recognizing the different features present within it. Deep CNNs, however, require training sets that can exceed a million pictures in order to fine-tune their feature detectors, and no facial expression datasets of that size are available. Given this limited availability of training data, the idea of naïve domain adaptation is explored: instead of creating and training a new CNN specifically to extract FER-related features, a CNN previously trained for another computer vision task is reused. The work for this research involved building a system that can run a CNN, extract feature vectors from it, and classify those extracted features. Once this system was built, different aspects of it were tested and tuned: the pre-trained CNN used, the layer from which features were extracted, the normalization applied to input images, and the training data for the classifier. Once properly tuned, the system returned results more accurate than previous attempts at facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage the advantages of deep CNNs for facial expression recognition.
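A minimal sketch of the naïve domain adaptation idea the abstract describes: reuse a CNN pre-trained on another vision task as a fixed feature extractor, then train a lightweight classifier on the extracted vectors. The choice of VGG-16/ImageNet and a linear SVM here is an illustrative assumption, not necessarily the thesis's exact configuration.

```python
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from sklearn.svm import LinearSVC

# Load a network pre-trained on ImageNet and drop its final classification
# layer, leaving a 4096-dimensional feature extractor.
vgg = models.vgg16(pretrained=True)
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])
vgg.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(images):
    """Map a list of PIL face images to fixed-length feature vectors."""
    batch = torch.stack([preprocess(img) for img in images])
    with torch.no_grad():
        return vgg(batch).numpy()

# A simple classifier is then trained on the extracted features, e.g.:
# classifier = LinearSVC().fit(extract_features(train_faces), train_labels)
```

Tuning the layer from which features are taken, the input normalization, and the classifier's training data, as the abstract describes, would all happen around this core loop.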
ContributorsEusebio, Jose Miguel Ang (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2016-05
Description
This paper presents the design and evaluation of a haptic interface for augmenting human-human interpersonal interactions by delivering the facial expressions of an interaction partner to an individual who is blind, using a visual-to-tactile mapping of facial action units and emotions. Pancake shaftless vibration motors are mounted on the back of a chair to provide vibrotactile stimulation in the context of a dyadic (one-on-one) interaction across a table. This work explores the design of spatiotemporal vibration patterns that can be used to convey the basic building blocks of facial movements according to the Facial Action Coding System (FACS). A behavioral study was conducted to explore the factors that influence the naturalness of conveying affect using vibrotactile cues.
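A rough sketch of what a spatiotemporal vibration pattern on the chair-back motor array might look like. The set_motor() driver function, the 3x3 grid indexing, and the upward-sweep pattern are all hypothetical stand-ins; the study's actual patterns and hardware interface are not specified in the abstract.

```python
import time

def set_motor(index, intensity):
    # Placeholder for the actual motor driver (e.g., a serial command
    # to the vibration motor controller).
    print(f"motor {index} -> intensity {intensity:.2f}")

def play_pattern(frames, step_duration=0.2):
    """Play a pattern given as a list of frames of (motor_index, intensity)."""
    for frame in frames:
        for index, intensity in frame:
            set_motor(index, intensity)
        time.sleep(step_duration)
        for index, _ in frame:
            set_motor(index, 0.0)  # stop motors before the next frame

# An upward sweep across a 3x3 grid, row by row, could map to an upward
# facial movement such as a brow raise (illustrative mapping only).
upward_sweep = [
    [(6, 1.0), (7, 1.0), (8, 1.0)],  # bottom row
    [(3, 1.0), (4, 1.0), (5, 1.0)],  # middle row
    [(0, 1.0), (1, 1.0), (2, 1.0)],  # top row
]
play_pattern(upward_sweep)
```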
ContributorsBala, Shantanu (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / Department of Psychology (Contributor)
Created2014-05
Description
This paper presents an overview of The Dyadic Interaction Assistant for Individuals with Visual Impairments, with a focus on the software component. The system is designed to communicate facial information (facial Action Units, facial expressions, and facial features) to an individual with visual impairments during a dyadic interaction between two people sitting across from each other. Comprised of (1) a webcam, (2) software, and (3) a haptic device, the system can also be described as a series of input, processing, and output stages, respectively. The processing stage builds on the open-source FaceTracker software and the Computer Expression Recognition Toolbox (CERT). While these two sources provide the facial data, a program developed in the Qt Creator IDE and several AppleScripts adapt the information to a Graphical User Interface (GUI) and output the data to a comma-separated values (CSV) file. It is the first software to convey all three types of facial information at once in real time. Future work includes testing and evaluating the quality of the software with human subjects (both sighted and blind/low vision), integrating the haptic device to complete the system, and evaluating the entire system with human subjects (sighted and blind/low vision).
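An illustrative sketch of the output stage the abstract describes: logging per-frame facial data (Action Units, an expression label) to a CSV file. The field names and the get_frame_data() callable are assumptions standing in for the parsed FaceTracker/CERT output; the real software's schema is not given in the abstract.

```python
import csv
import time

# Hypothetical subset of fields; CERT reports many more Action Units.
FIELDS = ["timestamp", "expression", "AU1", "AU2", "AU4", "AU6", "AU12"]

def log_facial_data(get_frame_data, path="session.csv", frames=100):
    """Poll a frame-data callable and append one CSV row per frame.

    get_frame_data is assumed to return a dict keyed by the non-timestamp
    entries of FIELDS, e.g. parsed from CERT's per-frame output.
    """
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for _ in range(frames):
            data = get_frame_data()
            data["timestamp"] = time.time()
            writer.writerow(data)
```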
ContributorsBrzezinski, Chelsea Victoria (Author) / Balasubramanian, Vineeth (Thesis director) / McDaniel, Troy (Committee member) / Venkateswara, Hemanth (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2013-05
Description
Fresh15 is an iOS application geared towards helping college students eat healthier, based on a user's preferences for price range, food restrictions, and favorite ingredients. Our application also considers that students may have to order their ingredients online because they lack access to transportation.
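A rough sketch of the kind of preference filtering the description implies: narrowing recipes by price range and dietary restrictions, then ranking by favorite ingredients. The Recipe fields and the scoring rule are assumptions, since the abstract does not specify the app's data model.

```python
from dataclasses import dataclass, field

@dataclass
class Recipe:
    name: str
    price: float
    tags: set = field(default_factory=set)        # e.g., {"nuts", "dairy"}
    ingredients: set = field(default_factory=set)

def matching_recipes(recipes, max_price, restrictions, favorites):
    """Keep affordable recipes without restricted tags, ranked by favorites."""
    ok = [r for r in recipes
          if r.price <= max_price and not (r.tags & restrictions)]
    return sorted(ok, key=lambda r: -len(r.ingredients & favorites))
```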
Contributors: Bailey, Reece (Co-author) / Fallah-Adl, Sarah (Co-author) / Meuth, Ryan (Thesis director) / McDaniel, Troy (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
The Oasis app is a self-appraisal tool that helps potential or current problem gamblers take control of their habits by providing periodic check-in notifications during a gambling session and allowing users to see their progress over time. Oasis is backed by substantial background research on addiction intervention methods, especially in the field of self-appraisal messaging, and applies this messaging in a familiar mobile-notification form that can effectively change users' behavior. User feedback was collected and used to improve the app, and the results show a promising tool that could help those who need it in the future.
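A minimal sketch of Oasis's periodic check-in idea: while a session is active, prompt the user at a fixed interval and record the self-appraisal responses so progress can be reviewed later. The interval, prompt wording, and console I/O are assumptions; the real app delivers these as mobile notifications.

```python
import time

def run_session(check_in_minutes=20,
                prompt="How are you feeling about your play right now?"):
    """Run one gambling session, collecting periodic self-appraisals."""
    responses = []
    print("Session started. Interrupt (Ctrl+C) to end the session.")
    try:
        while True:
            time.sleep(check_in_minutes * 60)   # wait until the next check-in
            responses.append((time.time(), input(prompt + " ")))
    except KeyboardInterrupt:
        # In the app, responses would be persisted so the user can see
        # their progress over time.
        return responses
```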

Contributors: Blunt, Thomas (Author) / Meuth, Ryan (Thesis director) / McDaniel, Troy (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description
In this project, I investigated the impact of virtual reality on memory retention. To measure this impact, I used the memorization technique known as the memory palace inside a virtual reality environment. Due to Covid-19, I was forced to be the experiment's only subject. To get effective data, I tested myself within randomly generated environments, each with a completely unique set of objects, both outside of a virtual reality environment and within one. First, I conducted a set of 10 tests on myself by going through a virtual environment on my laptop and recalling as many objects as I could within that environment, recording the accuracy of my recollection as well as how long each trial took. Next, I conducted a set of 10 tests on myself by going through the same virtual environment, but this time with an immersive virtual reality (VR) headset and a completely new set of objects. At the start of the project, I hypothesized that virtual reality would result in a higher memory retention rate than going through the environment in a non-immersive setting. In the end, the results, albeit from a small number of trials, leaned toward supporting the hypothesis.
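A small sketch of the recall-accuracy measure the experiment implies: the fraction of placed objects correctly recalled per trial. The object names and counts below are placeholders, not the thesis's actual data.

```python
def recall_accuracy(placed, recalled):
    """Fraction of objects in the environment that were correctly recalled."""
    return len(set(placed) & set(recalled)) / len(placed)

# Hypothetical trial: 10 objects placed, 7 recalled correctly -> 0.7
placed = ["clock", "apple", "lamp", "book", "key",
          "mug", "phone", "shoe", "plant", "dice"]
recalled = ["clock", "lamp", "book", "key", "mug", "phone", "dice"]
print(recall_accuracy(placed, recalled))  # 0.7
```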
Contributors: Du, Michael Shan (Author) / Kobayashi, Yoshihiro (Thesis director) / McDaniel, Troy (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
In recent years, the development of new Machine Learning models has allowed new technological advancements to be introduced for practical use across the world. Multiple studies and experiments have been conducted to create new variations of Machine Learning models with different algorithms and to determine whether the resulting systems would prove successful. Even today, many research initiatives continue to develop new models in the hope of discovering solutions for problems such as autonomous driving or determining the emotional value of a single sentence. One currently popular Machine Learning research topic is the development of Facial Expression Recognition systems: models that classify images of human faces expressing different emotions. To build effective models for Facial Expression Recognition, researchers have turned to Deep Learning, a more advanced form of Machine Learning built on Neural Networks. More specifically, Convolutional Neural Networks, Deep Learning models capable of processing visual data such as images and videos, have proven the most effective at classifying images of various facial expressions with high accuracy. For this project, I focused on learning the important concepts of Machine Learning, Deep Learning, and Convolutional Neural Networks in order to implement a Convolutional Neural Network previously developed in a recommended research paper.
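A compact sketch of the kind of Convolutional Neural Network such a project implements for facial expression recognition. The layer sizes, 48x48 grayscale input, and 7 expression classes follow common FER datasets (e.g., FER2013) and are assumptions, not the referenced paper's exact architecture.

```python
import torch
import torch.nn as nn

class FERNet(nn.Module):
    """Small CNN: three conv/pool blocks, then a linear expression classifier."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Linear(128 * 6 * 6, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# A batch of eight 48x48 grayscale faces -> per-class logits.
logits = FERNet()(torch.randn(8, 1, 48, 48))
```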
Contributors: Frace, Douglas R (Author) / Demakethepalli Venkateswara, Hemanth Kumar (Thesis director) / McDaniel, Troy (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Learning a new language can be very challenging. One significant aspect of learning a language is learning how to have fluent verbal and written conversations with other people in that language. However, it can be difficult to find other people available with whom to practice conversations. Additionally, total beginners may feel uncomfortable and self-conscious when speaking the language with others. In this paper, I present Hana, a chatbot application powered by deep learning for practicing open-domain verbal and written conversations in a variety of different languages. Hana uses a pre-trained medium-sized instance of Microsoft's DialoGPT to generate English responses to user input translated into English, while Google Cloud Platform's Translation API handles translation to and from the language selected by the user. The chatbot is presented as a browser-based web application, allowing users to interact with it either verbally or through text. Overall, the chatbot is capable of having interesting open-domain conversations with the user in languages supported by the Google Cloud Translation API, but response generation can be delayed by several seconds, and the conversations and their translations do not necessarily account for the linguistic and cultural nuances of a given language.
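A sketch of Hana's pipeline as described: translate user input to English, generate a reply with DialoGPT, and translate the reply back into the user's language. The DialoGPT usage follows the standard Hugging Face transformers interface; the translate() stub is a placeholder for the Google Cloud Translation API calls the app actually makes.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

def translate(text, target):
    # Placeholder: the app calls Google Cloud Platform's Translation API here.
    return text

def reply(user_text, user_lang):
    """Translate input to English, generate a DialoGPT reply, translate back."""
    english_in = translate(user_text, target="en")
    ids = tokenizer.encode(english_in + tokenizer.eos_token, return_tensors="pt")
    out = model.generate(ids, max_length=200,
                         pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens, not the echoed prompt.
    english_out = tokenizer.decode(out[0, ids.shape[-1]:],
                                   skip_special_tokens=True)
    return translate(english_out, target=user_lang)

print(reply("Bonjour, comment ça va ?", "fr"))
```

The round trip through English at every turn is one source of the response latency and lost linguistic nuance the abstract notes.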
Contributors: Budiman, Matthew Aaron (Author) / Venkateswara, Hemanth Kumar Demakethepalli (Thesis director) / McDaniel, Troy (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12