Matching Items (53)
Description
Humans have an excellent ability to analyze and process information from multiple domains. They also possess the ability to apply the same decision-making process when a situation resembles their previous experience.

Inspired by humans' ability to remember past experiences and apply them when a similar situation occurs, the research community has attempted to augment Neural Networks with external memory to store previously learned information. In parallel, the community has also developed mechanisms that perform domain-specific weight switching so that a single model can handle multiple domains. Notably, these two research fields have worked independently, and the goal of this dissertation is to combine their capabilities.

This dissertation introduces a Neural Network module augmented with two external memories: one that allows the network to read and write information, and another that performs domain-specific weight switching. Two learning tasks are proposed in this work to investigate the model's performance: solving sequences of mathematical operations, and identifying actions based on color sequences. A wide range of experiments with these two tasks verifies the model's learning capabilities.
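
The abstract does not detail the architecture, so the following is only a minimal sketch of the general idea, assuming a PyTorch implementation with a GRU controller, an attention-based read/write memory, and one output head per domain for weight switching; all names and dimensions are hypothetical.

```python
# Minimal sketch (not the dissertation's exact architecture) of a module that
# pairs a controller network with (1) a read/write external memory and
# (2) a bank of domain-specific weights switched in at run time.
import torch
import torch.nn as nn

class MemoryAugmentedModule(nn.Module):
    def __init__(self, in_dim, hid_dim, mem_slots, mem_dim, out_dim, num_domains):
        super().__init__()
        self.controller = nn.GRUCell(in_dim + mem_dim, hid_dim)
        # External memory 1: content-addressable read/write storage.
        self.register_buffer("memory", torch.zeros(mem_slots, mem_dim))
        self.read_key = nn.Linear(hid_dim, mem_dim)
        self.write_vec = nn.Linear(hid_dim, mem_dim)
        # External memory 2: one output head per domain ("weight switching").
        self.domain_heads = nn.ModuleList(
            [nn.Linear(hid_dim, out_dim) for _ in range(num_domains)]
        )

    def forward(self, x, h, domain_id):
        # Read: attention over memory slots using a key from the hidden state.
        attn = torch.softmax(self.memory @ self.read_key(h).squeeze(0), dim=0)
        read = (attn.unsqueeze(1) * self.memory).sum(0, keepdim=True)
        h = self.controller(torch.cat([x, read], dim=1), h)
        # Write: blend a new vector into the most-attended slot (detached here
        # purely to keep this sketch simple).
        slot = attn.argmax()
        self.memory[slot] = 0.5 * self.memory[slot] + 0.5 * self.write_vec(h).squeeze(0).detach()
        # Switch to the weights associated with the current domain.
        return self.domain_heads[domain_id](h), h

m = MemoryAugmentedModule(in_dim=8, hid_dim=16, mem_slots=32, mem_dim=16, out_dim=4, num_domains=2)
y, h = m(torch.randn(1, 8), torch.zeros(1, 16), domain_id=0)
```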
ContributorsPatel, Deep Chittranjan (Author) / Ben Amor, Hani (Thesis advisor) / Banerjee, Ayan (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created2020
Description
Touch plays a vital role in maintaining human relationships through social and emotional communication. This research proposes a multi-modal haptic display capable of generating vibrotactile and thermal haptic signals individually and simultaneously. The main objective for creating this device is to explore the importance of touch in social communication, which is absent in traditional communication modes such as phone calls and video calls. By studying how humans interpret haptically generated messages, this research aims to create a new communication channel for humans. This novel device is worn on the user's forearm and has a broad scope of applications such as navigation, social interaction, notifications, health care, and education. The research methods include testing patterns in the vibro-thermal modality while noting its realizability and accuracy. Different patterns can be controlled and generated through an Android application connected to the proposed device via Bluetooth. Experimental results indicate that the patterns SINGLE TAP and HOLD/SQUEEZE were easily identifiable and more relatable to social interactions. In contrast, other patterns like UP-DOWN, DOWN-UP, LEFT-RIGHT, RIGHT-LEFT, LEFT-DIAGONAL, and RIGHT-DIAGONAL were less identifiable and less relatable to social interactions. Finally, design modifications are required if complex social patterns are to be displayed on the forearm.
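
The abstract does not specify the control protocol between the Android application and the display, but a plausible sketch, assuming pattern commands are streamed as single-byte codes over a Bluetooth serial link, could look like the following; the port name and the pattern-to-code mapping are hypothetical.

```python
# Hypothetical sketch: send a haptic pattern code to the forearm display over
# a Bluetooth serial link. The pattern codes and port name are illustrative only.
import serial  # pyserial

PATTERN_CODES = {
    "SINGLE_TAP": 0x01,
    "HOLD_SQUEEZE": 0x02,
    "UP_DOWN": 0x03,
    "DOWN_UP": 0x04,
    "LEFT_RIGHT": 0x05,
    "RIGHT_LEFT": 0x06,
}

def send_pattern(port: str, pattern: str, baud: int = 9600) -> None:
    """Open the Bluetooth serial port and transmit one pattern code."""
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(bytes([PATTERN_CODES[pattern]]))

if __name__ == "__main__":
    send_pattern("/dev/rfcomm0", "SINGLE_TAP")
```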
ContributorsGharat, Shubham Shriniwas (Author) / McDaniel, Troy (Thesis advisor) / Redkar, Sangram (Thesis advisor) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created2021
Description
The knee joint has essential functions in supporting the body weight and maintaining normal walking. Neurological diseases like stroke and musculoskeletal disorders like osteoarthritis can affect the function of the knee. Besides physical therapy, robot-assisted therapy using wearable exoskeletons and exosuits has shown potential as an efficient therapy that helps patients restore their limbs' functions. Exoskeletons and exosuits are being developed for either human performance augmentation or medical purposes like rehabilitation. Although research on exoskeletons started earlier than research on exosuits, research and development on exosuits has recently grown rapidly because exosuits offer advantages that exoskeletons lack. The objective of this research is to develop a soft exosuit for knee flexion assistance and validate its ability to reduce the EMG activity of the knee flexor muscles. The exosuit has been developed with a novel soft fabric actuator and novel 3D-printed adjustable braces that attach the actuator in alignment with the knee. An analytical torque model has been derived and validated experimentally to characterize and predict the torque output of the actuator. In addition, the actuator's inflation and deflation times have been experimentally characterized, a controller has been implemented, and the exosuit has been tested on a healthy human subject. It is found that the analytical torque model predicts the torque output in the flexion angle range from 0° to 60° more precisely than analytical models in the literature. Deviations beyond 60° may have been caused by factors such as fabric extensibility and the actuator's bending behavior. Human testing showed that, for the subject tested, the exosuit gave the best performance when the controller was tuned to inflate at 31.9% of the gait cycle. At this inflation timing, the biceps femoris, semitendinosus, and vastus lateralis muscles showed average electromyography (EMG) reductions of 32.02%, 23.05%, and 2.85%, respectively. Finally, it is concluded that the developed exosuit may assist knee flexion in a more diverse set of healthy subjects and may potentially be used in the future for human performance augmentation and rehabilitation of people with disabilities.
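
The abstract reports average EMG reductions but not the computation behind them; a simple sketch of how such a percentage reduction could be derived from baseline (unassisted) and assisted trials is shown below, using hypothetical RMS EMG values.

```python
# Sketch: percent EMG reduction per muscle relative to an unassisted baseline.
# The muscle names come from the abstract; the RMS EMG values are hypothetical.
def percent_change(baseline: float, assisted: float) -> float:
    """Negative result means the assisted condition lowered EMG activity."""
    return (assisted - baseline) / baseline * 100.0

trials = {
    # muscle: (baseline RMS EMG, assisted RMS EMG) -- illustrative numbers only
    "biceps femoris": (0.110, 0.075),
    "semitendinosus": (0.095, 0.073),
    "vastus lateralis": (0.140, 0.136),
}

for muscle, (base, assist) in trials.items():
    print(f"{muscle}: {percent_change(base, assist):+.2f} %")
```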
ContributorsHasan, Ibrahim Mohammed Ibrahim (Author) / Zhang, Wenlong (Thesis advisor) / Aukes, Daniel (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created2021
Description
Working memory plays an important role in human activities across academic, professional, and social settings. Working memory is defined as the memory extensively involved in goal-directed behaviors in which information must be retained and manipulated to ensure successful task execution. The aim of this research is to understand the effect of image captioning with image description on an individual's working memory. A study was conducted with eight neutral images depicting situations relatable to daily life, such that each image could have a positive or a negative description associated with the outcome of the situation in the image. The study consisted of three rounds: the first and second rounds involved two parts each, and the third round consisted of one part. Each image was captioned a total of five times across the entire study. The findings highlight that only 25% of participants were able to recall the captions they had written for an image after a span of 9-15 days; when comparing the recall rate of the captions, 50% of participants were able to recall their image caption from the previous round in the current round; and of the positive and negative descriptions associated with the images, 65% of participants recalled the former rather than the latter. The conclusions drawn from the study are that participants tend to retain information for longer periods than the expected duration of working memory, possibly because they were able to relate the images to everyday situations, and that, given a situation with both positive and negative information, the human brain is aligned toward the positive information over the negative.
ContributorsUppara, Nithiya Shree (Author) / McDaniel, Troy (Thesis advisor) / Venkateswara, Hemanth (Thesis advisor) / Bryan, Chris (Committee member) / Arizona State University (Publisher)
Created2021
Description
Although many data visualization diagrams can be made accessible for individuals who are blind or visually impaired, they often do not present the information in a way that intuitively allows readers to easily discern patterns in the data. In particular, accessible node graphs tend to use speech to describe the transitions between nodes. While the speech is easy to understand, readers can be overwhelmed by too much speech and may not be able to discern any structural patterns which occur in the graphs. Considering these limitations, this research seeks to find ways to better present transitions in node graphs.

This study aims to gain knowledge on how sequence patterns in node graphs can be perceived through speech and nonspeech audio. Users listened to short audio clips describing a sequence of transitions occurring in a node graph. User study results were evaluated based on accuracy and user feedback. Five common techniques were identified through the study, and the results will be used to help design a node graph tool to improve accessibility of node graph creation and exploration for individuals that are blind or visually impaired.
ContributorsDarmawaskita, Nicole (Author) / McDaniel, Troy (Thesis director) / Duarte, Bryan (Committee member) / Computer Science and Engineering Program (Contributor, Contributor) / Barrett, The Honors College (Contributor)
Created2019-12
Description
This paper presents a study conducted to gain knowledge on communicating an object's relative 3-dimensional position to individuals who are blind or visually impaired. The HapBack, a continuation of the HaptWrap V1.0 (Duarte et al., 2018), focuses on the perception of objects and their distances in 3-dimensional space using haptic communication. The HapBack is a device that consists of two elastic bands secured horizontally around the user's torso and two backpack straps secured along the user's back. The backpack straps are embedded with 10 vibrotactile motors evenly positioned along the spine. This device is designed to provide a wearable interface for blind and visually impaired individuals in order to understand how the positions of objects in 3-dimensional space are perceived through haptic communication. We analyzed the accuracy of the HapBack device along three vectors: (1) two different modes of vibration, absolute and relative; (2) the location of the vibrotactile motors in absolute mode; and (3) the location of the vibrotactile motors in relative mode. The results support the conclusion that the HapBack's vibrotactile patterns were intuitively mapped to the distances represented in the study. By analyzing the intuitiveness of the vibrotactile patterns and the accuracy of users' responses, we gained a better understanding of how distance can be perceived through haptic communication by individuals who are blind.
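
The abstract does not define the two vibration modes precisely, so the following is only a hedged sketch of one plausible mapping, assuming absolute mode maps a distance directly to one of the 10 spinal motors while relative mode steps up or down from the previously activated motor; the sensing range and mode definitions are assumptions.

```python
# Hypothetical sketch of distance-to-motor mapping for a 10-motor spinal array.
# The mode definitions and sensing range are assumptions, not from the study.
NUM_MOTORS = 10
MAX_DISTANCE_M = 5.0  # illustrative sensing range

def absolute_motor(distance_m: float) -> int:
    """Map a distance directly to a motor index (0 = lowest, 9 = highest)."""
    d = min(max(distance_m, 0.0), MAX_DISTANCE_M)
    return min(int(d / MAX_DISTANCE_M * NUM_MOTORS), NUM_MOTORS - 1)

def relative_motor(previous_index: int, distance_change_m: float) -> int:
    """Step up or down from the previously activated motor as the object moves."""
    step = 1 if distance_change_m > 0 else -1 if distance_change_m < 0 else 0
    return min(max(previous_index + step, 0), NUM_MOTORS - 1)

print(absolute_motor(2.4))       # e.g. motor 4
print(relative_motor(4, -0.3))   # object moved closer -> motor 3
```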
ContributorsLow, Allison Xin Ming (Author) / McDaniel, Troy (Thesis director) / Duarte, Bryan (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor, Contributor) / Barrett, The Honors College (Contributor)
Created2019-12
Description
In this project, I investigated the impact of virtual reality on memory retention. To investigate this impact, I utilized the memorization technique called the memory palace within a virtual reality environment. Due to Covid-19, I was forced to be the only subject in the experiment. To collect useful data, I tested myself within randomly generated environments, each containing a completely unique set of objects, both outside of a virtual reality environment and within one. First, I conducted a set of 10 tests on myself by going through a virtual environment on my laptop and recalling as many objects as I could within that environment. I recorded the accuracy of my own recollection as well as how long it took me to get through the data. Next, I conducted a set of 10 tests on myself by going through the same virtual environment, but this time with an immersive virtual reality (VR) headset and a completely new set of objects. At the start of the project, it was hypothesized that virtual reality would result in a higher memory retention rate than simply going through the environment in a non-immersive setting. In the end, the results, albeit from a small number of trials, leaned toward supporting the hypothesis.
ContributorsDu, Michael Shan (Author) / Kobayashi, Yoshihiro (Thesis director) / McDaniel, Troy (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
Description
In recent years, the development of new Machine Learning models has allowed new technological advancements to be introduced for practical use across the world. Multiple studies and experiments have been conducted to create new variations of Machine Learning models with different algorithms to determine whether potential systems would prove to be successful. Even today, many research initiatives continue to develop new models in the hope of discovering potential solutions to problems such as autonomous driving or determining the emotional value of a single sentence. One of the currently popular research topics in Machine Learning is the development of Facial Expression Recognition systems. These Machine Learning models focus on classifying images of human faces that are expressing different emotions through facial expressions. In order to develop effective models for Facial Expression Recognition, researchers have utilized Deep Learning models, a more advanced class of Machine Learning models known as Neural Networks. More specifically, Convolutional Neural Networks have proven to be the most effective models for achieving highly accurate results at classifying images of various facial expressions. Convolutional Neural Networks are Deep Learning models that are capable of processing visual data, such as images and videos, and can be used to identify various facial expressions. For this project, I focused on learning about the important concepts of Machine Learning, Deep Learning, and Convolutional Neural Networks in order to implement a Convolutional Neural Network that was previously developed in a recommended research paper.
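
The abstract does not reproduce the network from the referenced paper; the following is a minimal sketch of a convolutional classifier for facial expression images, assuming 48x48 grayscale inputs and seven emotion classes (common choices for facial expression datasets, but assumptions here), written in PyTorch.

```python
# Minimal sketch of a CNN for facial expression classification.
# The input size (48x48 grayscale) and 7 emotion classes are assumptions,
# not the architecture from the paper the project reimplemented.
import torch
import torch.nn as nn

class SimpleFERNet(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One forward pass on a dummy batch of four 48x48 grayscale face crops.
logits = SimpleFERNet()(torch.randn(4, 1, 48, 48))
print(logits.shape)  # torch.Size([4, 7])
```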
ContributorsFrace, Douglas R (Author) / Demakethepalli Venkateswara, Hemanth Kumar (Thesis director) / McDaniel, Troy (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2020-05
Description
As modern advancements in medical technology continue to increase overall life expectancy, hospitals and healthcare systems are finding new and more efficient ways of storing extensive amounts of patient healthcare information. This progression finds people increasingly dependent on hospitals as the primary providers of medical data, ranging from immunization records to surgical history. However, the benefits of carrying a copy of personal health information are becoming increasingly evident. This project aims to create a simple, secure, and cohesive application that stores and retrieves user health information backed by Google's Firebase cloud infrastructure. Data was collected both to explore the current need for such an application and to test the usability of the product. The former was done using a multiple-choice survey distributed through social media to understand the necessity for a patient-held health file (PHF). Subsequently, user testing was performed with the intent to track the success of our application in meeting those needs. According to the data, there was a trend that suggested a significant need for a healthcare information storage device. This application, allowing for efficient and simple medical information storage and retrieval, was created for a target audience of those seeking to improve their medical information awareness, with a primary focus on the elderly population. Specific correlations between the frequency of physician visits and app usage were identified to target the potential use cases of our app. The outcome of this project succeeded in meeting the significant need for increased patient medical awareness in the healthcare community.
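
The abstract states only that Firebase backs the storage and retrieval of health information; the sketch below assumes Cloud Firestore accessed through the firebase_admin Python SDK, with a hypothetical schema and key file, and is not the project's actual implementation.

```python
# Hedged sketch: storing and retrieving a patient health record with Firebase.
# The project's actual schema and choice of Firebase product are not given in
# the abstract; this assumes Cloud Firestore via the firebase_admin SDK.
import firebase_admin
from firebase_admin import credentials, firestore

cred = credentials.Certificate("serviceAccountKey.json")  # hypothetical key file
firebase_admin.initialize_app(cred)
db = firestore.client()

def save_record(user_id: str, record: dict) -> None:
    """Write one health record (e.g., an immunization entry) under the user."""
    db.collection("users").document(user_id).collection("records").add(record)

def load_records(user_id: str) -> list[dict]:
    """Retrieve all health records stored for the user."""
    docs = db.collection("users").document(user_id).collection("records").stream()
    return [doc.to_dict() for doc in docs]

save_record("demo-user", {"type": "immunization", "vaccine": "MMR", "year": 2015})
print(load_records("demo-user"))
```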
ContributorsUpponi, Rohan Sachin (Co-author) / Somayaji, Vasishta (Co-author) / McDaniel, Troy (Thesis director) / Meuth, Ryan (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05
Description
Learning a new language can be very challenging. One significant aspect of learning a language is learning how to have fluent verbal and written conversations with other people in that language. However, it can be difficult to find other people available with whom to practice conversations. Additionally, total beginners may feel uncomfortable and self-conscious when speaking the language with others. In this paper, I present Hana, a chatbot application powered by deep learning for practicing open-domain verbal and written conversations in a variety of different languages. Hana uses a pre-trained medium-sized instance of Microsoft's DialoGPT in order to generate English responses to user input translated into English. Google Cloud Platform's Translation API is used to handle translation to and from the language selected by the user. The chatbot is presented in the form of a browser-based web application, allowing users to interact with the chatbot in both a verbal or text-based manner. Overall, the chatbot is capable of having interesting open-domain conversations with the user in languages supported by the Google Cloud Translation API, but response generation can be delayed by several seconds, and the conversations and their translations do not necessarily take into account linguistic and cultural nuances associated with a given language.
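
The exact pipeline is not given in the abstract, but a hedged sketch of the core loop it describes, assuming the Hugging Face transformers implementation of DialoGPT-medium and the Google Cloud Translation v2 client (credentials setup omitted), might look like this.

```python
# Hedged sketch of Hana's core loop: translate user input to English, generate
# a reply with DialoGPT, then translate the reply back into the user's language.
from transformers import AutoModelForCausalLM, AutoTokenizer
from google.cloud import translate_v2 as translate

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
translator = translate.Client()  # assumes GOOGLE_APPLICATION_CREDENTIALS is set

def reply(user_text: str, user_lang: str = "es") -> str:
    # 1. Translate the user's message into English.
    english_in = translator.translate(user_text, target_language="en")["translatedText"]
    # 2. Generate an English response with DialoGPT.
    ids = tokenizer.encode(english_in + tokenizer.eos_token, return_tensors="pt")
    out = model.generate(ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
    english_out = tokenizer.decode(out[0][ids.shape[-1]:], skip_special_tokens=True)
    # 3. Translate the response back into the user's language.
    return translator.translate(english_out, target_language=user_lang)["translatedText"]

print(reply("¿Cómo estás hoy?"))
```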
ContributorsBudiman, Matthew Aaron (Author) / Venkateswara, Hemanth Kumar Demakethepalli (Thesis director) / McDaniel, Troy (Committee member) / Computer Science and Engineering Program (Contributor, Contributor) / Barrett, The Honors College (Contributor)
Created2020-12