Matching Items (11)

Description
This paper presents the design and evaluation of a haptic interface for augmenting human-human interpersonal interactions by delivering the facial expressions of an interaction partner to an individual who is blind, using a visual-to-tactile mapping of facial action units and emotions. Pancake shaftless vibration motors are mounted on the back of a chair to provide vibrotactile stimulation in the context of a dyadic (one-on-one) interaction across a table. This work explores the design of spatiotemporal vibration patterns that can be used to convey the basic building blocks of facial movements according to the Facial Action Coding System. A behavioral study was conducted to explore the factors that influence the naturalness of conveying affect using vibrotactile cues.
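As a rough illustration of the visual-to-tactile mapping described above, the sketch below shows how a few facial action units might be encoded as spatiotemporal patterns over back-mounted vibration motors. The motor indices, the chosen action units, and the timings are illustrative assumptions, not the thesis's actual design.

```python
# Hypothetical sketch: mapping facial action units to spatiotemporal
# vibration patterns on back-mounted pancake motors.
# Motor ids, action units, and durations are placeholders for illustration.
import time

# Each pattern is a list of (motor_ids, duration_s) steps played in order.
ACTION_UNIT_PATTERNS = {
    "AU12_lip_corner_puller": [([6, 8], 0.2), ([7], 0.2)],  # smile-like sweep
    "AU4_brow_lowerer":       [([0, 2], 0.2), ([1], 0.3)],  # converge at top
}

def drive_motor(motor_id, on):
    """Stand-in for the PWM/driver call that would switch a motor."""
    print(f"motor {motor_id} {'ON' if on else 'OFF'}")

def play_pattern(name):
    for motor_ids, duration in ACTION_UNIT_PATTERNS[name]:
        for m in motor_ids:
            drive_motor(m, True)
        time.sleep(duration)
        for m in motor_ids:
            drive_motor(m, False)

play_pattern("AU12_lip_corner_puller")
```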
Contributors: Bala, Shantanu (Author) / Panchanathan, Sethuraman (Thesis director) / McDaniel, Troy (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor) / Department of Psychology (Contributor)
Created: 2014-05
Description

In this experiment, a haptic glove with vibratory motors on the fingertips was tested against the standard HTC Vive controller to see if the additional vibrations provided by the glove increased immersion in common gaming scenarios where haptic feedback is provided. Specifically, two scenarios were developed: an explosion scene containing a small and a large explosion, and a box interaction scene that allowed the participants to touch the box virtually with their hand. At the start of this project, it was hypothesized that the haptic glove would have a significant positive impact in at least one of these scenarios. Nine participants took part in the study, and immersion was measured through a post-experiment questionnaire. Statistical analysis of the results showed that the haptic glove did have a significant impact on immersion in the box interaction scene, but not in the explosion scene. In the end, I conclude that since this haptic glove does not significantly increase immersion across all scenarios when compared to the standard Vive controller, it should not be used as a replacement in its current state.
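A minimal sketch of the kind of comparison described above appears below: questionnaire immersion scores for the glove condition versus the controller condition. The scores are placeholders rather than the study's data, and the abstract does not name the test used; a paired t-test is shown as one plausible option.

```python
# Illustrative paired comparison of immersion questionnaire scores.
# Values are invented placeholders, not the study's results.
from scipy import stats

glove_scores      = [6.1, 5.8, 6.5, 5.9, 6.7, 6.0, 6.3, 5.7, 6.4]  # 9 participants
controller_scores = [5.2, 5.5, 5.9, 5.1, 6.0, 5.4, 5.6, 5.3, 5.8]

t_stat, p_value = stats.ttest_rel(glove_scores, controller_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Significant difference in reported immersion.")
```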

Contributors: Griffieth, Alan P (Author) / McDaniel, Troy (Thesis director) / Selgrad, Justin (Committee member) / Computing and Informatics Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Currently, one of the biggest limiting factors for long-term deployment of autonomous systems is the power constraint of a platform. In particular, for aerial robots such as unmanned aerial vehicles (UAVs), the energy resource is the main driver of mission planning and operation definitions, as everything revolves around flight time. The focus of this work is to develop a new method of energy storage and charging for autonomous UAV systems, for use during long-term deployments in a constrained environment. We developed a charging solution that allows pre-equipped UAV systems to land on top of designated charging pads and rapidly replenish their battery reserves using a contact charging point. This system is designed to work with all types of rechargeable batteries, focusing on Lithium Polymer (LiPo) packs that incorporate a battery management system for increased reliability. The project also explores optimization methods for fleets of UAV systems to increase charging efficiency and extend battery lifespans. Each component of this project was first designed and tested in computer simulation. Following positive feedback and results, prototypes for each part of this system were developed and rigorously tested. Results show that the contact charging method is able to charge LiPo batteries at a 1-C rate, the industry standard, while maintaining the same safety and efficiency standards as modern-day direct-connection chargers. Control software for these base stations was also created to integrate with a fleet management system; it optimizes UAV charge levels and distribution to extend LiPo battery lifetimes while still meeting expected mission demand. Each component of this project (hardware/software) was designed for manufacturing and implementation using industry-standard tools, making it ideal for large-scale implementations. This system has been successfully tested with a fleet of UAV systems at Arizona State University, and is currently being integrated into an Arizona smart city environment for deployment.
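For context on the 1-C figure quoted above, the short sketch below shows the standard relationship between pack capacity and a 1-C charge current; the capacity value is an illustrative assumption, not a pack used in the thesis.

```python
# Minimal sketch of the 1-C charging relationship: a 1-C rate charges a pack
# at a current (in amps) numerically equal to its capacity in amp-hours.
# The capacity below is an example value, not from the study.
def one_c_current(capacity_mah):
    """Charge current in amps for a 1-C rate."""
    return capacity_mah / 1000.0

capacity_mah = 5200                      # example UAV LiPo pack
current_a = one_c_current(capacity_mah)  # 5.2 A
print(f"1-C charge current: {current_a:.1f} A "
      f"(~1 hour to replenish a fully depleted pack, ignoring CV taper)")
```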
Contributors: Mian, Sami (Author) / Panchanathan, Sethuraman (Thesis advisor) / Berman, Spring (Committee member) / Yang, Yezhou (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Visual Odometry is one of the key aspects of robotic localization and mapping. Visual Odometry consists of many geometric-based approaches that convert visual data (images) into pose estimates of where the robot is in space. The classical geometric methods have shown promising results; they are carefully crafted and built explicitly for these tasks. However, such geometric methods require extreme fine-tuning and extensive prior knowledge to set up these systems for different scenarios. Classical geometric approaches also require significant post-processing and optimization to minimize the error between the estimated pose and the ground truth. In this body of work, the deep learning model was formed by combining SuperPoint and SuperGlue. The resulting model does not require any prior fine-tuning and has been trained to operate in both outdoor and indoor settings. The proposed deep learning model is applied to the Karlsruhe Institute of Technology and Toyota Technological Institute dataset along with other classical geometric visual odometry models. The proposed deep learning model has not been trained on the Karlsruhe Institute of Technology and Toyota Technological Institute dataset; it is only during experimentation that the model is first introduced to it. The monocular grayscale images from the visual odometry files of the Karlsruhe Institute of Technology and Toyota Technological Institute dataset are used in the experiment to test the viability of the models on different sequences. The experiment has been performed on eight different sequences, obtaining the Absolute Trajectory Error and the computation time for each sequence. From the obtained results, inferences are drawn about the classical and deep learning approaches.
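The sketch below illustrates the Absolute Trajectory Error metric reported above: the RMSE of the translational difference between estimated and ground-truth poses. It assumes the trajectories have already been aligned (e.g., with a Umeyama similarity transform); the arrays are illustrative, not dataset values.

```python
# Sketch of the Absolute Trajectory Error (ATE) metric: RMSE of the
# translational error between aligned estimated and ground-truth positions.
import numpy as np

def absolute_trajectory_error(estimated_xyz, ground_truth_xyz):
    errors = np.linalg.norm(estimated_xyz - ground_truth_xyz, axis=1)
    return np.sqrt(np.mean(errors ** 2))

est = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.1], [2.0, 0.2, 0.0]])  # placeholder
gt  = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])  # placeholder
print(f"ATE (RMSE): {absolute_trajectory_error(est, gt):.3f} m")
```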
Contributors: Vaidyanathan, Venkatesh (Author) / Venkateswara, Hemanth (Thesis advisor) / McDaniel, Troy (Thesis advisor) / Michael, Katina (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
As people begin to live longer and the population shifts to having more older adults on Earth than young children, radical solutions will be needed to ease the burden on society. It will be essential to develop technology that can age with the individual. One solution is to keep older adults in their homes longer through smart home and smart living technology, allowing them to age in place. People have many choices when deciding where to age in place, including their own homes, assisted living facilities, nursing homes, or with family members. No matter where people choose to age, they may face isolation and financial hardships. It is crucial to keep finances in mind when developing smart home technology. Smart home technologies seek to allow individuals to stay inside their homes for as long as possible, yet little work looks at how we can use technology in different life stages. Robots are poised to impact society and ease burdens at home and in the workforce. Special attention has been given to social robots to ease isolation. As social robots become accepted into society, researchers need to understand how these robots should mimic natural conversation. My work attempts to answer this question within social robotics by investigating how to make conversational robots natural and reciprocal. I investigated this through a 2x2 Wizard of Oz between-subjects user study. The study lasted four months, testing four different levels of interactivity with the robot. None of the levels were significantly different from the others, an unexpected result. I then investigated the robot's personality, the participant's trust, and the participant's acceptance of the robot, and how these influenced the study's results.
Contributors: Miller, Jordan (Author) / McDaniel, Troy (Thesis advisor) / Michael, Katina (Committee member) / Cooke, Nancy (Committee member) / Bryan, Chris (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
With an aging population, later-in-life health-related incidents like stroke stand to become more prevalent. Unfortunately, the majority of those who are most at risk for debilitating health episodes are either uninsured or underinsured when it comes to long-term physical/occupational therapy. As insurance companies lower coverage and/or raise the prices of plans with sufficient coverage, it can be expected that the proportion of uninsured/underinsured to fully insured people will rise. To address this, lower-cost alternative methods of treatment must be developed so people can obtain the treatment required for a sufficient recovery. The presented robotic glove employs low-cost fabric soft pneumatic actuators which use a closed-loop feedback controller based on readings from embedded soft sensors. This provides the device with proprioceptive abilities for the dynamic control of each independent actuator. Force and fatigue tests were performed to determine the viability of the actuator design. A Box and Block test along with a motion capture study was completed to study the performance of the device. This paper presents the design and classification of a soft robotic glove with a feedback controller as an at-home stroke rehabilitation device.
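The sketch below illustrates one way a closed-loop controller of the kind described above could drive an actuator toward a target bend using its embedded soft-sensor reading. The PID form, gains, and angle units are assumptions for illustration, not the thesis's published controller.

```python
# Hedged sketch: closed-loop control of one fabric pneumatic actuator using
# an embedded soft-sensor reading. Gains and units are illustrative only.
class ActuatorPID:
    def __init__(self, kp=1.2, ki=0.05, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_angle, sensed_angle, dt):
        error = target_angle - sensed_angle
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Output would be mapped to a pump/valve PWM duty cycle on the device.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = ActuatorPID()
print(controller.update(target_angle=45.0, sensed_angle=30.0, dt=0.02))
```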
Contributors: Axman, Reed C (Author) / Zhang, Wenlong (Thesis advisor) / Santello, Marco (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Access to real-time situational information, including the relative position and motion of surrounding objects, is critical for safe and independent travel. Object or obstacle (OO) detection at a distance is primarily a task of the visual system due to the high-resolution information the eyes are able to receive from afar. As a sensory organ in particular, the eyes have an unparalleled ability to adjust to varying degrees of light, color, and distance. Therefore, in the case of a non-visual traveler, someone who is blind or has low vision, access to visual information is unattainable if it is positioned beyond the reach of the preferred mobility device or outside the path of travel. Although the area of assistive technology, in terms of electronic travel aids (ETAs), has received considerable attention over the last two decades, the field has surprisingly seen little work focused on augmenting rather than replacing current non-visual travel techniques, methods, and tools. Consequently, this work describes the design of an intuitive tactile language and a series of wearable tactile interfaces (the Haptic Chair, HaptWrap, and HapBack) to deliver real-time spatiotemporal data. The overall intuitiveness of the haptic mappings conveyed through the tactile interfaces is evaluated using a combination of absolute identification accuracy on a series of patterns and subjective feedback through post-experiment surveys. Two types of spatiotemporal representations are considered: static patterns representing object location at a single time instance, and dynamic patterns, added in the HaptWrap, which represent object movement over a time interval. Results support the viability of multi-dimensional haptics applied to the body to yield an intuitive understanding of dynamic interactions occurring around the navigator during travel. Lastly, it is important to point out that the guiding principle of this work centered on providing the navigator with spatial knowledge otherwise unattainable through current mobility techniques, methods, and tools, thus providing the navigator with the information necessary to make informed navigation decisions independently, at a distance.
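The evaluation metric named above, absolute identification accuracy, can be summarized as the fraction of trials in which a presented pattern is named correctly, computed per pattern. The sketch below illustrates that bookkeeping; the trial tuples and pattern names are placeholders, not the study's stimuli.

```python
# Sketch of per-pattern absolute identification accuracy.
# Trials are (presented_pattern, participant_response) placeholders.
from collections import defaultdict

trials = [
    ("approaching", "approaching"),
    ("receding",    "approaching"),
    ("left",        "left"),
    ("approaching", "approaching"),
]

counts = defaultdict(lambda: [0, 0])  # pattern -> [correct, total]
for presented, response in trials:
    counts[presented][1] += 1
    if presented == response:
        counts[presented][0] += 1

for pattern, (correct, total) in counts.items():
    print(f"{pattern}: {correct}/{total} = {correct/total:.0%}")
```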
Contributors: Duarte, Bryan Joiner (Author) / McDaniel, Troy (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Touch plays a vital role in maintaining human relationships through social and emotional communication. This research proposes a multi-modal haptic display capable of generating vibrotactile and thermal haptic signals individually and simultaneously. The main objective for creating this device is to explore the importance of touch in social communication, which is absent in traditional communication modes like a phone call or a video call. By studying how humans interpret haptically generated messages, this research aims to create a new communication channel for humans. This novel device is worn on the user's forearm and has a broad scope of applications such as navigation, social interactions, notifications, health care, and education. The research methods include testing patterns in the vibro-thermal modality while noting their realizability and accuracy. Different patterns can be controlled and generated through an Android application connected to the proposed device via Bluetooth. Experimental results indicate that the patterns SINGLE TAP and HOLD/SQUEEZE were easily identifiable and more relatable to social interactions. In contrast, other patterns like UP-DOWN, DOWN-UP, LEFT-RIGHT, RIGHT-LEFT, LEFT-DIAGONAL, and RIGHT-DIAGONAL were less identifiable and less relatable to social interactions. Finally, design modifications are required if complex social patterns are to be displayed on the forearm.
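As a rough illustration of how the named patterns might be selected from the Android application and decoded on the wearable, the sketch below encodes each pattern as a one-byte command. This protocol is an assumption made for illustration; the thesis's actual Bluetooth message format is not described in the abstract.

```python
# Hypothetical one-byte command protocol for selecting vibro-thermal patterns.
# The codes and handler are illustrative, not the device's actual firmware.
PATTERN_CODES = {
    0x01: "SINGLE_TAP",
    0x02: "HOLD_SQUEEZE",
    0x03: "UP_DOWN",
    0x04: "DOWN_UP",
    0x05: "LEFT_RIGHT",
    0x06: "RIGHT_LEFT",
}

def handle_command(byte_value):
    """Decode a received command byte into a pattern name."""
    pattern = PATTERN_CODES.get(byte_value)
    if pattern is None:
        return "UNKNOWN"
    # Here the firmware would schedule the corresponding motor/heater sequence.
    return pattern

print(handle_command(0x02))  # -> HOLD_SQUEEZE
```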
Contributors: Gharat, Shubham Shriniwas (Author) / McDaniel, Troy (Thesis advisor) / Redkar, Sangram (Thesis advisor) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The knee joint has essential functions in supporting the body weight and maintaining normal walking. Neurological diseases like stroke and musculoskeletal disorders like osteoarthritis can affect the function of the knee. Besides physical therapy, robot-assisted therapy using wearable exoskeletons and exosuits has shown potential as an efficient therapy that helps patients restore their limbs' functions. Exoskeletons and exosuits are being developed either for human performance augmentation or for medical purposes like rehabilitation. Although research on exoskeletons started well before exosuits, research and development on exosuits has recently grown rapidly, as exosuits have advantages that exoskeletons lack. The objective of this research is to develop a soft exosuit for knee flexion assistance and validate its ability to reduce the EMG activity of the knee flexor muscles. The exosuit has been developed with a novel soft fabric actuator and novel 3D-printed adjustable braces to attach the actuator aligned with the knee. An analytical torque model has been derived and validated experimentally to characterize and predict the torque output of the actuator. In addition, the actuator's deflation and inflation time has been experimentally characterized, a controller has been implemented, and the exosuit has been tested on a healthy human subject. It is found that the analytical torque model predicts the torque output in the flexion angle range from 0° to 60° more precisely than analytical models in the literature. Deviations beyond 60° might have occurred because of factors like fabric extensibility and the actuator's bending behavior. After human testing, results showed that, for the human subject tested, the exosuit gave the best performance when the controller was tuned to inflate at 31.9% of the gait cycle. At this inflation timing, the biceps femoris, the semitendinosus, and the vastus lateralis muscles showed average electromyography (EMG) reductions of 32.02%, 23.05%, and 2.85%, respectively. Finally, it is concluded that the developed exosuit may assist the knee flexion of more diverse healthy human subjects and may potentially be used in the future for human performance augmentation and the rehabilitation of people with disabilities.
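The sketch below shows the percent-reduction calculation behind EMG figures like those reported above: average muscle activity with assistance relative to an unassisted baseline. The RMS values are placeholders, not the study's data.

```python
# Sketch: percent reduction in average EMG activity relative to baseline.
# Muscle RMS values below are illustrative placeholders.
def percent_reduction(baseline_rms, assisted_rms):
    return 100.0 * (baseline_rms - assisted_rms) / baseline_rms

baseline = {"biceps_femoris": 0.42, "semitendinosus": 0.39, "vastus_lateralis": 0.35}
assisted = {"biceps_femoris": 0.29, "semitendinosus": 0.30, "vastus_lateralis": 0.34}

for muscle in baseline:
    print(f"{muscle}: {percent_reduction(baseline[muscle], assisted[muscle]):.1f}% reduction")
```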
Contributors: Hasan, Ibrahim Mohammed Ibrahim (Author) / Zhang, Wenlong (Thesis advisor) / Aukes, Daniel (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
This paper presents a study conducted to gain knowledge on communicating an object's relative 3-dimensional position to individuals who are visually impaired or blind. The HapBack, a continuation of the HaptWrap V1.0 (Duarte et al., 2018), focused on the perception of objects and their distances in 3-dimensional space using haptic communication. The HapBack is a device that consists of two elastic bands wrapped horizontally around the user's torso and two backpack straps secured along the user's back. The backpack straps are embedded with 10 vibrotactile motors evenly positioned along the spine. This device is designed to provide a wearable interface for blind and visually impaired individuals in order to understand how the positions of objects in 3-dimensional space are perceived through haptic communication. We analyzed the accuracy of the HapBack device along three factors: (1) two different modes of vibration, absolute and relative; (2) the location of the vibrotactile motors in absolute mode; and (3) the location of the vibrotactile motors in relative mode. The results support that the HapBack produced vibrotactile patterns that were intuitively mapped to the distances represented in the study. By analyzing the intuitiveness of the vibrotactile patterns and the accuracy of the users' responses, we gained a better understanding of how distance can be perceived through haptic communication by individuals who are blind.
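The sketch below illustrates one plausible reading of the two vibration modes described above: in "absolute" mode a distance maps directly to one of the 10 spine motors, while in "relative" mode successive readings are conveyed as movement up or down the array. The distance range, motor ordering, and sweep directions are assumptions for illustration, not the HapBack's actual mapping.

```python
# Hedged sketch of absolute vs. relative distance-to-vibration mappings
# for a 10-motor array along the spine. All parameters are assumptions.
NUM_MOTORS = 10
MAX_DISTANCE_M = 5.0

def absolute_motor(distance_m):
    """Map a distance to a single motor index (0 = lowest on the spine)."""
    clamped = min(max(distance_m, 0.0), MAX_DISTANCE_M)
    return min(int(clamped / MAX_DISTANCE_M * NUM_MOTORS), NUM_MOTORS - 1)

def relative_step(prev_distance_m, new_distance_m):
    """Return which way to sweep the vibration as an object's distance changes."""
    if new_distance_m < prev_distance_m:
        return "sweep_down"   # object approaching
    if new_distance_m > prev_distance_m:
        return "sweep_up"     # object receding
    return "hold"

print(absolute_motor(2.2), relative_step(3.0, 2.2))
```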
Contributors: Low, Allison Xin Ming (Author) / McDaniel, Troy (Thesis director) / Duarte, Bryan (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12