Matching Items (99)
Description
This work presents the integration of user intent detection and control in the development of the fluid-driven, wearable, continuum Soft Poly-Limb (SPL). The SPL utilizes the numerous traits of soft robotics to enable a novel approach to providing safe and compliant mobile manipulation assistance to healthy and impaired users. This wearable system equips the user with an additional limb made of soft materials that can be controlled to produce complex three-dimensional motion in space, like its biological counterparts with hydrostatic muscles. Similar to an elephant trunk, the SPL is able to manipulate objects using various end effectors, such as suction adhesion or a soft grasper, and can also wrap its entire length around objects for manipulation. User control of the limb is demonstrated using multiple user intent detection modalities. Further, the performance of the SPL is studied by testing its capability to interact safely and closely around a user through a spatial mobility test. Finally, the limb's ability to assist the user is explored through multitasking scenarios and pick-and-place tests with varying mounting locations of the arm around the user's body. The results of these assessments demonstrate the SPL's ability to safely interact with the user while exhibiting promising performance in assisting the user with a wide variety of tasks, in both work and general living scenarios.
Contributors: Vale, Nicholas Marshall (Author) / Polygerinos, Panagiotis (Thesis advisor) / Zhang, Wenlong (Committee member) / Artemiadis, Panagiotis (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
What if there were a way to integrate prosthetics seamlessly with the human body, and robots could help improve the lives of children with disabilities? With physical human-robot interaction being seen in multiple aspects of life, including industry, medicine, and social settings, how these robots interact with humans becomes even more important. How smoothly the robot can interact with a person will determine how safe and efficient the relationship will be. This thesis investigates an adaptive control method that allows a robot to adapt to the human's actions based on the interaction force, letting the relationship become more effortless and less strained when the robot has a different goal than the human, as seen in game theory, using multiple techniques that adapt the system. A few potential applications include robots in physical therapy, manufacturing robots that can adapt to a changing environment, and robots teaching people something new, like dancing or learning how to walk again after surgery.

The experience gained is an understanding of how a cost function of a system works, including the tracking error, the speed of the system, the robot's effort, and the human's effort. This two-agent system results in a two-agent adaptive impedance model with an input for each agent. This leads to a nontraditional linear quadratic regulator (LQR) that must be separated and then added back together, creating a traditional LQR. This experience can be used in the future to help build better safety protocols for manufacturing robots. In the future, the knowledge gained from this research could be used to develop technologies that allow a robot to adapt so as to counteract human error.
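The two-input structure described above can be sketched as a stacked-input LQR: both agents act on the same plant, so the input matrix has one column per agent and a single Riccati solve yields a combined gain that splits row-wise into per-agent gains. This is a minimal illustrative sketch, not the thesis' actual model; the plant, weights, and gains below are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant shared by robot and human.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
# Stacked input matrix: column 0 is the robot's input, column 1 the human's.
B = np.array([[0.0, 0.0],
              [1.0, 1.0]])

Q = np.diag([10.0, 1.0])   # penalize tracking error and velocity
R = np.diag([1.0, 1.0])    # penalize each agent's effort separately

# One Riccati solve for the stacked system recovers a traditional LQR;
# the combined gain K has one row per agent.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P
K_robot, K_human = K[0], K[1]

# The closed loop x_dot = (A - B K) x should be stable.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

Here the split into `K_robot` and `K_human` mirrors separating the nontraditional LQR into per-agent pieces that sum to the combined controller.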
Contributors: Bell, Rebecca C (Author) / Zhang, Wenlong (Thesis advisor) / Chiou, Erin (Committee member) / Aukes, Daniel (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Human-robot interaction has expanded immensely within dynamic environments. The goals of human-robot interaction are to increase productivity, efficiency, and safety. In order for the integration of human-robot interaction to be seamless and effective, humans must be willing to trust the capabilities of assistive robots. A major priority for human-robot interaction should be to understand how human dyads have historically been effective within a joint-task setting. This will ensure that all goals can be met in human-robot settings. The aim of the present study was to examine human dyads and the effects of an unexpected interruption. Participants' interpersonal and individual levels of trust were studied in order to draw appropriate conclusions. Seventeen undergraduate and graduate dyads were recruited from Arizona State University. Participants were assigned to either a surprise condition or a baseline condition. Participants individually took two surveys in order to obtain an accurate understanding of their dispositional and individual levels of trust. The findings showed that participant levels of interpersonal trust were average. Surprisingly, participants in the surprise condition afterwards showed moderate to high levels of dyad trust. This effect showed that participants became more reliant on their partners when interrupted by a surprising event. Future studies will take this knowledge and apply it to human-robot interaction, in order to mimic the seamless team interaction shown in historically effective dyads, specifically human team interaction.
Contributors: Shaw, Alexandra Luann (Author) / Chiou, Erin (Thesis advisor) / Cooke, Nancy J. (Committee member) / Craig, Scotty (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Providing the user with a good user experience is complex and involves multiple factors. One of the factors that can impact the user experience is animation. Animation can be tricky to get right and needs to be understood by designers. Animations that are too fast might not accomplish anything, while animations that are too slow could hold the user back, causing frustration.

This study explores the subject of animation and its speed by trying to answer the following questions: 1) Do people notice whether an animation is present? 2) Does animation affect the enjoyment of a transition? 3) If animation does affect enjoyment, what is the effect of different animation speeds?

The study was conducted using three prototypes of an application for ordering bottled water, in which the transitions between different brands of bottled water were animated at 0 ms, 300 ms, and 650 ms. A survey was conducted to see whether the participants were able to spot any difference between the prototypes and, if they did, which one they preferred.

It was found that most people did not recognize any difference between the prototypes. Even people who recognized a difference between the prototypes did not have any preference for speed.
Contributors: Ijari, Kusum (Author) / Branaghan, Russell (Thesis advisor) / Chiou, Erin (Committee member) / Roscoe, Rod (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The world population is aging. Age-related disorders such as stroke and spinal cord injury are increasing rapidly, and such patients often suffer from mobility impairment. Wearable robotic exoskeletons have been developed to serve as rehabilitation devices for these patients. In this thesis, a knee exoskeleton with higher torque output than the first version is designed and fabricated.

A series elastic actuator (SEA) is one of the many actuation mechanisms employed in exoskeletons. In this mechanism, a torsion spring is placed between the actuator and the human joint. It serves as a torque sensor and energy buffer, making the system compact and safe.

The first version of the knee exoskeleton was developed using the SEA mechanism. It uses a worm gear and spur gear combination to amplify the assistive torque generated by the DC motor. It weighs 1.57 kg and provides a maximum assistive torque of 11.26 N·m. It can be used as a rehabilitation device for patients affected by knee joint impairment.

A new version of the exoskeleton is proposed as an improvement over the first. It consists of components such as a brushless DC motor and a planetary gear, selected to meet the design requirements and biomechanical considerations. All other components, such as the bevel gear and torsion spring, are selected to be compatible with the exoskeleton. The frame of the exoskeleton is modeled in SolidWorks to be modular and easy to assemble, and is fabricated from aluminum sheet metal. It is designed to provide a maximum assistive torque of 23 N·m, twice that of the first version. A simple brace is 3D printed, making the device easy to wear and use. It weighs 2.4 kg.

The exoskeleton is equipped with encoders that are used to measure spring deflection and motor angle. They act as sensors for precise control of the exoskeleton.
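The torque-sensing role of the spring and encoders can be sketched in a few lines: in an SEA the output torque is estimated from the torsion spring's deflection, the difference between the geared-down motor angle and the joint angle. The stiffness and gear-ratio values below are hypothetical, not the thesis' calibrated parameters.

```python
# Illustrative SEA torque sensing (hypothetical constants).
SPRING_STIFFNESS = 150.0   # N*m/rad, assumed torsion-spring constant
GEAR_RATIO = 80.0          # assumed motor-side reduction

def sea_torque(motor_angle_rad: float, joint_angle_rad: float) -> float:
    """Estimate output torque from spring deflection.

    The motor encoder measures the angle before the reduction, so it is
    divided by the gear ratio to get the spring's input-side angle.
    """
    deflection = motor_angle_rad / GEAR_RATIO - joint_angle_rad
    return SPRING_STIFFNESS * deflection

# Zero deflection produces zero torque; a positive deflection produces
# an assistive (extension-direction) torque.
tau = sea_torque(motor_angle_rad=8.0, joint_angle_rad=0.05)
```

In practice this torque estimate is what the impedance controller closes its loop on, which is why the two encoders suffice as sensors.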

An impedance-based controller is implemented using an NI myRIO, an FPGA-based controller. The motor is controlled using a motor driver and powered by an external battery source. Bench tests and walking tests are presented, and the new version of the exoskeleton is compared with the first version and state-of-the-art devices.
Contributors: Jhawar, Vaibhav (Author) / Zhang, Wenlong (Thesis advisor) / Sugar, Thomas G. (Committee member) / Lee, Hyunglae (Committee member) / Marvi, Hamidreza (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Highly automated vehicles require drivers to remain aware enough to take over during critical events. Driver distraction is a key factor that prevents drivers from reacting adequately, and thus there is a need for an alert to help drivers regain situational awareness and be able to act quickly and successfully should a critical event arise. This study examines two aspects of alerts that could help facilitate driver takeover: mode (auditory and tactile) and direction (towards and away). Auditory alerts appear to be somewhat more effective than tactile alerts, though both modes produce significantly faster reaction times than no alert. Alerts moving towards the driver also appear to be more effective than alerts moving away from the driver. Future research should examine how multimodal alerts differ from single-mode alerts, and whether higher-fidelity alerts influence takeover times.
Contributors: Brogdon, Michael A (Author) / Gray, Robert (Thesis advisor) / Branaghan, Russell (Committee member) / Chiou, Erin (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
This study was undertaken to ascertain to what degree, if any, virtual reality training was superior to monitor-based training. Analyzing the results in a 2x3 ANOVA showed that little difference in training resulted from using virtual reality or monitor interaction to facilitate training. The data did suggest that training involving richly textured environments might be more beneficial under virtual reality conditions; however, nothing significant was found in the analysis. It might be possible to obtain significance by comparing a higher-fidelity virtual reality setup against a monitor trial.
Contributors: Whitson, Richard (Author) / Gray, Robert (Thesis advisor) / Branaghan, Russell (Committee member) / Chiou, Erin (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Allocating tasks for a day's or week's schedule is known to be a challenging and difficult problem, and it intensifies many times over in multi-agent settings. A planner or group of planners who decide such a task-assignment schedule must have a comprehensive perspective on (1) the entire array of tasks to be scheduled, (2) constraints such as the importance and ordering of tasks, and (3) the individual abilities of the operators. One example of this kind of scheduling is the crew scheduling done for astronauts who will spend time at the International Space Station (ISS). The schedule for the ISS crew is decided before the mission starts. Human planners take part in the decision-making process to determine the timing of activities over multiple days for multiple crew members at the ISS. Given the unpredictability of individual assignments and the limitations associated with the various operators, deciding upon a satisfactory timetable is a challenging task. The objective of the current work is to develop an automated decision assistant that helps human planners come up with an acceptable task schedule for the crew, while ensuring that the human planners remain in the driver's seat throughout the decision-making process.

The decision assistant makes use of automated planning technology to assist human planners. The guidelines of Naturalistic Decision Making (NDM) and human-in-the-loop decision making were followed to make sure that the human is always in the driver's seat. The use cases considered are standard situations that come up during decision-making in crew scheduling. The effectiveness of the automated decision assistance was evaluated by setting it up for domain experts on a comparable domain: scheduling courses for master's students. The results of the user study evaluating the effectiveness of the automated decision support were subsequently published.
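The kind of constrained assignment such an assistant reasons about can be illustrated with a toy scheduler: tasks in priority order are assigned to the least-loaded qualified operator, and anything unschedulable is left for the human planner to resolve. The tasks, skills, and crew names below are entirely hypothetical; this is not the thesis' planner.

```python
from collections import defaultdict

def schedule(tasks, crew_skills):
    """Greedy priority-order assignment (illustrative only).

    tasks: list of (name, required_skill) in priority order.
    crew_skills: dict mapping crew member -> set of skills.
    Returns a dict mapping crew member -> list of assigned task names.
    """
    load = defaultdict(list)
    for name, skill in tasks:
        qualified = [c for c, s in crew_skills.items() if skill in s]
        if not qualified:
            continue  # unschedulable: surfaced to the human planner instead
        # Pick the least-loaded qualified member to balance workloads.
        chosen = min(qualified, key=lambda c: len(load[c]))
        load[chosen].append(name)
    return dict(load)

# Hypothetical example: two crew members, three ordered tasks.
plan = schedule(
    [("eva-prep", "eva"), ("biology-exp", "science"), ("maintenance", "repair")],
    {"alice": {"eva", "repair"}, "bob": {"science"}},
)
```

A real mixed-initiative assistant would propose such a plan and let the planner override any assignment, rather than committing to it automatically.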
Contributors: Mishra, Aditya Prasad (Author) / Kambhampati, Subbarao (Thesis advisor) / Chiou, Erin (Committee member) / Demakethepalli Venkateswara, Hemanth Kumar (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Previous literature was reviewed in an effort to further investigate the link between the notification levels of a cell phone and their effects on driver distraction. Mind-wandering has been suggested as an explanation for distraction and has previously been operationalized with oculomotor movement. Mind-wandering's definition is debated, but in this research it was defined as off-task thoughts that occur because the task does not require full cognitive capacity. Drivers were asked to operate a driving simulator and follow audio turn-by-turn directions while experiencing each of three cell phone notification levels: Control (no texts), Airplane (texts with no notifications), and Ringer (audio notifications). Measures of brake reaction time, headway variability, and average speed were used to operationalize driver distraction. Drivers exhibited higher brake reaction time and headway variability, with a lower average speed, in both experimental conditions when compared to the Control condition. This is consistent with previous research in the field, implying a distracted state. Oculomotor movement was measured as the percentage of time the participant was looking at the road; there was no significant difference between the conditions on this measure. The results of this research indicate that, even when the driver is not interacting with the cell phone, no audio notification is required to induce a state of distraction. This phenomenon could not be linked to mind-wandering.
Contributors: Radina, Earl (Author) / Gray, Robert (Thesis advisor) / Chiou, Erin (Committee member) / Branaghan, Russell (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
For a conventional quadcopter system with four planar rotors, flight times vary between 10 and 20 minutes depending on the weight of the quadcopter and the size of the battery used. In order to increase the flight time, either the weight of the quadcopter should be reduced or the battery size should be increased. Another way is to increase the efficiency of the propellers. Previous research shows that ducting a propeller can increase the thrust produced by the rotor-duct system by up to 94%. This research focused on developing and testing a quadcopter with a centrally ducted rotor that produces 60% of the total system thrust, plus three peripheral rotors. This quadcopter provides longer flight times while retaining the same maneuvering flexibility in planar movements.
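The 60/40 thrust split described above implies a simple hover allocation: the central ducted rotor carries 60% of the weight and the three peripheral rotors share the rest equally. The vehicle mass below is hypothetical; this is a back-of-the-envelope sketch, not the thesis' control allocation.

```python
# Illustrative hover thrust split for a centrally ducted quadcopter.
G = 9.81  # m/s^2, gravitational acceleration

def hover_thrusts(mass_kg: float, central_fraction: float = 0.6):
    """Return (central rotor thrust, per-peripheral-rotor thrust) in newtons."""
    total = mass_kg * G                   # total thrust needed to hover
    central = central_fraction * total    # ducted rotor carries this share
    peripheral = (total - central) / 3.0  # remainder split among three rotors
    return central, peripheral

# Hypothetical 2 kg vehicle.
central, peripheral = hover_thrusts(2.0)
```

Because the peripheral rotors still carry independent shares of the remaining thrust, differential commands among them can produce the roll, pitch, and yaw moments needed for planar maneuvering.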
Contributors: Lal, Harsh (Author) / Artemiadis, Panagiotis (Thesis advisor) / Lee, Hyunglae (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2019