Matching Items (38)

Description

With robots being used extensively in various areas, a certain degree of robot autonomy has always been desirable. In applications such as planetary exploration, autonomous path planning and navigation are considered essential. Every now and then, however, a need arises to modify the robot's operation: a human must provide supervisory parameters that change the degree of autonomy or allocate extra tasks to the robot. This thesis presents an approach for accepting such human inputs and modifying the robot's navigation functions accordingly. Concepts such as applying kinematic constraints while planning paths, traversing unknown areas with the intent of maximizing the field of view, and performing complex tasks on command have been examined and implemented. The approaches were tested in the Robot Operating System (ROS) using robots such as the iRobot Create and the PR2. Simulations and experimental demonstrations show that the approach is feasible for solving some of the existing problems and can pave the way for further research to enhance functionality.
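
As a concrete, purely illustrative sketch of how a supervisory human input might modify ROS navigation at runtime, the Python snippet below caps the planner's velocity through dynamic_reconfigure before sending a waypoint goal to move_base. The node and parameter names (move_base, DWAPlannerROS, max_vel_x) are assumptions for this example and are not taken from the thesis.

```python
#!/usr/bin/env python
# Illustrative sketch only: one way a supervisory human input could adjust
# ROS navigation at runtime. Node/parameter names (move_base, DWAPlannerROS,
# max_vel_x) are assumptions, not taken from the thesis.
import rospy
import actionlib
from dynamic_reconfigure.client import Client
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def apply_supervisory_speed_limit(max_speed):
    """Tighten the planner's velocity limit in response to a human input."""
    planner = Client("/move_base/DWAPlannerROS", timeout=5.0)
    planner.update_configuration({"max_vel_x": max_speed})

def send_waypoint(x, y):
    """Ask the navigation stack to drive to a waypoint in the map frame."""
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(goal)
    client.wait_for_result()

if __name__ == "__main__":
    rospy.init_node("supervised_navigation_demo")
    apply_supervisory_speed_limit(0.3)  # human-imposed 0.3 m/s cap
    send_waypoint(2.0, 1.5)
```
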
Contributors: Vemprala, Sai Hemachandra (Author) / Saripalli, Srikanth (Thesis advisor) / Fainekos, Georgios (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Linear Temporal Logic (LTL) is gaining popularity as a high-level specification language for robot motion planning due to its expressive power and the scalability of LTL control synthesis algorithms. The formalism, however, requires expert knowledge, which makes it inaccessible to non-expert users. This thesis introduces a graphical specification environment for creating high-level motion plans to control robots in the field by converting a visual representation of the motion/task plan into an LTL specification. The visual interface is built on the Android tablet platform and provides functionality to create task plans through a set of well-defined gestures and on-screen controls. It uses the notion of waypoints to describe the motion plan quickly and efficiently, and it enables a variety of complex LTL specifications to be expressed succinctly and intuitively without requiring the user to know or understand LTL. This opens avenues for its use by personnel in military operations, warehouse management, and search and rescue missions. The thesis describes the construction of LTL specifications for various robot navigation scenarios using the developed visual interface and leverages existing LTL-based motion planners to have a robot carry out the task plan.
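
To illustrate the kind of waypoint-to-LTL translation such an interface performs, the sketch below assembles a sequenced-visit formula of the form F(p1 ∧ F(p2 ∧ F(p3))) together with a global avoidance constraint. These are standard LTL patterns used here for illustration; the thesis's actual formula templates may differ.

```python
# A minimal sketch of turning an ordered waypoint list into an LTL formula.
# The nested "eventually" pattern F(p1 & F(p2 & ...)) is a standard way to
# encode "visit p1, then p2, ..."; the thesis's actual templates may differ.

def sequential_visit(waypoints):
    """Build F(p1 & F(p2 & ... F(pn))) from a list of atomic propositions."""
    formula = waypoints[-1]
    for wp in reversed(waypoints[:-1]):
        formula = "{} & F({})".format(wp, formula)
    return "F({})".format(formula)

def with_safety(visit_formula, unsafe_props):
    """Conjoin a global avoidance requirement: G(!o1 & !o2 & ...)."""
    avoid = " & ".join("!{}".format(o) for o in unsafe_props)
    return "({}) & G({})".format(visit_formula, avoid)

if __name__ == "__main__":
    spec = with_safety(sequential_visit(["p1", "p2", "p3"]), ["obstacle"])
    print(spec)  # (F(p1 & F(p2 & F(p3)))) & G(!obstacle)
```
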
Contributors: Srinivas, Shashank (Author) / Fainekos, Georgios (Thesis advisor) / Baral, Chitta (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

One of the main challenges in planetary robotics is traversing the shortest path through a set of waypoints. The shortest distance between any two waypoints is a direct linear traversal, but there are often physical restrictions that prevent a rover from driving straight to a waypoint, so knowledge of the terrain is needed prior to traversal. A Digital Terrain Model (DTM) provides information about the terrain along with waypoints for the rover to traverse. Traversing a set of waypoints linearly is burdensome, however, because the rover must constantly modify its orientation as it approaches each successive waypoint. Although there are various solutions to this problem, this thesis proposes smooth traversal using splines as a quick and easy way to traverse a set of waypoints. A rover was used to compare the smoothness of linear traversals with spline interpolations. The data collected show that spline traversals had a lower rate of change in velocity over time, indicating that the rover moved more smoothly than it did on linear paths.
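
A minimal sketch of the spline idea: fit a cubic spline through a few 2D waypoints (parameterized by cumulative chord length) and sample a path whose heading varies continuously, unlike a piecewise-linear traversal. The waypoint values and the use of SciPy's CubicSpline are illustrative assumptions, not the thesis's implementation.

```python
# Fit a cubic spline through 2D waypoints and sample a smooth reference path.
# Waypoint coordinates are made up for illustration.
import numpy as np
from scipy.interpolate import CubicSpline

waypoints = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, 0.5], [6.0, 2.0]])

# Chord-length parameterization so the spline handles arbitrary 2D layouts.
d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(waypoints, axis=0), axis=1))]
spline = CubicSpline(d, waypoints, axis=0)

s = np.linspace(0.0, d[-1], 200)
path = spline(s)                             # smooth (x, y) reference path
heading = np.arctan2(*spline(s, 1).T[::-1])  # heading from first derivative

# A linear traversal, by contrast, joins the waypoints with segments; its
# heading jumps discontinuously at each waypoint, while the spline's heading
# varies continuously, which is the smoothness benefit described above.
print(path[:3], heading[:3])
```
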
Contributors: Kamasamudram, Anurag (Author) / Saripalli, Srikanth (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

As the complexity of robotic systems and applications grows rapidly, the development of high-performance, easy-to-use, and fully integrated development environments for those systems is inevitable. Model-Based Design (MBD) of dynamic systems using engineering software such as Simulink® from MathWorks®, Scicos from the Metalau team, and SystemModeler® from Wolfram® is quite popular nowadays. These tools provide modeling, simulation, verification, and in some cases automatic code generation for desktop applications, embedded systems, and robots. For real-world implementation of models on actual hardware, the models must be converted into compilable machine code, either manually or automatically. Because of the complexity of robotic systems, manual translation from model to code is not a feasible solution, so automated code generation is needed for such systems. MathWorks® offers code generation facilities, its Coder® products, for this purpose. However, to fully exploit the power of model-based design and code generation tools for robotic applications, these software systems must be extended with additional or modified toolboxes, files, and other artifacts, along with guidelines and procedures. This thesis proposes such a guideline together with a Simulink® library, a Stateflow® interface API, and a C/C++ interface API that complete this toolchain for NAO humanoid robots, so that a model of the hierarchical control architecture can be easily and properly converted to code and built for implementation.
Contributors: Raji Kermani, Ramtin (Author) / Fainekos, Georgios (Thesis advisor) / Lee, Yann-Hang (Committee member) / Sarjoughian, Hessam S. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

This thesis proposes the concept of soft robotic supernumerary limbs to assist the wearer in the execution of tasks, whether to share loads or to replace an assistant. These controllable extra arms are made using soft robotics to reduce the weight and cost of the device, and they are not limited in size and location to the user's arm as exoskeletal devices are. Soft robots differ from traditional robots in that they are made of soft materials such as silicone elastomers rather than hard materials such as metals or plastics. This thesis presents the design, fabrication, and testing of the arm, including the joints and the actuators that move them, as well as the design and fabrication of the human-body interface that unites man and machine. The prototype uses two types of pneumatically driven actuators, pneumatic artificial muscles and fiber-reinforced actuators, to actuate the elbow and shoulder joints, respectively. The robotic limb is mounted at the waist on a backpack frame to avoid interfering with the wearer's biological arm. Through testing and evaluation, this prototype demonstrates the feasibility of soft supernumerary limbs and opens up opportunities for further development in the field.
Contributors: Olson, Weston Roscoe (Author) / Polygerinos, Panagiotis (Thesis director) / Zhang, Wenlong (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The use of artificial intelligence in assistive systems is growing in application and efficiency. From self-driving cars to medical and surgical robots and unsupervised industrial co-robots, the use of AI and robotics to eliminate human error in high-stress environments and to perform automated tasks is advancing society's status quo. Understanding of co-robotics has expanded rapidly not only in industry but in research as well. The National Science Foundation (NSF) defines co-robots as follows: "...a robot whose main purpose is to work with people or other robots to accomplish a goal" (NSF, 1). The latest iteration of its National Robotics Initiative, NRI-2.0, focuses on creating co-robots optimized for 'scalability, customizability, lowering barriers to entry, and societal impact' (NSF, 1). While many avenues have been explored for implementing co-robotics to create more efficient processes and sustainable lifestyles, this project focused on societal-impact co-robotics in the field of human safety and well-being. Introducing a co-robotics and computer vision AI solution for first responder assistance would bring greater awareness and efficiency to public safety. Real-time identification techniques would give first responders a greater range of awareness in high-stress situations. A combination of environmental features collected through sensors (camera and radar) could be used to identify people and objects in environments where visual impairments and obstructions are severe (e.g., burning buildings, smoke-filled rooms, etc.). Information about situational conditions (environmental readings, locations of other occupants, etc.) could be transmitted to first responders in emergency situations, maximizing situational awareness. This would not only aid first responders in evaluating emergency situations but also provide useful data that helps determine the most effective course of action for each situation.
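
As a rough illustration of the real-time identification component, the sketch below runs OpenCV's built-in HOG pedestrian detector on a camera feed and marks detected people. The detector choice and camera source are assumptions; the project's actual camera-plus-radar fusion and models are not described here.

```python
# A minimal sketch of the kind of real-time person detection a first-responder
# assistance system could run on a camera feed. It uses OpenCV's built-in
# HOG pedestrian detector purely as an illustration.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # any camera index or video file path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people; each box could be reported to responders with a location.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("occupant detection (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```
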
Contributors: Scott, Kylel D (Author) / Benjamin, Victor (Thesis director) / Liu, Xiao (Committee member) / Engineering Programs (Contributor) / College of Integrative Sciences and Arts (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12
Description

This is a report on an experiment examining whether the principles of multimedia learning outlined in Richard E. Mayer's article "Using multimedia for e-learning," published in the Journal of Computer Assisted Learning, apply to the haptic feedback used in haptic robotic operation. This was tested by developing and using a haptic robotic manipulator known as the Haptic Testbed (HTB). The HTB is a manipulator designed to emulate human hand movement for haptic testing and features an index finger and thumb for the right hand. Control is provided through a Leap Motion Controller, a visual sensor that uses infrared lights and cameras to gather data about the hands it can see. In the experiment, test subjects completed a task in which they shifted objects along a circuit of positions and were measured on the time to complete the circuit as well as their accuracy in reaching the individual points. Analysis of subjects' survey responses and their performance during the experiment showed that haptic feedback during training improved initial performance and lowered mental effort and mental demand during that training. The findings support the hypothesis that Mayer's principles apply to haptic feedback in training for haptic robotic manipulation. One implication is that haptics and the tactile sense may be an applicable modality for Mayer's principles of multimedia learning, since most current work in the field focuses on visual or auditory senses. If the results were replicated in a future experiment, they would further support the hypothesis that the principles of multimedia learning can be used to improve training for haptic robotic operation.
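
The sketch below shows one plausible way the two reported metrics, circuit completion time and accuracy in reaching the individual points, could be computed from logged fingertip positions. The log format, units, and reach tolerance are assumptions, not details from the experiment.

```python
# A small sketch of computing circuit completion time and point-reach accuracy
# from a log of fingertip positions. Format and tolerance are assumptions.
import numpy as np

def circuit_metrics(times, positions, targets, tolerance=0.02):
    """times: (N,) seconds; positions: (N, 3) meters; targets: (M, 3) meters."""
    times = np.asarray(times)
    positions = np.asarray(positions)
    completion_time = times[-1] - times[0]
    # For each target, the closest the fingertip ever came to it.
    closest = np.array(
        [np.min(np.linalg.norm(positions - t, axis=1)) for t in targets]
    )
    reached = closest <= tolerance
    return completion_time, reached.mean(), closest

if __name__ == "__main__":
    t = np.linspace(0.0, 12.0, 600)
    pos = np.column_stack([np.cos(t / 2), np.sin(t / 2), np.zeros_like(t)])
    targets = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
    total, accuracy, dists = circuit_metrics(t, pos, targets)
    print(total, accuracy, dists)
```
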
Contributors: Giam, Connor Dallas (Author) / Craig, Scotty (Thesis director) / Sodemann, Angela (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

The mean age of the world's population is rapidly increasing, and with that growth in the aging population, a large number of elderly people are in need of walking assistance. In addition, a number of medical conditions cause gait disorders that require gait rehabilitation. Wearable robotics can be used to improve functional outcomes in the gait rehabilitation process. The ankle push-off phase of an individual's gait is vital to their ability to walk and propel themselves forward; during push-off, the plantar flexors must provide a large amount of force to power the heel off the ground.

The purpose of this project is to improve upon the passive ankle foot orthosis (AFO) originally designed in ASU's Robotics and Intelligent Systems Laboratory (RISE Lab). The device uses springs positioned parallel to the user's Achilles tendon, which store energy that is released during the push-off phase of the user's gait cycle. The goals of the project are to improve the speed and reliability of the ratchet-and-pawl mechanism, to fit a wider range of shoe sizes, and to reduce the overall mass and size of the device. The resulting system is semi-passive and uses only a single solenoid to unlock the ratcheting mechanism when the spring's force is required. The device also uses constant-force springs rather than traditional linear springs, which provides a more predictable level of force. A healthy user tested the device on a treadmill while surface electromyography (sEMG) sensors placed on the user's plantar flexor muscles monitored potential reductions in muscular activity resulting from the assistance provided by the AFO. The data demonstrate that the robotic shoe was able to assist during the heel-off stage, and reduced activation of the plantar flexor muscles was evident in the EMG recordings. As this is an ongoing research project, this thesis also recommends possible design upgrades and changes to the device, including a carbon fiber or lightweight plastic frame like those used in many traditional ankle foot orthoses sold today, and a system to regulate the amount of spring force applied as a function of the force required at specific times in the heel-off gait phase.
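
As an illustration of how the reported reduction in plantar flexor activity could be quantified, the sketch below rectifies and low-pass filters sEMG into a linear envelope and compares mean activation between unassisted and assisted trials. The sampling rate, filter settings, and synthetic signals are assumptions rather than the study's processing pipeline.

```python
# Quantify sEMG activation with a linear envelope and compare two trials.
# Sampling rate, cutoff, and the synthetic signals are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # assumed sEMG sampling rate, Hz

def linear_envelope(emg, fs=FS, cutoff=6.0):
    """Rectify and low-pass filter raw sEMG to get an activation envelope."""
    emg = np.abs(emg - np.mean(emg))       # remove offset, rectify
    b, a = butter(4, cutoff / (fs / 2.0))  # 4th-order low-pass filter
    return filtfilt(b, a, emg)

def mean_activation(emg):
    return float(np.mean(linear_envelope(emg)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 10, 1 / FS)
    burst = (np.sin(2 * np.pi * 1.0 * t) > 0.6).astype(float)  # push-off bursts
    unassisted = burst * rng.normal(0, 1.0, t.size)
    assisted = burst * rng.normal(0, 0.7, t.size)  # assumed lower effort
    reduction = 1 - mean_activation(assisted) / mean_activation(unassisted)
    print("Mean activation reduction: {:.0%}".format(reduction))
```
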
Contributors: Schaller, Marcus Frank (Author) / Zhang, Wenlong (Thesis director) / Sugar, Thomas (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12
Description

The objective of this project was to research and experimentally test methods of localization, waypoint following, and actuation for high-speed driving by an autonomous vehicle. This thesis describes the implementation of LiDAR localization techniques, Model Predictive Control (MPC) waypoint following, and communication for actuation on a 2016 Chevrolet Camaro, Arizona State University's former EcoCAR. The LiDAR localization techniques include the NDT mapping and matching algorithms from Autoware, an open-source autonomous vehicle platform; the mapping algorithm was supplemented by Google Cartographer because of map-size limitations in Autoware's algorithms. The Model Predictive Control approach to waypoint following and the computer-to-microcontroller-to-actuator communication line are described. In addition to this experimental work, the thesis discusses an investigation of alternative approaches for each problem.
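
For intuition about MPC waypoint following, the sketch below optimizes a short control horizon for a kinematic bicycle model with a generic nonlinear solver and returns the first acceleration and steering command. The vehicle parameters, horizon, cost weights, and solver are illustrative assumptions and do not reflect the EcoCAR implementation.

```python
# A compact sketch of MPC-style waypoint following with a kinematic bicycle
# model, solved by generic nonlinear optimization.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON, WHEELBASE = 0.1, 10, 2.85  # step (s), steps, wheelbase (m), assumed

def rollout(state, controls):
    """Integrate the kinematic bicycle model over the horizon."""
    x, y, yaw, v = state
    traj = []
    for accel, steer in controls.reshape(HORIZON, 2):
        x += v * np.cos(yaw) * DT
        y += v * np.sin(yaw) * DT
        yaw += v / WHEELBASE * np.tan(steer) * DT
        v += accel * DT
        traj.append((x, y))
    return np.array(traj)

def mpc_step(state, reference):
    """Return the first (accel, steer) of the optimized control sequence."""
    def cost(u):
        traj = rollout(state, u)
        tracking = np.sum((traj - reference[:HORIZON]) ** 2)
        effort = 0.1 * np.sum(u ** 2)
        return tracking + effort

    u0 = np.zeros(HORIZON * 2)
    bounds = [(-3.0, 3.0), (-0.5, 0.5)] * HORIZON  # accel and steering limits
    sol = minimize(cost, u0, bounds=bounds, method="L-BFGS-B")
    return sol.x[:2]

if __name__ == "__main__":
    xs = np.linspace(1, 10, 10)
    waypoints = np.column_stack([xs, 0.2 * xs])  # gently curving reference
    print(mpc_step(np.array([0.0, 0.0, 0.0, 5.0]), waypoints))
```
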
Contributors: Copenhaver, Bryce Stone (Author) / Berman, Spring (Thesis director) / Yong, Sze Zheng (Committee member) / Dean, W.P. Carey School of Business (Contributor) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description

Multi-material manufacturing combines multiple fabrication processes to produce individual parts made up of several different materials. These processes can include both additive and subtractive manufacturing methods as well as the embedding of other components during manufacturing. This yields opportunities for creating single parts that take the place of an assembly of parts produced using conventional techniques. Example applications of multi-material manufacturing include parts that are produced using one process and then machined to tolerance using another, parts with integrated flexible joints, and parts that contain discrete embedded components such as reinforcing materials or electronics.

Multi-material manufacturing has applications in robotics because mechanisms can be built into a design without adding moving parts. This allows for robot designs that are both robust and low cost, making the approach particularly attractive for education and research. 3D printing is of particular interest in this area because it is low cost, readily available, and capable of easily producing complicated part geometries; some machines can also deposit multiple materials during a single process. Up to this point, however, planning the steps to create a part using multi-material manufacturing has been done manually, requiring specialized knowledge of the tools used. The difficulty of this planning procedure can prevent many students and researchers from using multi-material manufacturing.

This project studied methods of automating the planning of multi-material manufacturing processes through the development of a computational framework for processing 3D models and automatically generating viable manufacturing sequences. The framework includes solid operations and algorithms that assist the designer in computing manufacturing steps for multi-material models. This research is informing the development of a software planning tool that will simplify the planning required for multi-material fabrication, making it more accessible for use in education and research.

In our paper, "Voxel-Based CAD Framework for Planning Functionally Graded and Multi-Step Rapid Fabrication Processes," we present a new framework for representing and computing functionally graded materials for use in rapid prototyping applications. We introduce the material description itself, low-level operations that combine one or more geometries, and algorithms that assist the designer in computing manufacturing-compatible sequences. We then apply these techniques to several example scenarios. First, we demonstrate the use of a Gaussian blur to add graded material transitions to a model that can then be produced using a multi-material 3D printing process. Our second example presents our solution to the problem of inserting a discrete, off-the-shelf part into a 3D printed model during the printing sequence. Finally, we implement this second example and manufacture two example components.
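
A minimal sketch of the Gaussian-blur grading example: blur a binary voxel assignment of two materials so the interface becomes a graded volume-fraction field that a multi-material printer could quantize into discrete mixing levels. The grid size, blur width, and quantization are assumptions for illustration, not the framework's actual operations.

```python
# Blur a hard two-material voxel assignment into a graded transition.
import numpy as np
from scipy.ndimage import gaussian_filter

# 32^3 voxel model: left half material A (1.0), right half material B (0.0).
voxels = np.zeros((32, 32, 32))
voxels[:, :16, :] = 1.0

# Blurring the hard assignment yields a volume fraction of material A per
# voxel, i.e. a graded transition across the original interface.
fraction_a = gaussian_filter(voxels, sigma=2.0)
fraction_b = 1.0 - fraction_a

# A slicer for a multi-material printer could quantize these fractions into
# the discrete mixing ratios the hardware supports, e.g. five levels:
levels = np.round(fraction_a * 4) / 4
print(np.unique(levels))  # graded steps between pure B (0.0) and pure A (1.0)
```
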
Contributors: Brauer, Cole D (Author) / Aukes, Daniel (Thesis director) / Sodemann, Angela (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05