Description

Robots are often used in long-duration scenarios, such as on the surface of Mars, where they may need to adapt to environmental changes. Typically, robots have been built specifically for single tasks, such as moving boxes in a warehouse or surveying construction sites. However, there is a modern trend away from human hand-engineering and toward robot learning. To this end, the ideal robot is not engineered, but automatically designed for a specific task. This thesis focuses on robots which learn path-planning algorithms for specific environments. Learning is accomplished via genetic programming. Path-planners are represented as Python code, which is optimized via Pareto evolution. These planners are encouraged to explore curiously and efficiently. This research asks the questions: “How can robots exhibit life-long learning where they adapt to changing environments in a robust way?” and “How can robots learn to be curious?”
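Pareto evolution here means retaining only planners that are non-dominated across competing objectives. As a minimal sketch (the objective pairing of exploration versus efficiency is an assumption suggested by the abstract, not taken from the thesis itself):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b`
    (all objectives assumed to be maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scores):
    """Keep only candidates that no other candidate dominates."""
    return [p for p in scores if not any(dominates(q, p) for q in scores if q is not p)]

# Hypothetical planner scores: (exploration, efficiency).
planners = [(1.0, 5.0), (3.0, 3.0), (5.0, 1.0), (2.0, 2.0)]
front = pareto_front(planners)  # (2.0, 2.0) is dominated by (3.0, 3.0)
```

Evolution then breeds new planner programs from the surviving front, so no single weighting of curiosity against efficiency has to be fixed in advance.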

Contributors: Saldyt, Lucas P (Author) / Ben Amor, Heni (Thesis director) / Pavlic, Theodore (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
This thesis proposes the concept of soft robotic supernumerary limbs to assist the wearer in the execution of tasks, whether it be to share loads or replace an assistant. These controllable extra arms are made using soft robotics to reduce the weight and cost of the device, and are not limited in size and location to the user's arm as with exoskeletal devices. Soft robotics differ from traditional robotics in that they are made using soft materials such as silicone elastomers rather than hard materials such as metals or plastics. This thesis presents the design, fabrication, and testing of the arm, including the joints and the actuators to move them, as well as the design and fabrication of the human-body interface to unite man and machine. This prototype utilizes two types of pneumatically-driven actuators, pneumatic artificial muscles and fiber-reinforced actuators, to actuate the elbow and shoulder joints, respectively. The robotic limb is mounted at the waist on a backpack frame to avoid interfering with the wearer's biological arm. Through testing and evaluation, this prototype device proves the feasibility of soft supernumerary limbs and opens up opportunities for further development in the field.
Contributors: Olson, Weston Roscoe (Author) / Polygerinos, Panagiotis (Thesis director) / Zhang, Wenlong (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
Education in computer science is a difficult endeavor, with learning a new programming language being a barrier to entry, especially for college freshmen and high school students. Learning a first programming language requires understanding the syntax of the language, the algorithms to use, and any additional complexities the language carries. Oftentimes this becomes a deterrent from learning computer science at all. Especially in high school, students may not want to spend a year or more simply learning the syntax of a programming language. In order to overcome these issues, as well as to mitigate the issues caused by Microsoft discontinuing their Visual Programming Language (VPL), we have decided to implement a new VPL, ASU-VPL, based on Microsoft's VPL. ASU-VPL provides an environment where users can focus on algorithms and worry less about syntactic issues. ASU-VPL was built with the concepts of Robot as a Service and workflow-based development in mind. As such, ASU-VPL is designed with the intention of allowing web services to be added to the toolbox (e.g. WSDL and REST services). ASU-VPL has strong support for multithreaded operations, including event-driven development, and is built with Microsoft VPL users in mind. It provides support for many different robots, including Lego's third-generation robots, i.e. the EV3, and any open-platform robots. To demonstrate the capabilities of ASU-VPL, this paper details the creation of an Intel Edison based robot and the use of ASU-VPL for programming both the Intel-based robot and an EV3 robot. This paper will also discuss differences between ASU-VPL and Microsoft VPL, as well as differences between developing for the EV3 and for an open-platform robot.
Contributors: De Luca, Gennaro (Author) / Chen, Yinong (Thesis director) / Cheng, Calvin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12
Description
The use of Artificial Intelligence in assistive systems is growing in application and efficiency. From self-driving cars to medical and surgical robots to unsupervised industrial co-robots, the use of AI and robotics to eliminate human error in high-stress environments and perform automated tasks is advancing society's status quo. Not only has the understanding of co-robotics exploded in the industrial world, but in research as well. The National Science Foundation (NSF) defines co-robots as the following: “...a robot whose main purpose is to work with people or other robots to accomplish a goal” (NSF, 1). The latest iteration of their National Robotics Initiative, NRI-2.0, focuses on efforts of creating co-robots optimized for ‘scalability, customizability, lowering barriers to entry, and societal impact’ (NSF, 1). While many avenues have been explored for the implementation of co-robotics to create more efficient processes and sustainable lifestyles, this project's focus was on societal-impact co-robotics in the field of human safety and well-being. Introducing a co-robotics and computer vision AI solution for first responder assistance would help bring awareness and efficiency to public safety. The use of real-time identification techniques would create a greater range of awareness for first responders in high-stress situations. A combination of environmental features collected through sensors (camera and radar) could be used to identify people and objects within environments where visual impairments and obstructions are high (e.g. burning buildings, smoke-filled rooms, etc.). Information about situational conditions (environmental readings, locations of other occupants, etc.) could be transmitted to first responders in emergency situations, maximizing situational awareness.
This would not only aid first responders in the evaluation of emergency situations, but would also provide useful data to help them determine the most effective course of action.
Contributors: Scott, Kylel D (Author) / Benjamin, Victor (Thesis director) / Liu, Xiao (Committee member) / Engineering Programs (Contributor) / College of Integrative Sciences and Arts (Contributor) / Department of Information Systems (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-12
Description
This is a report on an experiment that examines whether the principles of multimedia learning outlined in Richard E. Mayer's journal article “Using multimedia for e-learning,” published in the Journal of Computer Assisted Learning, apply to haptic feedback used for haptic robotic operation. This was tested by developing and using a haptic robotic manipulator known as the Haptic Testbed (HTB). The HTB is a manipulator designed to emulate human hand movement for haptic testing purposes and features an index finger and thumb for the right hand. Control is conducted through a Leap Motion Controller, a visual sensor that uses infrared lights and cameras to gather various data about hands it can see. The goal of the experiment was to have test subjects complete a task in which they shifted objects along a circuit of positions, while being measured on time to complete the circuit as well as accuracy in reaching the individual points. Analysis of subject responses to surveys, as well as performance during the experiment, showed that haptic feedback during training improved individuals' initial performance and lowered mental effort and mental demand during that training. These findings support the hypothesis that Mayer's principles apply to haptic feedback in training for haptic robotic manipulation. One implication of this experiment is that haptics and tactile senses may be an applicable modality for Mayer's principles of multimedia learning, as most current work in the field focuses on visual or auditory senses. If the results were replicated in a future experiment, it would provide further support for the hypothesis that the principles of multimedia learning can be utilized to improve the training of haptic robotic operation.
Contributors: Giam, Connor Dallas (Author) / Craig, Scotty (Thesis director) / Sodemann, Angela (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
A common design of multi-agent robotic systems requires a centralized master node, which coordinates the actions of all the agents. The multi-agent system designed in this project enables coordination between the robots and reduces the dependence on a single node in the system. This design change reduces the complexity of the central node, and makes the system more adaptable to changes in its topology. The final goal of this project was to have a group of robots collaboratively claim positions in pre-defined formations, and navigate to the position using pose data transmitted by a localization server.
Planning coordination between robots in a multi-agent system requires each robot to know the position of the other robots. To address this, the localization server tracked visual fiducial markers attached to the robots and relayed their pose to every robot at a rate of 20 Hz using the MQTT communication protocol. The robots used this data to inform a potential fields path planning algorithm and navigate to their target position.
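A potential fields planner sums an attractive force toward the goal with repulsive forces from nearby obstacles (here, the other robots' broadcast poses). A minimal 2-D sketch, with gains and radius chosen for illustration rather than taken from the project:

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rep_radius=1.0):
    """Return the (fx, fy) force on a robot at `pos`: attraction toward
    `goal` plus repulsion from each obstacle closer than `rep_radius`."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < rep_radius:
            # Repulsion grows sharply as the robot approaches the obstacle.
            scale = k_rep * (1.0 / d - 1.0 / rep_radius) / d ** 2
            fx += scale * dx
            fy += scale * dy
    return fx, fy

# With no obstacles, the force points straight at the goal.
force = potential_field_step((0.0, 0.0), (2.0, 0.0), [])
```

Each robot would recompute this step as fresh pose updates arrive, moving a small distance along the force vector each cycle.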
This project was unable to address all of the challenges facing true distributed multi-agent coordination and needed to make concessions in order to meet deadlines. Further research would focus on shoring up these deficiencies and developing a more robust system.
Contributors: Thibeault, Quinn (Author) / Meuth, Ryan (Thesis director) / Chen, Yinong (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
The mean age of the world’s population is rapidly increasing, and with that growth in the aging population, a large number of elderly people are in need of walking assistance. In addition, a number of medical conditions contribute to gait disorders that require gait rehabilitation. Wearable robotics can be used to improve functional outcomes in the gait rehabilitation process. The ankle push-off phase of an individual’s gait is vital to their ability to walk and propel themselves forward. During the ankle push-off phase of walking, the plantar flexors are required to provide a large amount of force to power the heel off the ground.

The purpose of this project is to improve upon the passive ankle-foot orthosis (AFO) originally designed in ASU’s Robotics and Intelligent Systems Laboratory (RISE Lab). This device utilizes springs positioned parallel to the user’s Achilles tendon, which store energy to be released during the push-off phase of the user’s gait cycle. Goals of the project are to improve the speed and reliability of the ratchet-and-pawl mechanism, design the device to fit a wider range of shoe sizes, and reduce the overall mass and size of the device. The resulting system is semi-passive and utilizes only a single solenoid to unlock the ratcheting mechanism when the spring’s stored force is required. The device also utilizes constant-force springs rather than traditional linear springs, which allows for a more predictable level of force. A healthy user tested the device on a treadmill, and surface electromyography (sEMG) sensors were placed on the user’s plantar flexor muscles to monitor potential reductions in muscular activity resulting from the assistance provided by the AFO. The data demonstrate that the robotic shoe was able to assist during the heel-off stage, and reduced activation in the plantar flexor muscles was evident from the sEMG data collected. As this is an ongoing research project, this thesis also recommends possible design upgrades and changes to be made to the device in the future. These upgrades include utilizing a carbon fiber or lightweight plastic frame, as in many of the traditional ankle-foot orthoses sold today, and introducing a system to regulate the amount of spring force applied as a function of the force required at specific times of the heel-off gait phase.
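The case for constant-force springs can be seen by comparing force profiles: a linear spring's output varies with how far the mechanism has stretched it, while an idealized constant-force spring delivers its rated force across its entire working range. A small illustration (the stiffness, rated force, and extensions are arbitrary values, not taken from the device):

```python
def linear_spring_force(k, x):
    """Hooke's law: force grows linearly with extension x."""
    return k * x

def constant_force_spring(f_rated, x):
    """Idealized constant-force spring: rated force anywhere in its
    working range, zero force when slack."""
    return f_rated if x > 0 else 0.0

# Over a 10-40 mm extension range, the linear spring's output quadruples,
# while the constant-force spring stays predictable.
extensions_mm = [10.0, 25.0, 40.0]
linear = [linear_spring_force(2.0, x) for x in extensions_mm]       # N, varies
constant = [constant_force_spring(50.0, x) for x in extensions_mm]  # N, flat
```

A flat force profile means the assistance delivered at heel-off does not depend on exactly how far the gait cycle has wound the mechanism.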
Contributors: Schaller, Marcus Frank (Author) / Zhang, Wenlong (Thesis director) / Sugar, Thomas (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12
Description
The objective of this project was to research and experimentally test methods of localization, waypoint following, and actuation for high-speed driving by an autonomous vehicle. This thesis describes the implementation of LiDAR localization techniques, Model Predictive Control waypoint following, and communication for actuation on a 2016 Chevrolet Camaro, Arizona State University’s former EcoCAR. The LiDAR localization techniques include the NDT Mapping and Matching algorithms from the open-source autonomous vehicle platform, Autoware. The mapping algorithm was supplemented by that of Google Cartographer due to the limitations of map size in Autoware’s algorithms. The Model Predictive Control for waypoint following and the computer-microcontroller-actuator communication line are described. In addition to this experimental work, the thesis discusses an investigation of alternative approaches for each problem.
Contributors: Copenhaver, Bryce Stone (Author) / Berman, Spring (Thesis director) / Yong, Sze Zheng (Committee member) / Dean, W.P. Carey School of Business (Contributor) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
Multi-material manufacturing combines multiple fabrication processes to produce individual parts that can be made up of several different materials. These processes can include both additive and subtractive manufacturing methods as well as embedding other components during manufacturing. This yields opportunities for creating single parts that can take the place of an assembly of parts produced using conventional techniques. Some example applications of multi-material manufacturing include parts that are produced using one process then machined to tolerance using another, parts with integrated flexible joints, or parts that contain discrete embedded components such as reinforcing materials or electronics.

Multi-material manufacturing has applications in robotics because, with it, mechanisms can be built into a design without adding additional moving parts. This allows for robot designs that are both robust and low cost, making it a particularly attractive method for education or research. 3D printing is of particular interest in this area because it is low cost, readily available, and capable of easily producing complicated part geometries. Some machines are also capable of depositing multiple materials during a single process. However, up to this point, planning the steps to create a part using multi-material manufacturing has been done manually, requiring specialized knowledge of the tools used. The difficulty of this planning procedure can prevent many students and researchers from using multi-material manufacturing.

This project studied methods of automating the planning of multi-material manufacturing processes through the development of a computational framework for processing 3D models and automatically generating viable manufacturing sequences. This framework includes solid operations and algorithms which assist the designer in computing manufacturing steps for multi-material models. This research is informing the development of a software planning tool which will simplify the planning needed by multi-material fabrication, making it more accessible for use in education or research.

In our paper, Voxel-Based CAD Framework for Planning Functionally Graded and Multi-Step Rapid Fabrication Processes, we present a new framework for representing and computing functionally-graded materials for use in rapid prototyping applications. We introduce the material description itself, low-level operations which can be used to combine one or more geometries together, and algorithms which assist the designer in computing manufacturing-compatible sequences. We then apply these techniques to several example scenarios. First, we demonstrate the use of a Gaussian blur to add graded material transitions to a model which can then be produced using a multi-material 3D printing process. Our second example highlights our solution to the problem of inserting a discrete, off-the-shelf part into a 3D printed model during the printing sequence. Finally, we implement this second example and manufacture two example components.
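The Gaussian-blur grading step can be pictured on a single 1-D row of voxel material fractions: blurring a hard 0/1 boundary between two materials yields a smooth graded transition. A self-contained sketch (the kernel radius and sigma are illustrative choices, not the paper's parameters):

```python
import math

def gaussian_kernel(radius, sigma):
    """Discrete, normalized 1-D Gaussian kernel."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    total = sum(k)
    return [w / total for w in k]

def blur_1d(field, radius=2, sigma=1.0):
    """Blur a 1-D material-fraction field; indices past the ends are clamped."""
    kern = gaussian_kernel(radius, sigma)
    n = len(field)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kern):
            idx = min(max(i + j - radius, 0), n - 1)
            acc += w * field[idx]
        out.append(acc)
    return out

# A hard boundary between material A (fraction 0.0) and material B (1.0)
# becomes a graded transition after blurring.
graded = blur_1d([0.0] * 5 + [1.0] * 5)
```

In a full voxel framework, the same operation would run over a 3-D grid, and the resulting fractional values would be mapped to mixing ratios the multi-material printer can deposit.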
Contributors: Brauer, Cole D (Author) / Aukes, Daniel (Thesis director) / Sodemann, Angela (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Classical planning is a field of Artificial Intelligence concerned with allowing autonomous agents to make reasonable decisions in complex environments. This work investigates the application of deep learning and planning techniques, with the aim of constructing generalized plans capable of solving multiple problem instances. We construct a Deep Neural Network that, given an abstract problem state, predicts both (i) the best action to be taken from that state and (ii) the generalized “role” of the object being manipulated. The neural network was tested on two classical planning domains: the blocks world domain and the logistics domain. Results indicate that neural networks are capable of making such predictions with high accuracy, indicating a promising new framework for approaching generalized planning problems.
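The two-prediction setup amounts to a network with a shared trunk and two output heads, one scoring actions and one scoring object roles. A forward-pass sketch with random weights (the layer sizes, action count, and role count are placeholders, not the thesis architecture):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def two_head_forward(state, w_shared, w_action, w_role):
    """Shared hidden trunk feeding two heads: a distribution over
    candidate actions and a distribution over object 'roles'."""
    h = np.tanh(state @ w_shared)
    return softmax(h @ w_action), softmax(h @ w_role)

rng = np.random.default_rng(0)
state = rng.normal(size=8)                      # abstract problem-state encoding
p_action, p_role = two_head_forward(
    state,
    rng.normal(size=(8, 16)),                   # shared trunk weights
    rng.normal(size=(16, 4)),                   # 4 candidate actions
    rng.normal(size=(16, 3)),                   # 3 generalized roles
)
best_action = int(np.argmax(p_action))
```

Because the role head predicts an abstraction over objects rather than a specific object, the same trained network can be applied to problem instances with different numbers of objects.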
Contributors: Nakhleh, Julia Blair (Author) / Srivastava, Siddharth (Thesis director) / Fainekos, Georgios (Committee member) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05