Matching Items (19)

Description

Robots are often used in long-duration scenarios, such as on the surface of Mars, where they may need to adapt to environmental changes. Typically, robots have been built specifically for single tasks, such as moving boxes in a warehouse or surveying construction sites. However, there is a modern trend away from human hand-engineering and toward robot learning. To this end, the ideal robot is not engineered, but automatically designed for a specific task. This thesis focuses on robots which learn path-planning algorithms for specific environments. Learning is accomplished via genetic programming. Path-planners are represented as Python code, which is optimized via Pareto evolution. These planners are encouraged to explore curiously and efficiently. This research asks the questions: “How can robots exhibit life-long learning where they adapt to changing environments in a robust way?” and “How can robots learn to be curious?”
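The Pareto evolution mentioned above can be illustrated with a minimal non-dominated selection step over two objectives. This is a generic sketch, not the thesis's implementation; the candidate planners and their (curiosity, efficiency) scores are invented for illustration:

```python
# Hedged sketch: non-dominated (Pareto-front) selection over two objectives.
# The candidates and scores below are illustrative, not from the thesis.

def dominates(a, b):
    """True if candidate a scores at least as well as b on every objective
    (higher is better) and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """Return the candidates not dominated by any other candidate."""
    return [p for p in population
            if not any(dominates(q["scores"], p["scores"])
                       for q in population if q is not p)]

# Each candidate pairs a planner (e.g. evolved Python code) with
# (curiosity, efficiency) scores.
population = [
    {"planner": "A", "scores": (0.9, 0.2)},
    {"planner": "B", "scores": (0.5, 0.5)},
    {"planner": "C", "scores": (0.4, 0.4)},  # dominated by B
    {"planner": "D", "scores": (0.1, 0.9)},
]
front = pareto_front(population)  # survivors seed the next generation
```

In a Pareto scheme like this, no single fitness weighting is imposed; planners that trade curiosity against efficiency in different ways all survive selection.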

Contributors: Saldyt, Lucas P (Author) / Ben Amor, Heni (Thesis director) / Pavlic, Theodore (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Education in computer science is a difficult endeavor, with learning a new programming language being a barrier to entry, especially for college freshmen and high school students. Learning a first programming language requires understanding the syntax of the language, the algorithms to use, and any additional complexities the language carries. Oftentimes this becomes a deterrent from learning computer science at all. Especially in high school, students may not want to spend a year or more simply learning the syntax of a programming language. In order to overcome these issues, as well as to mitigate the issues caused by Microsoft discontinuing their Visual Programming Language (VPL), we have decided to implement a new VPL, ASU-VPL, based on Microsoft's VPL. ASU-VPL provides an environment where users can focus on algorithms and worry less about syntactic issues. ASU-VPL was built with the concepts of Robot as a Service and workflow-based development in mind. As such, ASU-VPL is designed with the intention of allowing web services to be added to the toolbox (e.g. WSDL and REST services). ASU-VPL has strong support for multithreaded operations, including event-driven development, and is built with Microsoft VPL users in mind. It provides support for many different robots, including LEGO's third-generation robots, i.e., the EV3, and any open platform robots. To demonstrate the capabilities of ASU-VPL, this paper details the creation of an Intel Edison based robot and the use of ASU-VPL for programming both the Intel based robot and an EV3 robot. This paper will also discuss differences between ASU-VPL and Microsoft VPL as well as differences between developing for the EV3 and for an open platform robot.
Contributors: De Luca, Gennaro (Author) / Chen, Yinong (Thesis director) / Cheng, Calvin (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2015-12
Description

A description of the robotics principles, actuators, materials, and programming used to test the durability of dendritic identifiers to be used in the produce supply chain. This includes the application of linear and rotational servo motors, PWM control of a DC motor, and Hall effect sensors to create an encoder.

Contributors: Robertson, Stephen (Author) / Kozicki, Michael (Thesis director) / Manfredo, Mark (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
This project investigates high-mobility robotics by developing a fully integrated framework for a ball-balancing robot. Using Lagrangian mechanics, a model of the robot was derived and used to conduct trade studies on significant system parameters. With a broad understanding of the system dynamics, controllers were designed using LQR methodology. A prototype was then built and tested to exhibit desired reference command following and disturbance attenuation.
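As a rough illustration of the LQR methodology mentioned above, the following sketch computes a discrete-time LQR gain by iterating the Riccati recursion for a generic double integrator. The dynamics and weights are illustrative assumptions, not the thesis's ball-balancing model:

```python
# Hedged sketch: discrete-time LQR gain via fixed-point Riccati iteration
# for a generic double integrator -- illustrative, not the actual
# ball-balancing dynamics from this thesis.
import numpy as np

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [dt]])             # input: commanded acceleration
Q = np.diag([10.0, 1.0])                # state-error weighting
R = np.array([[0.1]])                   # control-effort weighting

P = Q.copy()
for _ in range(5000):                   # iterate the Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# With u = -K x, the closed loop x[k+1] = (A - B K) x[k] should be stable,
# i.e. all eigenvalues inside the unit circle.
closed_loop = A - B @ K
spectral_radius = max(abs(np.linalg.eigvals(closed_loop)))
```

The Q/R trade-off mirrors the trade studies described in the abstract: heavier Q penalizes tracking error (faster response), heavier R penalizes actuator effort.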
Contributors: Kapron, Mark Andrew (Author) / Rodriguez, Armando (Thesis director) / Artemiadis, Panagiotis (Committee member) / Industrial, Systems & Operations Engineering Program (Contributor) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
This is a report on an experiment that examines whether the principles of multimedia learning outlined in Richard E. Mayer’s journal article “Using multimedia for e-learning,” published in the Journal of Computer Assisted Learning, apply to haptic feedback used for haptic robotic operation. This was tested by developing and using a haptic robotic manipulator known as the Haptic Testbed (HTB). The HTB is a manipulator designed to emulate human hand movement for haptic testing purposes and features an index finger and thumb for the right hand. Control is conducted through a Leap Motion Controller, a visual sensor that uses infrared lights and cameras to gather data about the hands it can see. In the experiment, test subjects completed a task in which they shifted objects along a circuit of positions, and were measured on time to complete the circuit as well as accuracy in reaching the individual points. Analysis of subject survey responses and of performance during the experiment showed that haptic feedback during training improved individuals’ initial performance and lowered mental effort and mental demand during that training. These findings support the hypothesis that Mayer’s principles apply to haptic feedback in training for haptic robotic manipulation. One implication of this experiment is that haptic and tactile senses may be applicable to Mayer’s principles of multimedia learning, as current work in the field focuses largely on visual or auditory senses. If the results of this experiment were replicated in a future study, they would provide further support for the hypothesis that the principles of multimedia learning can be used to improve training for haptic robotic operation.
Contributors: Giam, Connor Dallas (Author) / Craig, Scotty (Thesis director) / Sodemann, Angela (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
A common design of multi-agent robotic systems requires a centralized master node, which coordinates the actions of all the agents. The multi-agent system designed in this project enables coordination between the robots and reduces the dependence on a single node in the system. This design change reduces the complexity of the central node, and makes the system more adaptable to changes in its topology. The final goal of this project was to have a group of robots collaboratively claim positions in pre-defined formations, and navigate to the position using pose data transmitted by a localization server.
Planning coordination between robots in a multi-agent system requires each robot to know the positions of the other robots. To address this, the localization server tracked visual fiducial markers attached to the robots and relayed their poses to every robot at a rate of 20 Hz using the MQTT communication protocol. The robots used this data to inform a potential-fields path-planning algorithm and navigate to their target positions.
This project was unable to address all of the challenges facing true distributed multi-agent coordination and needed to make concessions in order to meet deadlines. Further research would focus on shoring up these deficiencies and developing a more robust system.
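A basic potential-fields step of the kind referenced above can be sketched as follows. The gains, influence radius, and point-obstacle model are generic assumptions, not this project's tuned implementation:

```python
# Hedged sketch: one attractive/repulsive potential-fields step.
# Gains, ranges, and the point-obstacle model are illustrative assumptions.
import math

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                   influence=1.0, step=0.05):
    """Move one fixed-size step along the combined force direction."""
    fx = k_att * (goal[0] - pos[0])        # attraction toward the goal
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:               # repulsion near each obstacle
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

# Drive toward the goal while skirting a single off-path obstacle.
pos, goal = (0.0, 0.0), (2.0, 0.0)
for _ in range(200):
    pos = potential_step(pos, goal, obstacles=[(1.0, 0.3)])
dist_to_goal = math.hypot(goal[0] - pos[0], goal[1] - pos[1])
```

In a live system, the `pos` of each robot and the `obstacles` list (the other robots) would be refreshed from the localization server's pose stream on every step.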
Contributors: Thibeault, Quinn (Author) / Meuth, Ryan (Thesis director) / Chen, Yinong (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Multi-material manufacturing combines multiple fabrication processes to produce individual parts that can be made up of several different materials. These processes can include both additive and subtractive manufacturing methods as well as embedding other components during manufacturing. This yields opportunities for creating single parts that can take the place of an assembly of parts produced using conventional techniques. Some example applications of multi-material manufacturing include parts that are produced using one process then machined to tolerance using another, parts with integrated flexible joints, or parts that contain discrete embedded components such as reinforcing materials or electronics.

Multi-material manufacturing has applications in robotics because, with it, mechanisms can be built into a design without adding additional moving parts. This allows for robot designs that are both robust and low cost, making it a particularly attractive method for education or research. 3D printing is of particular interest in this area because it is low cost, readily available, and capable of easily producing complicated part geometries. Some machines are also capable of depositing multiple materials during a single process. However, up to this point, planning the steps to create a part using multi-material manufacturing has been done manually, requiring specialized knowledge of the tools used. The difficulty of this planning procedure can prevent many students and researchers from using multi-material manufacturing.

This project studied methods of automating the planning of multi-material manufacturing processes through the development of a computational framework for processing 3D models and automatically generating viable manufacturing sequences. This framework includes solid operations and algorithms which assist the designer in computing manufacturing steps for multi-material models. This research is informing the development of a software planning tool which will simplify the planning needed by multi-material fabrication, making it more accessible for use in education or research.

In our paper, Voxel-Based CAD Framework for Planning Functionally Graded and Multi-Step Rapid Fabrication Processes, we present a new framework for representing and computing functionally-graded materials for use in rapid prototyping applications. We introduce the material description itself, low-level operations which can be used to combine one or more geometries together, and algorithms which assist the designer in computing manufacturing-compatible sequences. We then apply these techniques to several example scenarios. First, we demonstrate the use of a Gaussian blur to add graded material transitions to a model which can then be produced using a multi-material 3D printing process. Our second example highlights our solution to the problem of inserting a discrete, off-the-shelf part into a 3D printed model during the printing sequence. Finally, we implement this second example and manufacture two example components.
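The graded-transition example can be illustrated with a 1-D Gaussian blur over a row of voxel material fractions. This is a simplified sketch of the idea, not the paper's framework; the kernel parameters are arbitrary:

```python
# Hedged sketch: smoothing a hard material boundary on a 1-D voxel row
# with a Gaussian blur to obtain a graded transition. Kernel width is an
# arbitrary choice, and real voxel grids are of course 3-D.
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian weights over [-radius, radius]."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(voxels, sigma=1.5, radius=4):
    """Convolve a row of material fractions with a Gaussian,
    replicating edge values at the boundaries."""
    kernel = gaussian_kernel(sigma, radius)
    n = len(voxels)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)
            acc += w * voxels[idx]
        out.append(acc)
    return out

# A hard step between material A (fraction 0.0) and material B (1.0)...
row = [0.0] * 8 + [1.0] * 8
graded = blur(row)  # ...becomes a smooth A-to-B gradient
```

Each output value can then be interpreted as a per-voxel mixing ratio when slicing for a multi-material printer.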
Contributors: Brauer, Cole D (Author) / Aukes, Daniel (Thesis director) / Sodemann, Angela (Committee member) / Engineering Programs (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Classical planning is a field of Artificial Intelligence concerned with allowing autonomous agents to make reasonable decisions in complex environments. This work investigates the application of deep learning and planning techniques, with the aim of constructing generalized plans capable of solving multiple problem instances. We construct a deep neural network that, given an abstract problem state, predicts both (i) the best action to be taken from that state and (ii) the generalized “role” of the object being manipulated. The neural network was tested on two classical planning domains: the Blocksworld domain and the Logistics domain. Results indicate that neural networks are capable of making such predictions with high accuracy, indicating a promising new framework for approaching generalized planning problems.
Contributors: Nakhleh, Julia Blair (Author) / Srivastava, Siddharth (Thesis director) / Fainekos, Georgios (Committee member) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
In this update to the ESPBot, we have introduced new libraries for a small OLED display and a beeper. This functionality can easily be expanded to multiple beepers and displays, but requires more GPIO pins, or for the user to forgo some of the infrared sensors or the ultrasonic sensor. We have also relocated some of the pins. The display can be updated to show one of four predefined shapes, or to display user-defined text. New shapes can be added by defining new methods within display.ino and calling the appropriate functions while parsing the JSON data in viple.ino. The beeper can be controlled by user-defined input to play any frequency for any amount of time; a function is also included to play “Happy Birthday.” More songs can be added by defining new methods within beeper.ino and calling the appropriate functions while parsing the JSON data in viple.ino. Further functionality could allow the user to input a list of frequencies along with a list of durations, so the user can define their own songs or sequences on the fly.
Contributors: Welfert, Monica Michelle (Co-author) / Nguyen, Van (Co-author) / Chen, Yinong (Thesis director) / Nakamura, Mutsumi (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-12
Description
Technical innovation has always played a part in live theatre, from mechanical pieces like lifts and trapdoors to the more recent integration of digital media. The advances of the art form encourage the development of technology, and at the same time, technological development enables the advancement of theatrical expression. As mechanics, lighting, sound, and visual media have made their way into the spotlight, advances in theatrical robotics continue to push for their inclusion in the director's toolbox. However, much of the technology available is gated by high prices and unintuitive interfaces, designed for large troupes and specialized engineers, making it difficult to access for small schools and students new to the medium. As a group of engineering students with a vested interest in the development of the arts, this thesis team designed a system that will enable troupes from any background to participate in the advent of affordable automation. The intended result of this thesis project was to create a robotic platform that interfaces with custom software, receiving commands and transmitting position data, and to design that software so that a user can define intuitive cues for their shows. In addition, a new pathfinding algorithm was developed to support free-roaming automation in a 2D space. The final product consisted of a relatively inexpensive (< $2000) free-roaming platform, made entirely with COTS and standard materials, and a corresponding control system with cue design, wireless path following, and position tracking. The platform was built to support 1000 lbs and includes integrated emergency stopping. The software allows for custom cue design, speed variation, and dynamic path following. Both the blueprints and the source code for the platform and control system have been released to open-source repositories to encourage further development in the area of affordable automation.
The platform itself was donated to the ASU School of Theater.
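For readers unfamiliar with 2D stage pathfinding, a standard A* grid search illustrates the kind of problem being solved. This is a generic baseline, not the custom algorithm this project developed, and the stage map below is invented:

```python
# Hedged sketch: textbook A* on a 4-connected grid, as a generic baseline
# for 2-D stage pathfinding. Not the project's custom algorithm.
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a grid of 0 (free) / 1 (blocked)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while open_set:
        _, g, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

# A toy stage: row 1 is a wall of set pieces with a gap at the right edge.
stage = [[0, 0, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 0, 0]]
path = astar(stage, (0, 0), (2, 0))
```

A free-roaming platform would additionally need to account for its own footprint, continuous headings, and moving performers, which is where a grid baseline like this stops being sufficient.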
Contributors: Hollenbeck, Matthew D. (Co-author) / Wiebel, Griffin (Co-author) / Winnemann, Christopher (Thesis director) / Christensen, Stephen (Committee member) / Computer Science and Engineering Program (Contributor) / School of Film, Dance and Theatre (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05