Matching Items (26)
Description
Many researchers aspire to create robotics systems that assist humans in common office tasks, especially by taking over delivery and messaging tasks. For meaningful interactions to take place, a mobile robot must be able to identify the humans it interacts with and communicate successfully with them. It must also be able to successfully navigate the office environment. While mobile robots are well suited for navigating and interacting with elements inside a deterministic office environment, attempting to interact with human beings in an office environment remains a challenge due to the limited cost-efficient compute power available onboard the robot. In this work, I propose the use of remote cloud services to offload intensive interaction tasks. I detail the interactions required in an office environment and discuss the challenges faced when implementing a human-robot interaction platform in a stochastic office environment. I also experiment with cloud services for facial recognition, speech recognition, and environment navigation and discuss my results. As part of my thesis, I have implemented on a mobile robot a human-robot interaction system utilizing cloud APIs, enabling the robot to navigate the office environment, identify humans within the environment, and communicate with them.
Created: 2017-05
Description
Preventive maintenance is a practice that has become popular in recent years, largely due to the increased dependency on electronics and other mechanical systems in modern technologies. The main idea of preventive maintenance is to take care of maintenance-type issues before they fully appear or cause disruption of processes and daily operations. One of the most important parts is being able to predict failures in the system, in order to make sure that those are fixed before they turn into large issues. One specific area where preventive maintenance is a very big part of daily activity is the automotive industry. Automobile owners are encouraged to take their cars in for maintenance on a routine schedule (based on mileage or time), or when their car signals that there is an issue (low oil levels, for example). Although this level of maintenance is enough when people are in charge of cars, the rise of autonomous vehicles, specifically self-driving cars, changes that. Now, instead of a human being able to look at a car and diagnose any issues, the car needs to be able to do this itself. The objective of this project was to create such a system. The Electronics Preventive Maintenance System (EPMS) is an internal system that is designed to meet all these criteria and more. The EPMS comprises a central computer which monitors all major electronic components in an autonomous vehicle through the use of standard off-the-shelf sensors. The central computer compiles the sensor data and is able to sort and analyze the readings. The filtered data is run through several mathematical models, each of which diagnoses issues in a different part of the vehicle. The data for each component in the vehicle is compared to pre-set operating conditions. These operating conditions are set in order to encompass all normal ranges of output. If the sensor data is outside the margins, the warning and deviation are recorded and a severity level is calculated.
In addition to the individual component models, there is also a vehicle-wide model, which predicts how urgently the vehicle needs maintenance. All of these results are analyzed by a simple heuristic algorithm, and a decision is made about the vehicle's health status, which is sent out to the Fleet Management System. This system allows for accurate, effortless monitoring of all parts of an autonomous vehicle, as well as predictive modeling that allows the system to determine maintenance needs. With this system, human inspectors are no longer necessary for a fleet of autonomous vehicles. Instead, the Fleet Management System is able to oversee inspections, and the system operator is able to set parameters to decide when to send cars for maintenance. All the models used for the sensor and component analysis are tailored specifically to the vehicle. The models and operating margins are created using empirical data collected during normal testing operations. The system is modular and can be used in a variety of different vehicle platforms, including underwater autonomous vehicles and aerial vehicles.
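The margin check described in this abstract can be sketched as follows. The function name, thresholds, and severity formula below are illustrative assumptions, not the EPMS implementation:

```python
def check_reading(value, low, high):
    """Compare a sensor reading against pre-set operating margins.

    Returns (deviation, severity). deviation is 0.0 when the value lies
    inside [low, high]; otherwise it is the distance outside the margin.
    severity scales the deviation by the width of the normal range, so
    larger excursions produce higher severity levels (assumed formula).
    """
    if low <= value <= high:
        return 0.0, 0.0
    deviation = (low - value) if value < low else (value - high)
    severity = deviation / (high - low)
    return deviation, severity

# Example: a hypothetical temperature sensor with a 70-110 degree C
# normal range; a reading of 122 is 12 degrees over the upper margin.
dev, sev = check_reading(122.0, 70.0, 110.0)
```

A fleet-level heuristic like the one described could then aggregate these per-component severity values into a single health status.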
Contributors: Mian, Sami T. (Author) / Collofello, James (Thesis director) / Chen, Yinong (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description
This thesis focused on understanding how humans visually perceive swarm behavior, through the use of swarm simulations and gaze tracking. The goal of this project was to determine the visual patterns subjects display while observing and supervising a swarm, as well as to determine which swarm characteristics affect these patterns. Ultimately, it was hoped that this research would contribute to optimizing human-swarm interaction for the design of human supervisory controllers for swarms. To achieve the stated goals, two investigations were conducted. First, subjects' gaze was tracked while they observed a simulated swarm as it moved across the screen. This swarm varied in size, disturbance level in the positions of the agents, speed, and path curvature. Second, subjects were asked to play a supervisory role as they watched a swarm move across the screen toward targets. The subjects determined whether a collision would occur and with which target, while their responses as well as their gaze were tracked. For the observatory role, a model of human gaze was created, embodied in a second-order model similar to that of a spring-mass-damper system. This model was consistent across subjects and stable. For the supervisory role, inherent weaknesses in human perception were found, such as the inability to predict the future position of objects on curved paths. These findings are discussed in depth within the thesis. Overall, the results presented suggest that understanding human perception of swarms offers a new approach to the problem of swarm control. The ability to adapt controls to these strengths and weaknesses could greatly reduce the number of operators required to control a single UAV, moving toward one-person operation of an entire swarm.
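A second-order gaze model of the kind this abstract describes is commonly written in the canonical spring-mass-damper form; the notation below is generic and assumed for illustration, not the thesis's own:

```latex
% Gaze position x(t) tracking a stimulus input u(t), with natural
% frequency \omega_n and damping ratio \zeta (generic notation):
\ddot{x}(t) + 2\zeta\omega_n\,\dot{x}(t) + \omega_n^2\,x(t) = \omega_n^2\,u(t)
```

In this form, stability and similarity across subjects amount to the fitted $\zeta$ and $\omega_n$ being positive and clustered across participants.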
Contributors: Whitton, Elena Michelle (Author) / Artemiadis, Panagiotis (Thesis director) / Berman, Spring (Committee member) / Barrett, The Honors College (Contributor) / Mechanical and Aerospace Engineering Program (Contributor)
Created: 2015-05
Description
Technical innovation has always played a part in live theatre, from mechanical pieces like lifts and trapdoors to the more recent integration of digital media. The advances of the art form encourage the development of technology, and at the same time, technological development enables the advancement of theatrical expression. As mechanics, lighting, sound, and visual media have made their way into the spotlight, advances in theatrical robotics continue to push for their inclusion in the director's toolbox. However, much of the technology available is gated by high prices and unintuitive interfaces, designed for large troupes and specialized engineers, making it difficult to access for small schools and students new to the medium. As a group of engineering students with a vested interest in the development of the arts, this thesis team designed a system that will enable troupes from any background to participate in the advent of affordable automation. The intended result of this thesis project was to create a robotic platform that interfaces with custom software, receiving commands and transmitting position data, and to design that software so that a user can define intuitive cues for their shows. In addition, a new pathfinding algorithm was developed to support free-roaming automation in a 2D space. The final product consisted of a relatively inexpensive (< $2000) free-roaming platform, made entirely with COTS and standard materials, and a corresponding control system with cue design, wireless path following, and position tracking. The platform was built to support 1000 lbs and includes integrated emergency stopping. The software allows for custom cue design, speed variation, and dynamic path following. Both the blueprints and the source code for the platform and control system have been released to open-source repositories, to encourage further development in the area of affordable automation.
The platform itself was donated to the ASU School of Theater.
Contributors: Hollenbeck, Matthew D. (Co-author) / Wiebel, Griffin (Co-author) / Winnemann, Christopher (Thesis director) / Christensen, Stephen (Committee member) / Computer Science and Engineering Program (Contributor) / School of Film, Dance and Theatre (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
In this paper, we propose an autonomous throwing and catching system to be developed as a preliminary step towards the refinement of a robotic arm capable of improving strength and motor function in the limb. This will be accomplished by first autonomizing simpler movements, such as throwing a ball. In this system, an autonomous thrower will detect a desired target through the use of image processing. The launch angle and direction necessary to hit the target will then be calculated, followed by the launching of the ball. The smart catcher will then detect the ball as it is in the air, calculate its expected landing location based on its initial trajectory, and adjust its position so that the ball lands in the center of the target. The thrower will then compare the actual landing position with the position where it expected the ball to land, and adjust its calculations accordingly for the next throw. By utilizing this method of feedback, the throwing arm will be able to automatically correct itself. This means that the thrower will ideally be able to hit the target exactly in the center within a few throws, regardless of any additional uncertainty in the system. This project will focus on the controller and image processing components necessary for the autonomous throwing arm to detect the position of the target at which it will be aiming, and for the smart catcher to detect the position of the projectile and estimate its final landing position by tracking its current trajectory.
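The throw-by-throw correction this abstract describes can be sketched as a running estimate of systematic landing bias; the gain, names, and one-dimensional setup below are assumptions for illustration, not the system's actual controller:

```python
def corrected_aim(target, bias_estimate):
    """Aim point after compensating for the estimated systematic offset."""
    return target - bias_estimate

def update_bias(bias_estimate, expected, actual, gain=0.5):
    """Blend the observed landing error into the running bias estimate."""
    return bias_estimate + gain * (actual - expected)

# Example: suppose every throw lands 0.2 m past wherever the thrower aims
# (a made-up constant offset). Repeating the throw-observe-update loop
# drives the bias estimate toward that offset, so later throws land near
# the target despite the uncertainty.
true_offset = 0.2
bias = 0.0
for _ in range(5):
    aim = corrected_aim(0.0, bias)   # aim point for a target at position 0.0
    actual = aim + true_offset       # where the ball really lands
    bias = update_bias(bias, expected=0.0, actual=actual)
```

With a gain below 1, the estimate converges geometrically rather than overshooting, which matches the abstract's goal of hitting the target within a few throws.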
Contributors: Lundberg, Kathie Joy (Co-author) / Thart, Amanda (Co-author) / Rodriguez, Armando (Thesis director) / Berman, Spring (Committee member) / Electrical Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2018-05
Description
In order to adequately introduce students to computer science and robotics in an exciting and engaging manner, certain teaching techniques should be used. In recent years, some of the most popular paradigms have been Visual Programming Languages, which are meant to introduce the problem-solving skills and basic programming constructs inherent to all modern-day languages by allowing users to write programs visually rather than textually. By bypassing the need to learn syntax, students can focus on the thinking behind developing an algorithm and see immediate results, which helps generate excitement for the field and reduces disinterest due to startup complexity and burnout. The Introduction to Engineering course at Arizona State University supports this approach by teaching students the basics of autonomous maze-traversing algorithms using ASU VIPLE, a Visual Programming Language developed to connect with and direct real-world robots. However, some startup time is needed to learn how to interface with these robots using ASU VIPLE. That is why the HTML5 Autonomous Robot Web Simulator was created: by encouraging students to use the simulator, the problem solving behind autonomous maze-traversing algorithms can be introduced more quickly and with immediate affirmation. Our goal was to improve this simulator and add features so that it could be accessed and used for a wider variety of introductory computer science lessons. Features scattered across past implementations of robotic simulators were aggregated in a cross-platform solution. Upon initial development, a classroom test group revealed usability concerns and demonstrated students' mental models. Mean time for task completion was 8.1 minutes, compared to 2 minutes for the authors. The simulator was updated in response to test group feedback and new instructor requirements.
The new implementation reduces programming overhead while maintaining a learning environment with support for even the most complex applications.
Contributors: Rodewald, Spencer (Co-author) / Patel, Ankit (Co-author) / Chen, Yinong (Thesis director) / Chattin, Linda (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-12
Description
This thesis presents an approach to the design and implementation of an adaptive boundary coverage control strategy for a swarm robotic system. Several fields of study are relevant to this project, including dynamic modeling, control theory, programming, and robotic design. Tools and techniques from these fields were used to design and implement a model simulation and an experimental testbed. To achieve this goal, a simulation of the boundary coverage control strategy was first developed. This simulated model allowed for concept verification for different robot groups and boundary designs. The simulation consisted of a single, constantly expanding circular boundary with a modeled swarm of robots that autonomously allocate themselves around the boundary. Ultimately, this simulation was implemented in an experimental testbed consisting of mobile robots and a moving boundary wall to exhibit the behaviors of the simulated robots. It is hoped that the conclusions from this experiment will help drive further advancements in swarm robotic technology. The results presented show promise for future progress in adaptive control strategies for robotic swarms.
Contributors: Murphy, Hunter Nicholas (Author) / Berman, Spring (Thesis director) / Marvi, Hamid (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
This thesis details the design and construction of a torque-controlled robotic gripper for use with the Pheeno swarm robotics platform. This project required expertise from several fields of study, including robotic design, programming, rapid prototyping, and control theory. An electronic Inertial Measurement Unit and a DC motor were used, along with 3D-printed plastic components and an electronic motor control board, to develop a functional open-loop-controlled gripper for use in collective transportation experiments. Code was developed to acquire and filter rate-of-rotation data, alongside code that allows for straightforward control of the DC motor through experimentally derived relationships between the voltage applied to the motor and its torque output. Additionally, several versions of the physical components are described through their development.
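An experimentally derived voltage-torque relationship of the kind mentioned above is often fit and then inverted to produce open-loop commands. The linear form and every coefficient below are assumptions for illustration, not values from the thesis:

```python
def voltage_for_torque(tau_desired, k_t=0.08, b=-0.02, v_max=12.0):
    """Voltage command predicted to yield tau_desired (N*m).

    Assumes a hypothetical linear fit tau = k_t * V + b obtained from
    bench measurements, inverted for V and clamped to the motor
    driver's usable range [0, v_max]. All constants are made up.
    """
    v = (tau_desired - b) / k_t
    return max(0.0, min(v, v_max))  # clamp to the driver's limits

# Example: under these made-up coefficients, 0.3 N*m maps to 4 V.
command = voltage_for_torque(0.3)
```

Clamping matters in an open-loop scheme like this: torques outside the fitted range would otherwise request voltages the motor driver cannot deliver.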
Contributors: Mohr, Brennan (Author) / Berman, Spring (Thesis director) / Ren, Yi (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / School for Engineering of Matter, Transport & Energy (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
Classical planning is a field of Artificial Intelligence concerned with allowing autonomous agents to make reasonable decisions in complex environments. This work investigates the application of deep learning and planning techniques, with the aim of constructing generalized plans capable of solving multiple problem instances. We construct a Deep Neural Network that, given an abstract problem state, predicts both (i) the best action to be taken from that state and (ii) the generalized "role" of the object being manipulated. The neural network was tested on two classical planning domains: the blocks world domain and the logistics domain. Results indicate that neural networks are capable of making such predictions with high accuracy, suggesting a promising new framework for approaching generalized planning problems.
Contributors: Nakhleh, Julia Blair (Author) / Srivastava, Siddharth (Thesis director) / Fainekos, Georgios (Committee member) / Computer Science and Engineering Program (Contributor) / School of International Letters and Cultures (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
This thesis describes a multi-robot architecture which allows teams of robots to work with humans to complete tasks. The multi-agent architecture was built using Robot Operating System and Python. This architecture was designed modularly, allowing the use of different planners and robots. The system automatically replans when robots connect or disconnect. The system was demonstrated on two real robots, a Fetch and a PeopleBot, by conducting a surveillance task on the fifth floor of the Computer Science building at Arizona State University. The next part of the system includes extensions for teaming with humans. An Android application was created to serve as the interface between the system and human teammates. This application provides a way for the system to communicate with humans in the loop. In addition, it sends location information of the human teammates to the system so that goal recognition can be performed. This goal recognition allows the generation of human-aware plans. This capability was demonstrated in a mock search and rescue scenario using the Fetch to locate a missing teammate.
Contributors: Saba, Gabriel Christer (Author) / Kambhampati, Subbarao (Thesis director) / Doupé, Adam (Committee member) / Chakraborti, Tathagata (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05