Matching Items (7)

Description
This thesis presents an approach to the design and implementation of an adaptive boundary coverage control strategy for a swarm robotic system. Several fields of study are relevant to this project, including dynamic modeling, control theory, programming, and robotic design. Tools and techniques from these fields were used to design and implement a model simulation and an experimental testbed. To achieve this goal, a simulation of the boundary coverage control strategy was first developed. This simulated model allowed for concept verification across different robot groups and boundary designs. The simulation consisted of a single, constantly expanding circular boundary with a modeled swarm of robots that autonomously allocate themselves around the boundary. Ultimately, this simulation was implemented in an experimental testbed consisting of mobile robots and a moving boundary wall to exhibit the behaviors of the simulated robots. It is hoped that the conclusions from this experiment will help drive further advancements in swarm robotic technology. The results presented show promise for future progress in adaptive control strategies for robotic swarms.
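The allocation behavior described above can be sketched in a few lines: each simulated robot nudges itself toward the midpoint of its two angular neighbors on the circle, which drives the swarm toward even spacing while the boundary expands. The update rule, gain, and initial angles below are illustrative assumptions, not the thesis's actual controller.

```python
import math

def coverage_step(angles, gain=0.1):
    """Nudge each robot toward the midpoint of its two angular
    neighbors; repeated steps spread the swarm evenly on the circle."""
    n = len(angles)
    order = sorted(range(n), key=lambda i: angles[i])
    new = list(angles)
    for rank, i in enumerate(order):
        prev = angles[order[(rank - 1) % n]]
        nxt = angles[order[(rank + 1) % n]]
        if prev > angles[i]:          # unwrap across the 0 / 2*pi seam
            prev -= 2 * math.pi
        if nxt < angles[i]:
            nxt += 2 * math.pi
        new[i] = (angles[i] + gain * ((prev + nxt) / 2 - angles[i])) % (2 * math.pi)
    return new

# constantly expanding circular boundary: the radius grows each step
# while the robots keep reallocating themselves around it
angles, radius = [0.0, 0.3, 0.5, 4.0], 1.0
for _ in range(500):
    angles = coverage_step(angles)
    radius += 0.01
positions = [(radius * math.cos(a), radius * math.sin(a)) for a in angles]
```

Because each new angular gap is a positive weighted average of neighboring gaps, robots never cross, and the gaps converge to 2*pi/n.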
ContributorsMurphy, Hunter Nicholas (Author) / Berman, Spring (Thesis director) / Marvi, Hamid (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2017-05
Description
This thesis details the process of developing a force feedback system for a small robotic manipulator in order to prevent damage to manipulators and the objects they grasp, a desired feature in many autonomous robots. This includes the research, design, fabrication, and testing of a custom force-sensing resistor and a custom set of jaws on which to implement the feedback system. To complete this project, extensive research went into designing and building test beds for the commercial and custom force sensors to determine whether force values could even be obtained. The sensors were then implemented on a manipulator and evaluated for ease of use during assembly and testing, accuracy, and repeatability of results, using a test bed designed during the course of this research. Afterwards, the custom jaws were designed and fabricated based on problems encountered during testing with the initial set of jaws. The new jaws were then tested on the test bed with the sensors, and the force feedback system was implemented on them. The overall system was then evaluated for current limitations and improvements that could be made in the future to further develop this research and assist with its implementation on other robots. The results of this experiment show that a low-cost, easily mass-produced force sensor can be implemented on an autonomous robot to add force feedback capabilities. It is hoped that the results from these experiments will be implemented on robotic manipulators so that research on force-sensing technologies can be expanded and improved.
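A grip-force feedback loop of the kind described can be sketched as follows. A force-sensing resistor (FSR) in a voltage divider is sampled, its resistance is converted to an approximate force, and jaw closure stops once a limit is reached. The functions read_adc() and step_jaw() stand in for hardware drivers, and all calibration constants here are hypothetical, not the thesis's measured values.

```python
SUPPLY_V = 5.0       # divider supply voltage (assumed)
R_FIXED = 10_000.0   # fixed resistor in series with the FSR, in ohms (assumed)
CAL_GAIN = 20_000.0  # force ~ CAL_GAIN / R_fsr; hypothetical calibration fit
FORCE_LIMIT_N = 2.0  # stop closing once this grip force is sensed

def fsr_resistance(v_out):
    """Invert the divider equation v_out = SUPPLY_V * R_FIXED / (R_fsr + R_FIXED)."""
    return R_FIXED * (SUPPLY_V - v_out) / v_out

def close_until_force(read_adc, step_jaw, max_steps=100):
    """Close the jaws one increment at a time until the sensed force
    reaches the limit, protecting both the manipulator and the object."""
    for step in range(max_steps):
        force = CAL_GAIN / fsr_resistance(read_adc())
        if force >= FORCE_LIMIT_N:
            return step            # increments taken before stopping
        step_jaw()
    return max_steps
```

An FSR's resistance drops as applied force rises, so the divider output voltage climbs as the jaws tighten; the loop converts that reading back to force each cycle.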
ContributorsMartin, Anna Lynn (Author) / Berman, Spring (Thesis director) / Rajagopalan, Jagannathan (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Materials Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2017-12
Description
Emergent processes can roughly be defined as processes that arise from interactions without centralized control. People hold many robust misconceptions when explaining emergent process concepts such as natural selection and diffusion. This is because they lack a proper categorical representation of emergent processes and often misclassify them into the more familiar category of sequential processes. The two kinds of processes can be distinguished by their second-order features, which describe how one interaction relates to another. This study investigated whether teaching emergent second-order features can help people more correctly categorize new processes; it also compared different instructional methods for teaching these features. The prediction was that learning emergent features should help more than learning sequential features, because what most people lack is the representation of emergent processes. Results confirmed this: participants who generated emergent features and received the correct features as feedback were better at distinguishing the two kinds of processes than participants who rewrote second-order sequential features. Another finding was that participants who generated emergent features and then read the correct features as feedback distinguished the processes better than participants who only attempted to generate the emergent features without feedback. Finally, switching the order of instruction, by teaching emergent features first and then asking participants to explain the difference between emergent and sequential features, produced learning gains equivalent to those of the experimental group that received feedback. These results show that teaching emergent second-order features helps people categorize processes, and they point to the most efficient way to teach them.
ContributorsXu, Dongchen (Author) / Chi, Michelene (Thesis advisor) / Homa, Donald (Committee member) / Glenberg, Arthur (Committee member) / Arizona State University (Publisher)
Created2015
Description
Paper assessment remains an essential formal assessment method in today's classes. However, it is difficult to track student learning behavior on physical papers. This thesis presents a new educational technology, the Web Programming Grading Assistant (WPGA). WPGA serves not only as a grading system but also as a feedback delivery tool that connects paper-based assessments to digital space. I designed a classroom study and collected data from ASU computer science classes. I tracked and modeled students' reviewing and reflecting behaviors based on their use of WPGA. I analyzed students' reviewing efforts in terms of frequency, timing, and their associations with academic performance. Results showed that students put extra emphasis on reviewing prior to exams, and their efforts demonstrated a desire to review formal assessments regardless of whether they were graded for academic performance or for attendance. In addition, all students paid more attention to reviewing quizzes and exams toward the end of the semester.
ContributorsHuang, Po-Kai (Author) / Hsiao, I-Han (Thesis advisor) / Nelson, Brian (Committee member) / VanLehn, Kurt (Committee member) / Arizona State University (Publisher)
Created2017
Description
Currently, one of the biggest limiting factors for long-term deployment of autonomous systems is the power constraint of a platform. In particular, for aerial robots such as unmanned aerial vehicles (UAVs), the energy resource is the main driver of mission planning and operation definitions, as everything revolves around flight time. The focus of this work is to develop a new method of energy storage and charging for autonomous UAV systems, for use during long-term deployments in a constrained environment. We developed a charging solution that allows pre-equipped UAV systems to land on top of designated charging pads and rapidly replenish their battery reserves using a contact charging point. This system is designed to work with all types of rechargeable batteries, focusing on Lithium Polymer (LiPo) packs that incorporate a battery management system for increased reliability. The project also explores optimization methods for fleets of UAV systems to increase charging efficiency and extend battery lifespans. Each component of this project was first designed and tested in computer simulation. Following positive feedback and results, prototypes for each part of the system were developed and rigorously tested. Results show that the contact charging method is able to charge LiPo batteries at a 1-C rate, the industry standard, while maintaining the same safety and efficiency standards as modern-day direct-connection chargers. Control software for these base stations was also created to be integrated with a fleet management system; it optimizes UAV charge levels and distribution to extend LiPo battery lifetimes while still meeting expected mission demand. Each component of this project (hardware/software) was designed for manufacturing and implementation using industry standard tools, making it ideal for large-scale implementations.
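For reference, the 1-C rate mentioned above can be computed directly: the charge current in amps is numerically equal to the pack capacity in amp-hours, so an ideal constant-current charge takes roughly an hour. A minimal sketch (idealized; real LiPo chargers finish with a constant-voltage taper that adds time near full charge):

```python
def one_c_current_amps(capacity_mah):
    """At a 1-C rate the charge current numerically equals the pack
    capacity: a 5000 mAh LiPo charges at 5.0 A."""
    return capacity_mah / 1000.0

def cc_charge_minutes(capacity_mah, start_soc, end_soc, c_rate=1.0):
    """Idealized constant-current charge duration between two states of
    charge (fractions 0-1), ignoring the constant-voltage taper."""
    amp_hours_needed = (end_soc - start_soc) * capacity_mah / 1000.0
    return 60.0 * amp_hours_needed / (c_rate * one_c_current_amps(capacity_mah))
```

For example, topping a 5000 mAh pack from 20% to full at 1 C takes about 48 minutes under this idealization.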
This system has been successfully tested with a fleet of UAV systems at Arizona State University, and is currently being integrated into an Arizona smart city environment for deployment.
ContributorsMian, Sami (Author) / Panchanathan, Sethuraman (Thesis advisor) / Berman, Spring (Committee member) / Yang, Yezhou (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created2018
Description
The problem of modeling and controlling the distribution of a multi-agent system has recently evolved into an interdisciplinary effort. When the agent population is very large, i.e., at least on the order of hundreds of agents, it is important that techniques for analyzing and controlling the system scale well with the number of agents. One scalable approach to characterizing the behavior of a multi-agent system is possible when the agents' states evolve over time according to a Markov process. In this case, the density of agents over space and time is governed by a set of difference or differential equations known as a mean-field model, whose parameters determine the stochastic control policies of the individual agents. These models often have the advantage of being easier to analyze than the individual agent dynamics. Mean-field models have been used to describe the behavior of chemical reaction networks, biological collectives such as social insect colonies, and more recently, swarms of robots that, like natural swarms, consist of hundreds or thousands of agents that are individually limited in capability but can coordinate to achieve a particular collective goal.

This dissertation presents a control-theoretic analysis of mean-field models for which the agent dynamics are governed by either a continuous-time Markov chain on an arbitrary state space, or a discrete-time Markov chain on a continuous state space. Three main problems are investigated. First, the problem of stabilization is addressed, that is, the design of transition probabilities/rates of the Markov process (the agent control parameters) that make a target distribution, satisfying certain conditions, invariant. Such a control approach could be used to achieve desired multi-agent distributions for spatial coverage and task allocation. However, the convergence of the multi-agent distribution to the designed equilibrium does not imply the convergence of the individual agents to fixed states. To prevent the agents from continuing to transition between states once the target distribution is reached, and thus potentially waste energy, the second problem addressed within this dissertation is the construction of feedback control laws that prevent agents from transitioning once the equilibrium distribution is reached. The third problem addressed is the computation of optimized transition probabilities/rates that maximize the speed at which the system converges to the target distribution.
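One standard way to realize the stabilization problem described above, for the discrete-time case, is the Metropolis-Hastings construction: choose agent transition probabilities on the graph so that the target distribution is invariant, then let the mean-field model x_{k+1} = P^T x_k carry the swarm distribution to it. The graph and target below are illustrative, and this is the textbook construction rather than the dissertation's own design.

```python
import numpy as np

def metropolis_chain(adj, target):
    """Build a row-stochastic transition matrix on the given graph whose
    invariant distribution is `target` (Metropolis-Hastings with a
    uniform-over-neighbors proposal)."""
    n = len(target)
    deg = adj.sum(axis=1)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                # propose j w.p. 1/deg[i]; accept w.p. min(1, pi_j d_i / (pi_i d_j))
                P[i, j] = (1.0 / deg[i]) * min(1.0, target[j] * deg[i] / (target[i] * deg[j]))
        P[i, i] = 1.0 - P[i].sum()   # rejected proposals become self-loops
    return P

adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])                # 4-cycle graph (illustrative)
target = np.array([0.4, 0.1, 0.4, 0.1])       # desired swarm distribution
P = metropolis_chain(adj, target)

# mean-field model: population fractions evolve as x_{k+1} = P^T x_k,
# independent of the number and identities of the agents
x = np.full(4, 0.25)                          # start from a uniform swarm
for _ in range(200):
    x = P.T @ x
```

The detailed-balance condition pi_i P_ij = pi_j P_ji holds by construction, so the target is invariant; irreducibility and the self-loops then give convergence from any initial distribution.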
ContributorsBiswal, Shiba (Author) / Berman, Spring (Thesis advisor) / Fainekos, Georgios (Committee member) / Lanchier, Nicolas (Committee member) / Mignolet, Marc (Committee member) / Peet, Matthew (Committee member) / Arizona State University (Publisher)
Created2020
Description
As technological advancements in silicon, sensors, and actuation continue, the development of robotic swarms is shifting from the domain of science fiction to reality. Many swarm applications, such as environmental monitoring, precision agriculture, disaster response, and lunar prospecting, will require controlling numerous robots with limited capabilities and information to redistribute among multiple states, such as spatial locations or tasks. A scalable control approach is to program the robots with stochastic control policies such that the robot population in each state evolves according to a mean-field model, which is independent of the number and identities of the robots. Using this model, the control policies can be designed to stabilize the swarm to the target distribution. To avoid the need to reprogram the robots for different target distributions, the robot control policies can be defined to depend only on the presence of a “leader” agent, whose control policy is designed to guide the swarm to a particular distribution. This dissertation presents a novel deep reinforcement learning (deep RL) approach to designing control policies that redistribute a swarm as quickly as possible over a strongly connected graph, according to a mean-field model in the form of the discrete-time Kolmogorov forward equation. In the leader-based strategies, the leader determines its next action based on its observations of robot populations and shepherds the swarm over the graph by probabilistically repelling nearby robots. The scalability of this approach with the swarm size is demonstrated with leader control policies that are designed using two tabular Temporal-Difference learning algorithms, trained on a discretization of the swarm distribution. 
To improve the scalability of the approach with robot population and graph size, control policies for both leader-based and leaderless strategies are designed using an actor-critic deep RL method that is trained on the swarm distribution predicted by the mean-field model. In the leaderless strategy, the robots’ control policies depend only on their local measurements of nearby robot populations. The control approaches are validated for different graph and swarm sizes in numerical simulations, 3D robot simulations, and experiments on a multi-robot testbed.
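The scalability idea underlying these strategies, that the empirical distribution of a swarm running a fixed stochastic policy tracks the mean-field prediction regardless of population size, can be illustrated with a toy chain. The three-state graph and transition probabilities below are invented for this sketch and are not the dissertation's trained policies.

```python
import random

# Every robot runs the same stochastic control policy, so the fraction
# of robots in each state approaches the chain's stationary distribution
# no matter how many robots there are.
POLICY = {0: [(0, 0.6), (1, 0.4)],
          1: [(0, 0.2), (1, 0.5), (2, 0.3)],
          2: [(1, 0.5), (2, 0.5)]}

def step(state, rng):
    """Sample the robot's next state from its stochastic policy."""
    r, acc = rng.random(), 0.0
    for nxt, p in POLICY[state]:
        acc += p
        if r < acc:
            return nxt
    return state

def empirical_distribution(num_robots, steps=100, seed=0):
    rng = random.Random(seed)
    states = [0] * num_robots            # every robot starts in state 0
    for _ in range(steps):
        states = [step(s, rng) for s in states]
    return [states.count(k) / num_robots for k in range(3)]
```

Solving the balance equations for this chain gives the stationary distribution (1, 2, 1.2)/4.2; larger swarms produce empirical distributions closer to it, while the mean-field prediction itself never changes with swarm size.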
ContributorsKakish, Zahi Mousa (Author) / Berman, Spring (Thesis advisor) / Yong, Sze Zheng (Committee member) / Marvi, Hamid (Committee member) / Pavlic, Theodore (Committee member) / Pratt, Stephen (Committee member) / Ben Amor, Hani (Committee member) / Arizona State University (Publisher)
Created2021