Matching Items (20)

Description
Teamwork and project management (TPM) tools are important components of sustainability science curricula designed using problem- and project-based learning (PPBL). Tools are additional materials, beyond lectures, readings, and assignments, that structure and facilitate students' learning; they can enhance student teams' ability to complete projects and achieve learning outcomes and, if instructors can find appropriate existing tools, can reduce the time needed for class design and preparation. This research uses a case study approach to evaluate the effectiveness of five TPM tools in two Arizona State University (ASU) sustainability classes: an introductory (100-level) class and a capstone (400-level) class. Data were collected from student evaluations and instructor observations in both classes during Spring 2013 and qualitatively analyzed to identify patterns in tool use and effectiveness. Results suggest how instructors might improve tool effectiveness in other sustainability classes. Work plans and meeting agendas were the most effective TPM tools in the 100-level class, while work plans and codes of collaboration were most effective at the 400 level. Common factors in tool effectiveness include active use and integration of tools into class activities. Suggestions for improving tool effectiveness at both levels include introducing tools earlier in the course, incorporating tools into activities, and helping students link a tool's value to sustainability problem-solving competence. Polling students on prior use and incorporating tool use into project assignments may increase 100-level tool effectiveness; at the 400 level, improvements may be achieved by introducing tools earlier and coaching students to select, find, and develop relevant tools.
Contributors: Trippel, Dorothy (Author) / Redman, Charles L. (Thesis advisor) / Pijawka, K. David (Committee member) / Walters, Molina (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Traditional Reinforcement Learning (RL) learns policies with respect to the reward available from the environment, but learning in a complex domain sometimes requires wisdom that comes from a wide range of experience. In behavior-based robotics, it is observed that a complex behavior can be described by a combination of simpler behaviors. It is therefore tempting to combine simpler behaviors in a meaningful way to tailor the complex combination. Such an approach would enable faster learning and modular design of behaviors. Complex behaviors can be combined with other behaviors to create even more advanced behaviors, resulting in a rich set of possibilities. As in RL, the combined behavior can keep evolving by interacting with the environment. The requirement of this method is to specify a reasonable set of simple behaviors. In this research, I present an algorithm that combines behaviors such that the resulting behavior has characteristics of each individual behavior. This approach is inspired by behavior-based robotics, such as the subsumption architecture and motor schema-based design. The combination algorithm outputs n weights used to combine the behaviors linearly. The weights are state dependent and change dynamically at every step in an episode. This idea is tested on discrete and continuous environments such as OpenAI's "Lunar Lander" and "Bipedal Walker". Results are compared with related approaches such as multi-objective RL, hierarchical RL, transfer learning, and basic RL. The results show that combining behaviors is a novel way of learning that helps the agent achieve the required characteristics. A combination is learned for a given state, so the agent learns faster and more efficiently than with similar approaches. The agent demonstrates characteristics of multiple behaviors, which helps it learn and adapt to the environment. Future directions are also suggested as possible extensions to this research.
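
To make the core idea concrete, the following is a minimal Python sketch of a state-dependent linear combination of behavior policies. It is an illustrative assumption, not the thesis implementation; the names behaviors and weight_net are invented for the example.

# Minimal sketch (assumed, not the thesis code): combine n simple behavior
# policies with state-dependent weights recomputed at every step.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

class CombinedPolicy:
    def __init__(self, behaviors, weight_net):
        self.behaviors = behaviors      # list of callables: state -> action
        self.weight_net = weight_net    # callable: state -> n raw scores

    def act(self, state):
        # One weight per behavior; the weights depend on the current state.
        w = softmax(self.weight_net(state))
        actions = np.stack([b(state) for b in self.behaviors])
        # Linear combination of the candidate actions.
        return np.tensordot(w, actions, axes=1)

# Toy usage with two hand-written behaviors on a 2-D state.
go_left = lambda s: np.array([-1.0, 0.0])
go_up   = lambda s: np.array([0.0, 1.0])
scores  = lambda s: np.array([s[0], s[1]])   # stand-in for a learned network
policy = CombinedPolicy([go_left, go_up], scores)
print(policy.act(np.array([0.2, 0.8])))

Because the weights are re-evaluated at each step, the resulting behavior can shift smoothly between its constituents as the state changes, which is the property the abstract describes.
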
Contributors: Vora, Kevin Jatin (Author) / Zhang, Yu (Thesis advisor) / Yang, Yezhou (Committee member) / Praharaj, Sarbeswar (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Multiple-robotic-arm collaboration involves controlling multiple robotic arms so that they collaborate on the same task. During the collaboration, the agent is required to avoid all possible collisions between each part of the robotic arms. Thus, incentivizing collaboration and preventing collisions are the two principles the agent follows during the training process. Nowadays, more and more applications, both in industry and in daily life, require at least two arms rather than a single arm. A dual-arm robot satisfies the needs of many more types of tasks, such as folding clothes at home, making a hamburger on a grill, or picking and placing a product in a warehouse. The applications in this thesis all concern object pushing: the focus is on training the agent to learn to push an object away as far as possible. Reinforcement Learning (RL), a type of Machine Learning (ML), is utilized to train the agent to generate optimal actions. Deep Deterministic Policy Gradient (DDPG) and Hindsight Experience Replay (HER) are the two RL methods used in this thesis.
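
Since the abstract names HER, here is a minimal Python sketch of the hindsight relabeling step for a pushing task. It is an assumed illustration, not the thesis code; reward_fn and the episode layout are invented for the example.

# Minimal sketch (assumption): Hindsight Experience Replay relabels failed
# pushing episodes with the goal the object actually reached, turning them
# into useful training data under a sparse reward.
import numpy as np

def relabel_with_hindsight(episode, reward_fn):
    """episode: list of (state, action, achieved_goal) tuples."""
    final_goal = episode[-1][2]            # where the object actually ended up
    relabeled = []
    for state, action, achieved in episode:
        # Pretend the final achieved position was the intended goal all along.
        reward = reward_fn(achieved, final_goal)
        relabeled.append((state, action, final_goal, reward))
    return relabeled

# Sparse reward typical of pushing tasks: success if within a tolerance.
reward_fn = lambda achieved, goal: 0.0 if np.linalg.norm(achieved - goal) < 0.05 else -1.0

episode = [(np.zeros(4), np.ones(2), np.array([0.1 * t, 0.0])) for t in range(5)]
for transition in relabel_with_hindsight(episode, reward_fn):
    print(transition[3])

The relabeled transitions would then be stored in the replay buffer consumed by an off-policy learner such as DDPG.
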
Contributors: Lin, Steve (Author) / Ben Amor, Hani (Thesis advisor) / Redkar, Sangram (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Museums have long been known as an exciting, educational field trip destination for teachers and students; however, they have the potential to be more. Aside from field trips, some museums offer a range of resources for teachers, including professional development sessions. This study followed a sample of classroom teachers as they completed a three-part workshop on Project Based Learning in order to determine in what ways museum-based professional development changes (a) teacher perceptions of museum resources and (b) teacher utilization of museum resources within their classrooms, as well as which aspects of the museum-designed professional development experience teachers found most effective in impacting their own teaching. Experiential Learning Theory and an Awareness, Interest, Desire, Action (AIDA) effect model were used to evaluate how perceptions changed before and after the museum-designed experience. Overall, teachers' trust in the informal educators and their resources increased, as did teacher utilization of the resources since participation. Among the aspects that teachers reported as most effective was their willingness to engage because of their overall enjoyment of the experience. Teachers also emphasized that the workshop sessions enhanced their current teaching practices rather than simply replacing them.
Contributors: Gallagher Schromm, Lisa (Author) / Rotheram-Fuller, Erin (Thesis advisor) / Toon, Richard (Committee member) / Walters, Molina (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
In this thesis work, a novel learning approach to the problem of controlling a quadcopter (drone) swarm is explored. To deal with large swarm sizes, swarm control is often achieved in a distributed fashion by combining different behaviors such that each behavior implements some desired swarm characteristic, such as avoiding obstacles or staying close to neighbors. One common approach in distributed swarm control uses potential fields. A limitation of this approach is that the potential fields often depend statically on a set of control parameters that are manually specified a priori. This work introduces Dynamic Potential Fields for flexible swarm control. These potential fields are modulated by a set of dynamic control parameters (DCPs) that can change under different environment situations. Since the focus is only on these DCPs, the learning problem is simplified and becomes feasible for practical use. The approach uses Soft Actor-Critic (SAC), where the actor only determines how to modify the DCPs in the current situation, resulting in more flexible swarm control. The results show that the DCP approach allows the drones to better traverse environments with obstacles compared to several state-of-the-art swarm control methods that use a fixed set of control parameters. The approach also obtained a higher safety score, a measure commonly used to assess swarm behavior. A basic reinforcement learning approach is included in the comparison to demonstrate faster convergence. Finally, an ablation study is conducted to validate the design of the approach.
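
The contrast between static and dynamic control parameters can be illustrated with a short Python sketch. This is an assumed example, not the thesis code; the gain names k_coh and k_rep and the stand-in dcp_actor are invented.

# Minimal sketch (assumption): a potential-field controller whose gains are
# supplied per step by a learned actor (the DCP idea) instead of being
# fixed constants chosen a priori.
import numpy as np

def potential_field_velocity(pos, neighbors, obstacles, k_coh, k_rep):
    """Attractive cohesion toward neighbors, repulsion from obstacles."""
    v = np.zeros(2)
    if len(neighbors):
        v += k_coh * (np.mean(neighbors, axis=0) - pos)   # stay near the swarm
    for obs in obstacles:
        d = pos - obs
        dist = np.linalg.norm(d) + 1e-6
        v += k_rep * d / dist**3                          # push away from obstacles
    return v

# With static parameters, k_coh and k_rep are hand-tuned constants; with
# dynamic control parameters they come from a policy evaluated every step:
dcp_actor = lambda obs: (1.0, 0.5)   # stand-in for a trained SAC actor
pos = np.array([0.0, 0.0])
k_coh, k_rep = dcp_actor(pos)
print(potential_field_velocity(pos, [np.array([1.0, 1.0])], [np.array([0.5, -0.2])], k_coh, k_rep))

Because only the low-dimensional gains are learned rather than the full control law, the learning problem stays small, which matches the abstract's feasibility argument.
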
Contributors: Ferraro, Calvin Shores (Author) / Zhang, Yu (Thesis advisor) / Ben Amor, Hani (Committee member) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Natural language plays a crucial role in human-robot interaction, as it is the common ground where human beings and robots can communicate and understand each other. However, most work at the intersection of natural language and robotics concerns generating robot actions from a natural language command, which is a unidirectional form of communication. This work focuses on the other direction: an approach that allows a robot to describe its actions from images and joint sequences sampled during the robot's task. The importance of this work is that it utilizes multiple modalities, namely the start and end images from the robot task environment and the joint trajectories of the robot arms. Fusing different modalities is not just about combining the data but about knowing what information to extract from which data sources, such that the language description represents the state of the manipulator and the environment in which it performs the task. Experimental results in various simulated robot environments demonstrate that utilizing multiple modalities improves the accuracy of the natural language description, and that efficiently fusing the modalities is crucial to generating such descriptions by harnessing most of the various data sources.
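
One plausible shape for such a fusion architecture is sketched below in Python (PyTorch). It is an assumption for illustration only, not the thesis model; the encoder choices and dimensions are invented.

# Minimal sketch (assumption): encode start/end images and the joint
# trajectory separately, concatenate the embeddings, and condition a
# language head on the fused vector.
import torch
import torch.nn as nn

class MultimodalDescriber(nn.Module):
    def __init__(self, img_dim=512, joint_dim=7, hid=128, vocab=1000):
        super().__init__()
        self.img_enc = nn.Linear(img_dim, hid)        # stand-in for a CNN
        self.joint_enc = nn.GRU(joint_dim, hid, batch_first=True)
        self.decoder = nn.Linear(3 * hid, vocab)      # stand-in for an LM head

    def forward(self, start_img, end_img, joints):
        zs = self.img_enc(start_img)                  # task start state
        ze = self.img_enc(end_img)                    # task end state
        _, zj = self.joint_enc(joints)                # trajectory summary
        fused = torch.cat([zs, ze, zj.squeeze(0)], dim=-1)
        return self.decoder(fused)                    # next-token logits

model = MultimodalDescriber()
logits = model(torch.randn(1, 512), torch.randn(1, 512), torch.randn(1, 20, 7))
print(logits.shape)   # torch.Size([1, 1000])

The design point the abstract makes is visible here: each modality gets its own encoder suited to its structure, and the fusion happens on the learned embeddings rather than on raw data.
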
Contributors: Kalirathinam, Kamalesh (Author) / Ben Amor, Heni (Thesis advisor) / Phielipp, Mariano (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
In this work, I propose to bridge the gap between human users and adaptive control of robotic systems. The goal is to enable robots to consider user feedback and adjust their behaviors. A critical challenge in designing such systems is that users are often non-experts, with limited knowledge about the robot's hardware and dynamics. In the domain of human-robot interaction, there exist different modalities for conveying information about the desired behavior of the robot; the most commonly used are demonstrations and preferences. While it is challenging for non-experts to provide demonstrations of robot behavior, works that consider preferences expressed as trajectory rankings lead to users providing noisy and possibly conflicting information, resulting in slow adaptation or system failures. The end user can be expected to become familiar with the dynamics and how they relate to their desired objectives through repeated interactions with the system. However, due to inadequate knowledge of the system dynamics, the user would likely find it challenging to provide feedback on all dimensions of the system's behavior at all times. Thus, the key innovation of this work is to enable users to provide partial preferences instead of the completely specified preferences required by traditional methods that learn from user preferences. In particular, I consider partial preferences in the form of preferences over plant dynamic parameters, for which I propose Adaptive User Control (AUC) of robotic systems. I leverage the correlations between the observed and hidden parameter preferences to deal with incompleteness. I use a sparse Gaussian Process Latent Variable Model formulation to learn hidden variables that represent the relationships between the observed and hidden preferences over the system parameters. This model is trained using Stochastic Variational Inference with a distributed loss formulation. I evaluate AUC in a custom drone-swarm environment and several domains from the DeepMind Control Suite, and compare it with state-of-the-art preference-based reinforcement learning methods that learn from user preferences. Results show that AUC outperforms the baselines substantially in terms of sample and feedback complexity.
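
The key idea of completing a partial preference from correlations can be illustrated without the full GP machinery. The sketch below uses plain Gaussian conditioning as a stand-in; the thesis itself uses a sparse GP latent variable model trained with stochastic variational inference, so everything here is an illustrative assumption.

# Minimal sketch (assumption): given a learned correlation structure among
# parameter preferences, infer the dimensions the user did NOT specify by
# conditioning on the ones they did.
import numpy as np

def complete_preference(mu, cov, observed_idx, observed_vals):
    """Gaussian conditioning: hidden preference dims from observed ones."""
    hidden_idx = [i for i in range(len(mu)) if i not in observed_idx]
    S_oo = cov[np.ix_(observed_idx, observed_idx)]
    S_ho = cov[np.ix_(hidden_idx, observed_idx)]
    delta = observed_vals - mu[observed_idx]
    mu_hidden = mu[hidden_idx] + S_ho @ np.linalg.solve(S_oo, delta)
    return hidden_idx, mu_hidden

# Toy model over 3 plant parameters (e.g. mass, damping, gain) whose
# preferences are correlated; the user only comments on parameter 0.
mu = np.zeros(3)
cov = np.array([[1.0, 0.8, 0.2],
                [0.8, 1.0, 0.1],
                [0.2, 0.1, 1.0]])
print(complete_preference(mu, cov, [0], np.array([1.5])))

The strongly correlated parameter 1 receives a large inferred preference while the weakly correlated parameter 2 barely moves, which is the intuition behind exploiting correlations to deal with incompleteness.
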
Contributors: Biswas, Upasana (Author) / Zhang, Yu (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Berman, Spring (Committee member) / Liu, Lantao (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
As robots become increasingly integrated into human environments, they need to learn how to interact with the objects around them. Many of these objects are articulated, with multiple degrees of freedom (DoF). Multi-DoF objects have complex joints that require specific manipulation orders, but existing methods only consider objects with a single joint. To capture the joint structure and manipulation sequence of any object, I introduce "Object Kinematic State Machines" (OKSMs), a novel representation that models the kinematic constraints and manipulation sequences of multi-DoF objects. I also present Pokenet, a deep neural network architecture that estimates OKSMs from sequences of point cloud data of human demonstrations. I conduct experiments on both simulated and real-world datasets to validate the approach. First, I evaluate the modeling of multi-DoF objects on a simulated dataset, comparing against the current state-of-the-art method. I then assess Pokenet's real-world usability on a dataset collected in my lab, comprising 5,500 data points across 4 objects. Results show that the method can estimate joint parameters of novel multi-DoF objects with over 25% higher accuracy on average than prior methods.
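
To give a feel for what a kinematic state machine over joints might look like as a data structure, here is a minimal Python sketch. It is an assumed illustration of the general concept, not the representation defined in the thesis.

# Minimal sketch (assumption): states are configurations of a multi-DoF
# object and transitions encode which joint may be manipulated next,
# capturing ordering constraints such as "unlatch before opening".
class OKSM:
    def __init__(self, initial_state):
        self.state = initial_state
        self.transitions = {}   # (state, joint) -> next state

    def add_transition(self, state, joint, next_state):
        self.transitions[(state, joint)] = next_state

    def manipulate(self, joint):
        key = (self.state, joint)
        if key not in self.transitions:
            raise ValueError(f"joint {joint!r} cannot move in state {self.state!r}")
        self.state = self.transitions[key]
        return self.state

# A cabinet whose door only opens after the latch is released.
cabinet = OKSM("latched")
cabinet.add_transition("latched", "latch", "unlatched")
cabinet.add_transition("unlatched", "door_hinge", "open")
cabinet.manipulate("latch")
print(cabinet.manipulate("door_hinge"))   # -> "open"

A network like Pokenet would then be tasked with estimating both the joint parameters and this transition structure from observed demonstrations.
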
Contributors: Gupta, Anmol (Author) / Gopalan, Nakul (Thesis advisor) / Zhang, Yu (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
In today's world, robotic technology has become increasingly prevalent across fields such as manufacturing, warehousing, delivery, and household applications. Planning is crucial for robots to solve tasks in such difficult domains. However, most robots rely heavily on humans for the world models that enable planning. Consequently, creating such world models is not only expensive, requiring human experts who understand both the domain and the robot's limitations; the models may also be biased by human embodiment, which can be limiting for robots whose kinematics are not human-like. This thesis answers the fundamental question: can we learn such world models automatically? This research shows that complex world models can be learned directly from unannotated and unlabeled demonstrations containing only the configurations of the robot and the objects in the environment. The core contributions of this thesis are the first known approaches for (i) task and motion planning that explicitly handles stochasticity, (ii) automatically inventing neuro-symbolic state and action abstractions for deterministic and stochastic motion planning, and (iii) automatically inventing relational and interpretable world models in the form of symbolic predicates and actions. The thesis also presents thorough and rigorous empirical experimentation: with experiments in both simulated and real-world settings, it demonstrates the efficacy and robustness of automatically learned world models in overcoming challenges and generalizing beyond the situations encountered during training.
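
For readers unfamiliar with what "symbolic predicates and actions" look like in code, the following Python sketch shows the general shape of such a learned relational model. It is an illustrative assumption, not the thesis system; the predicates and the pick action are invented, with learned classifiers replaced by hand-written stand-ins.

# Minimal sketch (assumption): predicates evaluated on raw configurations,
# plus an abstract action with a precondition and an effect.
def on_table(config, obj):
    return abs(config[obj]["z"] - 0.0) < 0.01     # learned classifier stand-in

def holding(config, obj):
    return config[obj].get("in_gripper", False)

pick = {
    "name": "pick",
    "pre":  lambda c, o: on_table(c, o) and not holding(c, o),
    "eff":  lambda c, o: {**c, o: {**c[o], "z": 0.5, "in_gripper": True}},
}

config = {"cup": {"z": 0.0}}
if pick["pre"](config, "cup"):
    config = pick["eff"](config, "cup")
print(holding(config, "cup"))   # True: the abstract action applied

The interpretability claim in the abstract rests on exactly this kind of structure: once invented, predicates and actions read like a planning domain a human could inspect.
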
Contributors: Shah, Naman (Author) / Srivastava, Siddharth (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Konidaris, George (Committee member) / Speranzon, Alberto (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware": the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I show (1) how the AI agent can leverage the human task model to generate symbiotic behavior, and (2) how introducing the human mental model into the deliberative process of the AI agent allows it to generate explanations for a plan or to resort to explicable plans when explanations are not desired. The latter goes beyond traditional notions of human-aware planning, which typically use the human task model alone, and thus enables a new suite of capabilities for a human-aware AI agent. Finally, I explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop.
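
One way to make the notion of an "explicable plan" concrete is the small Python sketch below: the agent trades its own plan cost against how far the plan deviates from what the human's mental model expects. This is an illustrative assumption about the general idea, not the method defined in the thesis; the mismatch-count distance and the toy plans are invented.

# Minimal sketch (assumption): prefer cheap plans that also resemble the
# plan the human mental model expects, balancing cost and explicability.
def select_plan(candidate_plans, cost, expected_plan, alpha=1.0):
    def plan_distance(a, b):
        # Simple mismatch count as a stand-in for a plan-distance measure.
        return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))
    return min(candidate_plans,
               key=lambda p: cost(p) + alpha * plan_distance(p, expected_plan))

plans = [["unlock", "open", "fetch"], ["smash", "fetch"]]
cost = len                                  # stand-in: cost = plan length
expected = ["unlock", "open", "fetch"]      # from the human mental model
print(select_plan(plans, cost, expected))   # picks the explicable plan

Raising alpha makes the agent conform more closely to human expectations; lowering it lets the agent pursue cheaper but more surprising plans, for which it would then owe the human an explanation.
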
Contributors: Chakraborti, Tathagata (Author) / Kambhampati, Subbarao (Thesis advisor) / Talamadupula, Kartik (Committee member) / Scheutz, Matthias (Committee member) / Ben Amor, Hani (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2018