Matching Items (8)
Description
Most data cleaning systems aim to go from a given deterministic dirty database to another deterministic but clean database. Such an enterprise presupposes that it is in fact possible for the cleaning process to uniquely recover the clean version of each dirty data tuple. This is not possible in many cases, where the most a cleaning system can do is to generate a (hopefully small) set of clean candidates for each dirty tuple. When the cleaning system is required to output a deterministic database, it is forced to pick one clean candidate (say the "most likely" candidate) per tuple. Such an approach can lead to loss of information. For example, consider a situation where there are three equally likely clean candidates for a dirty tuple; committing to just one of them discards the other two, equally plausible, repairs. An appealing alternative that avoids such information loss is to abandon the requirement that the output database be deterministic. In other words, even though the input (dirty) database is deterministic, I allow the reconstructed database to be probabilistic. Although such an approach does avoid the information loss, it also brings forth several challenges. For example, how many alternatives should be kept per tuple in the reconstructed database? Maintaining too many alternatives increases the size of the reconstructed database, and hence the query processing time. Second, while processing queries on the probabilistic database may well increase recall, how does it affect precision? In this thesis, I investigate these questions. My investigation is done in the context of a data cleaning system called BayesWipe, which can produce multiple clean candidates for each dirty tuple, along with the probability that each is the correct cleaned version. I represent these alternatives as tuples in a tuple-disjoint probabilistic database, and use the Mystiq system to process queries on it. This probabilistic reconstruction (called BayesWipe-PDB) is compared to a deterministic reconstruction (called BayesWipe-DET), where the most likely clean candidate for each tuple is chosen and the rest of the alternatives are discarded.
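As a rough illustration of the contrast between the two reconstructions described above, the sketch below builds a single tuple-disjoint block with three clean candidates and answers one selection query over it; the attribute names, candidate tuples, and probabilities are invented for the example and are not from the thesis.

```python
# Toy tuple-disjoint probabilistic reconstruction versus a deterministic one.
# All candidate tuples and probabilities below are made up for illustration.

# Clean candidates produced for one dirty tuple, with their probabilities.
candidates = [
    ({"make": "Honda",   "model": "Civic"},  0.34),
    ({"make": "Honda",   "model": "Accord"}, 0.33),
    ({"make": "Hyundai", "model": "Accent"}, 0.33),
]

# BayesWipe-PDB style: keep every alternative as one disjoint probabilistic block.
pdb_block = candidates

# BayesWipe-DET style: keep only the most likely candidate, discard the rest.
det_tuple, _ = max(candidates, key=lambda tp: tp[1])

# Answering the query "make = Honda" over both reconstructions.
query = lambda t: t["make"] == "Honda"
pdb_prob = sum(p for t, p in pdb_block if query(t))  # 0.67: both Honda candidates count
det_hit = query(det_tuple)                           # True, but the other alternatives are lost
print(pdb_prob, det_hit)
```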
Contributors: Rihan, Preet Inder Singh (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Recent efforts in data cleaning have focused mostly on problems like data deduplication, record matching, and data standardization; few of them address fixing incorrect attribute values in tuples. Correcting values in tuples is typically performed by a minimum-cost repair of tuples that violate static constraints like CFDs (which have to be provided by domain experts, or learned from a clean sample of the database). In this thesis, I provide a method for correcting individual attribute values in a structured database using a Bayesian generative model and a statistical error model learned from the noisy database directly, thus avoiding the need for a domain expert or master data. I also show how to efficiently perform consistent query answering using this model over a dirty database, in case write permissions to the database are unavailable. A Map-Reduce architecture to perform this computation in a distributed manner is also shown. I evaluate these methods over both synthetic and real data.
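A minimal sketch of the correction idea (choose the candidate clean value c maximizing P(dirty | c) x P(c)) follows; the prior, the edit-distance error model, and the example values are all invented stand-ins, not the generative and error models learned in the thesis.

```python
# Toy sketch of the correction step described above: pick the candidate clean
# value c maximizing P(dirty | c) * P(c). The prior and the edit-distance error
# model below are invented stand-ins for the learned generative/error models.

def levenshtein(a: str, b: str) -> int:
    """Plain edit distance between two strings."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[m][n]

def error_model(dirty: str, clean: str, p_err: float = 0.1) -> float:
    """P(dirty | clean): probability decays with the number of character edits."""
    return p_err ** levenshtein(dirty, clean)

# Prior over clean values for this attribute, e.g. estimated from the database.
prior = {"Honda": 0.5, "Hyundai": 0.3, "Toyota": 0.2}

dirty_value = "Hnda"
posterior = {c: prior[c] * error_model(dirty_value, c) for c in prior}
best = max(posterior, key=posterior.get)
print(best, posterior)   # 'Honda' wins: one edit away and high prior
```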
Contributors: De, Sushovan (Author) / Kambhampati, Subbarao (Thesis advisor) / Chen, Yi (Committee member) / Candan, K. Selcuk (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
As robotic technology and its various uses grow steadily more complex and ubiquitous, humans are coming into increasing contact with robotic agents. A large portion of such contact is cooperative interaction, where both humans and robots are required to work on the same application towards achieving common goals. These application scenarios are characterized by a need to leverage the strengths of each agent as part of a unified team to reach those common goals. To ensure that the robotic agent is truly a contributing team-member, it must exhibit some degree of autonomy in achieving goals that have been delegated to it. Indeed, a significant portion of the utility of such human-robot teams derives from the delegation of goals to the robot, and autonomy on the part of the robot in achieving those goals. In order to be considered truly autonomous, the robot must be able to make its own plans to achieve the goals assigned to it, with only minimal direction and assistance from the human.

Automated planning provides the solution to this problem -- indeed, one of the main motivations that underpinned the beginnings of the field of automated planning was to provide planning support for Shakey the robot with the STRIPS system. For a long time, however, automated planners suffered from scalability issues that precluded their application to real world, real time robotic systems. Recent decades have seen those issues gradually abate, and fast planning systems are now the norm rather than the exception. However, some of these advances in speedup and scalability have been achieved by ignoring or abstracting out challenges that real world integrated robotic systems must confront.

In this work, the problem of planning for human-robot teaming is introduced. The central idea -- the use of automated planning systems as mediators in such human-robot teaming scenarios -- and the main challenges inspired from real world scenarios that must be addressed in order to make such planning seamless are presented: (i) Goals which can be specified or changed at execution time, after the planning process has completed; (ii) Worlds and scenarios where the state changes dynamically while a previous plan is executing; (iii) Models that are incomplete and can be changed during execution; and (iv) Information about the human agent's plan and intentions that can be used for coordination. These challenges are compounded by the fact that the human-robot team must execute in an open world, rife with dynamic events and other agents; and in a manner that encourages the exchange of information between the human and the robot. As an answer to these challenges, implemented solutions and a fielded prototype that combines all of those solutions into one planning system are discussed. Results from running this prototype in real world scenarios are presented, and extensions to some of the solutions are offered as appropriate.
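A very rough sketch of the monitor-and-replan behavior implied by challenges (i) and (ii) is given below; the State/plan interfaces and the event types are hypothetical and greatly simplified relative to the fielded prototype described in the abstract.

```python
# Hypothetical, highly simplified executive loop: goals and world state can
# change while a plan is executing, so the executive replans when it detects
# either kind of change. None of these interfaces come from the thesis system.
from dataclasses import dataclass, field

@dataclass
class State:
    facts: set = field(default_factory=set)
    goals: set = field(default_factory=set)

def plan(state):
    """Stand-in planner: one 'achieve(g)' action per open goal."""
    return [f"achieve({g})" for g in sorted(state.goals - state.facts)]

def run(state, events):
    current_plan = plan(state)
    for event in events:                  # events arrive during execution
        kind, item = event
        if kind == "new_goal":
            state.goals.add(item)         # (i) goals specified at execution time
        elif kind == "world_change":
            state.facts.add(item)         # (ii) dynamic state changes
        current_plan = plan(state)        # replan against the updated state
        print(f"after {event}: {current_plan}")

run(State(goals={"room_searched"}),
    [("world_change", "door_open"), ("new_goal", "victim_reported")])
```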
Contributors: Talamadupula, Kartik (Author) / Kambhampati, Subbarao (Thesis advisor) / Baral, Chitta (Committee member) / Liu, Huan (Committee member) / Scheutz, Matthias (Committee member) / Smith, David E. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
As the information available to lay users through autonomous data sources continues to increase, mediators become important to ensure that the wealth of information available is tapped effectively. A key challenge that these information mediators need to handle is the varying levels of incompleteness in the underlying databases in terms of missing attribute values. Existing approaches such as Query Processing over Incomplete Autonomous Databases (QPIAD) aim to mine and use Approximate Functional Dependencies (AFDs) to predict and retrieve relevant incomplete tuples. These approaches make independence assumptions about missing values, an assumption that critically hobbles their performance when tuples contain missing values for multiple correlated attributes. In this thesis, I present a principled probabilistic alternative that views an incomplete tuple as defining a distribution over the complete tuples that it stands for. I learn this distribution in terms of Bayes networks. My approach involves mining/"learning" Bayes networks from a sample of the database, and using them to do both imputation (predict a missing value) and query rewriting (retrieve relevant results with incompleteness on the query-constrained attributes, when the data sources are autonomous). I present empirical studies to demonstrate that (i) at higher levels of incompleteness, when multiple attribute values are missing, Bayes networks do provide a significantly higher classification accuracy and (ii) the relevant possible answers retrieved by the queries reformulated using Bayes networks provide higher precision and recall than AFDs while keeping query processing costs manageable.
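As a toy illustration of the imputation half of this approach, the sketch below learns a single conditional distribution from a small sample and uses it to predict a missing value; the attribute names and sample rows are made up, and a real Bayes network would capture far richer structure than one conditional.

```python
# Toy imputation sketch: learn P(missing attribute | correlated attribute) from
# a sample of the database and impute the most probable value. The sample and
# attribute names are invented for illustration.
from collections import Counter, defaultdict

sample = [
    {"make": "Honda",  "model": "Civic",   "body": "sedan"},
    {"make": "Honda",  "model": "Civic",   "body": "coupe"},
    {"make": "Honda",  "model": "Accord",  "body": "sedan"},
    {"make": "Toyota", "model": "Corolla", "body": "sedan"},
]

# Learn a conditional distribution P(make | model) from the sample.
cpd = defaultdict(Counter)
for row in sample:
    cpd[row["model"]][row["make"]] += 1

def impute_make(incomplete_row):
    """Return the most probable 'make' given the observed 'model', with its probability."""
    counts = cpd[incomplete_row["model"]]
    total = sum(counts.values())
    value, count = counts.most_common(1)[0]
    return value, count / total

print(impute_make({"model": "Civic", "body": "sedan"}))  # ('Honda', 1.0)
```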
Contributors: Raghunathan, Rohit (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Humans and robots need to work together as a team to accomplish certain shared goals due to the limitations of current robot capabilities. Human assistance is required because human capabilities are often better suited to certain tasks and complement robot capabilities in many situations. Given the necessity of human-robot teams, it has long been assumed that for the robotic agent to be an effective team member, it must be equipped with automated planning technologies that help it achieve the goals delegated to it by its human teammates, as well as infer its human counterpart's goals so that it can proactively support them. However, there has not been any systematic evaluation of this claim.

In my thesis, I perform a human factors analysis of the effectiveness of such automated planning technologies for remote human-robot teaming. In the first part of my study, I investigate the effectiveness of automated planning in remote human-robot teaming scenarios. In the second part, I investigate the effectiveness of a proactive robot assistant in remote human-robot teaming scenarios.

Both investigations are conducted in a simulated urban search and rescue (USAR) scenario where the human-robot teams are deployed during the early phases of an emergency response to explore all areas of the disaster scene. Through both studies, I evaluate how effective automated planning technology is in helping human-robot teams move closer to human-human teams. I utilize both objective measures (such as accuracy and time spent on primary and secondary tasks, Robot Attention Demand, etc.) and a set of subjective Likert-scale questions (on situation awareness, immediacy, etc.) to investigate the trade-offs between different types of remote human-robot teams. The results from both studies suggest that intelligent robots with automated planning capability and proactive support ability are welcomed in general.
Contributors: Narayanan, Vignesh (Author) / Kambhampati, Subbarao (Thesis advisor) / Zhang, Yu (Thesis advisor) / Cooke, Nancy J. (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This thesis describes a multi-robot architecture which allows teams of robots to work with humans to complete tasks. The multi-agent architecture was built using Robot Operating System and Python. This architecture was designed modularly, allowing the use of different planners and robots. The system automatically replans when robots connect or disconnect. The system was demonstrated on two real robots, a Fetch and a PeopleBot, by conducting a surveillance task on the fifth floor of the Computer Science building at Arizona State University. The next part of the system includes extensions for teaming with humans. An Android application was created to serve as the interface between the system and human teammates. This application provides a way for the system to communicate with humans in the loop. In addition, it sends location information of the human teammates to the system so that goal recognition can be performed. This goal recognition allows the generation of human-aware plans. This capability was demonstrated in a mock search and rescue scenario using the Fetch to locate a missing teammate.
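The sketch below gives a rough flavor of the connect/disconnect replanning behavior described above, written against rospy since the architecture uses ROS and Python; the topic names, message format, and the simple round-robin plan() are illustrative assumptions, not the thesis's actual interfaces.

```python
# Minimal, hypothetical sketch (ROS 1 / rospy) of the replanning behavior
# described above: when a robot connects or disconnects, the coordinator
# recomputes the task assignment and publishes the new plan.
import rospy
from std_msgs.msg import String

active_robots = set()

def plan(robots):
    """Stand-in for the pluggable planner: assign surveillance waypoints round-robin."""
    waypoints = ["corridor_A", "corridor_B", "lab_5", "lounge"]
    return {r: waypoints[i % len(waypoints)] for i, r in enumerate(sorted(robots))}

def on_robot_event(msg):
    # Assumed payload format like "connect:fetch" or "disconnect:peoplebot".
    event, name = msg.data.split(":", 1)
    (active_robots.add if event == "connect" else active_robots.discard)(name)
    assignment = plan(active_robots)
    plan_pub.publish(String(data=str(assignment)))
    rospy.loginfo("Replanned for %s: %s", sorted(active_robots), assignment)

if __name__ == "__main__":
    rospy.init_node("team_coordinator")
    plan_pub = rospy.Publisher("/team/plan", String, queue_size=1)
    rospy.Subscriber("/team/robot_events", String, on_robot_event)
    rospy.spin()
```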
Contributors: Saba, Gabriel Christer (Author) / Kambhampati, Subbarao (Thesis director) / Doupé, Adam (Committee member) / Chakraborti, Tathagata (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2017-05
Description
In this work, I propose to bridge the gap between human users and adaptive control of robotic systems. The goal is to enable robots to consider user feedback and adjust their behaviors. A critical challenge in designing such systems is that users are often non-experts, with limited knowledge about the robot's hardware and dynamics. In the domain of human-robot interaction, there exist different modalities for conveying information about the desired behavior of the robot, the most commonly used being demonstrations and preferences. While it is challenging for non-experts to provide demonstrations of robot behavior, works that consider preferences expressed as trajectory rankings tend to elicit noisy and possibly conflicting information from users, which leads to slow adaptation or system failures. The end user can be expected to become familiar with the dynamics, and how they relate to their desired objectives, through repeated interactions with the system. However, due to inadequate knowledge about the system dynamics, the user can be expected to find it challenging to provide feedback on all dimensions of the system's behavior at all times. Thus, the key innovation of this work is to let users provide partial preferences, instead of the completely specified preferences required by traditional methods that learn from user preferences. In particular, I consider partial preferences in the form of preferences over plant dynamic parameters, for which I propose Adaptive User Control (AUC) of robotic systems. I leverage the correlations between the observed and hidden parameter preferences to deal with incompleteness. I use a sparse Gaussian Process Latent Variable Model formulation to learn hidden variables that represent the relationships between the observed and hidden preferences over the system parameters. This model is trained using Stochastic Variational Inference with a distributed loss formulation. I evaluate AUC in a custom drone-swarm environment and several domains from the DeepMind control suite. I compare AUC with state-of-the-art preference-based reinforcement learning methods that learn from user preferences. Results show that AUC outperforms the baselines substantially in terms of sample and feedback complexity.
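As a heavily simplified illustration of how correlations let partial preferences stand in for complete ones, the sketch below conditions a single joint Gaussian over plant parameters on the one parameter the user did specify; the parameter names and numbers are invented, and the thesis's sparse GPLVM trained with stochastic variational inference is far richer than this.

```python
# Simplified stand-in for the correlation idea above: infer preferences over
# hidden plant parameters from the one the user actually specified, using
# standard Gaussian conditioning. All parameters and numbers are invented.
import numpy as np

# Joint model over (mass, max_thrust, damping) assumed learned from prior interactions.
mu = np.array([1.0, 4.0, 0.5])
cov = np.array([[0.04, 0.05, 0.01],
                [0.05, 0.25, 0.03],
                [0.01, 0.03, 0.02]])

# The user only expresses a preference over the observed parameter: mass = 1.2.
obs_idx, hid_idx = [0], [1, 2]
x_obs = np.array([1.2])

# Gaussian conditioning: E[hidden | observed].
S_oo = cov[np.ix_(obs_idx, obs_idx)]
S_ho = cov[np.ix_(hid_idx, obs_idx)]
hidden_mean = mu[hid_idx] + S_ho @ np.linalg.solve(S_oo, x_obs - mu[obs_idx])
print(dict(zip(["max_thrust", "damping"], hidden_mean.round(3))))
```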
Contributors: Biswas, Upasana (Author) / Zhang, Yu (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Berman, Spring (Committee member) / Liu, Lantao (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Imitation learning is a promising methodology for teaching robots how to physically interact and collaborate with human partners. However, successful interaction requires complex coordination in time and space, i.e., knowing what to do as well as when to do it. This dissertation introduces Bayesian Interaction Primitives, a probabilistic imitation learning framework which establishes a conceptual and theoretical relationship between human-robot interaction (HRI) and simultaneous localization and mapping. In particular, it is established that HRI can be viewed through the lens of recursive filtering in time and space. In turn, this relationship allows one to leverage techniques from an existing, mature field and develop a powerful new formulation which enables multimodal spatiotemporal inference in collaborative settings involving two or more agents. Through the development of exact and approximate variations of this method, it is shown in this work that it is possible to learn complex real-world interactions in a wide variety of settings, including tasks such as handshaking, cooperative manipulation, catching, hugging, and more.
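A crude numerical sketch of the "recursive filtering" view described above follows: a joint Gaussian belief over human and robot trajectory weights is refined with each partial observation of the human, and the robot's weights are read off the posterior. The dimensions, coupling values, noise level, and the omission of phase (timing) estimation are all simplifications invented for the example, not the Bayesian Interaction Primitives model itself.

```python
# Simplified Kalman-style update over a joint belief in (human, robot) basis
# weights, as a stand-in for recursive filtering in collaborative settings.
import numpy as np

D_H, D_R = 5, 5
mu = np.zeros(D_H + D_R)                       # prior mean over joint weights
Sigma = np.eye(D_H + D_R)
for i in range(D_H):                           # couple human and robot weight blocks
    Sigma[i, D_H + i] = Sigma[D_H + i, i] = 0.6
R = 0.05                                       # observation noise variance

def update(mu, Sigma, phi, y):
    """One filter step for a scalar observation y = phi . w_human + noise."""
    H = np.concatenate([phi, np.zeros(D_R)])[None, :]   # only human weights are observed
    S = (H @ Sigma @ H.T).item() + R
    K = Sigma @ H.T / S
    mu = mu + (K * (y - (H @ mu).item())).ravel()
    Sigma = Sigma - K @ H @ Sigma
    return mu, Sigma

# Each incoming sample of the human's motion refines the belief; the robot's
# response comes from the posterior mean of its own weight block.
phi_t = np.array([0.1, 0.3, 0.4, 0.15, 0.05])  # basis activations at the current time
mu, Sigma = update(mu, Sigma, phi_t, y=0.8)
print(mu[D_H:])                                # inferred robot weights
```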
Contributors: Campbell, Joseph (Author) / Ben Amor, Heni (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Yamane, Katsu (Committee member) / Kambhampati, Subbarao (Committee member) / Arizona State University (Publisher)
Created: 2021