Matching Items (254)
Description
Despite decades of incremental improvement, academic planning solutions see relatively little use in many industrial domains, even though planning paradigms are directly relevant to those problems. This work identifies four shortfalls of existing academic solutions that contribute to this lack of adoption.

To address these shortfalls, this work defines model-independent semantics for planning and introduces an extensible planning library. The library is shown to produce feasible results on an existing benchmark domain, to overcome the usual modeling limitations of traditional planners, and to accommodate domain-dependent knowledge about the problem structure within the planning process.
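As a rough illustration of what an extensible, model-independent planning interface might look like, here is a minimal sketch. The `Domain` class, `plan` function, and pruning hook are hypothetical names invented for this example and are not the thesis's actual API; the pruning predicate stands in for the "domain-dependent knowledge" the abstract mentions.

```python
# Hypothetical sketch: a planner that works only through a model-independent
# successor interface, with an optional domain-knowledge pruning hook.
from collections import deque
from typing import Callable, Hashable, Iterable, Optional

State = Hashable
Action = str

class Domain:
    """A planning domain: successor generation plus optional domain knowledge."""
    def __init__(self,
                 successors: Callable[[State], Iterable[tuple[Action, State]]],
                 prune: Optional[Callable[[State], bool]] = None):
        self.successors = successors
        # Domain-dependent knowledge: a predicate that discards unpromising states.
        self.prune = prune or (lambda s: False)

def plan(domain: Domain, start: State,
         is_goal: Callable[[State], bool]) -> Optional[list[Action]]:
    """Breadth-first search over the domain's model-independent interface."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if is_goal(state):
            return actions
        for action, nxt in domain.successors(state):
            if nxt not in seen and not domain.prune(nxt):
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

# Toy usage: reach 3 from 0 via +1/+2 steps, pruning any state above 3.
dom = Domain(lambda s: [("+1", s + 1), ("+2", s + 2)], prune=lambda s: s > 3)
print(plan(dom, 0, lambda s: s == 3))  # -> ['+1', '+2']
```

Because the planner only ever calls `successors` and `prune`, the same search code runs unchanged on any domain that implements those two callables, which is the spirit of the library design the abstract describes.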
Contributors: Jonas, Michael (Author) / Gaffar, Ashraf (Thesis advisor) / Fainekos, Georgios (Committee member) / Doupe, Adam (Committee member) / Herley, Cormac (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
To ensure system integrity, robots need to proactively avoid any unwanted physical perturbation that may cause damage to the underlying hardware. In this thesis work, we investigate a machine learning approach that allows robots to anticipate impending physical perturbations from perceptual cues. In contrast to other approaches that require knowledge about sources of perturbation to be encoded before deployment, our method is based on experiential learning. Robots learn to associate visual cues with subsequent physical perturbations and contacts. These extracted visual cues are then used to predict potential future perturbations acting on the robot. To this end, we introduce a novel deep network architecture that combines multiple sub-networks for dealing with robot dynamics and perceptual input from the environment. We present a self-supervised approach for training the system that does not require any labeling of training data. Extensive experiments in a human-robot interaction task show that a robot can learn to predict physical contact by a human interaction partner without any prior information or labeling. Furthermore, the network successfully predicts physical contact from a depth stream, from traditional video, or from both modalities combined.
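The multimodal fusion idea can be caricatured in a few lines. Everything below is an illustrative assumption -- the branch dimensions, the single random linear layer per sub-network, and all function names are invented for this sketch and do not reflect the thesis's actual deep architecture.

```python
# Hypothetical sketch: two modality sub-networks feeding a fused contact head.
import math
import random

random.seed(0)

def encoder(dim_in, dim_out):
    # Stand-in for a learned sub-network: a fixed random linear map plus tanh.
    W = [[random.gauss(0.0, 0.1) for _ in range(dim_out)] for _ in range(dim_in)]
    cols = list(zip(*W))  # dim_out columns of length dim_in
    return lambda x: [math.tanh(sum(xi * wij for xi, wij in zip(x, col)))
                      for col in cols]

depth_enc = encoder(8, 4)    # depth-stream branch
video_enc = encoder(12, 4)   # video branch
w_head = [random.gauss(0.0, 0.1) for _ in range(8)]  # fused prediction head

def predict_contact(depth_frame, video_frame):
    # Concatenate both modality encodings, then apply a sigmoid head.
    fused = depth_enc(depth_frame) + video_enc(video_frame)
    z = sum(f * w for f, w in zip(fused, w_head))
    return 1.0 / (1.0 + math.exp(-z))

# Self-supervision: training would regress this output against whether contact
# actually occurred shortly after the frame, so no manual labels are needed.
p = predict_contact([0.5] * 8, [0.1] * 12)
print(0.0 < p < 1.0)  # True
```

Because either branch can be fed alone (with the other zeroed) or both together, the same head supports the depth-only, video-only, and combined prediction modes described above.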
Contributors: Sur, Indranil (Author) / Amor, Heni B (Thesis advisor) / Fainekos, Georgios (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
This dissertation addresses two pivotal challenges within the US technology industry: racial equity and the rise of artificial intelligence (AI). It investigates whether the integration of AI in human resources (HR) can foster inclusivity and diversity for Black women in the tech workforce. Despite numerous diversity initiatives, Black women account for less than 2% of the US tech workforce, symbolizing a persistent challenge. Furthermore, AI often perpetuates structural biases, magnifying workforce inequities. This dissertation employs intersectionality, responsible innovation, and algorithmic bias theories to amplify the voices of Black women. It poses three critical questions: 1) How have Black women's HR experiences influenced diversity issues in the tech industry? 2) How is AI in HR developed considering the experiences of Black women? 3) What measures can enhance the role of AI in HR to promote diversity without deepening inequalities? Key findings reveal that current HR practices do not adequately serve Black women, driven by competing corporate priorities. Solutions should concentrate on recruiting, developing, promoting, and retaining Black women. Black women acknowledge the potential of AI to either reinforce or mitigate biases, yet they express apprehension about the development and implementation of AI in HR, which often lacks Black women's input. For AI to facilitate positive diversity results, companies must actively involve Black women in its development. This entails understanding the problems Black women face, using insights to design AI that addresses these issues and supports Black women's success, and engaging Black women in the development and assessment of AI implementations in HR, thereby enhancing accountability for diversity outcomes.
Contributors: Whye, Barbara Hickman (Author) / Miller, Clark (Thesis advisor) / Richter, Jennifer (Committee member) / Scott, Kimberly (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
In today’s world, artificial intelligence (AI) is increasingly becoming a part of our daily lives. For this integration to be successful, it’s essential that AI systems can effectively interact with humans. This means making the AI system’s behavior more understandable to users and allowing users to customize the system’s behavior to match their preferences. However, there are significant challenges associated with achieving this goal. One major challenge is that modern AI systems, which have shown great success, often make decisions based on learned representations. These representations, typically acquired through deep learning techniques, are usually inscrutable to users, inhibiting the explainability and customizability of the system. Additionally, since each user may have unique preferences and expertise, the interaction process must be tailored to each individual. This thesis addresses the challenges that arise in human-AI interaction scenarios, especially in cases where the AI system is tasked with solving sequential decision-making problems. It does so by introducing a framework that uses a symbolic interface to facilitate communication between humans and AI agents. This shared vocabulary acts as a bridge, enabling the AI agent to provide explanations in terms that are easy for humans to understand and allowing users to express their preferences in this common language. To address the need for personalization, the framework provides mechanisms that allow users to expand the shared vocabulary, enabling them to express their unique preferences effectively. Moreover, the AI systems are designed to take the user’s background knowledge into account when generating explanations tailored to their specific needs.
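One way to picture the shared-vocabulary idea is as a set of user-named predicates over the agent's otherwise inscrutable internal state. The class, method names, and toy state format below are hypothetical illustrations for this listing, not the thesis's actual framework.

```python
# Hypothetical sketch: a symbolic vocabulary shared between user and agent.
class SharedVocabulary:
    def __init__(self):
        self._concepts = {}  # concept name -> predicate over the agent's state

    def add_concept(self, name, predicate):
        """Vocabulary expansion: the user teaches the agent a new symbol."""
        self._concepts[name] = predicate

    def explain(self, state):
        """Describe an internal state using only the shared symbols."""
        return [name for name, pred in self._concepts.items() if pred(state)]

vocab = SharedVocabulary()
vocab.add_concept("near_goal", lambda s: s["distance"] < 1.0)
vocab.add_concept("low_battery", lambda s: s["battery"] < 0.2)

state = {"distance": 0.5, "battery": 0.9}
print(vocab.explain(state))  # -> ['near_goal']
```

Because explanations are phrased only in concepts the user has defined, adding a new concept simultaneously extends what the user can ask about and what the agent can say.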
Contributors: Soni, Utkarsh (Author) / Kambhampati, Subbarao (Thesis advisor) / Baral, Chitta (Committee member) / Bryan, Chris (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2024