Matching Items (4)
Description

Allocating tasks across a day's or week's schedule is a notoriously difficult problem, and it becomes far harder in multi-agent settings. A planner, or group of planners, deciding such a task-allocation schedule must have a comprehensive perspective on (1) the entire array of tasks to be scheduled, (2) constraints such as the importance and ordering of tasks, and (3) the individual abilities of the operators. One example of such scheduling is crew scheduling for astronauts who will spend time aboard the International Space Station (ISS). The schedule for the ISS crew is decided before the mission starts: human planners take part in the decision-making process to determine the timing of activities over multiple days for multiple crew members. Given the unpredictability of individual assignments and the limitations of the various operators, arriving at a satisfactory timetable is a challenging task. The objective of the current work is to develop an automated decision assistant that helps human planners arrive at an acceptable task schedule for the crew, while ensuring that the human planners remain in the driver's seat throughout the decision-making process.

The decision assistant uses automated planning technology to assist human planners. The guidelines of Naturalistic Decision Making (NDM) and human-in-the-loop decision making were followed to ensure that the human is always in the driver's seat. The use cases considered are standard situations that arise during decision making in crew scheduling. The effectiveness of the automated decision assistance was evaluated by setting it up for domain experts on a comparable domain: scheduling courses for master's students. The results of the user study evaluating the effectiveness of automated decision support were subsequently published.
ContributorsMishra, Aditya Prasad (Author) / Kambhampati, Subbarao (Thesis advisor) / Chiou, Erin (Committee member) / Demakethepalli Venkateswara, Hemanth Kumar (Committee member) / Arizona State University (Publisher)
Created2019
Description

With the growing prevalence of autonomous vehicles (AVs), it is important to understand the relationship between autonomous vehicles and the other drivers around them. More specifically, how does one's knowledge about AVs affect positive and negative affect towards driving in their presence? Furthermore, how does trust of autonomous vehicles correlate with those emotions? These questions were addressed by conducting a survey measuring participants' positive affect, negative affect, and trust when driving in the presence of autonomous vehicles. Participants were issued a pretest measuring existing knowledge of autonomous vehicles, followed by measures of affect and trust. After completing this pretest portion of the study, participants were given information about how autonomous vehicles work and were then presented with a posttest identical to the pretest. The educational intervention had no effect on positive or negative affect, though there was a positive relationship between positive affect and trust and a negative relationship between negative affect and trust. These findings will inform future research on trust and autonomous vehicles using a test bed developed at Arizona State University, which allows researchers to examine the behavior of multiple participants at the same time and to include autonomous vehicles in studies.
ContributorsMartin, Sterling (Author) / Cooke, Nancy J. (Thesis advisor) / Chiou, Erin (Committee member) / Gray, Robert (Committee member) / Arizona State University (Publisher)
Created2019
Description

Decision support systems aid the human-in-the-loop by enhancing the quality of decisions and the ease of making them in complex decision-making scenarios. In recent years, such systems have been empowered with automated techniques for sequential decision-making or planning tasks to effectively assist and cooperate with the human-in-the-loop. This has received significant recognition in both the automated planning and human-computer interaction (HCI) communities, as such systems connect the key elements of automated planning in decision support to principles of naturalistic decision making in the HCI community. A decision support system, in addition to providing planning support, must be able to provide its end users intuitive explanations for proposed decisions based on specific user queries. Using this as motivation, I consider scenarios where the user questions the system's suggestion by providing alternatives (referred to as foils). In response, I empower existing decision support technologies to engage in an interactive explanatory dialogue with the user and provide contrastive explanations based on user-specified foils to reach a consensus on proposed decisions. Furthermore, the foils specified by the user can be indicative of the user's latent preferences. I use this interpretation to equip existing decision support technologies with three different interaction strategies that utilize the foil to provide revised plan suggestions. Finally, as part of my Master's thesis, I present RADAR-X, an extension of RADAR, a proactive decision support system, that showcases the above capabilities. Further, I present a user-study evaluation that emphasizes the need for contrastive explanations, as well as a computational evaluation of the interaction strategies.
ContributorsValmeekam, Karthik (Author) / Kambhampati, Subbarao (Thesis advisor) / Chiou, Erin (Committee member) / Sengupta, Sailik (Committee member) / Arizona State University (Publisher)
Created2021
Description

Rapid advancements in Artificial Intelligence (AI), Machine Learning, and Deep Learning technologies are widening the playing field for automated decision assistants in healthcare. The field of radiology offers a unique platform for this technology due to its repetitive work structure, its ability to leverage large data sets, and its high potential for clinical and social impact. Several technologies in cancer screening, such as Computer Aided Detection (CAD), have broken the barrier of research into reality through successful outcomes with patient data (Morton, Whaley, Brandt, & Amrami, 2006; Patel et al., 2018). Technologies such as the IBM Medical Sieve are generating excitement over the potential for increased impact through the addition of medical record information ("Medical Sieve Radiology Grand Challenge", 2018). As the capabilities of automation increase and become part of expert decision-making jobs, however, the careful consideration of its integration into human systems is often overlooked. This paper aims to identify how healthcare professionals and systems engineers implementing and interacting with automated decision-making aids in radiology should take bureaucratic, legal, professional, and political accountability concerns into consideration. This Accountability Framework is modeled after Romzek and Dubnick's (1987) public administration framework and expanded through an analysis of the literature on accountability definitions and examples in the military, healthcare, and research sectors. A cohesive understanding of this framework and the human concerns it raises helps drive the questions that, if fully addressed, create the potential for a successful integration and adoption of AI in radiology and ultimately the care environment.
ContributorsGilmore, Emily Anne (Author) / Chiou, Erin (Thesis director) / Wu, Teresa (Committee member) / Industrial, Systems & Operations Engineering Prgm (Contributor) / Barrett, The Honors College (Contributor)
Created2019-05