Matching Items (4)
Description
Information about the elemental composition of a planetary surface can be determined using nuclear instrumentation such as gamma-ray and neutron spectrometers (GRNS). High-energy Galactic Cosmic Rays (GCRs) resulting from cosmic supernovae isotropically bombard the surfaces of planetary bodies in space. When GCRs interact with a body’s surface, they can liberate neutrons in a process called spallation, resulting in neutrons and gamma rays being emitted from the planet’s surface; how GCRs and source particles (i.e., active neutron generators) interact with nearby nuclei defines the nuclear environment. In this work I describe the development of nuclear detection systems and techniques for future orbital and landed missions, as well as the implications of nuclear environments on a non-silicate (icy) planetary body. This work aids in the development of future NASA and international missions by presenting many of the capabilities and limitations of nuclear detection systems for a variety of planetary bodies (Earth, the Moon, metallic asteroids, icy moons). From benchtop experiments to theoretical simulations, from geochemical hypotheses to instrument calibrations—nuclear planetary science is a challenging and rapidly expanding multidisciplinary field. In this work (1) I describe ground-truth verification of the neutron die-away method using a new type of elpasolite (Cs2YLiCl6:Ce) scintillator, (2) I explore the potential use of temporal neutron measurements on the surface of Titan through Monte Carlo simulation models, and (3) I report on the experimental spatial efficiency and calibration details of the miniature neutron spectrometer (Mini-NS) on board the NASA LunaH-Map mission. This work presents a subset of planetary nuclear science and its many challenges in humanity's ongoing effort to explore strange new worlds.
Contributors: Heffern, Lena Elizabeth (Author) / Hardgrove, Craig (Thesis advisor) / Elkins-Tanton, Linda (Committee member) / Parsons, Ann (Committee member) / Garvie, Laurence (Committee member) / Holbert, Keith (Committee member) / Lyons, James (Committee member) / Arizona State University (Publisher)
Created: 2022
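The neutron die-away method verified in the dissertation above works by pulsing an active neutron generator and measuring how the returning neutron count rate decays with time; the decay constant is sensitive to hydrogen and other moderating or absorbing elements in the subsurface. Purely as an illustrative sketch (not the dissertation's code), the snippet below fits a single-exponential die-away model to synthetic time-binned counts; the pulse timing, decay constant, and count levels are assumed values.

```python
# Hypothetical sketch: fitting an exponential "die-away" model to
# time-binned neutron counts following a pulsed-neutron-generator burst.
# All numbers (pulse timing, decay constant, count rates) are assumed
# for illustration only.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Time bins (microseconds) after the end of the generator pulse.
t = np.linspace(0.0, 1000.0, 200)

def die_away(t, amplitude, tau, background):
    """Single-exponential die-away curve: counts vs. time after the pulse."""
    return amplitude * np.exp(-t / tau) + background

# Simulate counts with Poisson noise around an assumed true curve.
true_counts = die_away(t, amplitude=500.0, tau=120.0, background=5.0)
observed = rng.poisson(true_counts)

# Fit the model; the recovered time constant tau is the quantity that,
# in a real experiment, would be compared against simulations run for
# different subsurface hydrogen abundances.
popt, pcov = curve_fit(die_away, t, observed, p0=[400.0, 100.0, 1.0])
amplitude, tau, background = popt
print(f"fitted die-away time constant: {tau:.1f} microseconds")
```

In a real analysis, the comparison curves would come from Monte Carlo particle-transport simulations of the instrument and surface composition rather than a closed-form exponential.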
Description
Enabling robots to physically engage with their environment in a safe and efficient manner is an essential step towards human-robot interaction. To date, robots usually operate as pre-programmed workers that blindly execute tasks in highly structured environments crafted by skilled engineers. Changing the robots’ behavior to cover new duties or handle variability is an expensive, complex, and time-consuming process. However, with the advent of more complex sensors and algorithms, overcoming these limitations comes within reach. This work proposes innovations in artificial intelligence, language understanding, and multimodal integration to enable next-generation grasping and manipulation capabilities in autonomous robots. The underlying thesis is that multimodal observations and instructions can drastically expand the responsiveness and dexterity of robot manipulators. Natural language, in particular, can be used to enable intuitive, bidirectional communication between a human user and the machine. To this end, this work presents a system that learns context-aware robot control policies from multimodal human demonstrations. Among the main contributions presented are (a) techniques for collecting demonstrations in an efficient and intuitive fashion, (b) methods for leveraging physical contact with the environment and objects, (c) the incorporation of natural language to understand context, and (d) the generation of robust robot control policies. The presented approach and systems are evaluated in multiple grasping and manipulation settings ranging from dexterous manipulation to pick-and-place, as well as contact-rich bimanual insertion tasks. Moreover, the usability of these innovations, especially when utilizing human task demonstrations and communication interfaces, is evaluated in several human-subject studies.
Contributors: Stepputtis, Simon (Author) / Ben Amor, Heni (Thesis advisor) / Baral, Chitta (Committee member) / Yang, Yezhou (Committee member) / Lee, Stefan (Committee member) / Arizona State University (Publisher)
Created: 2021
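The dissertation above centers on learning context-aware robot control policies from multimodal human demonstrations. As a rough, hypothetical sketch of that general idea (not the author's actual architecture), the PyTorch module below fuses a visual feature vector with a language-instruction embedding and regresses robot actions with a behavior-cloning loss; all layer sizes, feature dimensions, and names are assumptions.

```python
# Hypothetical sketch of multimodal behavior cloning: fuse visual and
# language features, predict an action, and train on demonstration data.
# Feature dimensions and network sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalPolicy(nn.Module):
    def __init__(self, vision_dim=512, language_dim=256, action_dim=7):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(vision_dim + language_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
        )
        self.action_head = nn.Linear(128, action_dim)

    def forward(self, vision_features, language_features):
        # Concatenate the modalities so the policy can condition its
        # motion on both the observed scene and the instruction.
        fused = self.fusion(torch.cat([vision_features, language_features], dim=-1))
        return self.action_head(fused)

policy = MultimodalPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# One behavior-cloning step on a synthetic batch of demonstrations.
vision = torch.randn(32, 512)
language = torch.randn(32, 256)
expert_actions = torch.randn(32, 7)

optimizer.zero_grad()
predicted = policy(vision, language)
loss = nn.functional.mse_loss(predicted, expert_actions)
loss.backward()
optimizer.step()
```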
Description
Imitation learning is a promising methodology for teaching robots how to physically interact and collaborate with human partners. However, successful interaction requires complex coordination in time and space, i.e., knowing what to do as well as when to do it. This dissertation introduces Bayesian Interaction Primitives, a probabilistic imitation learning framework which establishes a conceptual and theoretical relationship between human-robot interaction (HRI) and simultaneous localization and mapping. In particular, it is established that HRI can be viewed through the lens of recursive filtering in time and space. In turn, this relationship allows one to leverage techniques from an existing, mature field and develop a powerful new formulation which enables multimodal spatiotemporal inference in collaborative settings involving two or more agents. Through the development of exact and approximate variations of this method, it is shown in this work that it is possible to learn complex real-world interactions in a wide variety of settings, including tasks such as handshaking, cooperative manipulation, catching, hugging, and more.
Contributors: Campbell, Joseph (Author) / Ben Amor, Heni (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Yamane, Katsu (Committee member) / Kambhampati, Subbarao (Committee member) / Arizona State University (Publisher)
Created: 2021
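The key idea above is viewing human-robot interaction as recursive filtering in time and space. A minimal sketch of the spatial half of that idea, under assumed dimensions and noise levels, is a Kalman-style update over basis-function weights describing a coupled human-robot trajectory: observing only the human's degrees of freedom refines the belief over the robot's response. The full Bayesian Interaction Primitives formulation also infers the interaction phase (the temporal part), which this sketch omits.

```python
# Hypothetical sketch of recursive Bayesian inference over trajectory
# weights: observing the human's motion refines the belief over the
# coupled robot trajectory. Dimensions and noise values are assumed.
import numpy as np

B = 10            # Gaussian basis functions per degree of freedom
HUMAN_DOFS = 2    # observed human degrees of freedom
ROBOT_DOFS = 2    # unobserved robot degrees of freedom
D = (HUMAN_DOFS + ROBOT_DOFS) * B   # weight-space dimension

def basis(phase):
    """Normalized Gaussian basis functions evaluated at a phase in [0, 1]."""
    centers = np.linspace(0.0, 1.0, B)
    values = np.exp(-0.5 * ((phase - centers) / 0.1) ** 2)
    return values / values.sum()

def observation_matrix(phase):
    """Maps the full weight vector to the observed human positions only."""
    H = np.zeros((HUMAN_DOFS, D))
    phi = basis(phase)
    for d in range(HUMAN_DOFS):
        H[d, d * B:(d + 1) * B] = phi
    return H

# Prior over weights: in the real method this is learned from recorded
# human-robot demonstrations; here a coupled synthetic prior stands in,
# so that human observations carry information about the robot's weights.
rng = np.random.default_rng(0)
demos = rng.normal(size=(20, HUMAN_DOFS * B))
demo_weights = np.hstack([demos, demos + 0.1 * rng.normal(size=demos.shape)])
mean = demo_weights.mean(axis=0)
cov = np.cov(demo_weights, rowvar=False) + 1e-6 * np.eye(D)
R = np.eye(HUMAN_DOFS) * 0.01   # observation noise

# Recursive (Kalman-style) updates as human observations arrive.
for phase, human_obs in [(0.1, np.array([0.2, -0.1])),
                         (0.2, np.array([0.4, -0.2]))]:
    H = observation_matrix(phase)
    S = H @ cov @ H.T + R
    K = cov @ H.T @ np.linalg.inv(S)
    mean = mean + K @ (human_obs - H @ mean)
    cov = (np.eye(D) - K @ H) @ cov

# The posterior mean over the robot's weight blocks implies its response.
robot_weights = mean[HUMAN_DOFS * B:]
print(robot_weights.shape)
```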
Description
There are more than 20 active missions exploring planets and small bodies beyond Earth in our solar system today. Many more have completed their journeys or will soon begin. Each spacecraft has a suite of instruments and sensors that provide a treasure trove of data that scientists use to advance our understanding of the past, present, and future of the solar system and universe. As more missions come online and the volume of data increases, it becomes more difficult for scientists to analyze these complex data at the desired pace. There is a need for systems that can rapidly and intelligently extract information from planetary instrument datasets and prioritize the most promising, novel, or relevant observations for scientific analysis. Machine learning methods can serve this need in a variety of ways: by uncovering patterns or features of interest in large, complex datasets that are difficult for humans to analyze; by inspiring new hypotheses based on structure and patterns revealed in data; or by automating tedious or time-consuming tasks. In this dissertation, I present machine learning solutions to enhance the tactical planning process for the Mars Science Laboratory Curiosity rover and future tactically planned missions, as well as the science analysis process for archived and ongoing orbital imaging investigations such as the High Resolution Imaging Science Experiment (HiRISE) at Mars. These include detecting novel geology in multispectral images and active nuclear spectroscopy data, analyzing the intrinsic variability in active nuclear spectroscopy data with respect to elemental geochemistry, automating tedious image review processes, and monitoring changes in surface features such as impact craters in orbital remote sensing images. Collectively, this dissertation shows how machine learning can be a powerful tool for facilitating scientific discovery during active exploration missions and in retrospective analysis of archived data.
Contributors: Kerner, Hannah Rae (Author) / Bell, James F. (Thesis advisor) / Ben Amor, Heni (Thesis advisor) / Wagstaff, Kiri L. (Committee member) / Hardgrove, Craig J. (Committee member) / Shirzaei, Manoochehr (Committee member) / Arizona State University (Publisher)
Created: 2019
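Among the techniques named above is detecting novel geology in multispectral images. As an illustrative sketch only (synthetic data, an assumed threshold, and not the dissertation's specific models), the snippet below flags candidate novel spectra by their reconstruction error under a PCA model fit to typical observations.

```python
# Hypothetical novelty-detection sketch: fit PCA to "typical" spectra and
# flag observations whose reconstruction error is unusually large.
# The data are synthetic and the threshold is an assumed choice.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic stand-in for typical multispectral pixel spectra (n_samples x n_bands).
typical = rng.normal(loc=1.0, scale=0.05, size=(1000, 6))

# A few "novel" spectra with an anomalous band value.
novel = typical[:5].copy()
novel[:, 3] += 0.5

pca = PCA(n_components=3).fit(typical)

def reconstruction_error(spectra):
    # Spectra that the low-dimensional model cannot reproduce well are
    # treated as candidates for novel geology.
    reconstructed = pca.inverse_transform(pca.transform(spectra))
    return np.linalg.norm(spectra - reconstructed, axis=1)

errors_typical = reconstruction_error(typical)
errors_novel = reconstruction_error(novel)

# Rank candidate observations for expert review; higher error = more novel.
threshold = np.percentile(errors_typical, 99)
print("flagged as novel:", (errors_novel > threshold).sum(), "of", len(novel))
```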