Matching Items (34)
Description

Robots are often used in long-duration scenarios, such as on the surface of Mars, where they may need to adapt to environmental changes. Typically, robots have been built specifically for single tasks, such as moving boxes in a warehouse or surveying construction sites. However, there is a modern trend away from human hand-engineering and toward robot learning. To this end, the ideal robot is not engineered, but automatically designed for a specific task. This thesis focuses on robots which learn path-planning algorithms for specific environments. Learning is accomplished via genetic programming. Path-planners are represented as Python code, which is optimized via Pareto evolution. These planners are encouraged to explore curiously and efficiently. This research asks the questions: “How can robots exhibit life-long learning where they adapt to changing environments in a robust way?” and “How can robots learn to be curious?”
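The multi-objective selection at the heart of Pareto evolution can be sketched as follows. This is a minimal, hypothetical illustration assuming two minimized objectives (a curiosity score and a path cost); the evolved Python planners and scoring functions of the thesis itself are not reproduced here, and all names are illustrative.

```python
# Hypothetical sketch of Pareto (non-dominated) selection over two
# objectives -- exploration ("curiosity") and path cost ("efficiency").
# Both objectives are minimized in this toy example.

def dominates(a, b):
    """True if candidate scores a are at least as good as b on every
    objective and strictly better on at least one (both minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated entries from a list of
    (candidate, (curiosity_cost, path_cost)) pairs."""
    return [
        (cand, scores)
        for cand, scores in population
        if not any(dominates(other, scores) for _, other in population)
    ]

# Example: three evolved planners scored on (1 - coverage, path length).
planners = [
    ("planner_a", (0.2, 10.0)),  # curious, but produces long paths
    ("planner_b", (0.5, 6.0)),   # efficient, but less exploratory
    ("planner_c", (0.6, 12.0)),  # worse than planner_b on both objectives
]
front = pareto_front(planners)   # planner_c is dominated and dropped
```

Candidates on the front represent different exploration/efficiency trade-offs, so evolution can keep several of them rather than collapsing to a single scalar fitness.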

Contributors: Saldyt, Lucas P (Author) / Ben Amor, Heni (Thesis director) / Pavlic, Theodore (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description
Hydraulic fracturing, or fracking, has become a common practice in United States oil fields for enhancing their productivity. Among the concerns regarding fracking, however, is the possibility that it could trigger shallow earthquakes. The brine that results from fracking is injected into the subsurface for disposal. This brine causes a pore pressure gradient that is commonly believed to trigger failure along critically stressed subsurface faults. In Timpson, a small city in eastern Texas, earthquakes have become much more common since two injection wells were installed in 2007. Sixteen events of M_W > 2 have been detected since 2008 and are believed to be associated with failure along a subsurface fault. Using interferometric synthetic aperture radar (InSAR), we analyzed three sets of SAR images from the Advanced Land Observing Satellite (ALOS) spanning May 2007 to December 2010. From these data sets, XX interferograms were generated. From these interferograms, it was possible to determine the spatial and temporal evolution of the crustal deformation in the line-of-sight of the satellite. The results show strong evidence of uplift in the region adjacent to the injection wells. While previous studies have established a strong connection between fluid injection and increased seismicity, this is, to our knowledge, the first observed case of crustal deformation resulting from the disposal of hydraulic fracturing fluid.
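For context, the standard InSAR relation that maps unwrapped interferometric phase to line-of-sight (LOS) displacement can be sketched as below. The approximate ALOS PALSAR L-band wavelength (~23.6 cm) is assumed; this is a generic illustration of the phase-to-displacement conversion, not the study's actual processing chain.

```python
import math

# Standard InSAR relation: d_LOS = -(lambda / (4*pi)) * delta_phi,
# where delta_phi is the unwrapped interferometric phase change.

WAVELENGTH_M = 0.236  # ALOS PALSAR L-band wavelength, metres (approximate)

def los_displacement(unwrapped_phase_rad):
    """LOS displacement in metres for an unwrapped phase change (radians).
    A negative phase change conventionally maps to motion toward the
    satellite, e.g. uplift."""
    return -(WAVELENGTH_M / (4.0 * math.pi)) * unwrapped_phase_rad

# One full fringe (2*pi of phase) corresponds to lambda/2 of LOS change:
one_fringe_m = abs(los_displacement(2.0 * math.pi))  # lambda/2 = 0.118 m
```

This is why L-band fringes each represent roughly 11.8 cm of line-of-sight motion, setting the sensitivity of the uplift measurement.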
Created: 2014-05
Description
Accurate pose initialization and pose estimation are crucial requirements in on-orbit space assembly and various other autonomous on-orbit tasks. However, pose initialization and pose estimation are much more difficult to do accurately and consistently in space. This is primarily due not only to the variable lighting conditions present in space, but also to the power requirements mandated by space-flyable hardware. This thesis investigates leveraging a deep learning approach for monocular one-shot pose initialization and pose estimation. A convolutional neural network was used to estimate the 6D pose of an assembly truss object. This network was trained by utilizing synthetic imagery generated from a simulation testbed. Furthermore, techniques to quantify model uncertainty of the deep learning model were investigated and applied in the task of in-space pose estimation and pose initialization. The feasibility of this approach on low-power computational platforms was also tested. The results demonstrate that accurate pose initialization and pose estimation can be conducted using a convolutional neural network. In addition, the results show that the model uncertainty can be obtained from the network. Lastly, the use of deep learning for pose initialization and pose estimation, together with uncertainty quantification, was demonstrated to be feasible on low-power compute platforms.
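One common technique for obtaining model uncertainty from a deep network is Monte Carlo dropout: keep dropout active at inference time, run several stochastic forward passes, and treat the spread of the predictions as an uncertainty estimate. The sketch below illustrates only that idea with a stub linear "network"; it is an assumption-laden toy, not the pose-estimation CNN described in the thesis.

```python
import random
import statistics

# Monte Carlo dropout sketch: dropout stays on at inference, and the
# variability across stochastic passes serves as an uncertainty proxy.

def stochastic_forward(features, weights, drop_p=0.5, rng=random):
    """One forward pass with (inverted) dropout applied to the inputs."""
    kept = [0.0 if rng.random() < drop_p else f / (1.0 - drop_p)
            for f in features]
    return sum(w * f for w, f in zip(weights, kept))

def mc_dropout_predict(features, weights, n_samples=100, rng=None):
    """Predictive mean and standard deviation over n_samples passes."""
    rng = rng or random.Random(0)
    preds = [stochastic_forward(features, weights, rng=rng)
             for _ in range(n_samples)]
    return statistics.mean(preds), statistics.stdev(preds)

mean, std = mc_dropout_predict([1.0, 2.0, 3.0], [0.3, 0.2, 0.5])
```

A pose estimate with a large predictive spread can then be rejected or re-initialized, which is useful when lighting conditions degrade the input imagery.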
Contributors: Kailas, Siva Maneparambil (Author) / Ben Amor, Heni (Thesis director) / Detry, Renaud (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2020-05
Description
In the field of machine learning, reinforcement learning stands out for its ability to discover approaches to complex, high-dimensional problems that outperform even expert humans. For robotic locomotion tasks, reinforcement learning provides an approach to solving them without the need for a unique controller for each task. In this thesis, two reinforcement learning algorithms, Deep Deterministic Policy Gradient and Group Factor Policy Search, are compared based on their performance in the bipedal walking environment provided by OpenAI Gym. These algorithms are evaluated on both their performance in the environment and their sample efficiency.
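Both algorithms interact with the environment through the same Gym-style loop: observe a state, choose an action, receive a reward, repeat until the episode ends. The sketch below shows that loop with a toy stand-in environment; neither DDPG nor Group Factor Policy Search is implemented here, and the BipedalWalker environment itself is not reproduced.

```python
import random

# Minimal sketch of the agent-environment interaction loop shared by
# both algorithms (the Gym-like reset()/step() interface).

class ToyEnv:
    """Toy stand-in with a Gym-like reset()/step() interface."""
    def reset(self):
        self.t = 0
        return 0.0  # initial observation

    def step(self, action):
        self.t += 1
        reward = 1.0 - abs(action)  # reward is highest near action 0
        done = self.t >= 10         # fixed-length episode of 10 steps
        return float(self.t), reward, done

def run_episode(env, policy):
    """Roll out one episode and return the undiscounted return."""
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

rng = random.Random(0)
episode_return = run_episode(ToyEnv(), lambda obs: rng.uniform(-1.0, 1.0))
```

Sample efficiency is then simply a question of how many such episodes (or environment steps) each algorithm needs before the episode return plateaus.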
Contributors: McDonald, Dax (Author) / Ben Amor, Heni (Thesis director) / Yang, Yezhou (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2018-12
Description
Lyric classification and generation are trending topics in the machine learning community. Long Short-Term Memory (LSTM) networks are effective tools for classifying and generating text. We explored their effectiveness in the generation and classification of lyrical data and proposed methods of evaluating their accuracy. We found that LSTM networks with dropout layers were effective at lyric classification. We also found that word-embedding LSTM networks were extremely effective at lyric generation.
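At generation time, an LSTM typically emits one score (logit) per vocabulary word, and the next word is sampled from the softmax of those scores, often with a temperature parameter that trades off predictable versus creative output. The sketch below shows only that decoding step, with a made-up vocabulary and logits; the LSTM itself is not reproduced.

```python
import math
import random

# Decoding step for lyric generation: convert per-word logits to a
# probability distribution and sample the next word from it.

def softmax(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_word(vocab, logits, temperature=1.0, rng=random):
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Hypothetical vocabulary and logits, purely for illustration.
vocab = ["love", "night", "heart", "rain"]
probs = softmax([2.0, 1.0, 0.5, 0.1])
word = sample_next_word(vocab, [2.0, 1.0, 0.5, 0.1], rng=random.Random(0))
```

Lower temperatures concentrate probability on the highest-scoring word; higher temperatures flatten the distribution and yield more varied lyrics.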
Contributors: Tallapragada, Amit (Author) / Ben Amor, Heni (Thesis director) / Caviedes, Jorge (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
The volume of scientific research has grown at an exponential rate over the past 100 years. With the advent of the internet and ubiquitous access to the web, academic research search engines such as Google Scholar, Microsoft Academic, etc., have become the go-to platforms for systematic reviews and search. Although many academic search engines host a large amount of content, they provide minimal context about where the search terms matched. Many of these search engines also fail to provide additional tools that can help enhance a researcher’s understanding of research content outside their respective websites. An example of such a tool is a browser extension/plugin that surfaces context-relevant information about a research article while the user is reading it. This dissertation discusses a solution developed to bring intrinsic characteristics of research documents, such as the document's structure, its tables, and its associated keywords, into search and to augment the information a researcher may read. The prototype solution, named Sci-Genie (https://sci-genie.com/), is a search engine over scientific articles from the Computer Science ArXiv. Sci-Genie parses research papers and indexes the structure of research documents to provide context-relevant information about the matched search fragments. The same search engine also powers a browser extension that augments the information about the research article the user may be reading. The browser extension augments the user’s interface with information about tables from the cited papers, other papers by the same authors, and even the citations to and from the current article. The browser extension is further supported by access endpoints that leverage a machine learning model to filter tables comparing various entities. The dissertation further discusses these machine learning models and some baselines that help classify whether or not a table is comparing various entities.
The dissertation finally concludes by discussing the current shortcomings of Sci-Genie and possible future research directions based on lessons learned while building Sci-Genie.
Contributors: Dave, Valay (Author) / Zou, Jia (Thesis advisor) / Ben Amor, Heni (Thesis advisor) / Candan, Kasim Selcuk (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
This dissertation explores the use of artificial intelligence and machine learning techniques for the development of controllers for fully-powered robotic prosthetics. The aim of the research is to enable prosthetics to predict future states and control biomechanical properties in both linear and nonlinear fashions, with a particular focus on ergonomics. The research is motivated by the need to provide amputees with prosthetic devices that not only replicate the functionality of the missing limb, but also offer a high level of comfort and usability. Traditional prosthetic devices lack the sophistication to adjust to a user’s movement patterns and can cause discomfort and pain over time. The proposed solution involves the development of machine learning-based controllers that can learn from user movements and adjust the prosthetic device’s movements accordingly. The research involves a combination of simulation and real-world testing to evaluate the effectiveness of the proposed approach. The simulation involves the creation of a model of the prosthetic device and the use of machine learning algorithms to train controllers that predict future states and control biomechanical properties. The real-world testing involves the use of human subjects wearing the prosthetic device to evaluate its performance and usability. The research focuses on two main areas: the prediction of future states and the control of biomechanical properties. The prediction of future states involves the development of machine learning algorithms that can analyze a user’s movements and predict the next movements with a high degree of accuracy. The control of biomechanical properties involves the development of algorithms that can adjust the prosthetic device’s movements to ensure maximum comfort and usability for the user.
The results of the research show that the use of artificial intelligence and machine learning techniques can significantly improve the performance and usability of prosthetic devices. The machine learning-based controllers developed in this research are capable of predicting future states and adjusting the prosthetic device’s movements in real time, leading to a significant improvement in ergonomics and usability. Overall, this dissertation provides a comprehensive analysis of the use of artificial intelligence and machine learning techniques for the development of controllers for fully-powered robotic prosthetics.
Contributors: Clark, Geoffey M (Author) / Ben Amor, Heni (Thesis advisor) / Dasarathy, Gautam (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Ward, Jeffrey (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Natural language plays a crucial role in human-robot interaction, as it is the common ground where human beings and robots can communicate and understand each other. However, most work at the intersection of natural language and robotics focuses on generating robot actions from a natural language command, which is a unidirectional form of communication. This work focuses on the other direction of communication, in which a robot describes its actions from images and joint sequences sampled during the robot's task. The importance of this work is that it utilizes multiple modalities: the start and end images from the robot's task environment and the joint trajectories of the robot arms. Fusing different modalities is not just about combining the data, but about knowing what information to extract from which data source so that the language description represents the state of the manipulator and the environment in which it is performing the task. Experimental results from various simulated robot environments demonstrate that utilizing multiple modalities improves the accuracy of the natural language description, and that efficiently fusing the modalities, harnessing the various data sources, is crucial in generating such descriptions.
Contributors: Kalirathinam, Kamalesh (Author) / Ben Amor, Heni (Thesis advisor) / Phielipp, Mariano (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Enabling robots to physically engage with their environment in a safe and efficient manner is an essential step towards human-robot interaction. To date, robots usually operate as pre-programmed workers that blindly execute tasks in highly structured environments crafted by skilled engineers. Changing the robots’ behavior to cover new duties or handle variability is an expensive, complex, and time-consuming process. However, with the advent of more complex sensors and algorithms, overcoming these limitations becomes within reach. This work proposes innovations in artificial intelligence, language understanding, and multimodal integration to enable next-generation grasping and manipulation capabilities in autonomous robots. The underlying thesis is that multimodal observations and instructions can drastically expand the responsiveness and dexterity of robot manipulators. Natural language, in particular, can be used to enable intuitive, bidirectional communication between a human user and the machine. To this end, this work presents a system that learns context-aware robot control policies from multimodal human demonstrations. Among the main contributions presented are techniques for (a) collecting demonstrations in an efficient and intuitive fashion, (b) leveraging physical contact with the environment and objects, (c) incorporating natural language to understand context, and (d) generating robust robot control policies. The presented approach and systems are evaluated in multiple grasping and manipulation settings ranging from dexterous manipulation to pick-and-place, as well as contact-rich bimanual insertion tasks. Moreover, the usability of these innovations, especially when utilizing human task demonstrations and communication interfaces, is evaluated in several human-subject studies.
Contributors: Stepputtis, Simon (Author) / Ben Amor, Heni (Thesis advisor) / Baral, Chitta (Committee member) / Yang, Yezhou (Committee member) / Lee, Stefan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Imitation learning is a promising methodology for teaching robots how to physically interact and collaborate with human partners. However, successful interaction requires complex coordination in time and space, i.e., knowing what to do as well as when to do it. This dissertation introduces Bayesian Interaction Primitives, a probabilistic imitation learning framework which establishes a conceptual and theoretical relationship between human-robot interaction (HRI) and simultaneous localization and mapping. In particular, it is established that HRI can be viewed through the lens of recursive filtering in time and space. In turn, this relationship allows one to leverage techniques from an existing, mature field and develop a powerful new formulation which enables multimodal spatiotemporal inference in collaborative settings involving two or more agents. Through the development of exact and approximate variations of this method, it is shown in this work that it is possible to learn complex real-world interactions in a wide variety of settings, including tasks such as handshaking, cooperative manipulation, catching, hugging, and more.
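The recursive-filtering view can be illustrated with the scalar Kalman update at its core: each new observation refines a Gaussian belief over a latent state. This one-dimensional sketch shows only that core update rule; Bayesian Interaction Primitives themselves filter over a much richer joint spatiotemporal state, which is not reproduced here.

```python
# Scalar recursive Bayesian (Kalman) update: fuse a Gaussian belief
# N(mean, var) with a noisy observation N(obs, obs_var). In the HRI
# setting, the latent state might be the interaction's temporal phase.

def kalman_update(mean, var, obs, obs_var):
    """Return the fused belief after observing obs with variance obs_var."""
    gain = var / (var + obs_var)           # Kalman gain
    new_mean = mean + gain * (obs - mean)  # shift belief toward observation
    new_var = (1.0 - gain) * var           # fused belief is more certain
    return new_mean, new_var

# The belief sharpens as observations of the partner's motion arrive:
mean, var = 0.5, 1.0  # broad initial belief
for obs in [0.62, 0.60, 0.61]:
    mean, var = kalman_update(mean, var, obs, obs_var=0.1)
```

Each update both moves the estimate toward the evidence and shrinks its variance, which is the sense in which the robot simultaneously localizes "when" it is in the interaction while refining "what" to do.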
Contributors: Campbell, Joseph (Author) / Ben Amor, Heni (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Yamane, Katsu (Committee member) / Kambhampati, Subbarao (Committee member) / Arizona State University (Publisher)
Created: 2021