Matching Items (9)

Description
One of the main challenges in planetary robotics is to traverse the shortest path through a set of waypoints. The shortest distance between any two waypoints is a direct linear traversal. Oftentimes, however, physical restrictions prevent a rover from traversing straight to a waypoint, so knowledge of the terrain is needed prior to traversal. The Digital Terrain Model (DTM) provides information about the terrain along with waypoints for the rover to traverse. However, traversing a set of waypoints linearly is burdensome, as the rover would constantly need to modify its orientation as it successively approaches each waypoint. Although there are various solutions to this problem, this thesis proposes smooth traversal using splines as a quick and easily implemented way to traverse a set of waypoints. In addition, a rover was used to compare the smoothness of linear traversal with that of the spline interpolations. The data collected illustrate that spline traversals had a lower rate of change in velocity over time, indicating that the rover performed more smoothly than with linear paths.
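
As a rough sketch of the spline-based approach (the waypoint coordinates are hypothetical and this assumes SciPy, not code from the thesis), a C2-continuous cubic spline can be fit through the waypoints parameterized by cumulative path length:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 2-D waypoints, parameterized by cumulative chord length so
# that x and y are each interpolated as a function of path parameter s.
waypoints = np.array([[0.0, 0.0], [2.0, 1.5], [4.0, 1.0], [6.0, 3.0]])
seg = np.linalg.norm(np.diff(waypoints, axis=0), axis=1)
s = np.concatenate(([0.0], np.cumsum(seg)))

spline = CubicSpline(s, waypoints)   # C2-continuous curve through all waypoints
s_fine = np.linspace(0.0, s[-1], 200)
path = spline(s_fine)                # smooth (x, y) samples for the rover to follow
deriv = spline(s_fine, 1)
heading = np.arctan2(deriv[:, 1], deriv[:, 0])  # tangent direction along the path
```

Because the tangent direction varies continuously along the spline, the commanded heading changes gradually instead of jumping at each waypoint, which is the source of the smoother velocity profile reported above.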
Contributors: Kamasamudram, Anurag (Author) / Saripalli, Srikanth (Thesis advisor) / Fainekos, Georgios (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
As the complexity of robotic systems and applications grows rapidly, the development of high-performance, easy-to-use, and fully integrated development environments for those systems is inevitable. Model-Based Design (MBD) of dynamic systems using engineering software such as Simulink® from MathWorks®, Scicos from the Metalau team, and SystemModeler® from Wolfram® is quite popular nowadays. These tools support modeling, simulation, verification, and in some cases automatic code generation for desktop applications, embedded systems, and robots. For real-world implementation of models on actual hardware, the models must be converted into compilable code, either manually or automatically. Due to the complexity of robotic systems, manual translation from model to code is not a feasible solution, so automated code generation is needed for such systems. MathWorks® offers code generation facilities, its Coder® products, for this purpose. However, to fully exploit the power of model-based design and code generation tools for robotic applications, those software systems must be enhanced with additional toolboxes, files, and other artifacts, as well as guidelines and procedures. In this thesis, an effort has been made to propose a guideline as well as a Simulink® library, a Stateflow® interface API, and a C/C++ interface API that complete this toolchain for NAO humanoid robots. Thus, the model of the hierarchical control architecture can be easily and properly converted to code and built for implementation.
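
The overall pattern the toolchain enables can be sketched, with heavy hedging: the actual thesis generates C/C++ via the Coder® products, so the Python below only illustrates the structure, and every name in it (model_step, NaoInterface, the sensor/actuator keys) is a hypothetical stand-in.

```python
def model_step(sensors):
    """Stand-in for an auto-generated controller step function: maps the
    latest sensor readings to actuator commands, once per control tick."""
    return {"HeadYaw": 0.1 * sensors.get("gyro_x", 0.0)}

class NaoInterface:
    """Hypothetical interface API layer that shuttles data between the
    generated controller and the robot's sensors and actuators."""
    def read_sensors(self):
        return {"gyro_x": 0.0}        # would query the robot middleware
    def apply(self, commands):
        print("actuate:", commands)   # would send joint commands

robot = NaoInterface()
for _ in range(3):                    # fixed-rate control loop
    robot.apply(model_step(robot.read_sensors()))
```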
Contributors: Raji Kermani, Ramtin (Author) / Fainekos, Georgios (Thesis advisor) / Lee, Yann-Hang (Committee member) / Sarjoughian, Hessam S. (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Linear Temporal Logic (LTL) is gaining popularity as a high-level specification language for robot motion planning due to its expressive power and the scalability of LTL control synthesis algorithms. This formalism, however, requires expert knowledge, which makes it inaccessible to non-expert users. This thesis introduces a graphical specification environment for creating high-level motion plans to control robots in the field by converting a visual representation of the motion/task plan into an LTL specification. The visual interface is built on the Android tablet platform and provides functionality to create task plans through a set of well-defined gestures and on-screen controls. It uses the notion of waypoints to quickly and efficiently describe the motion plan, and it enables a variety of complex LTL specifications to be described succinctly and intuitively by the user without requiring knowledge or understanding of LTL syntax. Thus, it opens avenues for use by personnel in military, warehouse management, and search and rescue missions. This thesis describes the construction of LTL specifications for various robot navigation scenarios using the developed visual interface, and it leverages existing LTL-based motion planners to have a robot carry out the task plan.
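
For example, a plan sketched on the tablet as "visit waypoints w1, w2, and w3 in order while always avoiding region o" would correspond to an LTL specification along the lines of the following (an illustrative formula, not one taken from the thesis):

```latex
\varphi \;=\; \Diamond\big(w_1 \wedge \Diamond(w_2 \wedge \Diamond w_3)\big) \;\wedge\; \Box\,\lnot o
```

Here the "eventually" operator (◇) encodes the sequencing of waypoint visits and the "always" operator (□) encodes the standing avoidance requirement.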
Contributors: Srinivas, Shashank (Author) / Fainekos, Georgios (Thesis advisor) / Baral, Chitta (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
With robots being used extensively in various areas, a certain degree of robot autonomy has always been found desirable. In applications like planetary exploration, autonomous path planning and navigation are considered essential. But every now and then a need arises to modify the robot's operation: a human must provide supervisory parameters that modify the degree of autonomy or allocate extra tasks to the robot. In this regard, this thesis presents an approach for accepting and incorporating such human inputs and modifying the robot's navigation functions accordingly. Concepts such as applying kinematic constraints while planning paths, traversing unknown areas with the intent of maximizing the field of view, and performing complex tasks on command have been examined and implemented. The approaches have been tested in the Robot Operating System (ROS) using robots such as the iRobot Create and the PR2. Simulations and experimental demonstrations have shown that this approach is feasible for solving some of the existing problems and that it can pave the way for further research on enhancing functionality.
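
A minimal sketch of the supervisory-input idea, assuming a rospy node with illustrative topic names (not the thesis code): a relay that scales the planner's autonomous velocity commands by a human-supplied gain before forwarding them to the robot.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32
from geometry_msgs.msg import Twist

class SupervisedRelay:
    def __init__(self):
        self.autonomy_gain = 1.0                       # 1.0 = full autonomous speed
        rospy.Subscriber("supervisor/gain", Float32, self.on_gain)
        rospy.Subscriber("planner/cmd_vel", Twist, self.on_cmd)
        self.pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)

    def on_gain(self, msg):
        self.autonomy_gain = max(0.0, min(1.0, msg.data))  # clamp human input

    def on_cmd(self, msg):
        msg.linear.x *= self.autonomy_gain             # human input modulates autonomy
        msg.angular.z *= self.autonomy_gain
        self.pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("supervised_relay")
    SupervisedRelay()
    rospy.spin()
```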
Contributors: Vemprala, Sai Hemachandra (Author) / Saripalli, Srikanth (Thesis advisor) / Fainekos, Georgios (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013

Description
Humans and robots need to work together as a team to accomplish certain shared goals due to the limitations of current robot capabilities. Human assistance is required to accomplish tasks, as human capabilities are often better suited for certain tasks and complement robot capabilities in many situations. Given the necessity of human-robot teams, it has long been assumed that, for a robotic agent to be an effective team member, it must be equipped with automated planning technologies that help it achieve the goals delegated to it by its human teammates, as well as deduce its own goals so that it can proactively support its human counterparts by inferring their goals. However, there has not been any systematic evaluation of the accuracy of this claim.

In my thesis, I perform a human-factors analysis of the effectiveness of such automated planning technologies for remote human-robot teaming. In the first part of my study, I investigate the effectiveness of automated planning in remote human-robot teaming scenarios. In the second part, I investigate the effectiveness of a proactive robot assistant in the same setting.

Both investigations are conducted in a simulated urban search and rescue (USAR) scenario in which the human-robot teams are deployed during the early phases of an emergency response to explore all areas of the disaster scene. Through both studies, I evaluate how effective automated planning technology is at helping human-robot teams move closer to human-human teams. I use both objective measures (such as accuracy and time spent on primary and secondary tasks, Robot Attention Demand, etc.) and a set of subjective Likert-scale questions (on situation awareness, immediacy, etc.) to investigate the trade-offs between different types of remote human-robot teams. The results from both studies suggest that intelligent robots with automated planning capability and proactive support ability are welcomed in general.
Contributors: Narayanan, Vignesh (Author) / Kambhampati, Subbarao (Thesis advisor) / Zhang, Yu (Thesis advisor) / Cooke, Nancy J. (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created: 2015

Description
To ensure system integrity, robots need to proactively avoid any unwanted physical perturbation that may cause damage to the underlying hardware. In this thesis work, we investigate a machine learning approach that allows robots to anticipate impending physical perturbations from perceptual cues. In contrast to other approaches that require knowledge about sources of perturbation to be encoded before deployment, our method is based on experiential learning. Robots learn to associate visual cues with subsequent physical perturbations and contacts. In turn, these extracted visual cues are used to predict potential future perturbations acting on the robot. To this end, we introduce a novel deep network architecture that combines multiple sub-networks for dealing with robot dynamics and perceptual input from the environment. We present a self-supervised approach for training the system that does not require any labeling of training data. Extensive experiments in a human-robot interaction task show that a robot can learn to predict physical contact by a human interaction partner without any prior information or labeling. Furthermore, the network is able to successfully predict physical contact from a depth stream, from traditional video, or from both modalities as input.
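
A minimal sketch of the multi-branch idea in PyTorch (layer sizes, input shapes, and all names are assumptions for illustration, not the thesis architecture): one sub-network encodes the visual stream, another encodes the robot's dynamic state, and a fused head predicts whether contact is imminent.

```python
import torch
import torch.nn as nn

class PerturbationPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision = nn.Sequential(                  # perceptual sub-network
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.dynamics = nn.Sequential(                # robot-state sub-network
            nn.Linear(12, 32), nn.ReLU())
        self.head = nn.Sequential(                    # fused prediction head
            nn.Linear(32 + 32, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, depth, state):
        z = torch.cat([self.vision(depth), self.dynamics(state)], dim=1)
        return torch.sigmoid(self.head(z))            # probability of contact

# Self-supervision: in the thesis's spirit, labels would come from the
# robot's own contact sensing rather than manual annotation; the tensors
# below are random stand-ins just to exercise the forward pass.
model = PerturbationPredictor()
p = model(torch.randn(4, 1, 64, 64), torch.randn(4, 12))
```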
Contributors: Sur, Indranil (Author) / Amor, Heni B (Thesis advisor) / Fainekos, Georgios (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2017

Description
Automated driving systems (ADS) have come a long way since their inception. It is clear that these systems rely heavily on stochastic deep learning techniques for perception, planning, and prediction, as it is impossible to construct every possible driving scenario to generate driving policies. Moreover, these systems need to be trained and validated extensively on typical and abnormal driving situations before they can be trusted with human life. However, most publicly available driving datasets consist only of typical driving behaviors. On the other hand, there is a plethora of videos available on the internet that capture abnormal driving scenarios, but they are unusable for ADS training or testing as they lack important information such as camera calibration parameters and annotated vehicle trajectories. This thesis proposes a new toolbox, DeepCrashTest-V2, that is capable of reconstructing high-quality simulations from monocular dashcam videos found on the internet. The toolbox not only estimates crucial parameters such as camera calibration, ego-motion, and surrounding road user trajectories but also creates a virtual world in Car Learning to Act (CARLA) using data from OpenStreetMap to simulate the estimated trajectories. The toolbox is open source and is made available as a Python package on GitHub at https://github.com/C-Aniruddh/deepcrashtest_v2.
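
This is not the DeepCrashTest-V2 API; as a hedged sketch of the final replay step, the following uses only CARLA's standard Python client with hypothetical (x, y, yaw) trajectory samples standing in for the estimated road-user trajectory.

```python
import carla

# Hypothetical (x, y, yaw) samples at a fixed timestep, as would be
# estimated from a dashcam video by the toolbox.
trajectory = [(10.0, 5.0, 0.0), (10.5, 5.0, 2.0), (11.0, 5.1, 4.0)]

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

bp = world.get_blueprint_library().filter("vehicle.*")[0]
x0, y0, yaw0 = trajectory[0]
vehicle = world.spawn_actor(
    bp, carla.Transform(carla.Location(x=x0, y=y0, z=0.5),
                        carla.Rotation(yaw=yaw0)))
try:
    for x, y, yaw in trajectory[1:]:
        vehicle.set_transform(carla.Transform(      # replay the estimated pose
            carla.Location(x=x, y=y, z=0.5), carla.Rotation(yaw=yaw)))
        world.wait_for_tick()                       # advance one simulation frame
finally:
    vehicle.destroy()
```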
Contributors: Chandratre, Aniruddh Vinay (Author) / Fainekos, Georgios (Thesis advisor) / Ben Amor, Hani (Thesis advisor) / Pedrielli, Giulia (Committee member) / Arizona State University (Publisher)
Created: 2022

Description
In recent years, the development of Control Barrier Functions (CBF) has allowed safety guarantees to be placed on nonlinear control-affine systems. While powerful as a mathematical tool, CBF implementations on systems with high relative-degree constraints can become too computationally intensive for real-time control. Such deployments typically rely on the analysis of a system's symbolic equations of motion, leading to large, platform-specific control programs that do not generalize well. To address this, a more generalized framework is needed. This thesis provides a formulation for second-order CBFs for rigid open kinematic chains. An algorithm for numerically computing the safe control input of a CBF is then introduced based on this formulation. It is shown that this algorithm can be used on a broad category of systems, with specific examples given for convoy platooning, drone obstacle avoidance, and robotic arms with many degrees of freedom. These examples show up to three-fold improvements in computation time as well as a two-to-three order of magnitude reduction in program size.
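
The thesis formulation targets rigid open kinematic chains; the quadratic-program structure it builds on can be sketched on a simpler system. A minimal sketch, assuming NumPy and CVXPY, of a second-order (relative-degree-two) CBF for a planar double integrator with hypothetical gains:

```python
import numpy as np
import cvxpy as cp

def safe_input(p, v, u_nom, p_obs, d_safe, a1=2.0, a2=2.0):
    """CBF-QP for a planar double integrator (p_ddot = u) with
    h(x) = ||p - p_obs||^2 - d_safe^2, which has relative degree two
    with respect to u. Gains a1, a2 are hypothetical tuning values."""
    e = p - p_obs
    h = e @ e - d_safe ** 2
    h_dot = 2.0 * (e @ v)
    u = cp.Variable(2)
    # h_ddot = 2 v.v + 2 e.u; enforce h_ddot + (a1 + a2) h_dot + a1*a2*h >= 0
    cbf = 2.0 * (e @ u) + 2.0 * (v @ v) + (a1 + a2) * h_dot + a1 * a2 * h >= 0
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)), [cbf]).solve()
    return u.value  # minimally modified command that keeps the system safe

u = safe_input(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
               np.array([1.0, 0.0]), np.array([3.0, 0.0]), 1.0)
```

At each control tick, the QP returns the input closest to the nominal command that still satisfies the barrier condition, which is what makes the filter suitable for real-time use.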
Contributors: Pietz, Daniel Johannes (Author) / Fainekos, Georgios (Thesis advisor) / Vrudhula, Sarma (Thesis advisor) / Pedrielli, Giulia (Committee member) / Pavlic, Theodore (Committee member) / Arizona State University (Publisher)
Created: 2022

Description
Most planning agents assume complete knowledge of the domain, which may not be the case in scenarios where certain domain knowledge is missing. This problem could be due to design flaws or could arise from domain ramifications or qualifications. In such cases, planning algorithms can produce highly undesirable behaviors. Planning with incomplete domain knowledge is more challenging than planning under partial observability in the sense that the planning agent is unaware of the existence of such knowledge, in contrast to it being merely unobservable or partially observable. That is the difference between known unknowns and unknown unknowns.

In this thesis, I introduce and formulate this as the problem of Domain Concretization, the inverse of the domain abstraction problem studied extensively before. Furthermore, I present a solution that starts from the incomplete domain model provided to the agent by the designer and uses teacher traces from human users to determine the candidate model set under a minimalistic model assumption. A robust plan is then generated to maximize the probability of success under the set of candidate models. In addition to a standard search formulation in the model space, I propose a sample-based search method, along with an online version of it, to improve search time. The solution has been evaluated on various International Planning Competition domains in which incompleteness was introduced by deleting certain predicates from the complete domain model. The solution is also tested in a robot simulation domain to illustrate its effectiveness in handling incomplete domain knowledge. The results show that the plans generated by the algorithm increase the plan success rate without impacting action cost too much.
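
A minimal sketch of the sample-based search idea (the set-of-predicates model encoding, the simulate checker, and all names are hypothetical stand-ins, not the thesis implementation): enumerate candidate models as subsets of possibly-missing predicates, then pick the plan with the highest estimated success probability over sampled models.

```python
import itertools
import random

def candidate_models(base_model, possibly_missing):
    """Yield candidate models: the incomplete base model extended with
    every subset of the possibly-missing predicates."""
    for k in range(len(possibly_missing) + 1):
        for subset in itertools.combinations(possibly_missing, k):
            yield frozenset(base_model) | frozenset(subset)

def robust_plan(plans, base_model, possibly_missing, simulate, n_samples=100):
    """Return the plan with the highest estimated probability of success.
    `simulate(plan, model)` is a caller-supplied checker returning True
    iff the plan succeeds under the given candidate model."""
    models = list(candidate_models(base_model, possibly_missing))
    sample = random.sample(models, min(n_samples, len(models)))
    return max(plans, key=lambda plan: sum(simulate(plan, m) for m in sample))
```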
Contributors: Sharma, Akshay (Author) / Zhang, Yu (Thesis advisor) / Fainekos, Georgios (Committee member) / Srivastava, Siddharth (Committee member) / Arizona State University (Publisher)
Created: 2020