Matching Items (8)
Description
The objective of this thesis project is to build a prototype that uses Linear Temporal Logic (LTL) specifications to generate a 2D motion plan commanding an iRobot to fulfill those specifications. The project was created for the Cyber Physical Systems Lab at Arizona State University. Its end product is a software solution that can be used in academia and industry for research on cyber-physical systems applications. The major features of the project are: a modular system for motion planning, use of the Robot Operating System (ROS), use of triangulation for environment decomposition, and use of a StarGazer sensor for localization. The project is built on ROS, an open-source framework that makes it easy to integrate different software and hardware modules on a Linux-based platform; as a result, the project and its modules can be adapted quickly to different applications as the need arises. The final software package takes as input a data file containing the LTL specifications, a list of the symbols used in the LTL, and the environment polygon data, which gives real-world coordinates for all polygons along with the neighbors and parent of each polygon. The software package successfully ran experiments in coverage, reachability with avoidance, and sequencing.
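For readers unfamiliar with the notation, illustrative sketches of the three experiment classes (written for this listing, not quoted from the thesis) could be expressed over regions r_1, r_2, r_3 of the decomposed environment and an unsafe region o as

    \varphi_{cov}   = \Diamond r_1 \wedge \Diamond r_2 \wedge \Diamond r_3        % coverage: eventually visit every region
    \varphi_{reach} = \neg o \;\mathcal{U}\; r_1                                   % reachability with avoidance: stay out of o until r_1 is reached
    \varphi_{seq}   = \Diamond \big( r_1 \wedge \Diamond ( r_2 \wedge \Diamond r_3 ) \big)   % sequencing: visit r_1, then r_2, then r_3

where \Diamond denotes "eventually" and \mathcal{U} denotes "until".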
Contributors: Pandya, Parth (Author) / Fainekos, Georgios (Thesis advisor) / Dasgupta, Partha (Committee member) / Lee, Yann-Hang (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Linear Temporal Logic (LTL) is gaining popularity as a high-level specification language for robot motion planning because of its expressive power and the scalability of LTL control-synthesis algorithms. The formalism, however, requires expert knowledge, which makes it inaccessible to non-expert users. This thesis introduces a graphical specification environment for creating high-level motion plans to control robots in the field by converting a visual representation of the motion/task plan into an LTL specification. The visual interface is built on the Android tablet platform and provides functionality to create task plans through a set of well-defined gestures and on-screen controls. It uses the notion of waypoints to describe the motion plan quickly and efficiently, enabling a variety of complex LTL specifications to be expressed succinctly and intuitively without requiring the user to know or understand LTL. This opens avenues for its use by personnel in the military, warehouse management, and search-and-rescue missions. The thesis describes the construction of LTL specifications for various robot navigation scenarios using the developed visual interface and leverages existing LTL-based motion planners to have a robot carry out the task plan.
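As an illustrative sketch (assumed notation, not taken from the thesis), a gesture sequence that visits waypoints w_1 and then w_2 while staying out of a user-marked keep-out region o would translate to an LTL formula of the form

    \varphi = \Diamond ( w_1 \wedge \Diamond w_2 ) \wedge \Box \neg o

that is, eventually reach w_1, afterwards eventually reach w_2, and always avoid o.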
Contributors: Srinivas, Shashank (Author) / Fainekos, Georgios (Thesis advisor) / Baral, Chitta (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Although current urban search and rescue (USAR) robots are little more than remotely controlled cameras, the end goal is for them to work alongside humans as trusted teammates. Natural language communications and performance data are collected as a team of humans works to carry out a simulated search and rescue task in an uncertain virtual environment. Conditions emulating a remotely controlled robot versus an intelligent one are tested, and differences in performance, situation awareness, trust, workload, and communications are measured. The intelligent-robot condition resulted in higher levels of performance and operator situation awareness (SA).
Contributors: Bartlett, Cade Earl (Author) / Cooke, Nancy J. (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Wu, Bing (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
This thesis considers two problems in the control of robotic swarms. Firstly, it addresses a trajectory planning and task allocation problem for a swarm of resource-constrained robots that cannot localize or communicate with each other and that exhibit stochasticity in their motion and task-switching policies. We model the population dynamics of the robotic swarm as a set of advection-diffusion-reaction (ADR) partial differential equations (PDEs).

Specifically, we consider a linear parabolic PDE model that is bilinear in the robots' velocity and task-switching rates. These parameters constitute a set of time-dependent control variables that can be optimized and transmitted to the robots prior to deployment or broadcast in real time. The planning and allocation problem can then be formulated as a PDE-constrained optimization problem, which we solve using techniques from optimal control. Simulations of a commercial pollination scenario validate the ability of our control approach to drive a robotic swarm to achieve predefined spatial distributions of activity over a closed domain, which may contain obstacles. Secondly, we consider a mapping problem in which a robotic swarm is deployed over a closed domain and must reconstruct the unknown spatial distribution of a feature of interest. The ADR-based primitives result in a coefficient identification problem for the corresponding system of PDEs. To deal with the inherent ill-posedness of the problem, we frame it as an optimization problem. We validate our approach through simulations and show that the spatially dependent coefficient can be reconstructed with considerable accuracy using temporal information alone.
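A representative form of such an ADR model, written here for illustration rather than quoted from the thesis, governs the density u_i(\mathbf{x}, t) of robots performing task i:

    \frac{\partial u_i}{\partial t} = \nabla \cdot \big( D \nabla u_i \big) - \nabla \cdot \big( \mathbf{v}(t)\, u_i \big) + \sum_{j \neq i} \big( k_{ji}(t)\, u_j - k_{ij}(t)\, u_i \big)

where D is a diffusion coefficient capturing motion stochasticity, \mathbf{v}(t) is the commanded velocity field, and k_{ij}(t) are the task-switching rates. Here \mathbf{v} and k_{ij} are the time-dependent control variables referred to above, and the model is bilinear because they multiply the state u.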
Contributors: Elamvazhuthi, Karthik (Author) / Berman, Spring Melody (Thesis advisor) / Peet, Matthew Monnig (Committee member) / Mittelmann, Hans (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Continuous underwater observation is a challenging engineering task that could be accomplished by developing and deploying a sensor array that can survive harsh underwater conditions. One approach to this challenge is a swarm of micro underwater robots, known as Sensorbots, equipped with biogeochemical sensors that can relay information among themselves in real time. This innovative method for underwater exploration can contribute to a more comprehensive understanding of the ocean by not limiting sampling to a single point and time. In this thesis, Sensorbot Beta, a low-cost, fully enclosed Sensorbot prototype for bench-top characterization and short-term field testing, is presented in a modular format that provides flexibility and the potential for rapid design. Sensorbot Beta is designed around a microcontroller-driven platform built entirely from commercial off-the-shelf hardware to reduce cost and development time. The primary sensor incorporated into Sensorbot Beta is an in situ fluorescent pH sensor, and design considerations have been made for easy adoption of other fluorescent or phosphorescent sensors, such as dissolved oxygen or temperature. The optical components are designed in a format that enables additional sensors. A real-time data acquisition system using Bluetooth allows characterization of the sensor in bench-top experiments. Sensorbot Beta demonstrates rapid calibration, and future work will include deployment for large-scale experiments in a lake or ocean.
Contributors: Johansen, John (Civil engineer) (Author) / Meldrum, Deirdre R. (Thesis advisor) / Chao, Shih-hui (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Long-term monitoring of deep brain structures using microelectrode implants is critical for the success of emerging clinical applications, including cortical neural prostheses and deep brain stimulation, as well as for neurobiology studies of disease progression, learning and memory, and brain mapping. However, current microelectrode technologies are not yet capable of reaching those clinical milestones, given their inconsistent performance and reliability in long-term studies. In all of the aforementioned applications, it is important to understand the limitations and demands posed by the technology as well as by biological processes. Recent advances in implantable Micro Electro Mechanical Systems (MEMS) technology have tremendous potential and open up opportunities for long-term studies that were not possible before. The overall goal of the project is to develop large-scale, autonomous, movable, micro-scale interfaces that can seek out and monitor or stimulate large ensembles of precisely targeted neurons and neuronal networks, for application to brain mapping in behaving animals. However, there are serious technical (fabrication) challenges related to packaging and interconnects, including: the lack of industry standards for chip-scale packaging of silicon chips with movable microstructures; incompatible micro-bonding techniques for elongating current microelectrodes to reach deep brain structures; and the inability to achieve hermetic isolation of implantable devices from biological tissue and fluids (i.e., cerebrospinal fluid (CSF), blood, etc.). The specific aims are to: 1) optimize and automate chip-scale packaging of MEMS devices whose unique bonding, process-temperature, and pressure requirements are not amenable to conventional industry standards, in order to achieve scalability; 2) develop a novel micro-bonding technique to extend the length of current polysilicon microelectrodes so they can reach and monitor deep brain structures; and 3) design and develop a high-throughput packaging mechanism for constructing a dense array of movable microelectrodes. Using a unique micro-bonding technique that combines conductive thermosetting epoxies with hermetically sealed support structures, together with a highly optimized, semi-automated, 90-minute flip-chip packaging process, I have extended the repertoire of previously reported movable microelectrode arrays by bonding conventional stainless steel and Pt/Ir microelectrode arrays of desired lengths to steerable polysilicon shafts. I tested scalable prototypes in rigorous bench-top experiments, including impedance measurements, accelerated aging, and non-destructive testing, to assess the electrical and mechanical stability of the micro-bonds under long-term implantation. The proposed 3D-printed packaging method allows a wide variety of electrode configurations to be realized, such as rectangular or circular arrays or other arbitrary geometries optimal for specific regions of the brain, with inter-electrode distances as low as 25 μm and an unprecedented capability of seeking and recording from or stimulating targeted single neurons in deep brain structures up to 10 mm deep (with 6 μm displacement resolution).
These computer-controlled, movable deep-brain electrodes can potentially move past the glial sheath surrounding the microelectrodes to restore neural connections, counter variability in signal amplitudes, and enable simultaneous recording and stimulation at precisely targeted layers of the brain.
Contributors: Palaniswamy, Sivakumar (Author) / Muthuswamy, Jitendran (Thesis advisor) / Buneo, Christopher (Committee member) / Abbas, James (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Robots are becoming an important part of our lives and of industry. Although many robot control interfaces have been developed to simplify control and improve the user experience, users still cannot control robots comfortably, and as robot capabilities improve, the demands for universality and ease of use of robot control interfaces also increase. This research introduces a graphical interface for Linear Temporal Logic (LTL) specifications for mobile robots. It is a sketch-based interface built on the Android platform, which makes the LTL control interface friendlier to non-expert users. By predefining a set of areas of interest, the interface can quickly and efficiently create plans that satisfy extended plan goals expressed in LTL. It also allows users to customize the paths for a plan by sketching a set of reference trajectories. Given the user's custom paths, the LTL specification, and the environment, the interface generates a plan that balances the customized paths against the LTL specification. We also show experimental results with the implemented interface.
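One plausible reading of this balancing step, given here as an illustrative formulation rather than the thesis's exact objective, is a trade-off between satisfying the specification and staying close to the sketched references r_1, ..., r_m:

    \mathbf{x}^{*} = \arg\min_{\mathbf{x} \,\models\, \varphi} \; \sum_{i=1}^{m} d(\mathbf{x}, r_i)

where \mathbf{x} ranges over robot trajectories satisfying the LTL formula \varphi and d measures the deviation of a trajectory from a reference sketch.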
Contributors: Wei, Wei (Author) / Fainekos, Georgios (Thesis advisor) / Amor, Hani Ben (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Sports activities have been a cornerstone in the evolution of humankind through the ages, from the ancient Roman Empire to the Olympics of the 21st century. These activities have been used as a benchmark to evaluate how humans have progressed through the sands of time. In the 21st century, machines, aided by powerful computing and relatively new computing paradigms, have made a good case for taking up the mantle. Even though machines have been able to perform complex tasks and maneuvers, they have struggled to match the dexterity, coordination, manipulability, and acuteness displayed by humans. Bi-manual tasks are more complex still, bringing additional variables such as coordination into the task and making it harder to evaluate.

A task capable of demonstrating the above skill set would be a good measure of progress in the field of robotic technology. Therefore, a dual-armed robot has been built and taught to handle a ball and successfully make a basket, demonstrating the capability of using both arms. A combination of machine learning techniques, reinforcement learning and imitation learning, has been used along with advanced optimization algorithms to accomplish the task.
Contributors: Kalige, Nikhil (Author) / Amor, Heni Ben (Thesis advisor) / Shrivastava, Aviral (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2016