Matching Items (19)
Description
Recent advancements in external-memory-based neural networks have shown promise in solving tasks that require precise storage and retrieval of past information. Researchers have applied these models to a wide range of tasks that have algorithmic properties but have not applied these models to real-world robotic tasks. In this thesis, we present memory-augmented neural networks that synthesize robot navigation policies which a) encode long-term temporal dependencies, b) make decisions in partially observed environments, and c) quantify the uncertainty inherent in the task. We extract information about the temporal structure of a task via imitation learning from human demonstration and evaluate the performance of the models on control policies for a robot navigation task. Experiments are performed in partially observed environments in both simulation and the real world.
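For readers unfamiliar with the mechanism, the following is a minimal sketch (not the thesis code) of the content-based read used by external-memory models of this kind: the controller emits a key, memory rows are scored by cosine similarity, and a softmax-weighted sum becomes the read vector that conditions the navigation policy. All array sizes and the linear policy head are illustrative assumptions.

```python
# Minimal sketch of a content-based read from an external memory,
# feeding a (hypothetical, untrained) linear navigation policy head.
import numpy as np

def cosine_similarity(key, memory):
    # memory: (N, M) matrix of N slots; key: (M,) query from the controller
    num = memory @ key
    den = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    return num / den

def read_memory(key, memory, beta=5.0):
    # beta sharpens the attention; higher values focus on fewer slots
    scores = beta * cosine_similarity(key, memory)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory            # (M,) read vector

rng = np.random.default_rng(0)
memory = rng.normal(size=(128, 32))    # 128 slots of width 32 (assumed sizes)
key = rng.normal(size=32)
obs = rng.normal(size=16)              # current observation features
read = read_memory(key, memory)
policy_input = np.concatenate([obs, read])
W = rng.normal(size=(2, policy_input.size)) * 0.01
action = W @ policy_input              # e.g. [linear, angular] velocity
```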
ContributorsSrivatsav, Nambi (Author) / Ben Amor, Hani (Thesis advisor) / Srivastava, Siddharth (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created2018
Description
A critical challenge in the design of AI systems that operate with humans in the loop is to be able to model the intentions and capabilities of the humans, as well as their beliefs and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e. the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model in the deliberative process of the AI agent allows it to generate explanations for a plan or resort to explicable plans when explanations are not desired. The latter is in addition to traditional notions of human-aware planning which typically use the human task model alone and thus enables a new suite of capabilities of a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop.
ContributorsChakraborti, Tathagata (Author) / Kambhampati, Subbarao (Thesis advisor) / Talamadupula, Kartik (Committee member) / Scheutz, Matthias (Committee member) / Ben Amor, Hani (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created2018
Description
For those interested in the field of robotics, there are not many options for getting your hands on a physical robot without paying a steep price. This is why the folks at BCN3D Technologies decided to design a fully open-source 3D-printable robotic arm. Their goal was to reduce the barrier to entry for the field of robotics and make it far more accessible for people around the world. For our honors thesis, we chose to take the design from BCN3D and attempt to build their robot, to see how accessible the design truly is. Although their designs were not perfect and we were forced to make some adjustments to the 3D files, overall the work put forth by the people at BCN3D was extremely useful in successfully building a robotic arm that can be programmed with ease.
ContributorsCohn, Riley (Co-author) / Petty, Charles (Co-author) / Ben Amor, Hani (Thesis director) / Yong, Sze Zheng (Committee member) / Computer Science and Engineering Program (Contributor) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created2017-12
Description
This thesis aims to improve neural control policies for self-driving cars. State-of-the-art navigation software for self-driving cars is based on deep neural networks, where the network is trained on a dataset of past driving experience in various situations. With previous methods, the car can only make decisions based on short-term memory. To address this problem, we propose using a Neural Turing Machine (NTM) framework to add long-term memory to the system. We evaluated this approach by using it to master a palindrome task, and the network was able to infer how to create a palindrome with 100% accuracy. Since the NTM structure proved useful, we aim to use it in driving scenarios to improve the navigation safety and accuracy of a simulated autonomous car.
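As an illustration of the evaluation task, here is a hedged sketch of how a palindrome (reverse-copy) dataset for such an NTM-style model could be generated; the symbol alphabet, sequence length, and one-hot encoding are assumptions, not the thesis's exact setup.

```python
# Sketch of palindrome-task data: the target is the input sequence reversed,
# so the network must store the whole input and read it back in reverse order.
import numpy as np

def make_palindrome_batch(batch_size=8, seq_len=10, n_symbols=6, seed=0):
    rng = np.random.default_rng(seed)
    symbols = rng.integers(0, n_symbols, size=(batch_size, seq_len))
    inputs = np.eye(n_symbols)[symbols]        # (B, T, n_symbols) one-hot
    targets = inputs[:, ::-1, :].copy()        # reversed copy of the input
    return inputs, targets

inputs, targets = make_palindrome_batch()
print(inputs.shape, targets.shape)             # (8, 10, 6) (8, 10, 6)
```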
ContributorsMartin, Sarah (Author) / Ben Amor, Hani (Thesis director) / Fainekos, Georgios (Committee member) / Barrett, The Honors College (Contributor)
Created2018-05
Description
The Internet is a major source of online news content. Online news is a form of large-scale narrative text with rich, complex contents that embed deep meanings (facts, strategic communication frames, and biases) for shaping and transitioning standards, values, attitudes, and beliefs of the masses. Currently, this body of narrative text remains untapped due—in large part—to human limitations. The human ability to comprehend rich text and extract hidden meanings is far superior to known computational algorithms but remains unscalable. In this research, computational treatment is given to online news framing for exposing a deeper level of expressivity coined “double subjectivity” as characterized by its cumulative amplification effects. A visual language is offered for extracting spatial and temporal dynamics of double subjectivity that may give insight into social influence about critical issues, such as environmental, economic, or political discourse. This research offers benefits of 1) scalability for processing hidden meanings in big data and 2) visibility of the entire network dynamics over time and space to give users insight into the current status and future trends of mass communication.
ContributorsCheeks, Loretta H. (Author) / Gaffar, Ashraf (Thesis advisor) / Wald, Dara M (Committee member) / Ben Amor, Hani (Committee member) / Doupe, Adam (Committee member) / Cooke, Nancy J. (Committee member) / Arizona State University (Publisher)
Created2017
Description
This work presents a communication paradigm, using a context-aware mixed reality approach, for instructing human workers when collaborating with robots. The main objective of this approach is to utilize the physical work environment as a canvas to communicate task-related instructions and robot intentions in the form of visual cues. A vision-based object tracking algorithm is used to precisely determine the pose and state of physical objects in and around the workspace. A projection mapping technique is used to overlay visual cues on tracked objects and the workspace. Simultaneous tracking and projection onto objects enables the system to provide just-in-time instructions for carrying out a procedural task. Additionally, the system can inform and warn humans about the intentions of the robot and the safety of the workspace. It was hypothesized that using this system for executing a human-robot collaborative task would improve the overall performance of the team and provide a positive experience to the human partner. To test this hypothesis, an experiment involving human subjects was conducted and the performance (both objective and subjective) of the presented system was compared with a conventional method based on printed instructions. It was found that projecting visual cues enabled human subjects to collaborate more effectively with the robot and resulted in higher efficiency in completing the task.
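A compact sketch of the geometry behind this kind of projection mapping follows; it is not the system described above. A tracked 3D point on an object is mapped into projector pixel coordinates with a standard pinhole model so that a visual cue can be drawn at that pixel. The intrinsic matrix K and the world-to-projector transform are hypothetical values standing in for an offline projector calibration.

```python
# Sketch: project a tracked 3D point into projector pixel coordinates.
import numpy as np

def project_point(p_world, K, T_world_to_projector):
    # p_world: (3,) point on the tracked object, in world coordinates
    p_h = np.append(p_world, 1.0)                  # homogeneous coordinates
    p_proj = (T_world_to_projector @ p_h)[:3]      # into the projector frame
    uvw = K @ p_proj
    return uvw[:2] / uvw[2]                        # pixel (u, v)

K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])                    # hypothetical intrinsics
T = np.eye(4); T[2, 3] = 1.5                       # projector 1.5 m above the table
print(project_point(np.array([0.1, -0.05, 0.0]), K, T))
```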
ContributorsKalpagam Ganesan, Ramsundar (Author) / Ben Amor, Hani (Thesis advisor) / Yang, Yezhou (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created2017
Description
For autonomous vehicles, intelligent autonomous intersection management will be required for safe and efficient operation. In order to achieve safe operation despite uncertainties in vehicle trajectory, intersection management techniques must consider a safety buffer around the vehicles. For truly safe operation, an extra buffer space should be added to account for the network and computational delay caused by communication with the Intersection Manager (IM). However, modeling the worst-case computation and network delay as additional buffer around the vehicle degrades the throughput of the intersection. To avoid this problem, AIM, a popular state-of-the-art IM, adopts a query-based approach in which the vehicle requests to enter at a certain arrival time dictated by its current velocity and distance to the intersection, and the IM replies yes or no. Although this solution does not degrade the position uncertainty, it ultimately results in poor intersection throughput. We present Crossroads, a time-sensitive programming method to program the interface of a vehicle and the IM. Without requiring additional buffer to account for the effect of network and computational delay, Crossroads enables efficient intersection management. Test results on a 1/10-scale model intersection using TRAXXAS RC cars demonstrate that our Crossroads approach obviates the need for large buffers to accommodate the network and computation delay, and can reduce the average wait time for the vehicles at a single-lane intersection by 24%. To compare Crossroads with previous approaches, we perform extensive MATLAB simulations and find that Crossroads achieves on average 1.62X higher throughput than a simple VT-IM with an extra safety buffer, and 1.36X better than AIM.
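The query-based request/reply exchange described above can be pictured with the following simplified sketch; it is an illustration of the general idea, not the Crossroads or AIM implementation, and the reservation model (a single shared crossing window per intersection) is a deliberate simplification.

```python
# Sketch: a vehicle requests a crossing window and the intersection manager
# replies yes or no depending on conflicts with already granted reservations.
from dataclasses import dataclass

@dataclass
class Reservation:
    vehicle_id: int
    arrival: float      # requested arrival time, seconds
    clearance: float    # time needed to clear the intersection

class IntersectionManager:
    def __init__(self):
        self.granted = []                              # granted Reservation objects

    def request(self, vehicle_id, arrival, clearance):
        new_start, new_end = arrival, arrival + clearance
        for r in self.granted:
            start, end = r.arrival, r.arrival + r.clearance
            if new_start < end and start < new_end:    # time windows overlap
                return False                           # reply "no"
        self.granted.append(Reservation(vehicle_id, arrival, clearance))
        return True                                    # reply "yes"

im = IntersectionManager()
print(im.request(1, arrival=4.0, clearance=2.0))   # True
print(im.request(2, arrival=5.0, clearance=2.0))   # False, window overlaps
```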
ContributorsAndert, Edward (Author) / Shrivastava, Aviral (Thesis advisor) / Fainekos, Georgios (Committee member) / Ben Amor, Hani (Committee member) / Arizona State University (Publisher)
Created2017
Description

Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, these devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human-robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels are drawn between device acceptance and affective computing. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.

ContributorsBeckerle, Philipp (Author) / Salvietti, Gionata (Author) / Unal, Ramazan (Author) / Prattichizzo, Domenico (Author) / Rossi, Simone (Author) / Castellini, Claudio (Author) / Hirche, Sandra (Author) / Endo, Satoshi (Author) / Ben Amor, Hani (Author) / Ciocarlie, Matei (Author) / Mastrogiovanni, Fulvio (Author) / Argall, Brenna D. (Author) / Bianchi, Matteo (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2017-05-23
Description
Trajectory forecasting is used in many fields, such as vehicle trajectory prediction, stock market price prediction, and human motion prediction. In addition, the capability to reason about human behavior is an important aspect of human-robot interaction. Within human motion prediction, the implicit learning and reproduction of human behavior is the major challenge. This work compares some of the recent advances that take a phenomenological approach to trajectory prediction. It mainly targets generating future events or trajectories from data observed over previous time intervals. In particular, this work presents and compares machine learning models that generate various human handwriting trajectories. Although the behavior of every individual is unique, it is still possible to broadly generalize and learn the underlying human behavior from current observations in order to predict future writing trajectories. This enables the machine or robot to generate future handwriting trajectories given an initial trajectory from the individual, thus helping the person fill in the rest of the letter or curve. This work tests and compares the performance of Conditional Variational Autoencoder and Sinusoidal Representation Network models on handwriting trajectory prediction and reconstruction.
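As a pointer to one of the two model families compared here, the following is a minimal sketch of a sinusoidal representation network (SIREN) layer: a linear map followed by a sine activation, using the initialization proposed by Sitzmann et al. Layer sizes, the untrained output head, and the time-to-pen-position usage are assumptions for illustration.

```python
# Sketch of a SIREN layer: y = sin(omega_0 * (W x + b)) with SIREN-style init.
import numpy as np

class SirenLayer:
    def __init__(self, in_dim, out_dim, omega_0=30.0, first=False, seed=0):
        rng = np.random.default_rng(seed)
        bound = 1.0 / in_dim if first else np.sqrt(6.0 / in_dim) / omega_0
        self.W = rng.uniform(-bound, bound, size=(out_dim, in_dim))
        self.b = rng.uniform(-bound, bound, size=out_dim)
        self.omega_0 = omega_0

    def __call__(self, x):
        return np.sin(self.omega_0 * (self.W @ x + self.b))

# Hypothetical usage: map a normalised timestamp t to a 2D pen position (x, y).
layers = [SirenLayer(1, 64, first=True), SirenLayer(64, 64, seed=1)]
W_out = np.zeros((2, 64))              # untrained linear output head
h = np.array([0.25])                   # normalised time along the stroke
for layer in layers:
    h = layer(h)
xy = W_out @ h
```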
ContributorsKota, Venkata Anil (Author) / Ben Amor, Hani (Thesis advisor) / Venkateswara, Hemanth Kumar Demakethepalli (Committee member) / Redkar, Sangram (Committee member) / Arizona State University (Publisher)
Created2021
Description
In this thesis work, a novel learning approach to solving the problem of controlling a quadcopter (drone) swarm is explored. To deal with large sizes, swarm control is often achieved in a distributed fashion by combining different behaviors such that each behavior implements some desired swarm characteristics, such as avoiding obstacles and staying close to neighbors. One common approach in distributed swarm control uses potential fields. A limitation of this approach is that the potential fields often depend statically on a set of control parameters that are manually specified a priori. This work introduces Dynamic Potential Fields for flexible swarm control. These potential fields are modulated by a set of dynamic control parameters (DCPs) that can change under different environment situations. Since the focus is only on these DCPs, it simplifies the learning problem and makes it feasible for practical use. This approach uses soft actor critic (SAC), where the actor only determines how to modify the DCPs in the current situation, resulting in more flexible swarm control. The results show that the DCP approach allows the drones to better traverse environments with obstacles compared to several state-of-the-art swarm control methods with a fixed set of control parameters. The approach also obtains a higher safety score commonly used to assess swarm behavior. A comparison with a basic reinforcement learning approach demonstrates faster convergence. Finally, an ablation study is conducted to validate the design of this approach.
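The idea of gains that are chosen at run time rather than fixed a priori can be sketched as follows; the field shapes, parameter names, and numeric values are illustrative assumptions, with the dynamic control parameters standing in for the outputs of the learned SAC actor.

```python
# Sketch of a potential-field control law whose gains (the DCPs) are supplied
# at each step, e.g. by a learned policy, instead of being fixed a priori.
import numpy as np

def potential_field_velocity(pos, goal, obstacles, dcp):
    # dcp = (k_goal, k_obs, influence_radius), chosen by the learned actor
    k_goal, k_obs, radius = dcp
    v = k_goal * (goal - pos)                       # attraction toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff) + 1e-6
        if d < radius:                              # repulsion inside the radius
            v += k_obs * (1.0 / d - 1.0 / radius) * diff / d**2
    return v

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 5.0])
obstacles = [np.array([1.0, 1.0])]
dcp = (0.8, 1.5, 2.0)        # illustrative values standing in for SAC outputs
print(potential_field_velocity(pos, goal, obstacles, dcp))
```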
ContributorsFerraro, Calvin Shores (Author) / Zhang, Yu (Thesis advisor) / Ben Amor, Hani (Committee member) / Berman, Spring (Committee member) / Arizona State University (Publisher)
Created2022