Matching Items (109)

157926
Description

In order for a robot to solve complex tasks in the real world, it needs to compute discrete, high-level strategies that can be translated into continuous movement trajectories. These problems become increasingly difficult as the number of objects and domain constraints grows and as the degrees of freedom of robotic manipulator arms increase.

The first part of this thesis develops and investigates new methods for addressing these problems through hierarchical task and motion planning for manipulation with a focus on autonomous construction of free-standing structures using precision-cut planks. These planks can be arranged in various orientations to design complex structures; reliably and autonomously building such structures from scratch is computationally intractable due to the long planning horizon and the infinite branching factor of possible grasps and placements that the robot could make.

An abstract representation is developed for this class of problems, and it is shown how pose generators can be used to autonomously compute feasible robot motion plans for constructing a given structure. The approach was evaluated in simulation and on a real ABB YuMi robot. Results show that hierarchical planning algorithms can effectively overcome the computational barriers to solving such problems.
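
To make the role of pose generators concrete, the sketch below frames one as a lazy Python generator that streams candidate placement poses until a motion-feasibility check accepts one. It is a hypothetical illustration; the `Surface` and `Plank` types and the feasibility callback are made-up stand-ins, not the thesis's implementation.

```python
import itertools
import math
from dataclasses import dataclass

@dataclass
class Surface:
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_top: float

@dataclass
class Plank:
    height: float

def placement_pose_generator(plank, surface, grid=5, yaw_samples=8):
    """Lazily yield candidate placement poses (x, y, z, yaw) for a plank."""
    for gx, gy in itertools.product(range(grid), range(grid)):
        x = surface.x_min + gx * (surface.x_max - surface.x_min) / (grid - 1)
        y = surface.y_min + gy * (surface.y_max - surface.y_min) / (grid - 1)
        for k in range(yaw_samples):
            # Discretize the infinite space of orientations into a few samples.
            yield (x, y, surface.z_top + plank.height / 2, 2 * math.pi * k / yaw_samples)

def first_feasible(poses, is_reachable):
    """Consume the stream until the motion-level check accepts a pose."""
    return next((p for p in poses if is_reachable(p)), None)

# Toy usage: accept any pose whose yaw is (numerically) zero.
poses = placement_pose_generator(Plank(height=0.024), Surface(0.0, 0.4, 0.0, 0.4, 0.1))
print(first_feasible(poses, lambda p: abs(p[3]) < 1e-9))
```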

The second part of this thesis proposes a deep learning-based algorithm to identify critical regions for motion planning. It is further investigated whether these learned critical regions can be leveraged to learn high-level landmark actions for automated planning.
Contributors: Kumar, Kislay (Author) / Srivastava, Siddharth (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2019
157623
Description

Feature embeddings differ from raw features in that the former obey certain properties, such as a notion of similarity/dissimilarity, in their embedding space. word2vec is a preeminent example in this direction, where similarity in the embedding space is measured by cosine similarity. Such language embedding models have seen numerous applications in both the language and vision communities, as they capture the information in their modality (the English language) efficiently. Inspired by these language models, this work focuses on learning embedding spaces for two visual computing tasks: (1) Image Hashing and (2) Zero-Shot Learning. The training set was used to learn embedding spaces over which similarity/dissimilarity is measured using several distance metrics, such as Hamming, Euclidean, and cosine distances. While the above-mentioned language models learn generic word embeddings, in this work task-specific embeddings were learned, which can be used for Image Retrieval and Classification separately.
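
For concreteness, the three kinds of distances named above have compact standard definitions (generic code, not from the thesis):

```python
import numpy as np

def cosine_distance(a, b):
    # One minus cosine similarity; 0 means identical direction.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def euclidean_distance(a, b):
    return float(np.linalg.norm(a - b))

def hamming_distance(a_bits, b_bits):
    # For binary codes: the number of positions where the bits differ.
    return int(np.sum(a_bits != b_bits))

a, b = np.array([1.0, 2.0, 3.0]), np.array([2.0, 2.0, 1.0])
print(cosine_distance(a, b), euclidean_distance(a, b))
print(hamming_distance(np.array([0, 1, 1, 0]), np.array([1, 1, 0, 0])))
```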

Image Hashing is the task of mapping images to binary codes such that some notion of user-defined similarity is preserved. The first part of this work focuses on designing a new framework that uses the hash-tags associated with web images to learn the binary codes. Such codes can be used in several applications, such as Image Retrieval and Image Classification. Further, this framework requires no labelled data, making it very inexpensive. Results show that the proposed approach surpasses state-of-the-art approaches by a significant margin.
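
To show what binary codes and Hamming-distance retrieval look like, here is a generic random-projection (LSH-style) baseline with synthetic features. It is purely illustrative; the thesis's framework learns its codes from hash-tag supervision rather than from random projections.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(features, planes):
    # One bit per hyperplane: which side of the hyperplane does the feature fall on?
    return (features @ planes > 0).astype(np.uint8)

db_features = rng.normal(size=(100, 64))   # stand-in image features
planes = rng.normal(size=(64, 16))         # 16 random hyperplanes -> 16-bit codes
db_codes = encode(db_features, planes)

# Retrieval: rank the database by Hamming distance to the query's code.
query_code = encode(db_features[:1], planes)
ranking = np.argsort((db_codes != query_code).sum(axis=1))
print(ranking[:5])  # indices of the nearest items in Hamming space
```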

Zero-shot classification is the task of classifying a test sample into a new class that was not seen during training. This is possible by establishing a relationship between the training and testing classes using auxiliary information. In the second part of this thesis, a framework is designed that trains using hand-crafted attribute vectors and word vectors but does not require the expensive attribute vectors at test time. More specifically, an intermediate space is learned between the word vector space and the image feature space using the hand-crafted attribute vectors. Preliminary results on two zero-shot classification datasets show that this is a promising direction to explore.
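
To make the role of attribute vectors concrete, the snippet below sketches a classic attribute-prediction baseline: ridge regression from image features to attribute space, followed by nearest-attribute classification. The data is synthetic and the method is a simplified stand-in, not the intermediate-space model proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three seen classes, each described by a hand-crafted attribute vector (synthetic).
attrs = {0: np.array([1.0, 0.0, 1.0]),
         1: np.array([0.0, 1.0, 1.0]),
         2: np.array([1.0, 1.0, 0.0])}
X = rng.normal(size=(90, 5))            # stand-in image features
y = np.repeat([0, 1, 2], 30)
A = np.stack([attrs[c] for c in y])     # per-sample attribute targets

# Ridge regression from feature space to attribute space.
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ A)

# An unseen class is described only by its attribute vector; classify a sample
# by predicting its attributes and picking the nearest class description.
all_attrs = {**attrs, 3: np.array([0.0, 0.0, 1.0])}

def predict(x):
    z = x @ W
    return min(all_attrs, key=lambda c: np.linalg.norm(z - all_attrs[c]))

print(predict(X[0]))
```
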
Contributors: Gattupalli, Jaya Vijetha (Author) / Li, Baoxin (Thesis advisor) / Yang, Yezhou (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created: 2019
157799
Description

The goal of reinforcement learning is to enable systems to autonomously solve tasks in the real world, even in the absence of prior data. To succeed in such situations, reinforcement learning algorithms collect new experience through interactions with the environment to further the learning process. The behaviour is optimized by maximizing a reward function, which assigns high numerical values to desired behaviours. Especially in robotics, such interactions with the environment are expensive in terms of the required execution time, human involvement, and mechanical degradation of the system itself. Therefore, this thesis aims to introduce sample-efficient reinforcement learning methods which are applicable to real-world settings and control tasks such as bimanual manipulation and locomotion. Sample efficiency is achieved through directed exploration, either by using dimensionality reduction or trajectory optimization methods. Finally, it is demonstrated how data-efficient reinforcement learning methods can be used to optimize the behaviour and morphology of robots at the same time.
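
The learn-from-interaction loop described above can be seen in miniature with tabular Q-learning on a toy chain environment. This is the opposite end of the spectrum from the sample-efficient robot-learning methods developed in the thesis, but it shows the core mechanics: act, observe a reward, update a value estimate.

```python
import random

random.seed(0)

# Toy chain environment: states 0..4, actions -1/+1, reward 1 on reaching state 4.
def step(state, action):
    nxt = max(0, min(4, state + action))
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

Q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
alpha, gamma = 0.5, 0.95

for _ in range(200):                       # collect experience over many episodes
    s, done = 0, False
    while not done:
        a = random.choice((-1, 1))         # random exploration; Q-learning is off-policy
        s2, r, done = step(s, a)
        target = r + gamma * max(Q[(s2, -1)], Q[(s2, 1)])   # reward-maximizing target
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# Greedy policy extracted from the learned values: move right (+1) in states 0-3.
print({s: max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(5)})
```
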
Contributors: Luck, Kevin Sebastian (Author) / Ben Amor, Heni (Thesis advisor) / Aukes, Daniel (Committee member) / Fainekos, Georgios (Committee member) / Scholz, Jonathan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2019
158746
Description

This work addresses the problem of incorrect rotations while using handheld devices. Two new methods that improve upon previous works are explored. The first method uses an infrared camera to capture and detect the user's face position and orient the display accordingly. The second method utilizes gyroscopic and accelerometer data as input to a machine learning model to classify correct and incorrect rotations. Experiments show that these new methods achieve an overall success rate of 67% for the first and 92% for the second, which sets a new high for this performance category. The paper also discusses logistical and legal reasons for implementing this feature in an end-user product from a business perspective. Lastly, the monetary incentive behind a feature like irRotate in a consumer device is discussed, and related patents are explored.
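
For the second method, a minimal sketch of the classification setup might look as follows. The feature layout and labels are synthetic placeholders; real inputs would be windows of logged gyroscope and accelerometer readings with human-annotated rotation correctness.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: each sample is a window of 3-axis gyroscope plus 3-axis
# accelerometer readings flattened into one feature vector; label 1 marks a
# correct (user-intended) rotation, 0 an incorrect one.
n_samples, window = 1000, 50
X = rng.normal(size=(n_samples, window * 6))
y = (X[:, :window].mean(axis=1) > 0).astype(int)   # synthetic stand-in labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```
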
Contributors: Tallman, Riley (Author) / Yang, Yezhou (Thesis advisor) / Liang, Jianming (Committee member) / Chen, Yinong (Committee member) / Arizona State University (Publisher)
Created: 2020
158646
Description

Imagery data has become important for civil infrastructure operation and maintenance because it can capture detailed visual information with high frequency. Computer vision can be useful for acquiring spatiotemporal details to support the timely maintenance of critical civil infrastructures that serve society. Some examples: irrigation canals need leaking sections maintained to avoid water loss; project engineers need to identify deviating parts of a workflow to have the project finished on time and within budget; and detecting abnormal behaviors of air traffic controllers is necessary to reduce operational errors and avoid air traffic accidents. Identifying the outliers of civil infrastructure can help engineers focus on targeted areas. However, large amounts of imagery data bring the difficulty of information overload. Anomaly detection combined with contextual knowledge could help address such information overload to support the operation and maintenance of civil infrastructures.

Several challenges make such identification of anomalies difficult. The first challenge is that diverse large civil infrastructures span various geospatial environments, so previous algorithms cannot handle anomaly detection of civil infrastructures in different environments. The second challenge is that crowded and rapidly changing workspaces can cause difficulties for the reliable detection of deviating parts of a workflow. The third challenge is that limited studies have examined how to detect abnormal behaviors for diverse people in a real-time and non-intrusive manner. Using video and relevant data sources (e.g., biometric and communication data) could be promising but still needs a baseline of normal behaviors for outlier detection.

This dissertation presents an anomaly detection framework that uses contextual knowledge, contextual information, and contextual data for filtering visual information extracted by computer vision techniques (ADCV) to address the challenges described above. The framework categorizes the anomaly detection of civil infrastructures into two categories: with and without a baseline of normal events. The author uses three case studies to illustrate how the developed approaches can address ADCV challenges in the different categories of anomaly detection. Detailed data collection and experiments validate the developed ADCV approaches.
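
For the category with a baseline of normal events, the simplest statistical sketch is a z-score detector fit on normal data, shown below with synthetic features. It only illustrates the idea of scoring deviations from a learned baseline and is not the ADCV framework itself.

```python
import numpy as np

def fit_baseline(normal_features):
    """Model 'normal' as per-dimension mean/std over a baseline of normal events."""
    return normal_features.mean(axis=0), normal_features.std(axis=0) + 1e-8

def anomaly_score(x, mu, sigma):
    # Largest per-dimension z-score: flag observations that deviate strongly anywhere.
    return float(np.max(np.abs((x - mu) / sigma)))

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 8))   # stand-in features extracted by CV
mu, sigma = fit_baseline(normal)
print(anomaly_score(rng.normal(0.0, 1.0, size=8), mu, sigma))  # typical: small
print(anomaly_score(np.full(8, 6.0), mu, sigma))               # deviating: large
```
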
Contributors: Chen, Jiawei (Author) / Tang, Pingbo (Thesis advisor) / Ayer, Steven (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020
158648
Description

The need for incorporating game engines into robotics tools becomes increasingly pressing as their graphics continue to become more photorealistic. This thesis presents a simulation framework, referred to as OpenUAV, that addresses cloud simulation and photorealism challenges for academic and research purposes. In this work, OpenUAV is used to create a simulation of an autonomous underwater vehicle (AUV) closely following a moving autonomous surface vehicle (ASV) in an underwater coral reef environment. It incorporates the Unity3D game engine and the robotics software Gazebo to take advantage of Unity3D's perception and Gazebo's physics simulation. The software is developed as a containerized solution that is deployable on cloud and on-premise systems.

This method of utilizing Gazebo's physics and Unity3D perception is evaluated for a team of marine vehicles (an AUV and an ASV) in a coral reef environment. A coordinated navigation and localization module is presented that allows the AUV to follow the path of the ASV. A fiducial marker underneath the ASV facilitates pose estimation of the AUV, and the pose estimates are filtered using the known dynamical system model of both vehicles for better localization. This thesis also investigates different fiducial markers and their detection rates in this Unity3D underwater environment. The limitations and capabilities of this Unity3D perception and Gazebo physics approach are examined.
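
A common way to obtain such fiducial-based pose estimates is OpenCV's ArUco module. The sketch below uses the pre-4.7 OpenCV contrib API (function names differ in newer releases) with placeholder calibration values, so treat it as an assumption-laden illustration rather than the project's actual pipeline.

```python
import cv2  # requires opencv-contrib-python (pre-4.7 aruco API)
import numpy as np

# Placeholder intrinsics; real values come from camera calibration.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
MARKER_LEN = 0.15  # marker side length in meters (assumed)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

def estimate_relative_pose(frame):
    """Return (rvec, tvec) of the first detected marker, or None if none is seen."""
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary, parameters=params)
    if ids is None:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, MARKER_LEN, K, dist)
    # Raw estimate; in this work it would then be filtered using the vehicles'
    # known dynamical system models for better localization.
    return rvecs[0], tvecs[0]
```
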
Contributors: Anand, Harish (Author) / Das, Jnaneshwar (Thesis advisor) / Yang, Yezhou (Committee member) / Berman, Spring M (Committee member) / Arizona State University (Publisher)
Created: 2020
158844
Description

Many real-world planning problems can be modeled as Markov Decision Processes (MDPs) which provide a framework for handling uncertainty in outcomes of action executions. A solution to such a planning problem is a policy that handles possible contingencies that could arise during execution. MDP solvers typically construct policies for a problem instance without re-using information from previously solved instances. Research in generalized planning has demonstrated the utility of constructing algorithm-like plans that reuse such information. However, using such techniques in an MDP setting has not been adequately explored.

This thesis presents a novel approach for learning generalized partial policies that can be used to solve problems with different object names and/or object quantities, using very few example policies for learning. This approach uses abstraction for state representation, which allows the identification of patterns in solutions, such as loops, that are agnostic to problem-specific properties. This thesis also presents some theoretical results related to the uniqueness and succinctness of the policies computed using such a representation. The presented algorithm can be used as a fast, yet greedy and incomplete, method for policy computation, falling back to a complete policy search algorithm when needed. Extensive empirical evaluation on discrete MDP benchmarks shows that this approach generalizes effectively and is often able to solve problems much faster than existing state-of-the-art discrete MDP solvers. Finally, the practical applicability of this approach is demonstrated by incorporating it in an anytime stochastic task and motion planning framework to successfully construct free-standing tower structures using Keva planks.
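
For contrast with the generalized-policy approach, the per-instance computation performed by standard MDP solvers can be sketched as textbook value iteration on a tiny hand-written MDP (generic code, not the thesis's algorithm):

```python
# Standard value iteration for a tiny tabular MDP, the kind of per-instance
# solve that generalized policies aim to avoid repeating across instances.
GAMMA, EPS = 0.95, 1e-6

# P[s][a] = list of (probability, next_state, reward); hypothetical 3-state MDP.
P = {
    0: {"go": [(0.8, 1, 0.0), (0.2, 0, 0.0)], "stay": [(1.0, 0, 0.0)]},
    1: {"go": [(0.9, 2, 1.0), (0.1, 0, 0.0)], "stay": [(1.0, 1, 0.0)]},
    2: {"go": [(1.0, 2, 0.0)], "stay": [(1.0, 2, 0.0)]},  # absorbing goal
}

V = {s: 0.0 for s in P}
while True:
    delta = 0.0
    for s in P:
        q = {a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a]) for a in P[s]}
        best = max(q.values())
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < EPS:
        break

# Greedy policy with respect to the converged values.
policy = {s: max(P[s], key=lambda a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a]))
          for s in P}
print(V, policy)
```
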
Contributors: Kala Vasudevan, Deepak (Author) / Srivastava, Siddharth (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020
158811
Description

Image super-resolution (SR) is a low-level image processing task with many applications, such as medical imaging, satellite image processing, and video enhancement. Given a low-resolution image, it aims to reconstruct a high-resolution image. The problem is ill-posed since there can be more than one high-resolution image corresponding to the same low-resolution image. To address this problem, a number of machine learning-based approaches have been proposed.

In this dissertation, I present my work on single image super-resolution (SISR) and accelerated magnetic resonance imaging (MRI) (a.k.a. super-resolution on MR images), followed by an investigation of transfer learning for accelerated MRI reconstruction. For SISR, a dictionary-based approach and two reconstruction-based approaches are presented. To be precise, a convex dictionary learning (CDL) algorithm is proposed by constraining the dictionary atoms to be formed by non-negative linear combinations of the training data, which is a natural, desired property. Also, two reconstruction-based single-image methods are presented, which make use of (i) joint regularization, where a group-residual-based regularization (GRR) and a ridge-regression-based regularization (3R) are combined, and (ii) collaborative representation and non-local self-similarity. After that, two deep learning approaches are proposed, aiming at reconstructing high-quality images from accelerated MRI acquisitions. Residual Dense Blocks (RDB) and feedback connections are introduced in the proposed models. In the last chapter, the feasibility of transfer learning for accelerated MRI reconstruction is discussed.
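
The reconstruction-based methods above follow the standard regularized inverse-problem template for super-resolution; as a sketch in generic notation (not the thesis's exact formulation):

```latex
\hat{x} = \arg\min_{x} \ \tfrac{1}{2}\,\lVert D H x - y \rVert_2^2 + \lambda\, R(x)
```

Here $y$ is the observed low-resolution image, $H$ a blurring operator, $D$ a downsampling operator, and $\lambda$ a balancing weight; the joint GRR/3R terms and the collaborative-representation and non-local self-similarity priors can be read as particular choices of the regularizer $R(x)$.
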
Contributors: Ding, Pak Lun Kevin (Author) / Li, Baoxin (Thesis advisor) / Wu, Teresa (Committee member) / Wang, Yalin (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020
158897
Description

A complex social system, whether artificial or natural, can possess its macroscopic properties as a collective, which may change in real time as a result of local behavioral interactions among a number of agents in it. If a reliable indicator is available to abstract the macrolevel states, decision makers could use it to take a proactive action, whenever needed, in order for the entire system to avoid unacceptable states or converge to desired ones. In realistic scenarios, however, there can be many challenges in learning a model of dynamic global states from interactions of agents, such as 1) high complexity of the system itself, 2) absence of holistic perception, 3) variability of group size, 4) biased observations on state space, and 5) identification of salient behavioral cues. In this dissertation, I introduce useful applications of macrostate estimation in complex multi-agent systems and explore effective deep learning frameworks to address the inherited challenges. First of all, Remote Teammate Localization (ReTLo) is developed in multi-robot teams, in which an individual robot can use its local interactions with a nearby robot as an information channel to estimate the holistic view of the group. Within this problem, I will show (a) that learning a model of a modular team can generalize to all others to gain global awareness of teams of variable sizes, and (b) that active interactions are necessary to diversify training data and speed up the overall learning process. The complexity of the next focal system escalates to a colony of over 50 individual ants undergoing 18-day social stabilization after a chaotic event. I will utilize this natural platform to demonstrate, in contrast to (b), (c) that monotonic samples only from “before chaos” can be sufficient to model the panicked society, and (d) that the model can also be used to discover salient behaviors to precisely predict macrostates.
Contributors: Choi, Taeyeong (Author) / Pavlic, Theodore (Thesis advisor) / Richa, Andrea (Committee member) / Ben Amor, Heni (Committee member) / Yang, Yezhou (Committee member) / Liebig, Juergen (Committee member) / Arizona State University (Publisher)
Created: 2020
158399
Description

Languages, especially gestural and sign languages, are best learned in immersive environments with rich feedback. Computer-Aided Language Learning (CALL) solutions for spoken languages have successfully incorporated some feedback mechanisms, but no such solution exists for signed languages. Computer Aided Sign Language Learning (CASLL) is a recent and promising field of research made feasible by advances in Computer Vision and Sign Language Recognition (SLR). Leveraging existing SLR systems for feedback-based learning is not feasible because their decision processes are not human-interpretable and do not facilitate conceptual feedback to learners. Thus, fundamental research is needed towards designing systems that are modular and explainable. The explanations from these systems can then be used to produce feedback to aid in the learning process.

In this work, I present novel approaches for the recognition of location, movement, and handshape, which are components of American Sign Language (ASL), using both wrist-worn sensors and webcams. Finally, I present Learn2Sign (L2S), a chatbot-based AI tutor that can provide fine-grained conceptual feedback to learners of ASL using the modular recognition approaches. L2S is designed to provide feedback directly relating to the fundamental concepts of ASL using explainable AI. I present system performance results in terms of Precision, Recall, and F-1 scores, as well as validation results on the learning outcomes of users. Both retention and execution tests for 26 participants on 14 different ASL words learned using Learn2Sign are presented. Finally, I also present the results of a post-usage usability survey for all participants. In this work, I found that learners who received live feedback on their executions improved both their execution and retention performance. The average increase in execution performance was 28 percentage points, and that for retention was 4 percentage points.
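
For reference, the reported scores follow the standard definitions in terms of true positives (TP), false positives (FP), and false negatives (FN):

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```
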
Contributors: Paudyal, Prajwal (Author) / Gupta, Sandeep (Thesis advisor) / Banerjee, Ayan (Committee member) / Hsiao, Ihan (Committee member) / Azuma, Tamiko (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020