Matching Items (183)
Filtering by
- Genre: Academic theses
- Genre: Masters Thesis
- Creators: Yang, Yezhou
Description
This report presents real-time estimation of wind and an analysis of the current wind-correction algorithm on a commercial off-the-shelf autopilot board, the open-source ArduPilot Mega 2.5 (APM 2.5) manufactured by 3D Robotics. There is currently extensive development in the field of Unmanned Aerial Systems (UAS), spanning various aerial platforms and the corresponding autonomous systems for them. The technology has advanced to a stage where UAVs can be deployed reliably on specifically designed missions, but missions requiring high maneuverability with greater efficiency remain an open research area; progress there would significantly increase the reliability and range of UAVs. One problem addressed in this thesis is that current autopilot systems handle wind through attitude correction with an appropriate crab angle, while the real-time wind vector (direction) and its estimated velocity are derived from a geometric and algebraic transformation between the ground-speed and airspeed vectors. This method of wind estimation and prediction often leads to inaccurate attitude correction, as demonstrated in this report through simulation and actual field testing. In the later part, new ways to handle flight in windy conditions are proposed.
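As an illustration of the geometric relationship the abstract refers to, the sketch below (not taken from the thesis; the frame convention, function names, and values are assumptions) computes the wind vector as the difference between the GPS ground-velocity vector and the airspeed-derived air-velocity vector, plus the crab angle needed to compensate for the crosswind component:

```python
import numpy as np

def estimate_wind(ground_velocity, air_velocity):
    """Wind triangle: wind vector = ground velocity - air velocity (components are [north, east], m/s)."""
    return np.asarray(ground_velocity, dtype=float) - np.asarray(air_velocity, dtype=float)

def crab_angle(desired_track_deg, airspeed, wind):
    """Magnitude of the heading offset that compensates the crosswind on the desired track
    (sign convention depends on the chosen frame)."""
    track = np.deg2rad(desired_track_deg)
    cross_track = np.array([-np.sin(track), np.cos(track)])   # unit vector normal to the track
    crosswind = float(wind @ cross_track)
    return np.rad2deg(np.arcsin(np.clip(crosswind / airspeed, -1.0, 1.0)))

# Illustrative values only: ground velocity from GPS, air velocity from airspeed and heading.
wind = estimate_wind([18.0, 2.0], [20.0, -1.0])
print(wind, crab_angle(0.0, 20.0, wind))
```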
ContributorsBiradar, Anandrao Shesherao (Author) / Saripalli, Srikanth (Thesis advisor) / Berman, Spring (Thesis advisor) / Thanga, Jekan (Committee member) / Arizona State University (Publisher)
Created2014
Description
Myoelectric control is filled with potential to significantly change human-robot interaction. Humans desire compliant robots to safely interact in dynamic environments associated with daily activities. As surface electromyography non-invasively measures limb motion intent and correlates with joint stiffness during co-contractions, it has been identified as a candidate for naturally controlling such robots. However, state-of-the-art myoelectric interfaces have struggled to achieve both enhanced functionality and long-term reliability. As demands in myoelectric interfaces trend toward simultaneous and proportional control of compliant robots, robust processing of multi-muscle coordinations, or synergies, plays a larger role in the success of the control scheme. This dissertation presents a framework enhancing the utility of myoelectric interfaces by exploiting motor skill learning and flexible muscle synergies for reliable long-term simultaneous and proportional control of multifunctional compliant robots. The interface is learned as a new motor skill specific to the controller, providing long-term performance enhancements without requiring any retraining or recalibration of the system. Moreover, the framework offers control of both motion and stiffness simultaneously for intuitive and compliant human-robot interaction. The framework is validated through a series of experiments characterizing motor learning properties and demonstrating control capabilities not seen previously in the literature. The results validate the approach as a viable option to remove the trade-off between functionality and reliability that has hindered state-of-the-art myoelectric interfaces. Thus, this research contributes to the expansion and enhancement of myoelectric controlled applications beyond commonly perceived anthropomorphic and "intuitive control" constraints and into more advanced robotic systems designed for everyday tasks.
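For readers unfamiliar with muscle synergies, the sketch below illustrates the generic idea of factoring a multi-channel surface EMG envelope into a small set of synergy vectors and their activation signals via non-negative matrix factorization. It is a minimal, self-contained example of the concept, not the dissertation's control framework; the function names and parameters are assumptions.

```python
import numpy as np

def extract_synergies(emg, n_synergies=4, n_iter=500, eps=1e-9):
    """Multiplicative-update NMF: factor a non-negative EMG envelope (muscles x time)
    into synergy vectors W and activation signals H, so that emg ~= W @ H."""
    rng = np.random.default_rng(0)
    n_muscles, n_samples = emg.shape
    W = rng.random((n_muscles, n_synergies))
    H = rng.random((n_synergies, n_samples))
    for _ in range(n_iter):
        H *= (W.T @ emg) / (W.T @ W @ H + eps)
        W *= (emg @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: emg should be a non-negative envelope, e.g. the rectified, low-pass-filtered signal:
# W, H = extract_synergies(np.abs(raw_emg_matrix), n_synergies=4)
```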
ContributorsIson, Mark (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Greger, Bradley (Committee member) / Berman, Spring (Committee member) / Sugar, Thomas (Committee member) / Fainekos, Georgios (Committee member) / Arizona State University (Publisher)
Created2015
Description
In a collaborative environment where multiple robots and human beings are expected to work together on a task, it becomes essential for a robot to be aware of the multiple agents in its work environment. A robot must also learn to adapt to different agents in the workspace and conduct its interaction based on the presence of these agents. Interaction Primitives is a theoretical framework that performs interaction learning from demonstrations in a two-agent work environment.
This document is an in-depth description of a new state-of-the-art Python framework for Interaction Primitives between two agents in single-task and multi-task work environments, as well as an extension of the original framework to work environments with multiple agents performing a single task. The original theory of Interaction Primitives has been extended to create a framework that captures correlations between more than two agents while performing a single task. The Python framework is an intuitive, generic, easy-to-install, and easy-to-use library for applying Interaction Primitives in a work environment. The library was tested in simulated environments and in a controlled laboratory environment; its results and benchmarks are reported in the related sections of this document.
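To make the core mechanism concrete, here is a minimal two-agent Interaction Primitives sketch in plain NumPy: demonstrations of both agents are projected onto a shared basis, a joint Gaussian over the stacked weights is learned, and the partner's motion is inferred by conditioning on a partial observation of the first agent. This is an illustrative reimplementation of the published idea, not the library described in this thesis; all names and parameters are assumptions.

```python
import numpy as np

def basis(T, K, width=0.05):
    """Gaussian radial basis functions over a normalized time axis, shape (T, K)."""
    t = np.linspace(0, 1, T)
    centers = np.linspace(0, 1, K)
    Phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    return Phi / Phi.sum(axis=1, keepdims=True)

def fit_weights(trajectory, Phi):
    """Least-squares projection of a 1-D trajectory onto the basis."""
    w, *_ = np.linalg.lstsq(Phi, trajectory, rcond=None)
    return w

def learn_interaction(demos_a, demos_b, K=10):
    """Joint Gaussian over the stacked basis weights of both agents, from paired demos."""
    T = demos_a.shape[1]
    Phi = basis(T, K)
    W = np.array([np.concatenate([fit_weights(a, Phi), fit_weights(b, Phi)])
                  for a, b in zip(demos_a, demos_b)])
    return W.mean(axis=0), np.cov(W.T) + 1e-6 * np.eye(2 * K), Phi

def infer_partner(mu, Sigma, Phi, obs_a, obs_noise=1e-3):
    """Condition the joint weight distribution on a partial observation of agent A
    and decode the predicted trajectory of agent B."""
    T_obs, K = len(obs_a), Phi.shape[1]
    H = np.hstack([Phi[:T_obs], np.zeros((T_obs, K))])     # observation picks out agent A only
    S = H @ Sigma @ H.T + obs_noise * np.eye(T_obs)
    gain = Sigma @ H.T @ np.linalg.solve(S, np.eye(T_obs))
    mu_post = mu + gain @ (obs_a - H @ mu)
    return Phi @ mu_post[K:]
```

Conditioning in weight space is the design choice that lets the second agent generate its own motion from only a partial observation of its partner.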
ContributorsKumar, Ashish, M.S (Author) / Amor, Hani Ben (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2017
Description
Computer vision as a field has gone through significant changes in the last decade. The field has seen tremendous success in designing learning systems with hand-crafted features and in using representation learning to extract better features. In this dissertation, some novel approaches to representation learning and task learning are studied. Multiple-instance learning, which is a generalization of supervised learning, is one example of task learning that is discussed. In particular, a novel non-parametric k-NN-based multiple-instance learning method is proposed and shown to outperform other existing approaches. This solution is applied effectively to a diabetic retinopathy pathology detection problem. In the case of representation learning, the generality of neural features is investigated first. This investigation leads to some critical understanding of, and results on, feature generality among datasets. The possibility of learning from a mentor network instead of from labels is then investigated. Distillation of dark knowledge is used to efficiently mentor a small network from a pre-trained large mentor network. These studies help in understanding representation learning with smaller and compressed networks.
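The "dark knowledge" distillation mentioned above has a standard formulation: the small network is trained against the large mentor's temperature-softened output distribution in addition to the hard labels. The sketch below is a generic NumPy version of that loss, shown for illustration; the temperature and weighting values are assumptions rather than the dissertation's settings.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of the soft-target cross-entropy (dark knowledge) and the hard-label loss."""
    p_teacher = softmax(teacher_logits, T)                      # softened mentor targets
    log_p_student_T = np.log(softmax(student_logits, T) + 1e-12)
    soft_ce = -(p_teacher * log_p_student_T).sum(axis=-1).mean() * T * T   # T^2 keeps gradient scale
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    hard_ce = -log_p_student[np.arange(len(labels)), labels].mean()
    return alpha * soft_ce + (1 - alpha) * hard_ce
```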
ContributorsVenkatesan, Ragav (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2017
Description
With the rise of the Big Data Era, an exponential amount of network data is being generated at an unprecedented rate across a wide-range of high impact micro and macro areas of research---from protein interaction to social networks. The critical challenge is translating this large scale network data into actionable information.
A key task in the data translation is the analysis of network connectivity via marked nodes---the primary focus of our research. We have developed a framework for analyzing network connectivity via marked nodes in large-scale graphs, utilizing novel algorithms in three interrelated areas: (1) analysis of a single seed node via its ego-centric network (AttriPart algorithm); (2) pathway identification between two seed nodes (K-Simple Shortest Paths Multithreaded and Search Reduced (KSSPR) algorithm); and (3) tree detection, defining the interaction between three or more seed nodes (Shortest Path MST algorithm).
In an effort to address both fundamental and applied research issues, we have developed the LocalForecasting algorithm to explore how network connectivity analysis can be applied to local community evolution and recommender systems. The goal is to apply the LocalForecasting algorithm to various domains---e.g., friend suggestions in social networks or future collaboration in co-authorship networks. This algorithm utilizes link prediction in combination with the AttriPart algorithm to predict future connections in local graph partitions.
Results show that our proposed AttriPart algorithm finds up to 1.6x denser local partitions, while running approximately 43x faster than traditional local partitioning techniques (PageRank-Nibble). In addition, our LocalForecasting algorithm demonstrates a significant improvement in the number of nodes and edges correctly predicted over baseline methods. Furthermore, results for the KSSPR algorithm demonstrate a speed-up of up to 2.5x the standard k-simple shortest paths algorithm.
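For context, the baseline that KSSPR is benchmarked against, enumeration of the k simple (loop-free) shortest paths between two seed nodes, can be sketched with networkx's Yen-style generator. This is only the standard baseline computation, not the thesis' multithreaded, search-reduced algorithm; the example graph and weights are placeholders.

```python
from itertools import islice
import networkx as nx

def k_simple_shortest_paths(G, source, target, k):
    """Enumerate the k shortest loop-free paths between two marked nodes (Yen-style baseline)."""
    return list(islice(nx.shortest_simple_paths(G, source, target, weight="weight"), k))

# Placeholder graph with unit edge weights, purely for illustration.
G = nx.karate_club_graph()
for u, v in G.edges:
    G[u][v]["weight"] = 1.0
print(k_simple_shortest_paths(G, 0, 33, k=3))
```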
ContributorsFreitas, Scott (Author) / Tong, Hanghang (Thesis advisor) / Maciejewski, Ross (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2018
Description
The interaction between humans and robots has become an important area of research as the diversity of robotic applications has grown. The cooperation of a human and a robot to achieve a goal is an important area within the physical human-robot interaction (pHRI) field, which is expanding toward applications in unstructured environments. When humans cooperate with each other, there are often leader and follower roles, and these roles may change during the task. This creates a need for the robotic system to be able to exchange roles with the human during a cooperative task. The unstructured nature of the new applications in the field creates a need for robotic systems to be able to interact in six degrees of freedom (DOF). Moreover, in these unstructured environments the robotic system will have incomplete information, which means it will sometimes perform an incorrect action, and control methods need to be able to correct for this. However, the most compelling applications for robotics are those where the robot has capabilities that the human does not, which also creates the need for robotic systems to be able to correct human action when an error is detected. Activity in the brain precedes human action, and this activity can be used to classify the type of interaction desired by the human. In this dissertation, the cooperation between humans and robots is improved in two main areas. First, the ability of electroencephalography (EEG) to determine the cooperation role desired by the human is demonstrated, with a correct classification rate of 65%. Second, a robotic controller is developed to allow the human and robot to cooperate in six DOF with asymmetric role exchange. This system allowed human-robot cooperation to perform a cooperative task with a 100% success rate. High, medium, and low levels of robotic automation are shown to affect performance, with the human making the greatest number of errors when the robotic system has a medium level of automation.
ContributorsWhitsell, Bryan Douglas (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Berman, Spring (Committee member) / Lee, Hyunglae (Committee member) / Polygerinos, Panagiotis (Committee member) / Arizona State University (Publisher)
Created2017
Description
The performance of most visual computing tasks depends on the quality of the features extracted from the raw data. An insightful feature representation increases the performance of many learning algorithms by exposing the underlying explanatory factors of the output for the unobserved input. A good representation should also handle anomalies in the data, such as missing samples and noisy input caused by undesired external factors of variation, and should reduce data redundancy. Over the years, many feature extraction processes have been invented to produce good representations of raw images and videos.
Feature extraction processes can be categorized into three groups. The first group contains processes that are hand-crafted for a specific task. Hand-engineering features requires domain expertise and manual labor, but the resulting feature extraction process is interpretable and explainable. The next group contains latent-feature extraction processes. While the original features lie in a high-dimensional space, the relevant factors for a task often lie on a lower-dimensional manifold. Latent-feature extraction employs hidden variables to expose underlying data properties that cannot be directly measured from the input, and imposes a specific structure, such as sparsity or low rank, on the derived representation through sophisticated optimization techniques. The last category is that of deep features, obtained by passing raw input data with minimal pre-processing through a deep network whose parameters are computed by iteratively minimizing a task-based loss.
In this dissertation, I present four pieces of work where I create and learn suitable data representations. The first task employs hand-crafted features to perform clinically relevant retrieval of diabetic retinopathy images. The second task uses latent features to perform content-adaptive image enhancement. The third task ranks a pair of images based on their aesthetic quality. The goal of the last task is to capture localized image artifacts in small datasets with patch-level labels. For the last two tasks, I propose novel deep architectures and show significant improvement over previous state-of-the-art approaches. A suitable combination of feature representations augmented with an appropriate learning approach can increase performance for most visual computing tasks.
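As a concrete example of the "latent features with imposed structure" category described above, the sketch below computes a sparse code for an input under a fixed dictionary using ISTA (iterative soft-thresholding). It is a generic illustration of that class of methods, not an algorithm from the dissertation; the dictionary, regularization weight, and iteration count are assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_code(D, x, lam=0.1, n_iter=200):
    """ISTA: find a sparse latent code z with x ~= D @ z."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the quadratic term's gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)
        z = soft_threshold(z - grad / L, lam / L)
    return z
```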
ContributorsChandakkar, Parag Shridhar (Author) / Li, Baoxin (Thesis advisor) / Yang, Yezhou (Committee member) / Turaga, Pavan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2017
Description
Research literature was reviewed to find recommended tools and technologies for operating Unmanned Aerial System (UAS) fleets in an urban environment. However, restrictive legislation prohibits fully autonomous flight without an operator. The existing literature covers considerations for operating UAS fleets in a controlled environment, with an emphasis on the effect different networking approaches have on the topology of the UAS network. UAS communications are most commonly implemented over 802.11 protocols, which can transmit telemetry and a video stream using off-the-shelf hardware. Other implementations use low-frequency radios for long-distance communication, or higher-latency 4G LTE modems to access existing network infrastructure. However, a gap remains in testing different network topologies outside of a controlled environment.
With the correct permits in place, further research can explore how different UAS network topologies behave in an urban environment when implemented with off-the-shelf UAS hardware. In addition to testing different network topologies, this thesis covers the implementation of a secure, scalable system, built with modern cloud computation tools and services, that is capable of supporting a variable number of UAS. The system also supports end-to-end simulation that accounts for factors such as battery life and realistic UAS kinematics. The implementation of the system leads to new findings needed to deploy UAS fleets in urban environments.
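To illustrate the kind of end-to-end simulation described above, here is a minimal, assumed model of one simulation tick combining first-order kinematics with a crude battery drain. The state fields, constants, and power model are placeholders for illustration, not the thesis' simulator.

```python
from dataclasses import dataclass

@dataclass
class UAVState:
    x: float = 0.0            # position, m
    y: float = 0.0
    vx: float = 0.0           # velocity, m/s
    vy: float = 0.0
    battery_wh: float = 60.0  # remaining energy, watt-hours

def step(state, ax, ay, dt=0.1, hover_power_w=150.0, drag=0.05):
    """One simulation tick: first-order kinematics plus a crude, assumed battery model."""
    state.vx += (ax - drag * state.vx) * dt
    state.vy += (ay - drag * state.vy) * dt
    state.x += state.vx * dt
    state.y += state.vy * dt
    power = hover_power_w + 2.0 * abs(ax) + 2.0 * abs(ay)   # placeholder power draw
    state.battery_wh -= power * dt / 3600.0
    return state
```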
ContributorsD'Souza, Daniel (Author) / Panchanathan, Sethuraman (Thesis advisor) / Berman, Spring (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created2018
Description
Topological methods for data analysis present opportunities for enforcing certain invariances of broad interest in computer vision: including view-point in activity analysis, articulation in shape analysis, and measurement invariance in non-linear dynamical modeling. The increasing success of these methods is attributed to the complementary information that topology provides, as well as availability of tools for computing topological summaries such as persistence diagrams. However, persistence diagrams are multi-sets of points and hence it is not straightforward to fuse them with features used for contemporary machine learning tools like deep-nets. In this paper theoretically well-grounded approaches to develop novel perturbation robust topological representations are presented, with the long-term view of making them amenable to fusion with contemporary learning architectures. The proposed representation lives on a Grassmann manifold and hence can be efficiently used in machine learning pipelines.
The efficacy of the proposed descriptor was explored on three applications: view-invariant activity analysis, 3D shape analysis, and non-linear dynamical modeling. Compared to other baseline methods, it obtains favorable results both in high-level recognition performance and in reduced time complexity.
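To make the pipeline concrete, the sketch below rasterizes a persistence diagram into a fixed-size persistence image and then collects several such vectors into an orthonormal basis, i.e., a point on a Grassmann manifold. This is a generic illustration of turning multi-set diagrams into a fixed, subspace-valued feature under assumed grid size, bandwidth, and subspace dimension; it is not the specific perturbation-robust representation proposed in this thesis.

```python
import numpy as np

def persistence_image(diagram, grid=20, sigma=0.1):
    """Rasterize a persistence diagram ((birth, death) pairs, assumed normalized to [0, 1])
    into a fixed-size vector, weighting each point by its lifetime."""
    pts = np.asarray(diagram, dtype=float)
    birth, pers = pts[:, 0], pts[:, 1] - pts[:, 0]
    xs = np.linspace(0, 1, grid)
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros_like(X)
    for b, p in zip(birth, pers):
        img += p * np.exp(-((X - b) ** 2 + (Y - p) ** 2) / (2 * sigma ** 2))
    return img.ravel()

def grassmann_point(feature_vectors, k=3):
    """Orthonormal basis spanning the top-k subspace of a set of feature vectors
    (a point on Gr(k, d)); requires at least k vectors."""
    F = np.stack(feature_vectors, axis=1)          # shape (d, n)
    U, _, _ = np.linalg.svd(F, full_matrices=False)
    return U[:, :k]
```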
ContributorsThopalli, Kowshik (Author) / Turaga, Pavan Kumar (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2017
Description
Image understanding is a long-established discipline in computer vision that encompasses a body of advanced image processing techniques used to locate (“where”), characterize, and recognize (“what”) objects, regions, and their attributes in an image. However, the notion of “understanding” (and the goal of artificially intelligent machines) goes beyond factual recall of the recognized components and includes reasoning and thinking beyond what can be seen (or perceived). Understanding is often evaluated by asking questions of increasing difficulty. Thus, the expected functionalities of an intelligent image understanding system can be expressed in terms of the functionalities required to answer questions about an image. Answering questions about images requires primarily three components: image understanding, question (natural language) understanding, and reasoning based on knowledge. Any question that asks beyond what can be directly seen requires modeling of commonsense (or background/ontological/factual) knowledge and reasoning.
Knowledge and reasoning have seen scarce use in image understanding applications. In this thesis, we demonstrate the utility of incorporating background knowledge and using explicit reasoning in image understanding applications. We first present a comprehensive survey of previous work that utilized background knowledge and reasoning in understanding images; this survey outlines the limited use of commonsense knowledge in high-level applications. We then present a set of vision- and reasoning-based methods to solve several applications and show that these approaches benefit in terms of accuracy and interpretability from the explicit use of knowledge and reasoning. We propose novel knowledge representations of images, knowledge acquisition methods, and a new implementation of an efficient probabilistic logical reasoning engine that can utilize publicly available commonsense knowledge to solve applications such as visual question answering and image puzzles. Additionally, we identify the need for new datasets that explicitly require external commonsense knowledge to solve. We propose the new task of Image Riddles, which requires a combination of vision and reasoning based on ontological knowledge, and we collect a sufficiently large dataset to serve as an ideal testbed for vision and reasoning research. Lastly, we propose end-to-end deep architectures that combine vision, knowledge, and reasoning modules together and achieve large performance boosts over state-of-the-art methods.
ContributorsAditya, Somak (Author) / Baral, Chitta (Thesis advisor) / Yang, Yezhou (Thesis advisor) / Aloimonos, Yiannis (Committee member) / Lee, Joohyung (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2018