This collection includes both ASU Theses and Dissertations, submitted by graduate students, and the Barrett, Honors College theses submitted by undergraduate students. 

Displaying 1 - 10 of 202
Description
In a collaborative environment where multiple robots and human beings are expected to collaborate to perform a task, it becomes essential for a robot to be aware of the multiple agents working in its environment. A robot must also learn to adapt to different agents in the workspace and conduct its interaction based on the presence of these agents. A theoretical framework called Interaction Primitives was introduced, which performs interaction learning from demonstrations in a two-agent work environment.

This document is an in-depth description of a new state-of-the-art Python framework for Interaction Primitives between two agents in single-task as well as multiple-task work environments, and an extension of the original framework to a work environment with multiple agents performing a single task. The original theory of Interaction Primitives has been extended to create a framework that captures the correlation between more than two agents while performing a single task. The new framework is an intuitive, generic, easy-to-install, and easy-to-use Python library that can be applied to use Interaction Primitives in a work environment. The library was tested in simulated environments and in a controlled laboratory environment. The results and benchmarks of this library are available in the related sections of this document.
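As a hedged illustration of the underlying idea (not the thesis library itself), the sketch below conditions a joint Gaussian over the basis-function weights of two agents, which is the core operation Interaction Primitives use to predict one agent's motion from another's partial observation; the same conditioning extends to the concatenated weights of additional agents. All sizes and data are placeholders.

```python
import numpy as np

# Hedged sketch: a joint Gaussian over the concatenated basis-function weights of an
# observed agent and a controlled agent, learned from demonstrations and then
# conditioned on the observed agent's weights. Sizes and data are illustrative only.
rng = np.random.default_rng(0)
n_obs, n_ctl = 8, 8
demos = rng.standard_normal((20, n_obs + n_ctl))    # stand-in for weights fitted to 20 demonstrations

mu = demos.mean(axis=0)
Sigma = np.cov(demos, rowvar=False) + 1e-6 * np.eye(n_obs + n_ctl)

w_obs = demos[0, :n_obs]                            # weights inferred from a new partial observation

# Gaussian conditioning: p(w_ctl | w_obs)
K = Sigma[n_obs:, :n_obs] @ np.linalg.inv(Sigma[:n_obs, :n_obs])
mu_ctl = mu[n_obs:] + K @ (w_obs - mu[:n_obs])
Sigma_ctl = Sigma[n_obs:, n_obs:] - K @ Sigma[:n_obs, n_obs:]
print(mu_ctl.shape, Sigma_ctl.shape)                # predicted weight distribution for the other agent
```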
ContributorsKumar, Ashish, M.S (Author) / Amor, Hani Ben (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2017
Description
Computer vision as a field has gone through significant changes in the last decade. The field has seen tremendous success in designing learning systems with hand-crafted features and in using representation learning to extract better features. In this dissertation, some novel approaches to representation learning and task learning are studied.

Multiple-instance learning, which is a generalization of supervised learning, is one example of task learning that is discussed. In particular, a novel non-parametric k-NN-based multiple-instance learning approach is proposed, which is shown to outperform other existing approaches. This solution is applied effectively to a diabetic retinopathy pathology detection problem.
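As a hedged illustration of the general idea (not the dissertation's exact non-parametric method), the sketch below labels a bag of instances by a k-NN vote over training bags, using a simple closest-pair bag distance; the data, sizes, and distance choice are assumptions made for the example.

```python
import numpy as np

# Hedged sketch of bag-level k-NN for multiple-instance learning. The closest-pair
# bag distance and all data below are illustrative assumptions, not the thesis method.
def bag_distance(bag_a, bag_b):
    d = np.linalg.norm(bag_a[:, None, :] - bag_b[None, :, :], axis=-1)
    return d.min()                      # distance between the two closest instances

def knn_mil_predict(train_bags, train_labels, test_bag, k=3):
    dists = [bag_distance(test_bag, bag) for bag in train_bags]
    nearest = np.argsort(dists)[:k]
    votes = np.asarray(train_labels)[nearest]
    return int(np.bincount(votes).argmax())

# Toy usage: bags with a variable number of 5-dimensional instances and binary labels.
rng = np.random.default_rng(0)
train_bags = [rng.standard_normal((int(rng.integers(3, 8)), 5)) for _ in range(10)]
train_labels = rng.integers(0, 2, size=10)
print(knn_mil_predict(train_bags, train_labels, rng.standard_normal((4, 5))))
```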

For representation learning, the generality of neural features is investigated first. This investigation leads to critical understanding of, and results on, feature generality across datasets. The possibility of learning from a mentor network instead of from labels is then investigated. Distillation of dark knowledge is used to efficiently mentor a small network from a pre-trained large mentor network. These studies help in understanding representation learning with smaller and compressed networks.
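A minimal, hedged sketch of the dark-knowledge distillation idea (not the dissertation's code) is shown below: the small student is trained against the temperature-softened outputs of the large mentor in addition to the usual label loss. The temperature, weighting, and toy data are assumptions.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of a distillation loss: match the mentor's softened outputs (dark
# knowledge) while also fitting the hard labels. T and alpha are illustrative choices.
def distillation_loss(student_logits, mentor_logits, labels, T=4.0, alpha=0.7):
    soft_targets = F.softmax(mentor_logits / T, dim=1)
    soft_student = F.log_softmax(student_logits / T, dim=1)
    soft_loss = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy usage with random logits for a 10-class problem.
student_logits = torch.randn(32, 10, requires_grad=True)
mentor_logits = torch.randn(32, 10)
labels = torch.randint(0, 10, (32,))
distillation_loss(student_logits, mentor_logits, labels).backward()
```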
ContributorsVenkatesan, Ragav (Author) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2017
Description
With the rise of the Big Data Era, an exponential amount of network data is being generated at an unprecedented rate across a wide range of high-impact micro and macro areas of research---from protein interaction to social networks. The critical challenge is translating this large-scale network data into actionable information.

A key task in the data translation is the analysis of network connectivity via marked nodes---the primary focus of our research. We have developed a framework for analyzing network connectivity via marked nodes in large-scale graphs, utilizing novel algorithms in three interrelated areas: (1) analysis of a single seed node via its ego-centric network (AttriPart algorithm); (2) pathway identification between two seed nodes (K-Simple Shortest Paths Multithreaded and Search Reduced (KSSPR) algorithm); and (3) tree detection, defining the interaction between three or more seed nodes (Shortest Path MST algorithm).

In an effort to address both fundamental and applied research issues, we have developed the LocalForecasting algorithm to explore how network connectivity analysis can be applied to local community evolution and recommender systems. The goal is to apply the LocalForecasting algorithm to various domains---e.g., friend suggestions in social networks or future collaboration in co-authorship networks. This algorithm utilizes link prediction in combination with the AttriPart algorithm to predict future connections in local graph partitions.

Results show that our proposed AttriPart algorithm finds up to 1.6x denser local partitions, while running approximately 43x faster than traditional local partitioning techniques (PageRank-Nibble). In addition, our LocalForecasting algorithm demonstrates a significant improvement in the number of nodes and edges correctly predicted over baseline methods. Furthermore, results for the KSSPR algorithm demonstrate a speed-up of up to 2.5x the standard k-simple shortest paths algorithm.
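As a hedged point of reference for the pathway-identification task (not the thesis' KSSPR implementation), the sketch below enumerates the k shortest simple paths between two seed nodes with networkx, the standard baseline that KSSPR accelerates through multithreading and search-space reduction; the graph and seed nodes are placeholders.

```python
import itertools
import networkx as nx

# Hedged baseline sketch: k shortest simple paths between two marked (seed) nodes.
# The graph and the choice of seed nodes are illustrative placeholders.
G = nx.karate_club_graph()
source, target, k = 0, 33, 5

k_paths = list(itertools.islice(nx.shortest_simple_paths(G, source, target), k))
for path in k_paths:
    print(len(path) - 1, path)          # hop count and node sequence of each path
```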
ContributorsFreitas, Scott (Author) / Tong, Hanghang (Thesis advisor) / Maciejewski, Ross (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2018
Description
The interaction between humans and robots has become an important area of research as the diversity of robotic applications has grown. The cooperation of a human and robot to achieve a goal is an important area within the physical human-robot interaction (pHRI) field, and the field is expanding toward applications of robotics in unstructured environments. When humans cooperate with each other, there are often leader and follower roles, and these roles may change during the task. This creates a need for the robotic system to be able to exchange roles with the human during a cooperative task. The unstructured nature of the new applications in the field creates a need for robotic systems to be able to interact in six degrees of freedom (DOF). Moreover, in these unstructured environments, the robotic system will have incomplete information. This means that it will sometimes perform an incorrect action, and control methods need to be able to correct for this. However, the most compelling applications for robotics are those where the robot has capabilities that the human does not, which also creates the need for robotic systems to correct human action when they detect an error. Activity in the brain precedes human action, and this activity can be used to classify the type of interaction desired by the human.

For this dissertation, the cooperation between humans and robots is improved in two main areas. First, the ability of electroencephalography (EEG) to determine the desired cooperation role with a human is demonstrated with a correct classification rate of 65%. Second, a robotic controller is developed to allow the human and robot to cooperate in six DOF with asymmetric role exchange. This system allowed the human and robot to perform a cooperative task with a 100% correct rate. High, medium, and low levels of robotic automation are shown to affect performance, with the human making the greatest number of errors when the robotic system has a medium level of automation.
ContributorsWhitsell, Bryan Douglas (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Berman, Spring (Committee member) / Lee, Hyunglae (Committee member) / Polygerinos, Panagiotis (Committee member) / Arizona State University (Publisher)
Created2017
Description
The performance of most visual computing tasks depends on the quality of the features extracted from the raw data. Insightful feature representation increases the performance of many learning algorithms by exposing the underlying explanatory factors of the output for the unobserved input. A good representation should also handle anomalies in the data, such as missing samples and noisy input caused by undesired, external factors of variation, and it should reduce data redundancy. Over the years, many feature extraction processes have been invented to produce good representations of raw images and videos.

The feature extraction processes can be categorized into three groups. The first group contains processes that are hand-crafted for a specific task. Hand-engineering features requires the knowledge of domain experts and manual labor; however, the feature extraction process is interpretable and explainable. The next group contains the latent-feature extraction processes. While the original features lie in a high-dimensional space, the relevant factors for a task often lie on a lower-dimensional manifold. Latent-feature extraction employs hidden variables to expose the underlying data properties that cannot be directly measured from the input, and it imposes a specific structure, such as sparsity or low rank, on the derived representation through sophisticated optimization techniques. The last category is that of deep features, which are obtained by passing raw input data with minimal pre-processing through a deep network whose parameters are computed by iteratively minimizing a task-based loss.

In this dissertation, I present four pieces of work where I create and learn suitable data representations. The first task employs hand-crafted features to perform clinically relevant retrieval of diabetic retinopathy images. The second task uses latent features to perform content-adaptive image enhancement. The third task ranks a pair of images based on their aestheticism, and the goal of the last task is to capture localized image artifacts in small datasets with patch-level labels. For the last two tasks, I propose novel deep architectures and show significant improvement over previous state-of-the-art approaches. A suitable combination of feature representations augmented with an appropriate learning approach can increase performance for most visual computing tasks.
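As a hedged illustration of the deep-feature category described above (not the dissertation's architectures), the sketch below strips the classifier head off a standard convolutional network and uses the remaining layers as a feature extractor; the backbone choice and the random input batch are assumptions.

```python
import torch
import torchvision.models as models

# Hedged sketch: treat a standard CNN backbone as a deep feature extractor by dropping
# its classifier head. In practice the backbone would be trained by minimizing a
# task-based loss; here it is left untrained purely for illustration.
backbone = models.resnet18()
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)              # stand-in for a minimally pre-processed batch
    features = feature_extractor(images).flatten(1)   # one 512-dimensional feature vector per image
print(features.shape)
```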
ContributorsChandakkar, Parag Shridhar (Author) / Li, Baoxin (Thesis advisor) / Yang, Yezhou (Committee member) / Turaga, Pavan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2017
Description
Human locomotion is an essential function that enables individuals to lead healthy, independent lives. One important feature of natural walking is the capacity to transition across varying surfaces, enabling an individual to traverse complex terrains while maintaining balance. There has been extensive work on improving prostheses' performance in changing walking conditions, but there is still a need to address the transition from rigid to compliant or dynamic surfaces, such as the transition from pavement to long grass or soft sand. This research aims to investigate the mechanisms involved in such transitions and identify potential indicators of the anticipated change that can be applied to the control of a powered ankle prosthesis to reduce falls and improve stability in lower-limb amputees in a wider range of walking environments.

A series of human subject experiments was conducted using the Variable Stiffness Treadmill (VST) to control walking surface compliance while gait kinematics and muscular activation data were collected from three healthy, nondisabled subjects. Specifically, the kinematics and electromyography (EMG) profiles of the gait cycles immediately preceding and following an expected change in surface compliance were compared to those of normal, rigid-surface walking. While the results do not indicate statistical differences in the EMG profiles between the two modes of walking, the muscle activation appears to be qualitatively different from inspection of the data. Additionally, there were promising statistically significant changes in joint angles, especially the observed increases in hip flexion during the swing phases both before and during an expected change in surface. Decreases in ankle flexion immediately before heel strike on the perturbed leg were also observed to occur simultaneously with decreases in tibialis anterior (TA) muscle activation, which encourages additional research investigating potential changes in EMG profiles. Ultimately, more work should be done before strong conclusions can be drawn about potential indicators of walking surface transitions, but this research demonstrates the potential of EMG and kinematic data to be used in the control of a powered ankle prosthesis.
ContributorsFou, Linda (Author) / Artemiadis, Panagiotis (Thesis advisor) / Lee, Hyunglae (Committee member) / Polygerinos, Panagiotis (Committee member) / Arizona State University (Publisher)
Created2018
Description
Topological methods for data analysis present opportunities for enforcing certain invariances of broad interest in computer vision, including view-point invariance in activity analysis, articulation invariance in shape analysis, and measurement invariance in non-linear dynamical modeling. The increasing success of these methods is attributed to the complementary information that topology provides, as well as the availability of tools for computing topological summaries such as persistence diagrams. However, persistence diagrams are multi-sets of points, and hence it is not straightforward to fuse them with features used by contemporary machine learning tools like deep nets. In this work, theoretically well-grounded approaches to developing novel perturbation-robust topological representations are presented, with the long-term view of making them amenable to fusion with contemporary learning architectures. The proposed representation lives on a Grassmann manifold and hence can be efficiently used in machine learning pipelines.

The efficacy of the proposed descriptor was explored on three applications: view-invariant activity analysis, 3D shape analysis, and non-linear dynamical modeling. Favorable results are obtained both in high-level recognition performance and in reduced time complexity when compared to other baseline methods.
ContributorsThopalli, Kowshik (Author) / Turaga, Pavan Kumar (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2017
Description
Image Understanding is a long-established discipline in computer vision, which encompasses a body of advanced image processing techniques that are used to locate (“where”), characterize, and recognize (“what”) objects, regions, and their attributes in an image. However, the notion of “understanding” (and the goal of artificially intelligent machines) goes beyond factual recall of the recognized components and includes reasoning and thinking beyond what can be seen (or perceived). Understanding is often evaluated by asking questions of increasing difficulty. Thus, the expected functionalities of an intelligent image understanding system can be expressed in terms of the functionalities that are required to answer questions about an image. Answering questions about images requires primarily three components: image understanding, question (natural language) understanding, and reasoning based on knowledge. Any question asking beyond what can be directly seen requires modeling of commonsense (or background/ontological/factual) knowledge and reasoning.

Knowledge and reasoning have seen scarce use in image understanding applications. In this thesis, we demonstrate the utility of incorporating background knowledge and using explicit reasoning in image understanding applications. We first present a comprehensive survey of previous work that utilized background knowledge and reasoning in understanding images. This survey outlines the limited use of commonsense knowledge in high-level applications. We then present a set of vision- and reasoning-based methods to solve several applications and show that these approaches benefit in terms of accuracy and interpretability from the explicit use of knowledge and reasoning. We propose novel knowledge representations of images, knowledge acquisition methods, and a new implementation of an efficient probabilistic logical reasoning engine that can utilize publicly available commonsense knowledge to solve applications such as visual question answering and image puzzles. Additionally, we identify the need for new datasets that explicitly require external commonsense knowledge to solve. We propose the new task of Image Riddles, which requires a combination of vision and reasoning based on ontological knowledge, and we collect a sufficiently large dataset to serve as an ideal testbed for vision and reasoning research. Lastly, we propose end-to-end deep architectures that combine vision, knowledge, and reasoning modules and achieve large performance boosts over state-of-the-art methods.
ContributorsAditya, Somak (Author) / Baral, Chitta (Thesis advisor) / Yang, Yezhou (Thesis advisor) / Aloimonos, Yiannis (Committee member) / Lee, Joohyung (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2018
Description
Locomotion is of prime importance in enabling human beings to effectively respond in space and time to meet different needs. Approximately 2 million Americans live with an amputation, most of them of the lower limbs. To advance current state-of-the-art lower-limb prosthetic devices, it is necessary to achieve performance at a level of intelligence seen in human walking. As such, this thesis focuses on the mechanisms involved during human walking while transitioning from rigid to compliant surfaces, such as from pavement to sand, grass, or granular media.

Utilizing a unique tool, the Variable Stiffness Treadmill (VST), as the platform for human walking, rigid-to-compliant surface transitions are simulated. The analysis of muscular activation during the transition from rigid to different compliant surfaces reveals specific anticipatory muscle activation that precedes stepping on a compliant surface. There is also an indication of varying responses for different surface stiffness levels, and this response is observed across subjects. The results obtained are novel and useful in establishing a framework for setting control algorithm parameters to improve powered ankle prostheses. With this, it is possible for a prosthesis to adapt to a new surface, resulting in a more robust smart powered lower-limb prosthesis.
ContributorsObeng, Ruby Afriyie (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Thesis advisor) / Lee, Hyunglae (Committee member) / Arizona State University (Publisher)
Created2019
Description
The rapid growth of the internet and of connected devices, ranging from cloud systems to the internet of things, has raised critical concerns about securing these systems. In the recent past, security attacks on different kinds of devices have evolved in complexity and diversity. One of the challenges is establishing secure communication in the network among various devices and systems. Despite being protected with authentication and encryption, the network still needs to be protected against cyber-attacks. For this, the network traffic has to be closely monitored so that anomalies and intrusions can be detected. Intrusion detection can be cast as a network traffic classification problem in machine learning. Existing network traffic classification methods require a lot of training and data preprocessing, and this problem is more serious if the dataset is huge. In addition, the machine learning and deep learning methods that have been used so far were trained on datasets that contain obsolete attacks.

In this thesis, these problems are addressed by applying ensemble methods to an up-to-date network attack dataset. Ensemble methods use multiple learning algorithms to obtain better classification accuracy than could be obtained when any single learning algorithm is applied alone. The dataset used for network traffic classification has recent attack scenarios and contains over fifteen attacks. This approach shows that ensemble methods can be used to classify network traffic and detect intrusions with shorter model training times and less pre-processing, without feature selection. In addition, this thesis shows that using less than ten percent of the total features of the input dataset leads to accuracy similar to that achieved on the whole dataset. This can heavily reduce training times and classification duration in real-time scenarios.
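As a hedged illustration of the approach described above (not the thesis code or dataset), the sketch below trains an ensemble classifier on synthetic flow records while keeping roughly ten percent of the features; the data, feature count, and model choices are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hedged sketch: ensemble-based traffic classification restricted to ~10% of the
# features. The random data below stands in for a labeled network-flow dataset.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 80))            # 80 hypothetical flow features
y = rng.integers(0, 2, size=5000)              # benign (0) vs. attack (1) labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    SelectKBest(f_classif, k=8),               # keep roughly ten percent of the features
    RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0),
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```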
ContributorsPonneganti, Ramu (Author) / Yau, Stephen (Thesis advisor) / Richa, Andrea (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2019