Description
Hyperbolic geometry, the geometry of hyperbolic space, has recently caught the eye of certain circles in the machine learning community. Lauded for its ability to encapsulate strong clustering as well as latent hierarchies in complex and social networks, hyperbolic geometry has proven to be an enduring presence in the network science community throughout the 2010s, with no signs of fading into obscurity anytime soon. Hyperbolic embeddings, which map a given graph to hyperbolic space, have proven to be a particularly powerful and dynamic tool for studying complex networks. Hyperbolic embeddings are exploited in this thesis to illustrate centrality in a graph. In network science, centrality quantifies the influence of individual nodes in a graph. Eigenvector centrality is one such measure; it assigns an influence weight to each node in a graph by solving an eigenvector equation. A procedure is defined to embed a given network in a model of hyperbolic space, known as the Poincaré disk, according to the influence weights computed by three eigenvector centrality measures: the PageRank algorithm, the Hyperlink-Induced Topic Search (HITS) algorithm, and the Pinski-Narin algorithm. The resulting embeddings are shown to accurately and meaningfully reflect each node's influence and proximity to influential nodes.
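
As a rough illustration of this kind of procedure (not the thesis's actual embedding algorithm), the sketch below computes one of the three centrality measures with networkx and maps the weights to Poincaré disk coordinates; the radial mapping and angular ordering are assumptions made purely for illustration.

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()

# One of the three centrality measures named above (HITS and the
# Pinski-Narin algorithm would supply alternative weight vectors).
scores = nx.pagerank(G, alpha=0.85)

# Hypothetical mapping: normalize the weights and place influential
# nodes near the center of the Poincare disk (radius -> 0) and
# peripheral nodes near the boundary (radius -> 1).
w = np.array([scores[v] for v in G.nodes()])
radius = 1.0 - (w - w.min()) / (w.max() - w.min() + 1e-12)
theta = np.linspace(0.0, 2.0 * np.pi, len(w), endpoint=False)
embedding = {v: (r * np.cos(t), r * np.sin(t))
             for v, r, t in zip(G.nodes(), radius, theta)}
```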
Contributors: Chang, Alena (Author) / Xue, Guoliang (Thesis advisor) / Yang, Dejun (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Referring Expression Comprehension (REC) is an important area of research in the Natural Language Processing (NLP) and vision domains. It involves locating an object in an image described by a natural language referring expression. This task requires information from both the natural language and vision domains. The task is compositional in nature, as it requires visual reasoning as an underlying process along with relationships among the objects in the image. Recent works based on modular networks have been shown to be an effective framework for performing visual reasoning tasks.

Although this approach is effective, it has been established that the current benchmark datasets for referring expression comprehension suffer from bias. Recent work on the CLEVR-Ref+ dataset deals with bias issues by constructing a synthetic dataset and provides an approach for the aforementioned task that performed better than the previous state-of-the-art models while also exposing the reasoning process. This work aims to improve performance on the CLEVR-Ref+ dataset and achieve comparable interpretability. In this work, the neural module network approach with the attention map technique is employed. The neural module network is composed of primitive operation modules that are specific to their functions, and the output is generated using a separate segmentation module. Empirical results show that this approach performs significantly better than the current state of the art in one aspect (predicted programs) and achieves comparable results in another aspect (ground truth programs).
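
To make the modular idea concrete, here is a minimal sketch of one primitive module of the kind such networks compose; the shapes, names, and gating scheme are illustrative assumptions, not the thesis's architecture.

```python
import torch
import torch.nn as nn

class FilterModule(nn.Module):
    """One primitive operation in a neural module network: refine an
    incoming spatial attention map toward locations that match a
    text-derived query (shapes and names are illustrative)."""

    def __init__(self, vis_dim: int, txt_dim: int):
        super().__init__()
        self.proj = nn.Linear(txt_dim, vis_dim)

    def forward(self, feats, attn, query):
        # feats: (H*W, vis_dim) visual features, attn: (H*W,) in [0, 1],
        # query: (txt_dim,) embedding of one phrase of the expression.
        scores = torch.sigmoid(feats @ self.proj(query))  # per-location match
        return scores * attn  # soft AND with the incoming attention map

# Modules such as "filter" and "relate" would be chained according to the
# program parsed from the referring expression; a final segmentation
# module would turn the last attention map into an object mask.
```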
Contributors: Rathor, Kuldeep Singh (Author) / Baral, Chitta (Thesis advisor) / Yang, Yezhou (Committee member) / Simeone, Michael (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Coordinating the development of complex and large-scale projects using computers is well established and is known as computer-supported cooperative work (CSCW). Collaborative software development consists of a group of teams working together to achieve the common goal of developing a high-quality, complex, and large-scale software system efficiently, and it requires common processes and communication channels among these teams. The common processes for coordination among software development teams can be handled by similar principles in CSCW. The development of complex and large-scale software becomes complicated due to the involvement of many software development teams. The development of such a software system can be largely improved by effective collaboration among the participating software development teams at both the software component and system levels. The efficiency of developing software components depends on trusted coordination among the participating teams for sharing, processing, and managing information on the various participating teams, which often operate in a distributed environment. Participating teams may belong to the same organization or to different organizations. Existing approaches to coordination in collaborative software development are based on using a centralized repository to store, process, and retrieve information on participating software development teams during development. These approaches use a centralized authority, have a single point of failure, and restrict the rights to own data and software. In this thesis, the generation of trusted coordination in collaborative software development using blockchain is studied, and an approach to achieving trusted cooperation for collaborative software development using blockchain is presented. Smart contracts are created on the blockchain to encode software specifications and acceptance criteria for the software results generated by participating teams. The blockchain used in the approach is a private blockchain, because a private blockchain provides non-repudiation, privacy, and integrity, which are required for trusted coordination of collaborative software development. This approach is implemented using Hyperledger, an open-source private blockchain framework. An example illustrating the approach is also given.
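
The following minimal sketch models, in plain Python, the kind of acceptance-criteria logic such a smart contract might encode. Real Hyperledger Fabric chaincode is typically written in Go, Node.js, or Java, so every name and criterion below is an illustrative assumption rather than the thesis's implementation.

```python
import hashlib

class SoftwareDeliveryContract:
    """Illustrative model of a smart contract that records whether a
    team's software result meets the agreed acceptance criteria."""

    def __init__(self, spec_hash: str, min_coverage: float):
        self.spec_hash = spec_hash        # hash of the agreed specification
        self.min_coverage = min_coverage  # e.g. required test coverage
        self.ledger = []                  # stands in for ledger world state

    def submit_result(self, team_id: str, artifact: bytes, report: dict):
        # Acceptance criteria encoded in the contract (hypothetical).
        accepted = (report.get("spec_hash") == self.spec_hash
                    and report.get("tests_passed") is True
                    and report.get("coverage", 0.0) >= self.min_coverage)
        record = {
            "team": team_id,
            "artifact_hash": hashlib.sha256(artifact).hexdigest(),
            "accepted": accepted,
        }
        self.ledger.append(record)  # an immutable append on a real blockchain
        return record
```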
Contributors: Patel, Jinal Sunilkumar (Author) / Yau, Stephen S. (Thesis advisor) / Bansal, Ajay (Committee member) / Zou, Jia (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Microlending aims at providing low-barrier loans to small and medium-sized, family-run businesses that have historically been financially excluded. These borrowers might be in third-world countries where traditional financing is not accessible. Lenders can be individual investors or institutions making risky investments, or parties willing to help people who cannot access traditional banks or who lack the credibility to get loans from traditional sources. Microlending also has a charitable side, where lenders are not primarily concerned with whether or how they are repaid.

This thesis aims at building a platform that supports both commercial microlending and charitable donations, in keeping with the original cause of microlending. The platform is expected to ensure privacy and transparency for its users in order to attract more users to the system. Since microlending involves monetary transactions, possible security threats to the system are also discussed.

Blockchain is one of the technologies that has revolutionized financial transactions, and since microlending involves monetary transactions, blockchain is a viable option for a microlending platform. A permissioned blockchain restricts user admission to the platform and provides an identity management feature. This feature is required to ensure the security and privacy of the various types of participants on the microlending platform.
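
As a toy sketch of the admission control a permissioned ledger provides, the snippet below gates transactions on registered identities and roles. In a real deployment this would be handled by the platform's membership service (for example, a certificate authority); all names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    member_id: str
    role: str  # "borrower", "lender", or "donor"

class PermissionedLedger:
    """Toy admission control: only registered identities may transact."""

    def __init__(self):
        self.members = {}
        self.transactions = []

    def admit(self, p: Participant):
        self.members[p.member_id] = p  # in practice, issue a certificate

    def record_loan(self, lender_id: str, borrower_id: str, amount: float):
        lender = self.members.get(lender_id)
        borrower = self.members.get(borrower_id)
        if not lender or lender.role not in ("lender", "donor"):
            raise PermissionError("unregistered or unauthorized lender")
        if not borrower or borrower.role != "borrower":
            raise PermissionError("unregistered borrower")
        self.transactions.append((lender_id, borrower_id, amount))
```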
Contributors: Siddharth, Sourabh (Author) / Boscovic, Dragan (Thesis advisor) / Bansal, Srividya (Thesis advisor) / Sanchez, Javier Gonzalez (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Robot motion planning requires computing a sequence of waypoints from an initial configuration of the robot to the goal configuration. Solving a motion planning problem optimally has been proven to be NP-complete. Sampling-based motion planners efficiently compute an approximation of the optimal solution. They sample the configuration space uniformly and hence fail to sample regions of the environment that have narrow passages or pinch points. These critical regions are analogous to landmarks from the planning literature, as the robot is required to pass through them to reach the goal.

This work proposes a deep learning approach that identifies critical regions in the environment and learns a sampling distribution to effectively sample them in high-dimensional configuration spaces.

A classification-based approach is used to learn the distributions. The robot's degree-of-freedom (DOF) limits are binned, and a distribution is generated from sampled motion plan solutions. Conditional information, such as the goal configuration and robot location, is encoded in the network inputs, and the network learns to bias the identified critical regions toward the goal configuration. Empirical evaluations are performed against state-of-the-art sampling-based motion planners on a variety of tasks requiring the robot to pass through critical regions. An empirical analysis of robotic systems with three to eight degrees of freedom indicates that this approach effectively improves planning performance.
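
A minimal sketch of the binning idea, under assumed details: joint limits are discretized, a histogram over bins is built from configurations that appear in solved motion plans, and new samples are drawn from that biased distribution. In the learned version described above, a classifier conditioned on the goal would replace the raw per-joint histogram.

```python
import numpy as np

def bin_configurations(configs, limits, n_bins=20):
    """Map continuous joint configurations to discrete bin indices.

    configs: (N, dof) joint values taken from solved motion plans.
    limits:  (dof, 2) per-joint (low, high) limits.
    """
    lo, hi = limits[:, 0], limits[:, 1]
    frac = (configs - lo) / (hi - lo)
    return np.clip((frac * n_bins).astype(int), 0, n_bins - 1)

def empirical_sampling_distribution(bins, n_bins=20):
    """Per-joint histogram over bins -> a biased sampling distribution."""
    dof = bins.shape[1]
    dist = np.zeros((dof, n_bins))
    for j in range(dof):
        counts = np.bincount(bins[:, j], minlength=n_bins)
        dist[j] = counts / counts.sum()
    return dist

def sample_biased(dist, limits, n_bins=20):
    """Draw one configuration: a bin per joint, then uniform within it."""
    dof = dist.shape[0]
    idx = np.array([np.random.choice(n_bins, p=dist[j]) for j in range(dof)])
    lo, hi = limits[:, 0], limits[:, 1]
    width = (hi - lo) / n_bins
    return lo + (idx + np.random.rand(dof)) * width
```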
Contributors: Srinet, Abhyudaya (Author) / Srivastava, Siddharth (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The Java programming language was implemented in such a way as to limit the number of ways that a program written in Java could be exploited. Unfortunately, all of the protections and safeguards put in place for Java can be circumvented if a program created in Java utilizes internal or external libraries that were created in a separate, insecure language such as C or C++. A secure Java program can then be made insecure and susceptible to even classic vulnerabilities such as stack overflows, format string attacks, and heap overflows and corruption. Through the internal or external libraries included in the Java program, an attacker could potentially hijack the execution flow of the program. Once the attacker has control of where and how the program executes, the attacker can spread their influence to the rest of the system.

However, since these classic vulnerabilities are known weaknesses, special types of protection have been added to the compilers that create the executable code and the systems that run it. The most common forms of protection include Address Space Layout Randomization (ASLR), the non-executable stack (NX stack), and stack cookies or canaries. Of course, these protections and their implementations vary depending on the system. I intend to look specifically at the Android operating system, which is used in the daily lives of a significant portion of the planet. Most Android applications execute in a Java context and leave little room for exploitability; however, there are also many applications which utilize external libraries to handle more computationally intensive tasks.
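
As a concrete illustration, the hedged sketch below uses readelf to check two of these protections on a native library (for example, one extracted from an APK): whether the GNU_STACK program header marks the stack non-executable, and whether the stack-protector symbol is referenced, a common heuristic for canaries. The library path is hypothetical.

```python
import subprocess

def has_nx_stack(lib_path: str) -> bool:
    """True if the ELF's GNU_STACK segment is not marked executable."""
    out = subprocess.run(["readelf", "-lW", lib_path],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "GNU_STACK" in line:
            return "RWE" not in line  # "RW " flags = non-executable stack
    return False  # no GNU_STACK header at all: treat conservatively

def has_stack_canary(lib_path: str) -> bool:
    """Heuristic: built with -fstack-protector if __stack_chk_fail appears."""
    out = subprocess.run(["readelf", "-sW", lib_path],
                         capture_output=True, text=True).stdout
    return "__stack_chk_fail" in out

# Example (hypothetical path to a native library from an APK):
# print(has_nx_stack("lib/arm64-v8a/libnative.so"))
```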

The goal of this thesis is to take a closer look at such applications and the protections surrounding them, especially how the default system protections mentioned above are implemented and applied to the vulnerable external libraries. However, this is only half of the problem. The attacker must get their payload inside the application in the first place. Since it is necessary to understand how this occurs, I will also explore how the Android operating system gives outside information to applications and how developers have chosen to use that information.
Contributors: Gibbs, William (Author) / Doupe, Adam (Thesis advisor) / Wang, Ruoyu (Committee member) / Shoshitaishvili, Yan (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
This thesis introduces a new robotic leg design with three degrees of freedom that can be adapted for both bipedal and quadrupedal locomotive systems, and serves as a blueprint for designers attempting to create low-cost robot legs capable of balancing and walking. Currently, bipedal leg designs are mostly rigid and have not strongly taken into account the advantages and disadvantages of using an active ankle, as opposed to a passive ankle, for balancing. This design uses low-cost compliant materials, but the materials used are thick enough to mimic rigid properties under low stresses, so this paper will treat the links as rigid. A new leg design has been created that contains three degrees of freedom and can be adapted to contain either a passive ankle using springs or an actively controlled ankle using an additional actuator. This thesis largely focuses on the ankle and foot design of the robot and on the torque and speed requirements of the design for motor selection. The dynamics of the system, including height, foot width, weight, and resistances, will be analyzed to determine how to improve design performance. Model-based control techniques will be used to control the angle of the leg for balancing. In doing so, it will also be shown that it is possible to implement model-based control techniques on robots made of laminate materials.
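
To make the model-based balancing idea concrete, here is a minimal sketch that treats the leg as an inverted pendulum about the ankle and combines gravity-cancelling feedforward with PD feedback; all parameters and the point-mass model are invented for illustration and are not the thesis's values.

```python
import numpy as np

# Inverted-pendulum approximation of the leg balancing about its ankle.
m, l, g = 8.0, 0.4, 9.81   # mass [kg], CoM height [m], gravity [m/s^2]
Kp, Kd = 60.0, 8.0         # hand-tuned PD gains (illustrative)

def ankle_torque(theta, theta_dot, theta_ref=0.0):
    """Cancel the gravity moment, then apply PD feedback [N*m]."""
    gravity_comp = -m * g * l * np.sin(theta)          # model-based term
    feedback = Kp * (theta_ref - theta) - Kd * theta_dot
    return gravity_comp + feedback

# One Euler step of the closed loop, as a sanity check:
theta, theta_dot, dt = 0.1, 0.0, 0.001                 # small initial lean
inertia = m * l ** 2                                   # point-mass model
tau = ankle_torque(theta, theta_dot)
theta_ddot = (m * g * l * np.sin(theta) + tau) / inertia
theta_dot += theta_ddot * dt
theta += theta_dot * dt                                # lean angle shrinks
```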
Contributors: Shafa, Taha A (Author) / Aukes, Daniel M (Thesis advisor) / Rogers, Bradley (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
The lack of fungibility in Bitcoin has forced its userbase to seek out tools that can heighten their anonymity. Third-party Bitcoin mixers utilize obfuscation techniques to protect participants from blockchain analysis. In recent years, various centralized and decentralized Bitcoin mixing implementations have been proposed in academic literature. Although these methods describe a threat-free environment in which users can preserve their anonymity, public Bitcoin mixers continue to be associated with theft and poor implementation.

This research explores the public Bitcoin mixer ecosystem to identify whether today's mixing services have adopted academically proposed solutions. This is done through real-world interactions with publicly available mixers to analyze both implementation and resistance to common threats in the mixing landscape. First, proposed decentralized and centralized mixing protocols found in the literature are outlined. Then, data are presented from 19 publicly announced mixing services available on the deep web and clearnet. The services are categorized based on popularity with the Bitcoin community, and experiments are conducted on five public mixing services: ChipMixer, MixTum, Bitcoin Mixer, CryptoMixer, and Sudoku Wallet.

The results of the experiments highlight a clear gap between public and proposed Bitcoin mixers in both implementation and security. Today's mixing services focus on presenting users with a false sense of control to gain their trust rather than employing secure mixing techniques. As a result, the five selected services lack implementations of academically proposed techniques and display poor resistance to common mixer-related threats.
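
One common mixer-related threat of the kind referenced above is amount correlation, in which an observer links a deposit to a withdrawal by matching values minus a plausible fee. A naive sketch of that heuristic follows; the fee bounds are made up for illustration, and real analysis would also use timing and address clustering.

```python
def correlate_amounts(deposits, withdrawals, fee_lo=0.01, fee_hi=0.03):
    """Naively link (deposit, withdrawal) pairs whose difference looks
    like a mixer fee in [fee_lo, fee_hi] of the deposit amount."""
    links = []
    for d in deposits:
        for w in withdrawals:
            if d <= 0 or w >= d:
                continue
            fee_fraction = (d - w) / d
            if fee_lo <= fee_fraction <= fee_hi:
                links.append((d, w))
    return links

# Example: a 1.0 BTC deposit pairs with a 0.98 BTC withdrawal (2% fee).
print(correlate_amounts([1.0, 0.5], [0.98, 0.2]))  # [(1.0, 0.98)]
```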
Contributors: Pakki, Jaswant (Author) / Doupe, Adam (Thesis advisor) / Shoshitaishvili, Yan (Committee member) / Wang, Ruoyu (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Background: Process mining (PM) using event log files is gaining popularity in healthcare to investigate clinical pathways, but it has many unique challenges. Clinical pathways (CPs) are often complex and unstructured, which results in spaghetti-like models. Moreover, the log files collected from the electronic health record (EHR) often contain noisy and incomplete data. Objective: Building on the traditional process mining technique of using event logs generated by an EHR, observational video data from rapid ethnography (RE) were combined to model, interpret, simplify, and validate the perioperative (PeriOp) CPs. Method: The data collection and analysis pipeline consisted of the following steps: (1) obtain RE data, (2) obtain EHR event logs, (3) generate the CP from RE data, (4) identify EHR interfaces and functionalities, (5) analyze EHR functionalities to identify missing events, (6) clean and preprocess event logs to remove noise, (7) use PM to compute CP time metrics, (8) further remove noise by removing outliers, (9) mine the CP from event logs, and (10) compare the CPs resulting from RE and PM. Results: Four provider interviews, 1,917,059 event logs, and 877 minutes of video ethnography recording EHR interactions were collected. When mapping event logs to EHR functionalities, the intraoperative (IntraOp) event logs were more complete (45%) than the preoperative (35%) and postoperative (21.5%) event logs. After removing the noise (496 outliers) and calculating the duration of the PeriOp CP, the median was 189 minutes and the standard deviation was 291 minutes. Finally, RE data were analyzed to help identify the most clinically relevant event logs and simplify the spaghetti-like CPs resulting from PM. Conclusion: The study demonstrated the use of RE to help overcome the challenges of automatic discovery of CPs. It also demonstrated that RE data could be used to identify relevant clinical tasks and incomplete data, remove noise (outliers), simplify CPs, and validate mined CPs.
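
Steps 6 through 8 of the pipeline (cleaning logs, computing CP time metrics, and removing outliers) could look roughly like the pandas sketch below; the file name, column names, and IQR outlier rule are hypothetical stand-ins, not the study's actual procedure.

```python
import pandas as pd

# Event log with one row per event (hypothetical columns).
log = pd.read_csv("periop_events.csv", parse_dates=["timestamp"])

# Case duration in minutes: last event minus first event per case.
dur = (log.groupby("case_id")["timestamp"]
          .agg(lambda t: (t.max() - t.min()).total_seconds() / 60.0)
          .rename("duration_min"))

# Remove outliers with a simple IQR rule before reporting CP metrics.
q1, q3 = dur.quantile(0.25), dur.quantile(0.75)
iqr = q3 - q1
clean = dur[(dur >= q1 - 1.5 * iqr) & (dur <= q3 + 1.5 * iqr)]
print(clean.median(), clean.std())  # e.g. median and spread of the CP
```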
Contributors: Deotale, Aditya Vijay (Author) / Liu, Huan (Thesis advisor) / Grando, Maria (Thesis advisor) / Manikonda, Lydia (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Detecting areas of change between two synthetic aperture radar (SAR) images of the same scene, taken at different times, is generally performed using two approaches. Non-coherent change detection is performed using the sample variance ratio detector and displays good performance in detecting areas of significant change. Coherent change detection can be implemented using the classical coherence estimator, which does better at detecting subtle changes, such as vehicle tracks. A two-stage detector was proposed by Cha et al., where the sample variance ratio forms the first stage and the second stage comprises Berger's alternative coherence estimator.

A modification to the first stage of the two-stage detector is proposed in this study, which significantly simplifies the analysis of the detector. Cha et al. used a heuristic approach to determine the thresholds for the two-stage detector. In this study, the probability density function of the modified two-stage detector is derived, and using this probability density function, an approach for determining the thresholds for this two-dimensional detection problem is proposed. The proposed method of threshold selection reveals an interesting behavior of the two-stage detector. With the help of theoretical receiver operating characteristic analysis, it is shown that the two-stage detector gives better detection performance than the other three detectors. However, Berger's estimator proves to be a simpler alternative, since it gives only slightly poorer performance than the two-stage detector. All four detectors have also been implemented on a SAR data set, and it is shown that the two-stage detector and Berger's estimator generate images in which the areas showing change are easily visible.
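
For reference, the two standard per-window statistics being compared can be written compactly. The sketch below gives the sample variance (intensity) ratio and the classical coherence magnitude over a local window of co-registered complex SAR pixels; in the two-stage scheme described above, Berger's alternative estimator would replace the classical coherence in the second stage.

```python
import numpy as np

def variance_ratio(win1, win2, eps=1e-12):
    """Non-coherent change statistic: ratio of mean intensities
    over a local window of complex pixels."""
    return (np.mean(np.abs(win1) ** 2) + eps) / (np.mean(np.abs(win2) ** 2) + eps)

def sample_coherence(win1, win2, eps=1e-12):
    """Classical coherence magnitude estimate over a local window;
    values near 1 indicate no change, near 0 indicate change."""
    num = np.abs(np.sum(win1 * np.conj(win2)))
    den = np.sqrt(np.sum(np.abs(win1) ** 2) * np.sum(np.abs(win2) ** 2)) + eps
    return num / den
```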
Contributors: Bondre, Akshay Sunil (Author) / Richmond, Christ D (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Bliss, Daniel W (Committee member) / Arizona State University (Publisher)
Created: 2020