Description
Component simulation models, such as agent-based models, may depend on spatial data associated with geographic locations. Composition of such models can be achieved using a Geographic Knowledge Interchange Broker (GeoKIB) enabled with spatial-temporal data transformation functions, each of which is responsible for a set of interactions between two independent models. The use of autonomous interaction models allows model composition without alteration of the composed component models. An interaction model must handle differences in the spatial resolutions between models, in addition to differences in their temporal input/output data types and resolutions.

A generalized GeoKIB was designed that regulates unidirectional, spatially based interactions between composed models. Different input and output data types are used for the interaction model depending on whether data transfer is passive or active. Synchronization of time-tagged input/output values is achieved by making both models depend on a shared discrete simulation clock. An algorithm supporting spatial conversion was developed to transform any two-dimensional geographic data map between different region specifications: maps belonging to the composed models can have different regions, map cell sizes, or boundaries. The GeoKIB can be extended based on the specifications of the models to be composed and the target application domain.
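To make the spatial conversion concrete, the sketch below shows one standard way to transform a two-dimensional map between grids with different cell sizes: area-weighted regridding, in which each destination cell averages the source cells it overlaps. This is a minimal illustration assuming grids that share an origin, not the GeoKIB's actual algorithm; the function name and interface are hypothetical.

```python
import numpy as np

def regrid_area_weighted(src, src_cell, dst_shape, dst_cell):
    """Hypothetical sketch: resample a 2D map onto a grid with a
    different cell size. Each destination cell takes the area-weighted
    mean of the source cells it overlaps; grids share an origin."""
    dst = np.zeros(dst_shape)
    area = np.zeros(dst_shape)
    for i in range(src.shape[0]):
        for j in range(src.shape[1]):
            # Source cell extent in world coordinates.
            y0, y1 = i * src_cell, (i + 1) * src_cell
            x0, x1 = j * src_cell, (j + 1) * src_cell
            # Destination cells this source cell can overlap.
            for di in range(int(y0 // dst_cell), min(int(np.ceil(y1 / dst_cell)), dst_shape[0])):
                for dj in range(int(x0 // dst_cell), min(int(np.ceil(x1 / dst_cell)), dst_shape[1])):
                    oy = min(y1, (di + 1) * dst_cell) - max(y0, di * dst_cell)
                    ox = min(x1, (dj + 1) * dst_cell) - max(x0, dj * dst_cell)
                    if oy > 0 and ox > 0:  # accumulate overlap-weighted value
                        dst[di, dj] += src[i, j] * oy * ox
                        area[di, dj] += oy * ox
    return np.divide(dst, area, out=np.zeros_like(dst), where=area > 0)

# Example: a 4x4 map with 10 m cells resampled onto a 2x2 map with 20 m cells.
coarse = regrid_area_weighted(np.arange(16.0).reshape(4, 4), 10.0, (2, 2), 20.0)
```

Handling mismatched boundaries or regions would add an intersection test on the map extents, but the overlap-weighting idea stays the same.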

Two separate, simple models were created to demonstrate model composition via the GeoKIB, with an interaction model created for each of the two directions in which the composed models interact. This exemplar demonstrates composition and simulation of geographic-based component models.
Contributors: Boyd, William Angelo (Author) / Sarjoughian, Hessam S. (Thesis advisor) / Maciejewski, Ross (Committee member) / Sarwat, Mohamed (Committee member) / Arizona State University (Publisher)
Created: 2019

Description
In recent years, Convolutional Neural Networks (CNNs) have been widely used not only in the computer vision community but also within the medical imaging community. Specifically, the use of CNNs pre-trained on large-scale datasets (e.g., ImageNet) via transfer learning for a variety of medical imaging applications has become the de facto standard within both communities.

However, to fit the current paradigm, 3D imaging tasks have to be reformulated and solved in 2D, losing rich 3D contextual information. Moreover, models pre-trained on natural images never see any biomedical images and have no knowledge of the anatomical structures present in medical images. To overcome these limitations, this thesis proposes an image out-painting self-supervised proxy task to develop pre-trained models directly from medical images, without utilizing systematic annotations. The idea is to randomly mask an image and train the model to predict the missing region. It is demonstrated that by predicting missing anatomical structures when seeing only parts of the image, the model learns a generic representation, yielding better performance on various medical imaging applications via transfer learning.
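As a sketch of how such a proxy task can be set up (assuming a PyTorch encoder-decoder passed in as `model`; the masking scheme and loss here are illustrative, not the thesis's exact configuration):

```python
import torch
import torch.nn.functional as F

def outpaint_batch(images, keep_frac=0.6):
    """Build (input, target) pairs for an out-painting proxy task: a
    randomly placed window of each image is kept visible, and the model
    must reconstruct everything outside it."""
    n, c, h, w = images.shape
    kh, kw = int(h * keep_frac), int(w * keep_frac)
    masked = torch.zeros_like(images)
    for i in range(n):
        top = torch.randint(0, h - kh + 1, (1,)).item()
        left = torch.randint(0, w - kw + 1, (1,)).item()
        masked[i, :, top:top + kh, left:left + kw] = \
            images[i, :, top:top + kh, left:left + kw]
    return masked, images

def train_step(model, optimizer, images):
    """One self-supervised step: no labels, the image is its own target."""
    masked, target = outpaint_batch(images)
    optimizer.zero_grad()
    loss = F.mse_loss(model(masked), target)  # reconstruction loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

The encoder weights learned this way can then be transferred to a downstream classification or segmentation task.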

Extensive experiments demonstrate that the proposed proxy task outperforms training from scratch in six out of seven medical imaging applications covering 2D and 3D classification and segmentation. Moreover, the image out-painting proxy task offers performance competitive with state-of-the-art models pre-trained on ImageNet and with other self-supervised baselines such as in-painting. Owing to its outstanding performance, out-painting is utilized as one of the self-supervised proxy tasks to provide generic 3D pre-trained models for medical image analysis.
Contributors: Sodha, Vatsal Arvindkumar (Author) / Liang, Jianming (Thesis advisor) / Devarakonda, Murthy (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2020

Description
Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in numerous applications such as computer vision, natural language processing, and robotics. The advancement of High-Performance Computing systems equipped with dedicated hardware accelerators has also paved the way for the success of compute-intensive CNNs. Graphics Processing Units (GPUs), with their massive processing capability, have been of general interest for the acceleration of CNNs. Recently, Field Programmable Gate Arrays (FPGAs) have shown promise for CNN acceleration, since they offer high performance while also being re-configurable to support the evolution of CNNs. This work focuses on a design methodology to accelerate CNNs on FPGAs with low inference latency and high throughput, which are crucial for scenarios such as self-driving cars and video surveillance. It also includes optimizations that reduce resource utilization by a large margin with only a small degradation in performance, making the design suitable for low-end FPGA devices as well.

FPGA accelerators often suffer from limited main-memory bandwidth. In addition, highly parallel designs with large resource utilization often end up with low operating frequencies due to poor routing. This work employs data fetch and buffer mechanisms, designed specifically for the memory access pattern of CNNs, that overlap computation with memory access, and it proposes a novel arrangement of the systolic processing element array that achieves higher frequency and consumes fewer resources than existing works. Support is also extended to more complex CNNs for video processing. On an Intel Arria 10 GX1150, the design operates at a frequency as high as 258 MHz and performs a single inference of VGG-16 and C3D in 23.5 ms and 45.6 ms, respectively, offering throughputs of 66.1 and 23.98 inferences/s. This design can outperform other FPGA 2D CNN accelerators by up to 9.7 times and 3D CNN accelerators by up to 2.7 times.
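For intuition about the systolic organization, the toy model below (an illustrative Python sketch, not the thesis's hardware design) shows how a grid of processing elements computes a matrix product when operands are streamed through it with a one-cycle skew per row and column:

```python
import numpy as np

def systolic_matmul(A, B):
    """Toy cycle model of an output-stationary systolic array: PE (i, j)
    accumulates C[i, j] as A-values flow rightward and B-values flow
    downward; the skew makes matching operands meet at the right PE."""
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N))
    # At cycle t, PE (i, j) sees A[i, k] and B[k, j] with k = t - i - j.
    for t in range(M + N + K - 2):
        for i in range(M):
            for j in range(N):
                k = t - i - j
                if 0 <= k < K:
                    C[i, j] += A[i, k] * B[k, j]  # one MAC per PE per cycle
    return C

A, B = np.random.rand(3, 4), np.random.rand(4, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

In hardware, every PE performs its multiply-accumulate in parallel each cycle, so the whole product finishes in O(M + N + K) cycles rather than O(MNK) sequential operations.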
Contributors: Ravi, Pravin Kumar (Author) / Zhao, Ming (Thesis advisor) / Li, Baoxin (Committee member) / Ren, Fengbo (Committee member) / Arizona State University (Publisher)
Created: 2020

Description
The use of reactive security mechanisms in enterprise networks can, at times, provide an asymmetric advantage to the attacker. Similarly, the use of a proactive security mechanism like Moving Target Defense (MTD), if performed without analyzing the effects of the security countermeasures, can lead to violations of security policies and service level agreements. In this thesis, I explore three research questions: 1) How can attacker-defender interactions be modeled for multi-stage attacks? 2) How can proactive (MTD) security countermeasures be deployed efficiently in a software-defined environment for single-stage and multi-stage attacks? 3) How can the effects of security and management policies on the network be verified so that corrective actions can be taken?

I propose a Software-Defined Situation-Aware Cloud Security framework that 1) analyzes attacker-defender interactions using a scalable, Software-Defined Networking (SDN)-based attack graph. This research investigates Advanced Persistent Threat (APT) attacks using the scalable attack graph, which the framework generates quickly and efficiently via a parallel graph partitioning algorithm. 2) models single-stage and multi-stage attacks (APTs) game-theoretically and provides SDN-based MTD countermeasures; I propose a Markov Game for modeling multi-stage attacks. 3) introduces a multi-stage policy conflict checking framework at the SDN network's application plane. I present INTPOL, a new intent-driven security policy enforcement solution, which provides a unified language and grammar that abstracts the network administrator from the underlying network controller's lexical rules. INTPOL develops a bounded formal model for network service compliance checking, which significantly reduces the number of countermeasures that need to be deployed. Once the application-layer policy conflicts are resolved, I utilize an Object-Oriented Policy Conflict checking (OOPC) framework that identifies and resolves rule-order dependencies and conflicts between security policies.
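As a toy illustration of the game-theoretic modeling (hypothetical payoffs and move sets, not the framework's actual Markov Game), a single-stage attacker-defender interaction can be cast as a zero-sum matrix game, with the defender's optimal randomized MTD strategy found by linear programming:

```python
import numpy as np
from scipy.optimize import linprog

def defender_mixed_strategy(loss):
    """Solve a zero-sum game: rows are defender MTD moves, columns are
    attacker exploits, loss[i, j] is the defender's loss. Minimizes the
    worst-case expected loss v over mixed strategies x:
        minimize v  s.t.  (loss^T x)_j <= v for all j,  sum(x) = 1."""
    m, n = loss.shape
    c = np.zeros(m + 1)
    c[-1] = 1.0                                   # objective: minimize v
    A_ub = np.hstack([loss.T, -np.ones((n, 1))])  # loss^T x - v <= 0
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                             # probabilities sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], res.x[m]

# Hypothetical losses for three defender moves vs. three exploits.
loss = np.array([[4.0, 2.0, 6.0],
                 [1.0, 5.0, 3.0],
                 [3.0, 3.0, 2.0]])
strategy, game_value = defender_mixed_strategy(loss)
```

A Markov Game extends this by attaching such a matrix game to every network state, with transitions between states driven by the chosen attack and defense actions.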
Contributors: Chowdhary, Ankur (Author) / Huang, Dijiang (Thesis advisor) / Kambhampati, Subbarao (Committee member) / Doupe, Adam (Committee member) / Bao, Youzhi (Committee member) / Arizona State University (Publisher)
Created: 2020

Description
The need for incorporating game engines into robotics tools becomes increasingly crucial as their graphics continue to become more photorealistic. This thesis presents a simulation framework, referred to as OpenUAV, that addresses the cloud simulation and photorealism challenges of academic and research projects. In this work, OpenUAV is used to create a simulation of an autonomous underwater vehicle (AUV) closely following a moving autonomous surface vehicle (ASV) in an underwater coral reef environment. It incorporates the Unity3D game engine and the robotics software Gazebo to take advantage of Unity3D's perception and Gazebo's physics simulation. The software is developed as a containerized solution that is deployable on cloud and on-premises systems.

This method of combining Gazebo's physics and Unity3D's perception is evaluated for a team of marine vehicles (an AUV and an ASV) in a coral reef environment. A coordinated navigation and localization module is presented that allows the AUV to follow the path of the ASV. A fiducial marker underneath the ASV facilitates pose estimation of the AUV, and the pose estimates are filtered using the known dynamical system model of both vehicles for better localization. This thesis also investigates different fiducial markers and their detection rates in the Unity3D underwater environment, and examines the limitations and capabilities of this Unity3D-perception, Gazebo-physics approach.
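A minimal sketch of the fiducial-based pose estimation step, assuming OpenCV's ArUco module (whose API varies slightly across OpenCV versions; the marker dictionary, size, and calibration inputs here are placeholders):

```python
import cv2

def estimate_marker_pose(frame, camera_matrix, dist_coeffs, marker_len=0.2):
    """Detect an ArUco fiducial in a camera frame and estimate its pose
    (rotation and translation) relative to the camera. Returns None if
    no marker is visible."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_len, camera_matrix, dist_coeffs)
    return rvecs[0], tvecs[0]  # Rodrigues rotation, translation in meters
```

Raw pose estimates from such a detector are noisy underwater, which is why they are filtered through the vehicles' known dynamics as described above.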
Contributors: Anand, Harish (Author) / Das, Jnaneshwar (Thesis advisor) / Yang, Yezhou (Committee member) / Berman, Spring M (Committee member) / Arizona State University (Publisher)
Created: 2020

Description
Many real-world planning problems can be modeled as Markov Decision Processes (MDPs), which provide a framework for handling uncertainty in the outcomes of action executions. A solution to such a planning problem is a policy that handles possible contingencies that could arise during execution. MDP solvers typically construct policies for a problem instance without reusing information from previously solved instances. Research in generalized planning has demonstrated the utility of constructing algorithm-like plans that reuse such information. However, the use of such techniques in an MDP setting has not been adequately explored.

This thesis presents a novel approach for learning generalized partial policies that can be used to solve problems with different object names and/or object quantities, using very few example policies for learning. The approach uses abstraction for state representation, which allows the identification of patterns in solutions, such as loops, that are agnostic to problem-specific properties. This thesis also presents theoretical results related to the uniqueness and succinctness of the policies computed using such a representation. The presented algorithm can be used as a fast, yet greedy and incomplete, method for policy computation, falling back to a complete policy search algorithm when needed. Extensive empirical evaluation on discrete MDP benchmarks shows that this approach generalizes effectively and is often able to solve problems much faster than existing state-of-the-art discrete MDP solvers. Finally, the practical applicability of this approach is demonstrated by incorporating it in an anytime stochastic task and motion planning framework to successfully construct free-standing tower structures using Keva planks.
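The fall-back control flow described here can be sketched as follows (hypothetical interfaces; the thesis's abstraction and learning procedure are considerably richer):

```python
def make_generalized_policy(rules, abstract, fallback_solver):
    """A generalized partial policy: ordered (condition, action) rules
    over an abstract state, with a complete solver as the safety net."""
    def policy(state):
        s = abstract(state)
        for condition, action in rules:
            if condition(s):
                return action(s)        # fast, greedy path
        return fallback_solver(state)   # complete (slow) path
    return policy

# Toy domain: move all blocks from 'src' to 'dst'. The abstraction keeps
# only the count, so one rule covers any object names or quantities.
abstract = lambda state: {"remaining": len(state["src"])}
rules = [(lambda s: s["remaining"] > 0, lambda s: "move-one-block")]
policy = make_generalized_policy(
    rules, abstract, fallback_solver=lambda state: "plan-from-scratch")
print(policy({"src": ["b1", "b2", "b3"], "dst": []}))  # -> move-one-block
```

Because the rules condition only on abstract features, the same policy loop applies unchanged as the number of objects grows.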
Contributors: Kala Vasudevan, Deepak (Author) / Srivastava, Siddharth (Thesis advisor) / Zhang, Yu (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2020

Description
Next-generation sequencing is a powerful tool for detecting genetic variation. However, it is also error-prone, with error rates that are much larger than mutation rates. This can make mutation detection difficult; and while increasing sequencing depth can often help, sequence-specific errors and other non-random biases cannot be detected by increased depth. The problem of accurate genotyping is exacerbated when there is not a reference genome or other auxiliary information available.

I explore several methods for sensitively detecting mutations in non-model organisms using an example Eucalyptus melliodora individual. I use the structure of the tree to find bounds on its somatic mutation rate and evaluate several algorithms for variant calling. I find that conventional methods are suitable if the genome of a close relative can be adapted to the study organism. However, with structured data, a likelihood framework that is aware of this structure is more accurate. I use the techniques developed here to evaluate a reference-free variant calling algorithm.

I also use this data to evaluate a k-mer based base quality score recalibrator (KBBQ), a tool I developed to recalibrate base quality scores attached to sequencing data. Base quality scores can help detect errors in sequencing reads, but are often inaccurate. The most popular method for correcting this issue requires a known set of variant sites, which is unavailable in most cases. I simulate data and show that errors in this set of variant sites can cause calibration errors. I then show that KBBQ accurately recalibrates base quality scores while requiring no reference or other information and performs as well as other methods.

Finally, I use the Eucalyptus data to investigate the impact of quality score calibration on the quality of output variant calls and show that improved base quality score calibration increases the sensitivity and reduces the false positive rate of a variant calling algorithm.
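To make the recalibration idea concrete, here is a minimal sketch (not KBBQ itself, which must infer errors from k-mer evidence; here the error labels are given to keep the example short): group bases by their reported quality, estimate each bin's empirical error rate, and reassign the corresponding Phred score.

```python
import math
from collections import defaultdict

def recalibrate(reported_quals, is_error):
    """Map each reported Phred quality to an empirical quality, using
    Q = -10 * log10(error_rate) with add-one smoothing."""
    counts = defaultdict(lambda: [0, 0])  # q -> [errors, total]
    for q, err in zip(reported_quals, is_error):
        counts[q][0] += int(err)
        counts[q][1] += 1
    return {q: round(-10 * math.log10((e + 1) / (t + 2)))
            for q, (e, t) in counts.items()}

# Bases reported as Q30 (implied error rate 1e-3) that actually err 1%
# of the time should recalibrate down to roughly Q20.
quals = [30] * 1000
errors = [i < 10 for i in range(1000)]
print(recalibrate(quals, errors))  # -> {30: 20}
```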
Contributors: Orr, Adam James (Author) / Cartwright, Reed (Thesis advisor) / Wilson, Melissa (Committee member) / Kusumi, Kenro (Committee member) / Taylor, Jesse (Committee member) / Pfeifer, Susanne (Committee member) / Arizona State University (Publisher)
Created: 2020

Description
Access to real-time situational information, including the relative position and motion of surrounding objects, is critical for safe and independent travel. Object or obstacle (OO) detection at a distance is primarily a task of the visual system, due to the high-resolution information the eyes are able to receive from afar. As a sensory organ, the eyes have an unparalleled ability to adjust to varying degrees of light, color, and distance. Therefore, for a non-visual traveler (someone who is blind or has low vision), this information is unattainable if it is positioned beyond the reach of the preferred mobility device or outside the path of travel. Although assistive technology in the form of electronic travel aids (ETAs) has received considerable attention over the last two decades, the field has, surprisingly, seen little work focused on augmenting rather than replacing current non-visual travel techniques, methods, and tools. Consequently, this work describes the design of an intuitive tactile language and a series of wearable tactile interfaces (the Haptic Chair, HaptWrap, and HapBack) to deliver real-time spatiotemporal data. The overall intuitiveness of the haptic mappings conveyed through the tactile interfaces is evaluated using a combination of absolute identification accuracy over a series of patterns and subjective feedback through post-experiment surveys. Two types of spatiotemporal representations are considered: static patterns representing object location at a single time instance, and dynamic patterns, added in the HaptWrap, which represent object movement over a time interval. Results support the viability of multi-dimensional haptics applied to the body to yield an intuitive understanding of dynamic interactions occurring around the navigator during travel. The guiding principle of this work is to provide the navigator with spatial knowledge otherwise unattainable through current mobility techniques, methods, and tools, and thus with the information necessary to make informed navigation decisions independently and at a distance.
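One way to picture such a mapping (a purely hypothetical encoding; the thesis defines its own tactile language and hardware layout): discretize the space around the traveler into sectors, assign each sector a vibration motor, and encode distance as intensity.

```python
def encode_object(azimuth_deg, distance_m, n_motors=8, max_range_m=10.0):
    """Map an object's bearing and range to (motor index, intensity).
    Bearing selects one of n_motors spaced evenly around the torso;
    nearer objects vibrate more strongly. A moving object would be
    rendered as a time sequence of such frames (a dynamic pattern)."""
    sector = int((azimuth_deg % 360) / (360 / n_motors))
    intensity = max(0.0, 1.0 - min(distance_m, max_range_m) / max_range_m)
    return sector, round(intensity, 2)

# An object ahead-right at 2.5 m: motor 2 at 75% intensity.
print(encode_object(azimuth_deg=95, distance_m=2.5))  # -> (2, 0.75)
```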
Contributors: Duarte, Bryan Joiner (Author) / McDaniel, Troy (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created: 2020

Description
Ultra High Performance (UHP) cementitious binders are a class of cement-based materials with high strength and ductility, designed for use in precast bridge connections, bridge superstructures, high load-bearing structural members like columns, and structural repair and strengthening. This dissertation aims to elucidate the chemo-mechanical relationships in complex UHP binders to facilitate better microstructure-based design of these materials and to develop machine learning (ML) models that predict their scale-relevant properties from microstructural information.

To establish the connection between micromechanical properties and constituent materials, nanoindentation and scanning electron microscopy experiments are performed on several cementitious pastes. Following Bayesian statistical clustering, mixed reaction products with scattered nanomechanical properties are observed, attributable to the low degree of reaction of the constituent particles, enhanced particle packing, and the very low water-to-binder ratio of UHP binders. Relating the phase chemistry to the micromechanical properties, the chemical intensity ratios Ca/Si and Al/Si are found to be important parameters influencing the incorporation of Al into the C-S-H gel.
ML algorithms for classification of cementitious phases are found to require only the intensities of Ca, Si, and Al as inputs to generate accurate predictions for the more homogeneous cement pastes. When applied to the more complex UHP systems, the overlapping chemical intensities of the three dominant phases (Ultra High Stiffness (UHS) products, unreacted cementitious replacements, and clinker) lead ML models to misidentify these phases. Similarly, the reduced amount of data available on the hard, stiff UHS phases prevents accurate ML regression predictions of microstructural phase stiffness from chemical information alone. The use of generic virtual two-phase microstructures coupled with finite element analysis is also adopted to train ML models to predict composite mechanical properties. This approach, applied to three different representations of composite materials, produces accurate predictions, thus providing an avenue for image-based microstructural characterization of multi-phase composites such as UHP binders. This thesis provides insights into the microstructure of complex, heterogeneous UHP binders and the utilization of big-data methods such as ML to predict their properties. These results are expected to provide means for rational, first-principles design of UHP mixtures.
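As a schematic of the phase-classification setup (synthetic intensities and labels for illustration; the dissertation's data come from coupled nanoindentation and electron microscopy), a standard classifier can be trained on per-point Ca/Si/Al intensities to predict the phase:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic EDS-style intensities per measurement point: [Ca, Si, Al].
rng = np.random.default_rng(0)
csh     = rng.normal([0.55, 0.35, 0.05], 0.04, (200, 3))  # C-S-H gel
clinker = rng.normal([0.75, 0.20, 0.02], 0.04, (200, 3))  # unreacted clinker
filler  = rng.normal([0.10, 0.80, 0.05], 0.04, (200, 3))  # silica-rich replacement
X = np.vstack([csh, clinker, filler])
y = np.repeat([0, 1, 2], 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
# Overlapping intensity distributions, as in the UHP phases above, would
# pull this accuracy down, mirroring the misidentification discussed.
```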
Contributors: Ford, Emily Lucile (Author) / Neithalath, Narayanan (Thesis advisor) / Rajan, Subramaniam D. (Committee member) / Mobasher, Barzin (Committee member) / Chawla, Nikhilesh (Committee member) / Hoover, Christian G. (Committee member) / Maneparambil, Kailas (Committee member) / Arizona State University (Publisher)
Created: 2020

Description
Over the past decade, machine learning research has made great strides and had a significant impact in several fields. Its success is greatly attributable to the development of effective machine learning algorithms like deep neural networks (a.k.a. deep learning), the availability of large-scale databases, and access to specialized hardware like Graphics Processing Units. When designing and training machine learning systems, researchers often assume access to large quantities of data that capture different possible variations. Variation in the data is needed to incorporate desired invariance and robustness properties into the machine learning system, especially in the case of deep learning algorithms. However, it is very difficult to gather such data in a real-world setting. For example, in certain medical/healthcare applications, it is very challenging to obtain data from all possible scenarios or with the amount of variation required to train the system. Additionally, the over-parameterized and unconstrained nature of deep neural networks can cause them to be poorly trained and, in many cases, over-confident, which in turn can hamper their reliability and generalizability. This dissertation is a compendium of my research efforts to address the above challenges. I propose building invariant feature representations by wedding concepts from topological data analysis and Riemannian geometry that automatically incorporate the desired invariance properties for different computer vision applications. I discuss how deep learning can be used to address some of the common challenges faced when working with topological data analysis methods. I describe alternative learning strategies based on unsupervised learning and transfer learning to address issues like dataset shifts and limited training data. Finally, I discuss my preliminary work on applying simple orthogonality constraints to deep learning feature representations to help develop more reliable and better-calibrated models.
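For a flavor of the topological features involved (an illustrative sketch using the open-source ripser package, not the dissertation's full pipeline), persistence diagrams summarize the shape of a point cloud in a way that is invariant to rotations and relabelings of the points:

```python
import numpy as np
from ripser import ripser  # pip install ripser

# A noisy circle: its 1-dimensional persistence diagram should contain
# one prominent loop, however the points are rotated or reordered.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 120)
cloud = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.05, (120, 2))

dgms = ripser(cloud, maxdim=1)["dgms"]     # [H0 diagram, H1 diagram]
lifetimes = dgms[1][:, 1] - dgms[1][:, 0]  # persistence of each loop
print("most persistent loop:", lifetimes.max())
# Persistence summaries can be vectorized (e.g., as persistence images)
# and used as invariant features for a downstream model.
```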
Contributors: Som, Anirudh (Author) / Turaga, Pavan (Thesis advisor) / Krishnamurthi, Narayanan (Committee member) / Spanias, Andreas (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2020