Matching Items (1,483)

Description

Systematic Reviews (SRs) aim to synthesize the totality of evidence for clinical practice and are important in making clinical practice guidelines and health policy decisions. However, conducting SRs manually is a laborious and time-consuming process. This challenge is growing due to the increase in the number of databases to search and the number of papers being published. Hence, automating SRs is an essential task. The goal of this thesis work is to develop Natural Language Processing (NLP)-based classifiers that automate title- and abstract-based screening for clinical SRs based on inclusion/exclusion criteria. In clinical SRs, a high-sensitivity system is a key requirement. Most existing methods for SRs use binary classification systems trained on labeled data to predict inclusion/exclusion. While previous studies have shown that NLP-based classification methods can automate title- and abstract-based screening for SRs, methods for achieving high sensitivity have not been empirically studied. In addition, the binary classification training strategy has several limitations: (1) it ignores the inclusion/exclusion criteria, (2) it lacks generalization ability, (3) it suffers from low-resource data, and (4) it fails to achieve reasonable precision at high sensitivity levels. This thesis work contributes to several aspects of the clinical systematic review domain. First, it presents an empirical study of NLP-based supervised text classification and high-sensitivity methods on datasets developed from six different SRs in the clinical domain. Second, it provides a novel approach that views SR screening as a Question Answering (QA) problem in order to overcome the limitations of the binary classification training strategy, and it proposes a more general abstract screening model for different SRs. Finally, this work provides a new QA-based dataset for six different SRs, which is made available to the community.
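
To make the QA-style framing concrete, here is a minimal sketch that scores each inclusion criterion against an abstract using an off-the-shelf NLI model via zero-shot classification; the model name, criteria strings, and 0.5 threshold are illustrative assumptions, not the thesis's system.

```python
# Sketch: treat each inclusion criterion as a hypothesis scored against the
# abstract, so one model generalizes across reviews with different criteria.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")  # assumed NLI backbone

abstract = "We conducted a randomized controlled trial of drug X in adults..."
criteria = ["randomized controlled trial", "adult human participants"]  # illustrative

result = classifier(abstract, candidate_labels=criteria, multi_label=True)
# Thresholds would be tuned low in practice to keep sensitivity high.
include = all(score >= 0.5 for score in result["scores"])
print(result["labels"], result["scores"], "include:", include)
```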
Contributors: Parmar, Mihir Prafullsinh (Author) / Baral, Chitta (Thesis advisor) / Devarakonda, Murthy (Thesis advisor) / Riaz, Irbaz B (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Heterogeneous SoCs are in development that marry multiple architectural patterns together. In order for software to run on such a platform, it must be broken down into its constituent parts, kernels, and scheduled for execution on the hardware. Although this can be done by hand, it would be arduous and time-consuming; rather, a tool should be developed that analyzes the source binary, extracts the kernels, schedules the kernels, and optimizes the scheduled kernels for their target components. This dissertation proposes a decidable kernel definition that enables an algorithmic approach to detecting kernels in arbitrary programs. This definition is built upon four constraints that can be tested using basic graph theory. In addition, two algorithms are proposed that successfully extract kernels based upon runtime information. The first utilizes dynamic traces, which are generated using a collection of novel optimizations. The second utilizes a simple affinity matrix, which has no runtime overhead during program execution. Finally, a Dense Neural Network is proposed that is capable of detecting a kernel's archetype based upon only the composition of the source program and the number of times individual basic blocks execute. The contributions proposed in this dissertation provide the necessary infrastructure to perform a litany of other optimizations on kernels. By detecting kernels algorithmically, any program can be analyzed and optimized with techniques that have heretofore required kernels to be written in a compatible form. Computational kernels can be extracted from any program with no constraints. The innovations described here form the foundation for future automated kernel optimization.
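
As one illustration of the affinity-matrix idea, the toy sketch below correlates per-run basic-block execution counts and groups strongly co-executing blocks as kernel candidates; the data, threshold, and grouping rule are assumptions for illustration, not the dissertation's algorithm.

```python
# Sketch: blocks whose execution-count profiles correlate strongly across
# profiling runs are grouped together as kernel candidates.
import numpy as np

# rows = basic blocks, cols = profiling runs; counts are made-up example data
counts = np.array([
    [100, 200, 150],   # block 0
    [101, 198, 149],   # block 1: co-executes with block 0
    [  5,   3,   9],   # block 2: unrelated control code
])

affinity = np.corrcoef(counts)   # pairwise correlation of execution profiles
threshold = 0.95                 # illustrative cutoff
kernels, visited = [], set()
for i in range(len(counts)):
    if i in visited:
        continue
    group = {j for j in range(len(counts)) if affinity[i, j] > threshold}
    visited |= group
    kernels.append(sorted(group))
print(kernels)  # -> [[0, 1], [2]]
```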
Contributors: Uhrie, Richard Lawrence (Author) / Brunhaver, John (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Shrivastiva, Aviral (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Millions of people around the world engage daily in artisanal and small-scale gold mining (ASGM), a vital part of total global gold production. For Colombia, this mining accounts for most of the precious metal's output. It has also made Colombia, per capita, the worst mercury-polluted country in the world. Though cleaner, safer, and more effective methods exist, miners still opt for mercury use. Any success with interventions in technology, training, or policy has been limited. This dissertation attends to mercury use in ASGM in Antioquia, Colombia, via two gaps: a descriptive one (i.e., a failure to pay attention to, and to describe, actual practices in ASGM) and a theoretical one (i.e., explanations as to why some decisions, including but not limited to policy, succeed or fail). In addition to an ecology of practices, embodiment, and situated knowledges, phenomenological interviews with stakeholders illuminate critical lived experience, as well as whether or how it is possible to reduce mercury use and contamination. Furthermore, a novel application of speculative sound supplements this work. Finally, key findings complement existing scholarship. The presence of gold drives mining, but an increase in mining comes at a cost. Miners know mercury is hazardous, but mining legally, or formally, has proven too onerous. So mercury use persists: it is profitable, and its effects on human health can seem delayed. The state is pivotal to change in mercury use, but its approach has been punitive. Change will invariably require greater attention to the lived experiences of miners.
Contributors: Pimentel, Matthew (Author) / Fonow, Mary Margaret (Thesis advisor) / Parmentier, Mary Jane (Thesis advisor) / Coleman, Grisha (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Artificial intelligence is one of the leading technologies that mimics the problem-solving and decision-making capabilities of the human brain. Machine learning algorithms, especially deep learning algorithms, are leading the way in terms of performance and robustness. They are used for various purposes, mainly for computer vision, speech recognition, and object detection. The algorithms are usually evaluated for accuracy, and they utilize full floating-point precision (32 bits). The hardware would require a large amount of power and area to accommodate many parameters at full precision. In this exploratory work, a convolutional autoencoder is quantized to work with an event-based camera. The model is designed so that the autoencoder can work on-chip, which would substantially decrease the processing latency. Different quantization methods are used to quantize and binarize the weights and activations of this neural network model to make it portable and power-efficient. A sparsity term is added to make the model as robust and energy-efficient as possible. By selectively quantizing the layers of the encoder, the network model was able to recoup the accuracy lost to binarizing the weights and activations. This method of recouping accuracy gives enough flexibility to deploy the network on-chip and obtain real-time processing from systems like event-based cameras. Lately, computer vision, and especially object detection, has made strides in detection accuracy. The algorithms can sufficiently detect and predict objects in real time. However, end-to-end detection is challenging due to the large number of parameters and the processing requirements. A change to the Non-Maximum Suppression algorithm in SSD (Single Shot Detector)-MobileNet-V1 resulted in lower computational complexity without any change in the quality of the output metric. The Mean Average Precision (mAP) calculated suggests that this method can be implemented in the post-processing of other networks.
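
The sketch below shows one standard way to binarize weights while keeping the network trainable: a sign forward pass with a straight-through estimator in the backward pass. This is a generic technique, assumed here for illustration; the thesis's exact quantization scheme may differ.

```python
# Sketch: binarized weights (+/-1) with a straight-through estimator (STE)
# so gradients can still flow to the full-precision latent weights.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)                 # forward: +/-1 weights

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # STE: pass the gradient through only where |w| <= 1
        return grad_out * (w.abs() <= 1).float()

w = torch.randn(4, 4, requires_grad=True)
loss = BinarizeSTE.apply(w).sum()
loss.backward()
print(w.grad)  # nonzero only where the STE lets gradients through
```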
Contributors: Kuzhively, Ajay Balu (Author) / Cao, Yu (Thesis advisor) / Seo, Jae-Sun (Committee member) / Fan, Delian (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

REACT is a distributed resource allocation protocol that can be used to negotiate airtime among nodes in a wireless network. In this thesis, REACT is extended to support quality of service (QoS) airtime in an updated version called REACT QoS. Nodes can request the higher airtime class to receive priority in the network. This differentiated service is provided by using the access categories (ACs) defined by 802.11, where one AC represents the best-effort (BE) class of airtime and another represents the QoS class. Airtime allocations computed by REACT QoS are realized using an updated tuning algorithm, and REACT QoS is extended to allow for QoS airtime along multi-hop paths. Experimentation on the w-iLab.t wireless testbed in an ad-hoc setting shows that these extensions are effective. In a single-hop setting, nodes requesting the higher class of airtime are guaranteed their allocation, with the leftover airtime divided fairly among the remaining nodes. In the multi-hop scenario, REACT QoS is shown to perform better than 802.11 in airtime allocation as well as in delay, jitter, and throughput. Finally, the most influential factors and two-way interactions are identified through a locating-array-based screening experiment for the delay, jitter, and throughput responses. The screening experiment includes a factor on how the channel is partitioned into data and control traffic, and its effect on the responses is determined.
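
A toy sketch of the allocation outcome described above: QoS requests are granted first and the leftover airtime is split evenly among best-effort nodes. This models only the end result under assumed semantics, not the distributed REACT QoS negotiation itself.

```python
# Sketch: grant QoS-class requests first, then divide the remaining airtime
# fairly among best-effort (BE) nodes. Node names are illustrative.
def allocate_airtime(qos_requests, num_be_nodes, total=1.0):
    granted = dict(qos_requests)              # QoS nodes get what they asked for
    leftover = total - sum(granted.values())
    if leftover < 0:
        raise ValueError("QoS requests exceed available airtime")
    be_share = leftover / num_be_nodes if num_be_nodes else 0.0
    return granted, be_share

granted, be_share = allocate_airtime({"n1": 0.4, "n2": 0.2}, num_be_nodes=2)
print(granted, be_share)  # -> {'n1': 0.4, 'n2': 0.2} 0.2
```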
Contributors: Kulenkamp, Daniel J (Author) / Syrotiuk, Violet R (Thesis advisor) / Colbourn, Charles J (Committee member) / Tinnirello, Ilenia (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Apache Spark is one of the most widely adopted open-source Big Data processing engines. High performance and ease of use for a wide class of users are among the primary reasons for this wide adoption. Although data partitioning increases the performance of analytics workloads, its application to Apache Spark is very limited due to layered data abstractions. Once data is written to a stable storage system like the Hadoop Distributed File System (HDFS), the data locality information is lost, and when the data is read back into Spark's in-memory layer, the reads are random, which incurs shuffle overhead. This report investigates the use of metadata stored along with the data itself to reduce shuffle overhead in join-based workloads. It explores the Hyperspace library to mitigate the shuffle overhead for Spark SQL applications. The report also introduces the Lachesis system as a solution to the shuffle overhead problem. The benchmark results show that persistent partitioning and co-location techniques can be beneficial for matrix multiplication using SQL (Structured Query Language) operators, as well as for the TPC-H analytical query benchmark. The study concludes with a discussion of the trade-offs of using integrated stable storage versus layered storage abstractions. It also discusses the feasibility of integrating the Machine Learning (ML) inference phase with the SQL operators, along with cross-engine compatibility for employing data locality information.
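
As an illustration of how co-partitioning avoids join-time shuffles in Spark SQL, the sketch below buckets both sides of a join on the join key so the planner can produce a shuffle-free sort-merge join. Table and column names are illustrative, and this uses stock Spark bucketing rather than the Hyperspace or Lachesis systems discussed in the report.

```python
# Sketch: persist both join inputs bucketed on the join key with the same
# bucket count, so Spark can join them without an Exchange (shuffle).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bucketed-join-sketch").getOrCreate()

orders = spark.range(1_000_000).withColumnRenamed("id", "order_id")
items = spark.range(1_000_000).withColumnRenamed("id", "order_id")

orders.write.bucketBy(32, "order_id").sortBy("order_id").saveAsTable("orders_b")
items.write.bucketBy(32, "order_id").sortBy("order_id").saveAsTable("items_b")

joined = spark.table("orders_b").join(spark.table("items_b"), "order_id")
joined.explain()  # the plan should show no Exchange before the sort-merge join
```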
Contributors: Barhate, Pratik Narhar (Author) / Zou, Jia (Thesis advisor) / Zhao, Ming (Committee member) / Elsayed, Mohamed Sarwat (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Student retention is a critical metric for many universities whose intention is to support student success. The goal of this thesis is to create retention models utilizing machine learning (ML) techniques. The factors explored in this research include only those known during the admissions process. The models have two goals: first, to correctly predict as many non-returning students as possible while minimizing the number of students who are falsely predicted as non-returning; second, to identify important features in student retention and provide a practical explanation for a student's decision to no longer persist. The models are then used to provide outreach to students who need more support. The findings of this research indicate that the current top-performing model is AdaBoost, which is able to successfully predict non-returning students with an accuracy of 54 percent.
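
A hedged sketch of the kind of modeling setup described: AdaBoost on admissions-time features, with the decision threshold lowered to favor recall on non-returning students over precision. The synthetic data, feature count, and threshold are illustrative assumptions, not the thesis's dataset or tuning.

```python
# Sketch: AdaBoost retention model with a recall-oriented decision threshold.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # stand-ins for admissions features
y = (X[:, 0] + rng.normal(size=1000) < -0.5).astype(int)  # 1 = non-returning

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
pred = (proba >= 0.3).astype(int)              # threshold < 0.5 trades precision for recall
print("recall on non-returners:", recall_score(y_te, pred))
```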
Contributors: Wade, Alexis N (Author) / Gel, Esma (Thesis advisor) / Yan, Hao (Thesis advisor) / Pavlic, Theodore (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

In combinatorial mathematics, a Steiner system is a type of block design. A Steiner triple system is a special case of a Steiner system in which every block contains 3 elements and each pair of points occurs in exactly one block. Independent sets in Steiner triple systems are the topic of this thesis. Some properties related to independent sets in Steiner triple systems are provided. The distribution of the sizes of maximum independent sets of Steiner triple systems of a specific order is also discussed. An algorithm is provided for constructing a Steiner triple system whose maximum independent set has size bounded below by a given value. An alternative way to construct a Steiner triple system using an affine plane is also presented. A modified greedy algorithm for finding a maximal independent set in a Steiner triple system, and a post-optimization method for improving the results yielded by this algorithm, are established.
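
A minimal sketch of the basic greedy step, run here on the Fano plane, the unique STS(7): a point may join the independent set only if no triple through it already has its other two points chosen. The point-ordering heuristic and the post-optimization phase of the thesis are omitted.

```python
# Sketch: greedy maximal independent set in a Steiner triple system.
# An independent set contains no complete triple of the system.
def greedy_independent_set(points, triples):
    independent = set()
    for p in points:  # a smarter greedy would order points heuristically
        blocked = any(len(independent & (set(t) - {p})) == 2
                      for t in triples if p in t)
        if not blocked:
            independent.add(p)
    return independent

fano = [(0,1,2), (0,3,4), (0,5,6), (1,3,5), (1,4,6), (2,3,6), (2,4,5)]
print(greedy_independent_set(range(7), fano))  # -> {0, 1, 3, 6}, size 4
```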
Contributors: Wang, Zhaomeng (Author) / Colbourn, Charles (Thesis advisor) / Richa, Andrea (Committee member) / Jiang, Zilin (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

In videos that contain actions performed unintentionally, agents do not achieve their desired goals. In such videos, it is challenging for computer vision systems to understand high-level concepts such as goal-directed behavior. On the other hand, from a very early age, humans are able to understand the relation between an agent and their ultimate goal even if the action gets disrupted or unintentional effects occur. Inculcating this ability in artificially intelligent agents would make them better social learners by not just learning from their own mistakes, i.e., reinforcement learning, but also learning from others' mistakes. For example, this could greatly reduce the search space for artificially intelligent agents in finding the correct action sequence when trying to achieve a new goal, since they would be able to learn from others what not to do as well as how and when actions result in undesired outcomes. To validate the ability of deep learning models to perform this task, the Weakly Augmented Oops (W-Oops) dataset is proposed, built upon the Oops dataset. W-Oops consists of 2,100 unintentional human action videos, with 44 goal-directed and 33 unintentional video-level activity labels collected through human annotations. Inspired by previous methods on tasks such as weakly supervised action localization, which show promise for achieving good localization results without ground-truth segment annotations, this paper proposes a weakly supervised algorithm for localizing both the goal-directed and the unintentional temporal regions of a video using only video-level labels. In particular, an attention-mechanism-based strategy is employed that predicts the temporal regions which contribute the most to a classification task, leveraging solely video-level labels. Meanwhile, a designed overlap regularization allows the model to focus on distinct portions of the video when inferring the goal-directed and unintentional activity, while guaranteeing their temporal ordering. Extensive quantitative experiments verify the validity of the localization method.
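
A simplified sketch of attention-based temporal localization from video-level labels: per-snippet attention weights pool frame features into a clip embedding for classification, and at inference the weights serve as localization scores. The feature dimension, label count, and architecture are assumptions, and the paper's overlap regularization is omitted.

```python
# Sketch: weakly supervised localization via attention pooling.
import torch
import torch.nn as nn

class AttnPoolClassifier(nn.Module):
    def __init__(self, feat_dim=1024, num_classes=44):
        super().__init__()
        self.attn = nn.Linear(feat_dim, 1)     # per-snippet attention score
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, feats):                  # feats: (T, feat_dim)
        w = torch.softmax(self.attn(feats), dim=0)   # temporal attention (T, 1)
        clip = (w * feats).sum(dim=0)                # attention-weighted pooling
        return self.cls(clip), w.squeeze(-1)         # logits + localization scores

model = AttnPoolClassifier()
logits, scores = model(torch.randn(120, 1024))       # 120 snippets of features
print(logits.shape, scores.shape)                    # (44,) and (120,)
```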
Contributors: Chakravarthy, Arnav (Author) / Yang, Yezhou (Thesis advisor) / Davulcu, Hasan (Committee member) / Pavlic, Theodore (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Characterization and modeling of deformation and failure in metallic materials under extreme conditions, such as the high loads and strain rates found under shock loading due to explosive detonation and high-velocity impacts, are extremely important for a wide variety of military and industrial applications. When a shock wave causes stress in a material that exceeds the elastic limit, plasticity and eventually spallation occur in the material. The process of spall fracture, which in ductile materials stems from strain localization, void nucleation, growth, and coalescence, can be caused by microstructural heterogeneity. An analysis of void nucleation performed on a microstructurally explicit simulation of spall damage evolution in multicrystalline copper indicated triple junctions as the preferred sites for incipient damage nucleation, revealing that 75% of them had at least two grain boundaries with misorientation angles between 20° and 55°. The analysis suggested that the nature of the boundaries connecting at a triple junction is an indicator of their tendency to localize spall damage. The results also showed that damage propagated preferentially into one of the high-angle boundaries after voids nucleated at triple junctions. Recently, the Rayleigh-Taylor instability (RTI) and the Richtmyer-Meshkov instability (RMI) have been used to deduce dynamic material strength at very high pressures and strain rates. The RMI is used in this work since its slower linear growth rate allows the use of precise diagnostics such as Transient Imaging Displacement Interferometry (TIDI). The Preston-Tonks-Wallace (PTW) model is used to study the effects of dynamic strength on the behavior of samples with a fed-thru RMI, induced via direct laser drive on a perturbed surface, on the stability of the shock front, and on the dynamic evolution of the amplitudes and velocities of the perturbation imprinted on the back (flat) surface by the perturbed shock front. Simulation results clearly showed that the amplitude of the hydrodynamic instability increases as strength decreases, and vice versa, and that the amplitude of the perturbed shock front produced by the fed-thru RMI is affected by strength in the same way, which provides an alternative to amplitude measurements for studying strength effects under dynamic conditions. Simulation results also indicate the presence of second harmonics in the surface perturbation after a certain time, and these were also affected by the material strength.
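
For reference, the linear RMI growth entering such analyses is often estimated with Richtmyer's impulsive model, a textbook hydrodynamic (strength-free) baseline rather than the strength-dependent simulations of this work:

```latex
% Richtmyer's impulsive model for the linear RMI growth rate
% (post-shock quantities are marked with a superscript +):
\dot{a} \;=\; k \,\Delta u \, A^{+}\, a_0^{+}
```

Here k = 2*pi/lambda is the perturbation wavenumber, Delta u is the velocity jump imparted to the interface, and A+ and a0+ are the post-shock Atwood number and amplitude. Material strength suppresses growth below this hydrodynamic estimate, which is what makes perturbation amplitude (and shock-front perturbation) measurements sensitive to strength.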
Contributors: Gautam, Sudrishti (Author) / Peralta, Pedro (Thesis advisor) / Oswald, Jay (Committee member) / Solanki, Kiran (Committee member) / Arizona State University (Publisher)
Created: 2016