This collection includes both ASU theses and dissertations, submitted by graduate students, and Barrett, The Honors College theses, submitted by undergraduate students.

Description
The rapid growth of Internet-of-Things (IoT) and artificial intelligence applications has called forth a new computing paradigm: edge computing. Edge computing applications, such as video surveillance, autonomous driving, and augmented reality, are highly computationally intensive and require real-time processing. Current edge systems are typically based on commodity general-purpose hardware such as Central Processing Units (CPUs) and Graphics Processing Units (GPUs), which are mainly designed for large, non-time-sensitive jobs in the cloud and do not match the needs of edge workloads. These systems are also usually power hungry and unsuitable for resource-constrained edge deployments. Such an application-hardware mismatch calls for a new computing backbone that meets high-bandwidth, low-latency, and energy-efficiency requirements; the new system should also support a variety of edge applications with different characteristics. This thesis addresses the above challenges by studying the use of Field Programmable Gate Array (FPGA)-based computing systems for accelerating edge workloads, from three critical angles. First, it investigates the feasibility of FPGAs for edge computing in comparison to conventional CPUs and GPUs. Second, it studies the acceleration of common algorithmic characteristics, identified as loop patterns, using FPGAs, and develops a benchmark tool for analyzing the performance of these patterns on different accelerators. Third, it designs a new edge computing platform that uses multiple clustered FPGAs to provide high-bandwidth, low-latency acceleration of the convolutional neural networks (CNNs) widely used in edge applications. Finally, it studies the acceleration of an emerging class of networks, randomly-wired neural networks, on the multi-FPGA platform. The experimental results from this work show that the new generation of workloads requires rethinking the current edge-computing architecture. First, through the acceleration of common loops, the work demonstrates that FPGAs can outperform GPUs on specific loop types by up to 14 times. Second, it shows the linear scalability of multi-FPGA platforms in accelerating neural networks. Third, it demonstrates that the proposed scheduler, which optimally places randomly-wired neural networks on multi-FPGA platforms, achieves 81.1 times better throughput than the available scheduling mechanisms.
Contributors: Biookaghazadeh, Saman (Author) / Zhao, Ming (Thesis advisor) / Ren, Fengbo (Thesis advisor) / Li, Baoxin (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Data mining, also known as big data analysis, has been identified as a critical and challenging process for a variety of real-world applications. Numerous datasets are collected and generated every day, and the rise in data volume and data modality has increased the demand for data mining methods and strategies for finding anomalies, patterns, and correlations within large data sets to predict outcomes. Effective machine learning methods are widely adopted to build the data mining pipeline for purposes such as business understanding, data understanding, data preparation, modeling, evaluation, and deployment. The major challenges in effectively and efficiently mining big data are (1) data heterogeneity and (2) missing data. Heterogeneity is a natural characteristic of big data, as the data is typically collected from different sources with diverse formats. Missing values are the most common issue in heterogeneous data analysis, resulting from a variety of factors including the data collection process, user initiatives, erroneous data entries, and so on. In response to these challenges, this thesis investigates three main research directions with application scenarios: (1) mining and formulating heterogeneous data, (2) missing-value imputation strategies in various application scenarios, in both offline and online manners, and (3) missing-value imputation for multi-modality data. Multiple strategies with theoretical analysis are presented, and the effectiveness of the proposed algorithms is evaluated against state-of-the-art methods.
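To make the missing-value setting concrete, the following is a minimal sketch, not drawn from the thesis itself, of two baseline imputation strategies on a synthetic table: per-column mean filling, and an iterative low-rank (SVD) refinement that exploits correlations across columns. All data, noise levels, and the rank choice are illustrative.

```python
import numpy as np

# Synthetic low-rank data matrix with missing entries (NaN);
# rows are samples, columns are features with rank-3 structure.
rng = np.random.default_rng(0)
X_true = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 8))
mask = rng.random(X_true.shape) < 0.2          # 20% of entries missing
X = np.where(mask, np.nan, X_true)

# Baseline 1: mean imputation, filling each column with its observed mean.
col_means = np.nanmean(X, axis=0)
X_mean = np.where(np.isnan(X), col_means, X)

# Baseline 2: iterative low-rank (SVD) refinement of the filled-in entries.
X_hat = X_mean.copy()
for _ in range(20):
    U, s, Vt = np.linalg.svd(X_hat, full_matrices=False)
    s[3:] = 0.0                                # keep a rank-3 approximation
    X_low = (U * s) @ Vt
    X_hat[np.isnan(X)] = X_low[np.isnan(X)]    # only overwrite missing cells

miss = np.isnan(X)
print("mean-impute RMSE:", np.sqrt(np.mean((X_mean - X_true)[miss] ** 2)))
print("low-rank RMSE:  ", np.sqrt(np.mean((X_hat - X_true)[miss] ** 2)))
```

Because the synthetic columns are correlated, the low-rank refinement recovers missing cells markedly better than column means, which is the basic motivation for structure-aware imputation.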
Contributors: Liu, Xu (Author) / He, Jingrui (Thesis advisor) / Xue, Guoliang (Thesis advisor) / Li, Baoxin (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
One of the most remarkable outcomes of the evolution of the web into Web 2.0 has been the propelling of blogging into a widely adopted and globally accepted phenomenon. While the unprecedented growth of the Blogosphere has added diversity and enriched the media, it has also added complexity. To cope with this relentless expansion, many enthusiastic bloggers have embarked on voluntarily writing, tagging, labeling, and cataloguing their posts in hopes of reaching the widest possible audience. Unbeknownst to them, this reaching-for-others process triggers the generation of a new kind of collective wisdom, a result of shared collaboration and the exchange of ideas, purposes, and objectives through the formation of associations, links, and relations. A mastery of the Blogosphere can greatly help serve the needs of its ever-growing number of users, as well as producers, service providers, and advertisers, by facilitating the categorization and navigation of this vast environment. This work explores a novel method to leverage the collective wisdom from the infused label space for blog search and discovery. The work demonstrates that the wisdom space provides a unique and desirable framework in which to discover the highly sought-after background information that can aid in the building of classifiers. This insight is incorporated into the construction of a better clustering of blogs, which boosts the performance of classifiers in identifying more relevant labels for blogs and offers a mechanism for replacing spurious labels and mislabels in a multi-labeled space.
Contributors: Galan, Magdiel F (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
With recent advances in computational power, algorithmic trading has become one of the primary strategies for trading on the stock market. To understand why and how these strategies are effective, this project examined the complete process of creating tools and applications to analyze and predict stock prices in order to perform low-frequency trading. The project is composed of three main components. The first component integrates several public resources to acquire, process, and store financial trading data for use by the other components. The Alpha Vantage API, a freely accessible data service, provides an accurate and comprehensive set of features for each requested stock ticker. The second component researches, prototypes, and implements various trading algorithms in code. We began by focusing on the mean reversion algorithm as a proof of concept for developing meaningful trading strategies and identifying patterns within our datasets. To augment our market prediction power ("alpha"), we implemented a Long Short-Term Memory (LSTM) recurrent neural network; neural networks are an effective but often complex tool used frequently in data science when traditional methods fall short. The last component optimizes, analyzes, compares, and contrasts all of the algorithms and identifies key features in order to assess the overall effectiveness of each. We were able to identify conclusively which aspects of each algorithm provided better alpha, and we created an entire pipeline to automate this process for live trading. An additional reason for automation is to provide an educational framework so that anyone interested in quantitative finance can leverage this project to gain further insight.
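The abstract does not spell out the project's trading rules, but a rolling z-score is a common way to implement a basic mean-reversion signal. The sketch below, with an illustrative window and thresholds and synthetic prices standing in for Alpha Vantage data, shows the general shape of such a strategy.

```python
import numpy as np
import pandas as pd

# Synthetic daily closing prices (a random walk); in the project these
# would come from a data source such as the Alpha Vantage API.
rng = np.random.default_rng(42)
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)), name="close")

# Rolling z-score: how far the price sits from its recent mean,
# measured in units of recent volatility.
window = 20
zscore = (prices - prices.rolling(window).mean()) / prices.rolling(window).std()

# Mean-reversion rule (illustrative thresholds): go long when the price
# is unusually low relative to its recent history, short when unusually high.
signal = pd.Series(0, index=prices.index)
signal[zscore < -1.5] = 1    # long
signal[zscore > 1.5] = -1    # short

# Naive next-day strategy returns (no transaction costs, no slippage).
returns = prices.pct_change().shift(-1)
strategy_returns = (signal * returns).dropna()
print("mean daily return:", strategy_returns.mean())
```

An LSTM-based predictor, as used in the project, would replace the fixed z-score rule with a learned forecast, but the surrounding pipeline of data acquisition, signal generation, and backtesting stays the same.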
Contributors: Yurowkin, Alexander (Co-author) / Kumar, Rohit (Co-author) / Welfert, Bruno (Thesis director) / Li, Baoxin (Committee member) / Economics Program in CLAS (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description
How can a machine be taught to understand natural language? This question is a long-standing challenge in Artificial Intelligence, and several tasks have been designed to measure progress on it. Question answering is one such task: it evaluates a machine's ability to understand natural language by having it read a passage of text or an image and answer comprehension questions. In recent years, the development of transformer-based language models and large-scale human-annotated datasets has led to remarkable progress in question answering. However, fully supervised question answering systems show several disadvantages, such as poor generalization to unseen out-of-distribution domains, sensitivity to linguistic style differences in questions, and vulnerability to adversarial samples. This thesis proposes implicitly supervised question answering systems trained using knowledge acquired from external knowledge sources, together with new learning methods that provide inductive biases for learning question answering. In particular, the following research projects are discussed: (1) knowledge acquisition methods, including semantic and abductive information retrieval for seeking missing knowledge, a method to represent unstructured text corpora as a knowledge graph, and the construction of a knowledge base for implicit commonsense reasoning; and (2) learning methods, including Knowledge Triplet Learning, a method over knowledge graphs; Test-Time Learning, a method for generalizing to an unseen out-of-distribution context; WeaQA, a method for learning visual question answering from image captions without strong supervision; WeaSel, a weakly supervised method for relative spatial reasoning; and a new paradigm for unsupervised natural language inference. These methods potentially open a new research direction for overcoming the pitfalls of direct supervision.
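As a rough illustration of the triplet idea named above, the toy sketch below scores answer candidates against a (question, knowledge) context using bag-of-words embeddings. It is only meant to convey the shape of triplet-style scoring; the thesis's actual Knowledge Triplet Learning method is a trained model over knowledge graphs, not this heuristic, and all text and names here are hypothetical.

```python
import numpy as np

def embed(text, vocab):
    """Hypothetical embedding: a normalized bag-of-words vector."""
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

question = "what do plants need to make food"
knowledge = "plants use sunlight water and carbon dioxide to make food"
candidates = ["sunlight", "plastic", "sand"]

vocab = {w: i for i, w in enumerate(
    set((question + " " + knowledge + " " + " ".join(candidates)).split()))}

# Score each candidate answer by its similarity to the combined
# question-plus-retrieved-knowledge context.
context = embed(question + " " + knowledge, vocab)
scores = {a: float(context @ embed(a, vocab)) for a in candidates}
print(max(scores, key=scores.get), scores)
```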
Contributors: Banerjee, Pratyay (Author) / Baral, Chitta (Thesis advisor) / Yang, Yezhou (Committee member) / Blanco, Eduardo (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Speedsolving, the art of solving twisty puzzles like the Rubik's Cube as fast as possible, has recently benefitted from the arrival of smartcubes, which carry special hardware for tracking the cube's face turns and transmitting them via Bluetooth. However, due to their embedded electronics, existing smartcubes cannot be used in competition, which limits their utility in personal speedcubing practice. This thesis proposes a sound-based design for tracking the face turns of a standard, non-smart speedcube, consisting of an audio-processing receiver in software and a small physical speaker configured as a transmitter. Special attention has been given to ensuring that installing the transmitter requires only a reversible centercap replacement on the original cube. This allows the cube to benefit from smartcube features during practice while still maintaining compliance with competition regulations. Within a controlled test environment, the software receiver perfectly detected a variety of transmitted move sequences. Furthermore, all components required for the physical transmitter were demonstrated to fit within the centercap of a Gans 356 speedcube.
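The abstract does not describe the audio encoding, but one plausible receiver design is per-frame tone detection via an FFT. The sketch below assumes a hypothetical tone-to-move mapping and framing; it only illustrates the kind of signal processing such a receiver might perform, not the thesis's actual scheme.

```python
import numpy as np

FS = 44100      # sample rate (Hz)
FRAME = 2048    # samples per analysis frame

# Hypothetical mapping of audio tones to face turns; the thesis's
# actual encoding is not specified in the abstract.
TONE_TO_MOVE = {2000: "U", 2500: "R", 3000: "F", 3500: "D", 4000: "L", 4500: "B"}

def detect_move(frame, fs=FS):
    """Return the move whose tone dominates the frame, or None if silent."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    best, best_mag = None, 0.0
    for tone, move in TONE_TO_MOVE.items():
        bin_idx = np.argmin(np.abs(freqs - tone))
        if spectrum[bin_idx] > best_mag:
            best, best_mag = move, spectrum[bin_idx]
    # Require the winning tone to stand out above the frame's average energy.
    return best if best_mag > 10 * spectrum.mean() else None

# Simulate a transmitted "R" turn: a short 2500 Hz beep.
t = np.arange(FRAME) / FS
frame = 0.5 * np.sin(2 * np.pi * 2500 * t)
print(detect_move(frame))  # expected: R
```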

Contributors: Hale, Joseph (Author) / Heinrichs, Robert (Thesis director) / Li, Baoxin (Committee member) / Barrett, The Honors College (Contributor) / Software Engineering (Contributor) / School of International Letters and Cultures (Contributor)
Created: 2022-05
Description
Drosophila melanogaster, an important model organism, is used to explore the mechanisms that govern cell differentiation and embryonic development. Understanding these mechanisms will help reveal the effects of genes in other species, including humans. Digital camera techniques now make high-quality imaging of Drosophila gene expression possible, and, thanks to advances in biology, gene expression images revealing spatiotemporal patterns are generated at a high-throughput pace. Thus, an automated and efficient system that can analyze gene expression will become a necessary tool for investigating gene functions, interactions, and developmental processes. One investigation method is to compare the expression patterns of different developmental stages. Currently, however, expression patterns are manually annotated with rough stage ranges, work that requires professional knowledge from experienced biologists. How to transfer this biological domain knowledge into an automated annotation system therefore poses a challenging problem for computer scientists. In this thesis, the problem of stage annotation for Drosophila embryos is modeled in a machine learning framework. Three sparse learning algorithms and one ensemble algorithm are used to attack the problem: the sparse algorithms are Lasso, group Lasso, and sparse group Lasso, and the ensemble algorithm is based on a voting method. Besides annotating the patterns to individual stages, rather than stage ranges, with high accuracy, the proposed decimal stage annotation algorithm presents a novel way to annotate the patterns to decimal stages. In addition, analyses of the algorithms' performance are presented with corresponding explanations. Finally, with the proposed system, all lateral-view BDGP and FlyFish images are annotated, and several interesting applications of the decimal stage value are revealed.
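For readers unfamiliar with the sparse learners named above, the following sketch shows plain Lasso selecting a small set of informative features when regressing a (decimal) stage value on synthetic stand-ins for image-derived features. The data and regularization strength are illustrative; group Lasso and sparse group Lasso, which are not in scikit-learn's core, extend the same idea to predefined feature groups.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

# Synthetic stand-in for image-derived features: each embryo image is a
# feature vector, and the target is a noisy (decimal) stage value.
rng = np.random.default_rng(0)
n, d = 500, 200
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:10] = rng.normal(size=10)            # only a few features matter
stage = X @ true_w + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, stage, random_state=0)

# Lasso's L1 penalty drives most coefficients to exactly zero, selecting
# a sparse set of informative features.
model = Lasso(alpha=0.05).fit(X_tr, y_tr)
print("nonzero coefficients:", np.sum(model.coef_ != 0))
print("test R^2:", round(model.score(X_te, y_te), 3))
```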
Contributors: Pan, Cheng (Author) / Ye, Jieping (Thesis advisor) / Li, Baoxin (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Social networking platforms have redefined communication, serving as conduits for swift global dissemination of information on contemporary topics and trends. This research probes information cascade (IC) dynamics, focusing on viral ICs, in which user-shared information gains rapid, widespread attention. The implications of ICs span advertising, persuasion, opinion-shaping, and crisis response. First, this dissertation aims to unravel the context behind viral content in the digital world, introducing a semi-supervised taxonomy induction framework (STIF). STIF employs state-of-the-art term representation, topical phrase detection, and clustering to organize terms into a two-level topic taxonomy; social scientists then assess the topic clusters for coherence and completeness. STIF proves effective, significantly reducing human coding effort (by up to 74%) while accurately inducing taxonomies and term-to-topic mappings thanks to the high purity of its topics. Second, to profile the drivers of virality, this study investigates the messaging strategies that influence message virality. Three content-based hypotheses are formulated and tested, demonstrating that the incorporation of "negativity bias," "causal arguments," and "threats to personal or societal core values," singly and jointly, significantly enhances message virality on social media, as quantified by retweet counts. Furthermore, the study highlights the pivotal role of framing narratives in shaping discourse, particularly in adversarial campaigns. An innovative pipeline for automatic framing detection is introduced and tested on a collection of texts on the Russia-Ukraine conflict. Integrating representation learning, overlapping graph clustering, and a unique Topic Actor Graph (TAG) synthesis method, the study achieves remarkable framing detection accuracy, with a scoring mechanism that maps sentences to automatically detected framing signatures. This pipeline attains an F1 score of 92% and a weighted accuracy of 95% for framing detection on a real-world dataset. In essence, this dissertation offers a multidimensional exploration of information cascades, uncovering the context and drivers of content virality and automating framing detection. Through innovative methodologies such as STIF, messaging strategy analysis, and TAG Frames, the research contributes valuable insights into the mechanics of viral content spread and framing nuances within the digital landscape, enriching fields such as advertising, communication, public discourse, and crisis response strategies.
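As a toy illustration of STIF's cluster-then-assess step, the sketch below groups TF-IDF term vectors from a handful of synthetic posts into topic clusters and surfaces each cluster's top terms, roughly the artifact a social scientist would then check for coherence. The corpus, cluster count, and resulting labels are all illustrative, not STIF's actual representations.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus; STIF operates on term representations learned from large
# social-media collections, so this only conveys the clustering step.
docs = [
    "vaccine mandate booster immunity dose",
    "booster dose vaccine clinic immunity",
    "election ballot vote recount fraud",
    "vote ballot turnout election polls",
    "inflation prices economy interest rates",
    "economy jobs inflation interest growth",
]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Label each topic cluster with its highest-weight terms, a stand-in
# for the human coherence check described in the dissertation.
terms = np.array(vec.get_feature_names_out())
for c in range(3):
    top = terms[np.argsort(km.cluster_centers_[c])[::-1][:3]]
    print(f"topic {c}: {', '.join(top)}")
```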
Contributors: Mousavi, Maryam (Author) / Davulcu, Hasan (Thesis advisor) / Li, Baoxin (Committee member) / Corman, Steven (Committee member) / McDaniel, Troy (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This thesis presents robust and novel solutions using knowledge distillation with geometric approaches and multimodal data to address current challenges in deep learning, providing a comprehensive understanding of the learning process involved in knowledge distillation. Deep learning has attained significant success in various applications, such as health and wellness promotion, smart homes, and intelligent surveillance. In general, stacking more layers or increasing the number of trainable parameters improves a deep network's performance, but it also makes the model large, demanding additional computing and power resources for training, storage, and deployment. These demands are the core challenges in incorporating such models into small devices with limited power and computational resources. This thesis presents robust solutions to these challenges: methodologies and algorithmic contributions that enhance the performance and efficiency of deep learning models. It encompasses a comprehensive exploration of knowledge distillation, an approach that holds promise for creating compact models from high-capacity ones while preserving their performance. This exploration covers diverse datasets, including both time-series and image data, and sheds light on the pivotal role of augmentation methods in knowledge distillation, whose effects are rigorously examined through empirical experiments. The thesis further examines the efficient use of features derived from two teacher models trained on dissimilar data representations, namely time-series and image data. Through these investigations, I present novel approaches to knowledge distillation that leverage geometric techniques for the analysis of multimodal data. These solutions not only address real-world challenges but also offer valuable insights and recommendations for modeling in new applications.
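The standard knowledge-distillation objective, which work in this area builds on, combines a soft-label term (KL divergence between temperature-softened teacher and student outputs) with ordinary cross-entropy on hard labels. A minimal PyTorch sketch follows; the temperature and mixing weight are illustrative, and the thesis's geometric and multimodal extensions go beyond this basic recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Classic knowledge-distillation objective: a weighted sum of
    (a) KL divergence between the teacher's and student's
    temperature-softened output distributions, scaled by T^2 to keep
    gradient magnitudes comparable across temperatures, and
    (b) cross-entropy against the hard labels."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Example: a batch of 8 samples over 10 classes.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```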
Contributors: Jeon, Eunsom (Author) / Turaga, Pavan (Thesis advisor) / Li, Baoxin (Committee member) / Lee, Hyunglae (Committee member) / Jayasuriya, Suren (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Image denoising, a fundamental task in computer vision, poses significant challenges due to its inherently inverse and ill-posed nature. Despite advancements in traditional methods and supervised learning approaches, particularly in medical imaging such as Magnetic Resonance Imaging (MRI) scans, the reliance on paired datasets and known noise distributions remains a practical hurdle. Recent progress in noise statistical-independence theory and diffusion models has revitalized research interest, offering promising avenues for unsupervised denoising. However, existing methods often yield overly smoothed results or introduce hallucinated structures, limiting their clinical applicability. This thesis tackles the core challenge of progressing toward unsupervised denoising of MRI scans, aiming to retain intricate details without over-smoothing or introducing artificial structures and thus ensuring the production of high-quality MRI images. The thesis makes a three-fold contribution. First, it presents a detailed analysis of traditional techniques, early machine learning algorithms for denoising, and newer statistics-based models, with an extensive evaluation study of self-supervised denoising methods highlighting their limitations. Second, it conducts an evaluation study of an emerging class of diffusion-based denoising methods, accompanied by additional empirical findings and discussions of their effectiveness and limitations, and proposes solutions to enhance their utility. Last, it introduces a novel approach: unsupervised Multi-stage Ensemble Deep Learning with diffusion models for denoising MRI scans (MEDL). Leveraging diffusion models, this approach operates independently of signal or noise priors and incorporates weighted rescaling of multi-stage reconstructions to balance over-smoothing and hallucination tendencies. Evaluation on benchmark datasets demonstrates average gains of 1 dB in PSNR and 2% in SSIM over existing approaches.
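To illustrate the weighted combination of multi-stage reconstructions and the PSNR metric mentioned above, the sketch below mixes three synthetic "stage" reconstructions of a toy image with fixed weights. The weights, noise levels, and data are illustrative only and are not those learned or used by MEDL.

```python
import numpy as np

def psnr(x, ref, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((x - ref) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))  # stand-in for a ground-truth MRI slice

# Stand-ins for reconstructions from successive diffusion stages;
# later stages here are made progressively less noisy.
stages = [np.clip(clean + rng.normal(0, s, clean.shape), 0, 1)
          for s in (0.30, 0.20, 0.10)]

# Weighted combination of the stage outputs (illustrative weights),
# trading off the stages' differing noise and smoothing behavior.
weights = np.array([0.1, 0.3, 0.6])
ensemble = np.tensordot(weights, np.stack(stages), axes=1)

for i, s in enumerate(stages):
    print(f"stage {i}: PSNR = {psnr(s, clean):.2f} dB")
print(f"ensemble: PSNR = {psnr(ensemble, clean):.2f} dB")
```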
Contributors: Vora, Sahil (Author) / Li, Baoxin (Thesis advisor) / Wang, Yalin (Committee member) / Zhou, Yuxiang (Committee member) / Arizona State University (Publisher)
Created: 2024