This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses in the ASU Digital Repository, ASU Theses and Dissertations can also be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 10

Description

Molecular Dynamics (MD) simulations are ubiquitous throughout the physical sciences; they are critical in understanding how particle structures evolve over time given a particular energy function. A software package called ParSplice introduced a new method to generate these simulations in parallel that has significantly increased their length. Typically, simulations are short discrete Markov chains, only capturing a few microseconds of a particle's behavior and containing tens of thousands of transitions between states; in contrast, a typical ParSplice simulation can be as long as a few milliseconds, containing tens of millions of transitions. Naturally, sifting through data of this size is impossible by hand, and there are a number of visualization systems that provide comprehensive and intuitive analyses of particle structures throughout MD simulations. However, no visual analytics systems have been built that can manage the simulations that ParSplice produces. To analyze these large datasets, I built a visual analytics system that provides multiple coordinated views that simultaneously describe the data temporally, within its structural context, and based on its properties. The system provides fluid and powerful user interactions regardless of the size of the data, allowing the user to drill down into the dataset to get detailed insights, as well as run and save various calculations, most notably the Nudged Elastic Band method. The system also allows the comparison of multiple trajectories, revealing more information about the general behavior of particles at different temperatures, energy states, etc.
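The Nudged Elastic Band (NEB) method mentioned in this abstract is a standard technique for finding the minimum-energy transition path between two stable particle configurations. As a minimal illustrative sketch only (the toy 2D energy surface, spring constant, and step size below are assumptions for demonstration, not details from the thesis), the core idea looks like this:

```python
import numpy as np

# Toy 2D double-well energy surface: minima near (-1, 0) and (1, 0),
# with a curved minimum-energy path between them (illustrative assumption).
def energy(p):
    x, y = p
    return (x**2 - 1)**2 + 2 * (y - 0.2 * (1 - x**2))**2

def gradient(p, eps=1e-6):
    # Central-difference numerical gradient of the energy.
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2)
        d[i] = eps
        g[i] = (energy(p + d) - energy(p - d)) / (2 * eps)
    return g

def neb(start, end, n_images=12, k=5.0, step=2e-3, iters=5000):
    # Initialize the band as a straight chain of images between the end states.
    path = np.linspace(start, end, n_images)
    for _ in range(iters):
        for i in range(1, n_images - 1):
            tau = path[i + 1] - path[i - 1]
            tau /= np.linalg.norm(tau)              # unit tangent along the band
            g = gradient(path[i])
            g_perp = g - np.dot(g, tau) * tau       # keep only the perpendicular true force
            spring = k * (np.linalg.norm(path[i + 1] - path[i])
                          - np.linalg.norm(path[i] - path[i - 1]))
            path[i] -= step * (g_perp - spring * tau)  # "nudged" update
    return path

band = neb(np.array([-1.0, 0.0]), np.array([1.0, 0.0]))
saddle = max(band, key=energy)  # highest-energy image approximates the transition state
print("approximate saddle point:", saddle)
```

The highest-energy image on the converged band approximates the transition state, which is why NEB results are a natural quantity for a visual analytics system to compute and save per trajectory segment.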
Contributors: Hnatyshyn, Rostyslav (Author) / Maciejewski, Ross (Thesis advisor) / Bryan, Chris (Committee member) / Ahrens, James (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Distributed databases, such as Log-Structured Merge-Tree Key-Value Stores (LSM-KVS), are widely used in modern infrastructure. One of the primary challenges in these databases is ensuring consistency, meaning that all nodes have the same view of data at any given time. However, maintaining consistency requires a trade-off: the stronger the consistency, the more resources are necessary to replicate data across replicas, which decreases database performance. Addressing this trade-off poses two challenges: first, developing and managing multiple consistency levels within a single system, and second, assigning consistency levels to effectively balance the consistency-performance trade-off. This thesis introduces Self-configuring Consistency In Distributed LSM-KVS (SCID), a service that leverages unique properties of LSM-KVS to manage consistency levels and automates level assignment with machine learning (ML). To address the first challenge, SCID combines dynamic read-only instances and logical KV-based partitions, enabling on-demand updates of read-only instances and a logical separation of groups of key-value pairs. SCID uses logical partitions as consistency levels and on-demand updates in dynamic read-only instances to support multiple consistency levels. To address the second challenge, the thesis presents an ML-based solution, SCID-ML, to manage the consistency-performance trade-off more effectively. We evaluate SCID and find that it improves write throughput by up to 50% and achieves 62% accuracy for consistency-level predictions.
Contributors: Thakkar, Viraj Deven (Author) / Cao, Zhichao (Thesis advisor) / Xiao, Xusheng (Thesis advisor) / Bryan, Chris (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Machine learning models are increasingly being deployed in real-world applications where their predictions are used to make critical decisions in a variety of domains. The proliferation of such models has led to a burgeoning need to ensure the reliability and safety of these models, given the potential negative consequences of model vulnerabilities. The complexity of machine learning models, along with the extensive data sets they analyze, can result in unpredictable and unintended outcomes. Model vulnerabilities may manifest due to errors in data input, algorithm design, or model deployment, which can have significant implications for both individuals and society. To prevent such negative outcomes, it is imperative to identify model vulnerabilities at an early stage in the development process. This will aid in guaranteeing the integrity, dependability, and safety of the models, thus mitigating potential risks and enabling the full potential of these technologies to be realized. However, enumerating vulnerabilities can be challenging due to the complexity of the real-world environment. Visual analytics, situated at the intersection of human-computer interaction, computer graphics, and artificial intelligence, offers a promising approach for achieving high interpretability of complex black-box models, thus reducing the cost of obtaining insights into potential vulnerabilities of models. This research is devoted to designing novel visual analytics methods to support the identification and analysis of model vulnerabilities. Specifically, generalizable visual analytics frameworks are instantiated to explore vulnerabilities in machine learning models concerning security (adversarial attacks and data perturbation) and fairness (algorithmic bias). In the end, a visual analytics approach is proposed to enable domain experts to explain and diagnose the model improvement of addressing identified vulnerabilities of machine learning models in a human-in-the-loop fashion. The proposed methods hold the potential to enhance the security and fairness of machine learning models deployed in critical real-world applications.
Contributors: Xie, Tiankai (Author) / Maciejewski, Ross (Thesis advisor) / Liu, Huan (Committee member) / Bryan, Chris (Committee member) / Tong, Hanghang (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

As people begin to live longer and the population shifts to having more older adults on Earth than young children, radical solutions will be needed to ease the burden on society. It will be essential to develop technology that can age with the individual. One solution is to keep older adults in their homes longer through smart home and smart living technology, allowing them to age in place. People have many choices when deciding where to age in place, including their own homes, assisted living facilities, nursing homes, or with family members. No matter where people choose to age, they may face isolation and financial hardships. It is crucial to keep finances in mind when developing smart home technology. Smart home technologies seek to allow individuals to stay inside their homes for as long as possible, yet little work looks at how we can use technology in different life stages. Robots are poised to impact society and ease burdens at home and in the workforce. Special attention has been given to social robots to ease isolation. As social robots become accepted into society, researchers need to understand how these robots should mimic natural conversation. My work attempts to answer this question within social robotics by investigating how to make conversational robots natural and reciprocal. I investigated this through a 2x2 Wizard of Oz between-subjects user study. The study lasted four months, testing four different levels of interactivity with the robot. None of the levels were significantly different from the others, an unexpected result. I then investigated how the robot's personality, the participant's trust, and the participant's acceptance of the robot influenced the study results.
Contributors: Miller, Jordan (Author) / McDaniel, Troy (Thesis advisor) / Michael, Katina (Committee member) / Cooke, Nancy (Committee member) / Bryan, Chris (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Data integration involves the reconciliation of data from diverse data sources in order to obtain a unified data repository, upon which an end user such as a data analyst can run analytics sessions to explore the data and obtain useful insights. Supervised Machine Learning (ML) for data integration tasks such as ontology (schema) or entity (instance) matching requires many training examples in the form of manually curated, pre-labeled matching and non-matching schema concept or entity pairs, which are hard to obtain. Similarly, an analytics system without predictive capabilities about the impending workload can incur large querying latencies, while leaving the onus of understanding the underlying database schema and writing a meaningful query at every step of a data exploration session on the user. In this dissertation, I will describe the human-in-the-loop Machine Learning (ML) systems that I have built for data integration and predictive analytics. I alleviate the need for extensive prior labeling by utilizing active learning (AL) for data integration. In each AL iteration, I detect the unlabeled entity or schema concept pairs that would strengthen the ML classifier and selectively query the human oracle for such labels in a budgeted fashion. Thus, I make use of human assistance for ML-based data integration. On the other hand, when the human is an end user exploring data through Online Analytical Processing (OLAP) queries, my goal is to proactively assist the human by predicting the top-K next queries that they are likely to be interested in. I will describe my proposed SQL-predictor, a Business Intelligence (BI) query predictor, and a geospatial query cardinality estimator, with an emphasis on schema abstraction, query representation, and how I adapt the ML models for these tasks. For each system, I will discuss the evaluation metrics and how the proposed systems compare to state-of-the-art baselines on multiple datasets and query workloads.
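The budgeted active-learning loop described here can be illustrated with a generic pool-based uncertainty-sampling sketch (the logistic-regression classifier, feature representation, and budget below are illustrative assumptions for demonstration, not the dissertation's actual components):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_pool, oracle_label, X_seed, y_seed,
                         budget=100, batch_size=10):
    # X_pool: feature vectors of unlabeled candidate entity/schema pairs (ndarray).
    # oracle_label: callable that asks the human oracle for a 0/1 match label.
    # X_seed/y_seed: a small labeled seed set (must contain both classes).
    X_train, y_train = list(X_seed), list(y_seed)
    unlabeled = list(range(len(X_pool)))
    clf = LogisticRegression(max_iter=1000)
    while budget > 0 and unlabeled:
        clf.fit(np.array(X_train), np.array(y_train))
        # Uncertainty sampling: pairs whose match probability is closest to 0.5
        # are the ones most likely to strengthen the classifier.
        proba = clf.predict_proba(X_pool[unlabeled])[:, 1]
        order = np.argsort(np.abs(proba - 0.5))
        picked = [unlabeled[i] for i in order[:min(batch_size, budget)]]
        for idx in picked:
            X_train.append(X_pool[idx])
            y_train.append(oracle_label(idx))  # query the human oracle
        unlabeled = [u for u in unlabeled if u not in picked]
        budget -= len(picked)
    return clf
```

Each iteration spends a slice of the labeling budget on the most informative pairs, which is the essence of making human assistance "budgeted" rather than requiring extensive up-front labeling.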

Contributors: Meduri, Venkata Vamsikrishna (Author) / Sarwat, Mohamed (Thesis advisor) / Bryan, Chris (Committee member) / Liu, Huan (Committee member) / Ozcan, Fatma (Committee member) / Popa, Lucian (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Working memory plays an important role in human activities across academic, professional, and social settings. Working memory is defined as the memory extensively involved in goal-directed behaviors in which information must be retained and manipulated to ensure successful task execution. The aim of this research is to understand the effect of image captioning with image description on an individual's working memory. A study was conducted with eight neutral images depicting situations relatable to daily life, such that each image could have a positive or negative description associated with the outcome of the situation in the image. The study consisted of three rounds: the first and second rounds involved two parts each, and the third round consisted of one part. Each image was captioned a total of five times across the entire study. The findings highlighted that only 25% of participants were able to recall the captions they had written for an image after a span of 9-15 days; when comparing the recall rate of the captions, 50% of participants were able to recall their image caption from the previous round in the present round; and of the positive and negative descriptions associated with the images, 65% of participants recalled the former rather than the latter. The conclusions drawn from the study are that participants tend to retain information for longer periods than the expected duration of working memory, which may be because they could relate the images to situations in their everyday lives, and that, given both positive and negative information about a situation, the human brain favors positive information over negative information.
Contributors: Uppara, Nithiya Shree (Author) / McDaniel, Troy (Thesis advisor) / Venkateswara, Hemanth (Thesis advisor) / Bryan, Chris (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Component-based models are commonly employed to simulate discrete dynamical systems. These models lend themselves to formalizing the structures of systems at multiple levels of granularity. Visual development of component-based models serves to simplify the iterative and incremental model specification activities. The Parallel Discrete Event System Specification (DEVS) formalism offers a flexible yet rigorous approach for decomposing a whole model into its components or, alternatively, composing a whole model from components. While different concepts, frameworks, and tools offer a variety of visual modeling capabilities, most have limitations, such as the inability to visualize multiple model hierarchies at any level with arbitrary depths. The visual and persistent layout of any number of hierarchy levels of models should be maintainable and navigable seamlessly. Persistent storage is another capability needed across the modeling, simulating, verifying, and validating lifecycle. These are important features for improving the demanding task of creating and changing modular, hierarchical simulation models. This thesis proposes a new approach and develops a tool for the visual development of models. The tool supports storing and reconstructing graphical models using a NoSQL database. It offers unique capabilities important for developing increasingly larger and more complex models essential for analyzing, designing, and building Digital Twins.
Contributors: Mohite, Sheetal Chandrakant (Author) / Sarjoughian, Hessam S (Thesis advisor) / Bryan, Chris (Committee member) / Pavlic, Theodore (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Mid-air ultrasound haptic technology can enhance user interaction and immersion in extended reality (XR) applications through contactless touch feedback. However, existing design tools for mid-air haptics primarily support the creation of static tactile sensations (tactons), which lack adaptability at runtime. These tactons do not offer the required expressiveness in interactive scenarios where a continuous closed-loop response to user movement or environmental states is desirable. This thesis proposes AdapTics, a toolkit featuring a graphical interface for the rapid prototyping of adaptive tactons—dynamic sensations that can adjust at runtime based on user interactions, environmental changes, or other inputs. A software library and a Unity package accompany the graphical interface to enable integration of adaptive tactons in existing applications. The design space provided by AdapTics for creating adaptive mid-air ultrasound tactons is presented, along with evidence that the design tool enhances Creativity Support Index ratings for Exploration and Expressiveness, as demonstrated in a user study involving 12 XR and haptic designers.
Contributors: John, Kevin (Author) / Seifi, Hasti (Thesis advisor) / Bryan, Chris (Committee member) / Schneider, Oliver (Committee member) / Arizona State University (Publisher)
Created: 2024
Description

Augmented Reality (AR) has progressively demonstrated its helpfulness for novices to learn highly complex and abstract concepts by visualizing details in an immersive environment. However, some studies show that similar results can also be obtained in environments that do not involve AR. To explore the potential of AR in advancing transformative engagement in education, I propose modeling facial expressions as implicit feedback while one is immersed in the environment. I developed a Unity application to record and log the users' application operations and facial images. A neural network-based model, Visual Geometry Group 19 (VGG19; Simonyan and Zisserman, 2014), is adopted to recognize emotions from the captured facial images. A within-subject user study was designed and conducted to assess the differences in sentiment and user engagement between AR and non-AR tasks. To analyze the collected data, Dynamic Time Warping (DTW) was applied to identify the emotional similarities between AR and non-AR environments. The results indicate that users showed an increase in emotion patterns and application operations throughout the AR tasks in comparison to non-AR tasks. The emotion patterns observed in the analysis show that non-AR tasks provide less implicit feedback than AR tasks. The DTW analysis reveals that users' emotion change patterns appear to be more distant from neutral emotions in AR than in non-AR tasks. Succinctly put, users in the AR task demonstrated more active use of the application and displayed a wider range of emotions while operating it.
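Dynamic Time Warping, the alignment technique this abstract applies to compare emotion sequences between AR and non-AR sessions, can be sketched in a few lines (the 1-D emotion-intensity traces and absolute-difference cost below are illustrative assumptions, not the thesis's actual data or distance function):

```python
import numpy as np

def dtw_distance(a, b):
    # Classic O(n*m) DTW: align two sequences of possibly different lengths
    # by warping the time axis, accumulating the minimal local cost.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])         # local distance
            D[i, j] = cost + min(D[i - 1, j],       # insertion
                                 D[i, j - 1],       # deletion
                                 D[i - 1, j - 1])   # match
    return D[n, m]

# E.g., comparing two per-frame emotion-intensity traces of different lengths:
ar_trace     = [0.1, 0.4, 0.9, 0.7, 0.3]
non_ar_trace = [0.1, 0.2, 0.3, 0.2]
print(dtw_distance(ar_trace, non_ar_trace))
```

Because DTW tolerates differences in pacing, it suits comparing emotion trajectories from sessions of unequal duration, such as an AR task against its non-AR counterpart.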
Contributors: Papakannu, Kushal Reddy (Author) / Hsiao, Ihan (Thesis advisor) / Bryan, Chris (Committee member) / Glenberg, Mina Johnson (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Visual Question Answering (VQA) is an increasingly important multi-modal task where models must answer textual questions based on visual image inputs. Numerous VQA datasets have been proposed to train and evaluate models. However, existing benchmarks exhibit a unilateral focus on textual distribution shifts rather than joint shifts across modalities. This is suboptimal for properly assessing model robustness and generalization. To address this gap, a novel multi-modal VQA benchmark dataset is introduced for the first time. This dataset combines both visual and textual distribution shifts across training and test sets. Using this challenging benchmark exposes vulnerabilities in existing models relying on spurious correlations and overfitting to dataset biases. The novel dataset advances the field by enabling more robust model training and rigorous evaluation of multi-modal distribution shift generalization. In addition, a new few-shot multi-modal prompt fusion model is proposed to better adapt models for downstream VQA tasks. The model incorporates a prompt encoder module and dual-path design to align and fuse image and text prompts. This represents a novel prompt learning approach tailored for multi-modal learning across vision and language. Together, the introduced benchmark dataset and prompt fusion model address key limitations around evaluating and improving VQA model robustness. The work expands the methodology for training models resilient to multi-modal distribution shifts.
Contributors: Jyothi Unni, Suraj (Author) / Liu, Huan (Thesis advisor) / Davulcu, Hasan (Committee member) / Bryan, Chris (Committee member) / Arizona State University (Publisher)
Created: 2023