Matching Items (28)
Description

In this Barrett Honors Thesis, I developed a model to quantify the complexity of Sankey diagrams, a visualization technique that shows flow between groups. To do this, I created a carefully controlled dataset of synthetic Sankey diagrams of varying sizes as study stimuli. A pair of online crowdsourced user studies was then conducted and analyzed. User performance on Sankey diagrams of varying size and features (number of groups, number of timesteps, and number of flow crossings) was algorithmically modeled as a formula that quantifies the complexity of these diagrams. Model accuracy was measured against the performance of users in the second crowdsourced study. The results of my experiment demonstrate that the algorithmic complexity formula I created closely models the visual complexity of the Sankey diagrams in the dataset.
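The abstract does not give the formula itself, but a minimal sketch of the general approach might fit a complexity score as a linear function of the three named features; all data and coefficients below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical stimuli: one row per synthetic Sankey diagram, with columns
# (number of groups, number of timesteps, number of flow crossings).
features = np.array([
    [4, 3, 2],
    [6, 3, 5],
    [8, 4, 12],
    [10, 5, 21],
])
# Hypothetical difficulty scores derived from user-study performance
# (e.g., error rate) on each diagram.
observed_difficulty = np.array([0.12, 0.25, 0.47, 0.68])

# Fit complexity = w1*groups + w2*timesteps + w3*crossings + b.
model = LinearRegression().fit(features, observed_difficulty)
print("weights:", model.coef_, "intercept:", model.intercept_)

# Validate by predicting the complexity of a held-out diagram, as the
# second user study does with fresh participants.
print("predicted complexity:", model.predict([[7, 4, 9]]))
```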

Contributors: Ginjpalli, Shashank (Author) / Bryan, Chris (Thesis director) / Hsiao, Sharon (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The goal of this thesis project was to develop a digital, quantitative assessment of executive functioning skills and problem-solving abilities. This assessment was intended to serve as a relative measure of executive functions and problem-solving abilities rather than a diagnosis; the main purpose was to identify areas for improvement and provide individuals with an understanding of their current ability levels. To achieve this goal, we developed a web-based assessment in Unity that used gamelike modifications of the Flanker, Antisaccade, Embedded Images, Raven’s Matrices, and Color/Order Memory tasks. Participants were invited to complete the assessment at www.ExecutiveFunctionLevel.com, and their results were analyzed. The findings of this project indicate that these tasks accurately represent executive functioning skills, that the Flanker effect is present in the collected data, and that there are notable correlations among the REFLEX challenges. In conclusion, we successfully developed a short, gamelike, online assessment of executive functioning and problem-solving abilities. Future development of REFLEX could look into immediate scoring, a mobile application, and external validation of the results.
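As context for the Flanker finding: the Flanker effect is conventionally computed as the mean reaction-time cost of conflicting flankers. A minimal sketch with hypothetical reaction times:

```python
import statistics

# Hypothetical per-trial reaction times (ms) from a Flanker-style task.
congruent_rts = [412, 398, 441, 420, 405]    # flankers match the target
incongruent_rts = [468, 455, 490, 472, 461]  # flankers conflict with the target

# The Flanker effect is the slowdown caused by conflicting flankers.
flanker_effect = statistics.mean(incongruent_rts) - statistics.mean(congruent_rts)
print(f"Flanker effect: {flanker_effect:.1f} ms")  # a positive value indicates the effect
```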

Contributors: Anderson, Gabriel (Co-author) / Anderson, Mikayla (Co-author) / Brewer, Gene (Thesis director) / Kobayashi, Yoshihiro (Committee member) / Johnson, Mina (Committee member) / Computer Science and Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The goal of this thesis project was to develop a digital, quantitative assessment of executive functioning skills and problem-solving abilities. This assessment was intended to serve as a relative measure of executive functions and problem-solving abilities rather than a diagnosis; the main purpose was to identify areas for improvement and provide individuals with an understanding of their current ability levels. To achieve this goal, we developed a web-based assessment in Unity that used gamelike modifications of the Flanker, Antisaccade, Embedded Images, Raven’s Matrices, and Color/Order Memory tasks. Participants were invited to complete the assessment at www.ExecutiveFunctionLevel.com, and their results were analyzed. The findings of this project indicate that these tasks accurately represent executive functioning skills, that the Flanker effect is present in the collected data, and that there are notable correlations among the REFLEX challenges. In conclusion, we successfully developed a short, gamelike, online assessment of executive functioning and problem-solving abilities. Future development of REFLEX could look into immediate scoring, a mobile application, and external validation of the results.

Contributors: Anderson, Mikayla (Co-author) / Anderson, Gabriel (Co-author) / Brewer, Gene (Thesis director) / Kobayashi, Yoshihiro (Committee member) / Johnson, Mina (Committee member) / Department of Finance (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

Aphasia is an impairment that affects many aspects of language and makes it more difficult for a person to communicate with those around them. Treatment for aphasia is often administered by a speech-language pathologist in a clinical setting, but researchers have recently begun exploring the potential of virtual reality (VR) interventions. VR provides an immersive environment and can allow multiple users to interact with digitized content. This exploratory paper proposes the design of a VR rehabilitation game, called Pact, for adults with aphasia that aims to improve users' word-finding and picture-naming abilities and, in turn, their communication skills. Additionally, a study is proposed to assess how well Pact improves these abilities when used in conjunction with speech therapy. If the study shows an increase in word-finding and picture-naming scores compared to the control group (patients receiving traditional speech therapy alone), the results would indicate that Pact achieves its goal of promoting improvement in these domains. There is a further need to examine VR interventions for aphasia, particularly with larger sample sizes and with attention to the gains and design issues associated with multi-user VR programs.
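The abstract does not specify the statistical analysis for the proposed study; one common choice for comparing the two groups' scores would be an independent-samples t-test, sketched here with hypothetical data:

```python
from scipy import stats

# Hypothetical post-treatment picture-naming scores for the two groups.
pact_plus_therapy = [78, 85, 81, 90, 76, 88]   # Pact + speech therapy
therapy_only      = [72, 74, 79, 70, 75, 73]   # control: speech therapy alone

# An independent-samples t-test asks whether the group means differ.
t_stat, p_value = stats.ttest_ind(pact_plus_therapy, therapy_only)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would suggest a real gain
```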

Contributors: Gringorten, Rachel (Author) / Johnson, Mina (Thesis director) / Rogalsky, Corianne (Committee member) / English, Stephen (Committee member) / Barrett, The Honors College (Contributor) / Department of Psychology (Contributor) / College of Health Solutions (Contributor) / School of Music, Dance and Theatre (Contributor)
Created: 2023-05
Description

The impact of Artificial Intelligence (AI) has increased significantly in daily life. AI is taking big strides into critical areas of life such as healthcare, but also into areas such as entertainment and leisure. Deep neural networks have been pivotal in making all these advancements possible. But a well-known problem with deep neural networks is the lack of explanations for the choices they make. To combat this, several methods have been tried in the research community. One example is assigning rankings to individual features according to how influential they are in the decision-making process. In contrast, a newer class of methods centers on Concept Activation Vectors (CAVs), which extract higher-level concepts from the trained model to capture information as a mixture of several features rather than just one. The goal of this thesis is to employ concepts in a novel domain: to explain how a deep learning model uses computer vision to classify music into different genres. Owing to advances in computer vision with deep learning for classification tasks, it is now standard practice to convert an audio clip into corresponding spectrograms and use those spectrograms as image inputs to the deep learning model. Thus, a pre-trained model can classify the spectrogram images (representing songs) into musical genres. The proposed explanation system, called “Why Pop?”, tries to answer certain questions about the classification process, such as what parts of the spectrogram influence the model the most, what concepts were extracted, and how they differ across classes. These explanations help the user gain insight into the model's learnings, biases, and decision-making process.
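The audio-to-spectrogram step the abstract describes is standard practice; here is a minimal sketch using the librosa library (the thesis's actual preprocessing parameters are not given, so the values below are illustrative):

```python
import librosa
import numpy as np

# Load a 30-second clip (the path is a placeholder).
y, sr = librosa.load("song_clip.wav", duration=30.0)

# Convert the waveform to a mel spectrogram, then to decibel scale,
# which is the usual "image" representation fed to a vision model.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)

print(mel_db.shape)  # (128 mel bands, time frames): a 2D image-like array
# This array would then be normalized and passed to a pre-trained CNN
# whose output layer scores each musical genre.
```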
Contributors: Sharma, Shubham (Author) / Bryan, Chris (Thesis advisor) / McDaniel, Troy (Committee member) / Sarwat, Mohamed (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Data integration involves the reconciliation of data from diverse data sources in order to obtain a unified data repository, upon which an end user such as a data analyst can run analytics sessions to explore the data and obtain useful insights. Supervised Machine Learning (ML) for data integration tasks such as ontology (schema) or entity (instance) matching requires many training examples in the form of manually curated, pre-labeled matching and non-matching schema-concept or entity pairs, which are hard to obtain. Along similar lines, an analytics system without predictive capabilities about the impending workload can incur huge querying latencies, while leaving the onus of understanding the underlying database schema and writing a meaningful query at every step of a data exploration session on the user. In this dissertation, I describe the human-in-the-loop Machine Learning (ML) systems that I have built for data integration and predictive analytics. I alleviate the need for extensive prior labeling by utilizing active learning (AL) for data integration. In each AL iteration, I detect the unlabeled entity or schema-concept pairs that would most strengthen the ML classifier and selectively query the human oracle for those labels in a budgeted fashion. Thus, I make use of human assistance for ML-based data integration. On the other hand, when the human is an end user exploring data through Online Analytical Processing (OLAP) queries, my goal is to proactively assist them by predicting the top-K next queries they are likely to be interested in. I describe my proposed SQL predictor, a Business Intelligence (BI) query predictor, and a geospatial query cardinality estimator, with an emphasis on schema abstraction, query representation, and how I adapt the ML models to these tasks. For each system, I discuss the evaluation metrics and how the proposed systems compare to state-of-the-art baselines on multiple datasets and query workloads.
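The abstract does not detail how the informative pairs are detected; a classic instantiation of such an AL loop is uncertainty sampling, sketched below with hypothetical similarity features and a simulated oracle:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical similarity-feature vectors for candidate entity pairs
# (e.g., name similarity, address similarity, ...).
unlabeled = rng.random((200, 4))
labeled_X = rng.random((10, 4))
labeled_y = np.array([0, 1] * 5)  # 1 = match, 0 = non-match (seed labels)

for _ in range(5):  # budgeted AL iterations
    clf = LogisticRegression().fit(labeled_X, labeled_y)
    # Uncertainty sampling: pick the pair whose match probability is closest to 0.5.
    proba = clf.predict_proba(unlabeled)[:, 1]
    idx = int(np.argmin(np.abs(proba - 0.5)))
    # A real system would show this pair to a human oracle; here we fake the label.
    new_label = int(proba[idx] > 0.5)
    labeled_X = np.vstack([labeled_X, unlabeled[idx]])
    labeled_y = np.append(labeled_y, new_label)
    unlabeled = np.delete(unlabeled, idx, axis=0)
```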

Contributors: Meduri, Venkata Vamsikrishna (Author) / Sarwat, Mohamed (Thesis advisor) / Bryan, Chris (Committee member) / Liu, Huan (Committee member) / Ozcan, Fatma (Committee member) / Popa, Lucian (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Distributed databases, such as Log-Structured Merge-Tree Key-Value Stores (LSM-KVS), are widely used in modern infrastructure. One of the primary challenges in these databases is ensuring consistency, meaning that all nodes have the same view of the data at any given time. However, maintaining consistency requires a trade-off: the stronger the consistency, the more resources are necessary to replicate data across replicas, which decreases database performance. Addressing this trade-off poses two challenges: first, developing and managing multiple consistency levels within a single system, and second, assigning consistency levels to effectively balance the consistency-performance trade-off. This thesis introduces Self-configuring Consistency In Distributed LSM-KVS (SCID), a service that leverages unique properties of LSM-KVS to manage consistency levels and automates level assignment with ML. To address the first challenge, SCID combines dynamic read-only instances and logical KV-based partitions, enabling on-demand updates of read-only instances and the logical separation of groups of key-value pairs. SCID uses logical partitions as consistency levels and on-demand updates in dynamic read-only instances to allow for multiple consistency levels. To address the second challenge, the thesis presents an ML-based solution, SCID-ML, to manage the consistency-performance trade-off more effectively. We evaluate SCID and find that it improves write throughput by up to 50% and achieves 62% accuracy for consistency-level predictions.
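SCID-ML's features and model are not described in the abstract; as a purely hypothetical illustration, consistency-level assignment can be framed as classifying each logical partition from its workload characteristics:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-partition workload features:
# (reads/sec, writes/sec, fraction of reads that must see the latest write).
workload = [
    [900, 10, 0.05],
    [100, 500, 0.90],
    [400, 50, 0.30],
    [50, 800, 0.95],
]
# 0 = eventual consistency is acceptable, 1 = strong consistency required.
level = [0, 1, 0, 1]

clf = DecisionTreeClassifier().fit(workload, level)
print(clf.predict([[600, 40, 0.10]]))  # suggested level for a new partition
```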
Contributors: Thakkar, Viraj Deven (Author) / Cao, Zhichao (Thesis advisor) / Xiao, Xusheng (Thesis advisor) / Bryan, Chris (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Molecular Dynamics (MD) simulations are ubiquitous throughout the physical sciences; they are critical in understanding how particle structures evolve over time given a particular energy function. A software package called ParSplice introduced a new method to generate these simulations in parallel that has significantly inflated their length. Typically, simulations are short discrete Markov chains, only capturing a few microseconds of a particle's behavior and containing tens of thousands of transitions between states; in contrast, a typical ParSplice simulation can be as long as a few milliseconds, containing tens of millions of transitions. Naturally, sifting through data of this size is impossible by hand, and there are a number of visualization systems that provide comprehensive and intuitive analyses of particle structures throughout MD simulations. However, no visual analytics systems have been built that can manage the simulations that ParSplice produces. To analyze these large datasets, I built a visual analytics system that provides multiple coordinated views that simultaneously describe the data temporally, within its structural context, and based on its properties. The system provides fluid and powerful user interactions regardless of the size of the data, allowing the user to drill down into the dataset to get detailed insights, as well as run and save various calculations, most notably the Nudged Elastic Band method. The system also allows the comparison of multiple trajectories, revealing more information about the general behavior of particles at different temperatures, energy states, etc.
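To make the data model concrete: since such a trajectory is a discrete Markov chain, the basic summary a system like this must compute is the state-to-state transition count. A toy sketch (a real ParSplice trajectory would have tens of millions of entries):

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

# A ParSplice-style trajectory is a long sequence of discrete state IDs;
# this toy sequence stands in for tens of millions of transitions.
trajectory = [0, 0, 1, 2, 1, 1, 0, 2, 2, 1]

# Count observed state-to-state transitions to summarize the Markov chain.
transitions = Counter(pairwise(trajectory))
for (src, dst), n in sorted(transitions.items()):
    print(f"{src} -> {dst}: {n}")
```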
Contributors: Hnatyshyn, Rostyslav (Author) / Maciejewski, Ross (Thesis advisor) / Bryan, Chris (Committee member) / Ahrens, James (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Component-based models are commonly employed to simulate discrete dynamical systems. These models lend themselves to formalizing the structures of systems at multiple levels of granularity. Visual development of component-based models serves to simplify iterative and incremental model specification. The Parallel Discrete Event System Specification (DEVS) formalism offers a flexible yet rigorous approach for decomposing a whole model into its components or, alternatively, composing a whole model from components. While different concepts, frameworks, and tools offer a variety of visual modeling capabilities, most have limitations, such as being unable to visualize multiple model hierarchies at any level with arbitrary depths. Ideally, the visual and persistent layout of any number of hierarchy levels of models can be maintained and navigated seamlessly. Persistent storage is another capability needed across the modeling, simulating, verifying, and validating lifecycle. These are important features for easing the demanding task of creating and changing modular, hierarchical simulation models. This thesis proposes a new approach and develops a tool for the visual development of models. The tool supports storing and reconstructing graphical models using a NoSQL database. It offers unique capabilities important for developing the increasingly larger and more complex models essential for analyzing, designing, and building Digital Twins.
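The abstract does not name the NoSQL database; as one hypothetical realization, a hierarchical DEVS model and its saved visual layout can be stored and reconstructed as a single nested MongoDB document:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
models = client["devs_models"]["components"]

# A coupled DEVS model stored as one nested document: each component may
# itself contain child components, preserving the model hierarchy.
processor_model = {
    "name": "Processor",
    "type": "coupled",
    "components": [
        {"name": "Queue", "type": "atomic"},
        {"name": "CPU", "type": "atomic"},
    ],
    "couplings": [["Queue.out", "CPU.in"]],
    "layout": {"Queue": [40, 80], "CPU": [200, 80]},  # saved visual positions
}
models.insert_one(processor_model)

# Reconstructing the graphical model is a single lookup by name.
restored = models.find_one({"name": "Processor"})
print(restored["components"][0]["name"])  # "Queue"
```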
Contributors: Mohite, Sheetal Chandrakant (Author) / Sarjoughian, Hessam S (Thesis advisor) / Bryan, Chris (Committee member) / Pavlic, Theodore (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

This thesis serves as an experimental investigation into the potential of machine learning by attempting to predict the future price of a cryptocurrency. Through web scraping, short-interval data was collected on both Bitcoin and Dogecoin. Dogecoin was the dataset eventually used in this thesis due to its relative stability compared to Bitcoin; at the time of data collection, Bitcoin had become a much more frequent topic in the media and showed more significant fluctuations as a result. The data was processed into three separate, consistent timesteps and used to generate predictive models. The models were able to accurately predict test data when given all the preceding test data, but were unable to autoregressively predict future data when given only the first set of test data points. Ultimately, this project helps illustrate the complexities of extended future price prediction when using simple models like linear regression.
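The gap the abstract reports, between one-step-ahead prediction and autoregressive rollout, can be reproduced with a simple lag-based linear regression; the price series below is synthetic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic price series; the model predicts the next price from the last 3.
prices = np.cumsum(np.random.default_rng(1).normal(0, 1, 200)) + 100
X = np.array([prices[i:i + 3] for i in range(len(prices) - 3)])
y = prices[3:]
split = 150
model = LinearRegression().fit(X[:split], y[:split])

# One-step-ahead: each prediction sees the true preceding prices (easy).
one_step = model.predict(X[split:])

# Autoregressive rollout: feed predictions back in as inputs (errors compound).
window = list(prices[split:split + 3])
rollout = []
for _ in range(len(prices) - split - 3):
    nxt = model.predict([window[-3:]])[0]
    rollout.append(nxt)
    window.append(nxt)
print(one_step[:3], rollout[:3])
```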
Contributors: Murwin, Andrew (Author) / Bryan, Chris (Thesis director) / Ghayekhloo, Samira (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2022-12