Matching Items (111)
Description
The Population Receptive Field (pRF) model is widely used to predict the location (retinotopy) and size of receptive fields in visual space. Doing so allows for the creation of a mapping from locations in the visual field to the associated groups of neurons in the corresponding region of the visual cortex. However, fitting the pRF model is very time consuming. Past research has focused on creating Convolutional Neural Networks (CNNs) that mimic the pRF model in a fraction of the time, and these have worked well under highly controlled conditions. However, they have not been thoroughly tested on real human data. This thesis focused on adapting one of these CNNs to accurately predict the retinotopy of a real human subject using a dataset from the Human Connectome Project. The results show promise towards creating a fully functioning CNN, but they also expose new challenges that must be overcome before the model can be used to predict the retinotopy of new human subjects.
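
For readers unfamiliar with the technique, the standard pRF forward model is an isotropic 2D Gaussian fit per voxel (after Dumoulin and Wandell). A minimal Python sketch follows; the function name, array shapes, and fitting note are illustrative assumptions, not details taken from the thesis:

    import numpy as np

    def prf_response(stimulus, xs, ys, x0, y0, sigma):
        # stimulus: (T, H, W) binary stimulus apertures over time
        # xs, ys:   (H, W) visual-field coordinates of each pixel, in degrees
        # (x0, y0): pRF center; sigma: pRF size (Gaussian spread), in degrees
        g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
        # Predicted response at each time step: overlap of stimulus and pRF
        return stimulus.reshape(stimulus.shape[0], -1) @ g.ravel()

In the conventional pipeline, this predicted time course is convolved with a hemodynamic response function, and (x0, y0, sigma) are found by exhaustive per-voxel search that maximizes correlation with the measured BOLD signal; that search is what makes pRF fitting slow and a CNN surrogate attractive.
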
Contributors: Burgard, Braeden (Author) / Wang, Yalin (Thesis director) / Ta, Duyan (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2022-05
Description
Copper demand is surging in the U.S. and around the world as countries embrace new forms of energy to combat climate change. But copper mining – while a key strategy to address supply shortages – can serve as a vehicle for injustice by imposing socio-ecological burdens on nearby communities. Given the growing demand for copper and the justice issues that accompany it, more research is needed to evaluate governance of the mining sector through an environmental justice lens. The National Environmental Policy Act (NEPA) is a key environmental regulation that governs mining in the U.S. I therefore used a qualitative case study approach to examine how NEPA requirements shape engagement in public comment opportunities. I selected the Resolution Copper Mine as a case study because of its potential to support the energy transition while posing a significant dilemma for justice: the mine is anticipated to meet 25 percent of annual U.S. copper demand but would disturb lands that hold spiritual significance for Native American Tribes. I used the Institutional Analysis and Development (IAD) framework to analyze institutional dynamics and evaluate the NEPA public participation process through a procedural justice lens. Drawing on interview data and document analysis, the results show that process rules, such as a land exchange bill and the lengths of comment periods, were among the key barriers to participation. Socioeconomic conditions of communities, including access to social resources (i.e., internet access and technical assistance) and institutional trust, posed further barriers to participation. Hence, this study suggests that federal decision-makers should aim to better integrate procedural justice into the NEPA process.
Contributors: Lewis, Sydney (Author) / Kellner, Elke (Thesis director) / Janssen, Marco (Committee member) / Barrett, The Honors College (Contributor) / School of Life Sciences (Contributor)
Created: 2024-05
Description
The purpose of the overall project is to create a simulated environment similar to Google Maps and its live traffic view, but simplified for educational purposes. Students can choose different traffic patterns and program a car to navigate through the traffic dynamically as conditions change. The environment used in the project is ASU VIPLE (Visual IoT/Robotics Programming Language Environment), a visual programming environment for Computer Science education. VIPLE supports a number of devices and platforms, including a traffic simulator developed using the Unity game engine. This thesis focuses on creating realistic traffic data for the traffic simulator and implementing a dynamic routing algorithm in VIPLE. The traffic data is generated from recorded real traffic data published by Maricopa County, Arizona. Based on the generated data, VIPLE programs are developed that drive the traffic simulation with dynamically changing traffic data.
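
To illustrate what dynamic routing means here: whenever the simulator publishes updated travel times, the route can be recomputed with a shortest-path search over the road graph. VIPLE programs are built from visual blocks, so the Python below is only a sketch of the underlying logic, with a hypothetical graph structure and names:

    import heapq

    def shortest_route(graph, start, goal):
        # graph: {node: [(neighbor, current_travel_time), ...]}
        # Dijkstra's algorithm; rerun whenever the traffic data changes.
        dist, prev = {start: 0.0}, {}
        pq = [(0.0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue  # stale queue entry
            for v, w in graph[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path, node = [goal], goal  # assumes goal is reachable
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]

Re-running the search on each traffic update is the simplest form of dynamic routing; incremental algorithms exist, but this is all an educational simulator needs.
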
Contributors: Zhang, Zhemin (Author) / Chen, Yinong (Thesis advisor) / Wang, Yalin (Thesis advisor) / De Luca, Gennaro (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Little is known about how cognitive and brain aging patterns differ in older adults with autism spectrum disorder (ASD). However, recent evidence suggests that individuals with ASD may be at greater risk of pathological aging conditions than their neurotypical (NT) counterparts. A growing body of research indicates that older adults with ASD may experience accelerated cognitive decline and neurodegeneration as they age, although studies are limited by their cross-sectional design in a population with strong age-cohort effects. Studying aging in ASD and identifying biomarkers that predict atypical aging is important because the population of older individuals with ASD is growing. Understanding the unique challenges autistic adults face as they age is necessary to develop treatments that improve quality of life and preserve independence. In this study, a longitudinal design was used to characterize cognitive and brain aging trajectories in ASD as a function of autistic trait severity. Principal components analysis (PCA) was used to derive a cognitive metric that best explains performance variability on tasks measuring memory ability and executive function. The slope of the integrated persistent feature (SIP) was used to quantify functional connectivity; the SIP is a novel, threshold-free graph theory metric that summarizes the speed of information diffusion in the brain. Longitudinal mixed models were used to predict cognitive and brain aging trajectories (measured via the SIP) as a function of autistic trait severity, sex, and their interaction. The sensitivity of the SIP was also compared with that of traditional graph theory metrics. It was hypothesized that older adults with ASD would experience accelerated cognitive and brain aging and, furthermore, that age-related changes in brain network topology would predict age-related changes in cognitive performance. For both cognitive and brain aging, autistic traits and sex interacted to predict trajectories, such that older men with high autistic traits were most at risk for poorer outcomes. In men with autism, variability in SIP scores across time points trended toward predicting cognitive aging trajectories. Findings also suggested that autistic traits are more sensitive to differences in brain aging than diagnostic group, and that the SIP is more sensitive to brain aging trajectories than other graph theory metrics. However, further research is required to determine how physiological biomarkers such as the SIP are associated with cognitive outcomes.
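
As an aside on the cognitive metric: deriving a single composite from several memory and executive-function tasks with PCA is a standard step. A minimal sketch with placeholder data is shown below; the actual task battery and software used in the thesis are not specified here:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # rows = participants, columns = memory / executive-function task scores
    scores = np.random.default_rng(0).normal(size=(40, 6))  # placeholder data
    z = StandardScaler().fit_transform(scores)              # z-score each task
    pca = PCA(n_components=1)
    composite = pca.fit_transform(z).ravel()  # first PC = shared cognitive factor
    print(pca.explained_variance_ratio_)      # variance explained by that factor

The SIP itself is novel to this work, and its computation is not reproduced here.
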
Contributors: Sullivan, Georgia (Author) / Braden, Blair (Thesis advisor) / Kodibagkar, Vikram (Thesis advisor) / Schaefer, Sydney (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
As robots become increasingly integrated into everyday environments, they need to learn how to interact with the objects around them. Many of these objects are articulated with multiple degrees of freedom (DoF). Multi-DoF objects have complex joints that require specific manipulation orders, but existing methods only consider objects with a single joint. To capture the joint structure and manipulation sequence of any object, I introduce Object Kinematic State Machines (OKSMs), a novel representation that models the kinematic constraints and manipulation sequences of multi-DoF objects. I also present Pokenet, a deep neural network architecture that estimates OKSMs from sequences of point cloud data captured during human demonstrations. I conduct experiments on both simulated and real-world datasets to validate my approach. First, I evaluate the modeling of multi-DoF objects on a simulated dataset, comparing against the current state-of-the-art method. I then assess Pokenet's real-world usability on a dataset collected in my lab, comprising 5,500 data points across 4 objects. Results show that my method estimates the joint parameters of novel multi-DoF objects with over 25% higher accuracy on average than prior methods.
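
The abstract does not spell out the internals of an OKSM, but one plausible encoding, sketched here purely for illustration with hypothetical field names, is a set of joints plus a transition table over joint configurations:

    from dataclasses import dataclass, field

    @dataclass
    class Joint:
        name: str          # e.g. "drawer_slide" (hypothetical)
        joint_type: str    # "prismatic" or "revolute"
        axis: tuple        # joint axis in the object frame
        limits: tuple      # (min, max) displacement or angle

    @dataclass
    class OKSM:
        # States are joint configurations; edges encode which joint may be
        # manipulated next, capturing the required manipulation order.
        joints: dict = field(default_factory=dict)       # name -> Joint
        transitions: dict = field(default_factory=dict)  # state -> {joint name: next state}

        def legal_moves(self, state):
            return list(self.transitions.get(state, {}))

In these terms, Pokenet's job would be to regress the joint parameters and the transition structure from demonstration point clouds.
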
Contributors: Gupta, Anmol (Author) / Gopalan, Nakul (Thesis advisor) / Zhang, Yu (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
In today's data-driven world, privacy is a significant concern. It is crucial to preserve the privacy of sensitive information while visualizing data. This thesis aims to develop new techniques and software tools that support Vega-Lite visualizations while maintaining privacy. Vega-Lite is a visualization grammar based on Wilkinson's grammar of graphics. The project extends Vega-Lite to incorporate privacy algorithms such as k-anonymity, l-diversity, t-closeness, and differential privacy. This is done using a multi-input loop module that generates combinations of attributes as a new anonymization method. Differential privacy is implemented by adding controlled noise (Laplace or Exponential) to the sensitive columns in the dataset. The user defines custom rules in the JSON schema, specifying the privacy methods and the sensitive columns. The schema is validated using the Ajv (Another JSON Schema Validator) library, and these rules identify the anonymization techniques to be performed on the dataset before it is sent back to the Vega-Lite visualization server. Multiple datasets satisfying the privacy requirements are generated, along with their utility scores, so that the user can trade off privacy against utility based on their requirements. The interface is user-friendly and intuitive, guiding users through the workflow and providing feedback on the generated privacy-preserving visualizations through various utility metrics. This application is helpful for technical or domain experts across fields where privacy is a major concern, such as medical institutions, traffic and urban planning, financial institutions, educational records, and employer-employee relations. The project is novel in providing a one-stop solution for privacy-preserving visualization, and it builds on Vega-Lite, open-source software that many organizations and users rely on for business and educational purposes.
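
For the differential privacy piece, the Laplace mechanism is standard: each value of a sensitive numeric column receives noise drawn from Laplace(0, sensitivity/epsilon). A minimal sketch, separate from the thesis's Vega-Lite integration and with illustrative names and data, is:

    import numpy as np

    def laplace_mechanism(values, epsilon, sensitivity=1.0):
        # Smaller epsilon => larger noise scale => stronger privacy, lower utility.
        scale = sensitivity / epsilon
        noise = np.random.default_rng().laplace(0.0, scale, size=len(values))
        return np.asarray(values, dtype=float) + noise

    noisy_ages = laplace_mechanism([34, 51, 29, 47], epsilon=0.5)

The epsilon parameter is the kind of privacy/utility dial that the utility scores described above are meant to surface for the user.
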
Contributors: Sekar, Manimozhi (Author) / Bryan, Chris (Thesis advisor) / Wang, Yalin (Committee member) / Cao, Zhichao (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
Image denoising, a fundamental task in computer vision, poses significant challenges due to its inherently inverse and ill-posed nature. Despite advancements in traditional methods and supervised learning approaches, particularly in medical imaging such as Magnetic Resonance Imaging (MRI) scans, the reliance on paired datasets and known noise distributions remains a practical hurdle. Recent progress in noise statistical independence theory and diffusion models has revitalized research interest, offering promising avenues for unsupervised denoising. However, existing methods often yield overly smoothed results or introduce hallucinated structures, limiting their clinical applicability. This thesis tackles the core challenge of progressing towards unsupervised denoising of MRI scans. It aims to retain intricate details without smoothing or introducing artificial structures, thus ensuring the production of high-quality MRI images. The thesis makes a three-fold contribution. First, it presents a detailed analysis of traditional techniques, early machine learning algorithms for denoising, and new statistics-based models, with an extensive evaluation study on self-supervised denoising methods highlighting their limitations. Second, it conducts an evaluation study on an emerging class of diffusion-based denoising methods, accompanied by additional empirical findings and discussions of their effectiveness and limitations, proposing solutions to enhance their utility. Lastly, it introduces a novel approach, unsupervised Multi-stage Ensemble Deep Learning with diffusion models for denoising MRI scans (MEDL). Leveraging diffusion models, this approach operates independently of signal or noise priors and incorporates weighted rescaling of multi-stage reconstructions to balance over-smoothing and hallucination tendencies. Evaluation on benchmark datasets demonstrates an average gain of 1 dB in PSNR and 2% in SSIM over existing approaches.
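
The two reported metrics are standard. A minimal evaluation sketch using scikit-image is given below for reference; it is illustrative only, and the thesis's exact evaluation code and datasets are not reproduced:

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate(denoised, reference):
        # PSNR in dB (higher is better); SSIM in [0, 1] (closer to 1 is better)
        rng = float(reference.max() - reference.min())
        psnr = peak_signal_noise_ratio(reference, denoised, data_range=rng)
        ssim = structural_similarity(reference, denoised, data_range=rng)
        return psnr, ssim

Against this yardstick, the reported improvement is roughly 1 dB in PSNR and 2% in SSIM over existing approaches.
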
Contributors: Vora, Sahil (Author) / Li, Baoxin (Thesis advisor) / Wang, Yalin (Committee member) / Zhou, Yuxiang (Committee member) / Arizona State University (Publisher)
Created: 2024
Description
Unsupervised learning of time series data, also known as temporal clustering, is a challenging problem in machine learning. This thesis presents a novel algorithm, Deep Temporal Clustering (DTC), that naturally integrates dimensionality reduction and temporal clustering into a single, fully unsupervised, end-to-end learning framework. The algorithm uses an autoencoder for temporal dimensionality reduction and a novel temporal clustering layer for cluster assignment, then jointly optimizes the clustering objective and the dimensionality reduction objective. Depending on the requirements and the application, the temporal clustering layer can be customized with any temporal similarity metric; several similarity metrics and state-of-the-art algorithms are considered and compared. To gain insight into the temporal features the network has learned for clustering, a visualization method is applied that generates a region-of-interest heatmap for the time series. The viability of the algorithm is demonstrated using time series data from diverse domains, ranging from earthquakes to spacecraft sensor data. In each case, the proposed algorithm outperforms traditional methods. The superior performance is attributed to the fully integrated temporal dimensionality reduction and clustering criterion.
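
To make the joint optimization concrete: in DEC-style deep clustering, on which this family of methods builds, the loss couples an autoencoder reconstruction term with a KL term that sharpens soft cluster assignments. A PyTorch-flavored sketch under that assumption is shown below; the weighting gamma and the clustering layer's similarity metric are thesis-specific and not reproduced:

    import torch
    import torch.nn.functional as F

    def dtc_style_loss(x, x_rec, q, gamma=0.1):
        # x, x_rec: input and autoencoder reconstruction
        # q: (batch, n_clusters) soft assignments from the clustering layer
        rec = F.mse_loss(x_rec, x)
        w = q ** 2 / q.sum(0)       # sharpen and re-normalize per cluster
        p = (w.t() / w.sum(1)).t()  # target distribution (rows sum to 1)
        kl = F.kl_div(q.log(), p, reduction="batchmean")
        return rec + gamma * kl

Optimizing both terms together is what lets the latent space and the clusters co-adapt, rather than reducing dimensionality first and clustering afterwards.
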
Contributors: Madiraju, NaveenSai (Author) / Liang, Jianming (Thesis advisor) / Wang, Yalin (Thesis advisor) / He, Jingrui (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Anthropogenic land use has irrevocably transformed the natural systems on which humankind relies. Understanding where, why, and how social and economic processes drive globally important land-use changes, from deforestation to urbanization, has advanced substantially. Illicit and clandestine activities--behavior that is intentionally secret because it breaks formal laws or violates informal norms--are poorly understood, however, despite recognition of their significant role in land change. This dissertation fills this lacuna by studying illicit and clandestine activity and quantifying its influence on land-use patterns through examining informal urbanization in Mexico City and deforestation in Central America. The first chapter introduces the topic, presenting a framework to examine illicit transactions in land systems. The second chapter uses data from interviews with actors involved with land development in Mexico City, demonstrating how economic and political payoffs explain the persistence of four types of informal urban expansion. The third chapter examines how electoral politics influence informal urban expansion and land titling in Mexico City using panel regression. Results show that land title distribution increases just before elections, and more titles are extended to loyal voters of the dominant party in power. Urban expansion increases with electoral competition in local elections for borough chiefs and legislators. The fourth chapter tests and confirms the hypothesis that narcotrafficking has a causal effect on forest loss in Central America from 2001 to 2016 using two proxies of narco-activity: drug seizures and events from media reports. The fifth chapter explores the spatial signature and pattern of informal urban development. It uses the typology of urban informality identified in chapter two to hypothesize and demonstrate distinct urban expansion patterns from satellite imagery. The sixth and final chapter summarizes the role of illicit and clandestine activity in shaping deforestation and urban expansion through illegal economies, electoral politics, and other informal transactions. Measures of illicit and clandestine activity should--and could--be incorporated into land change models to account for a wider range of relevant causes. This dissertation shines a new light on previously hidden processes behind ever-easier-to-detect land-use patterns as earth-observing satellites increase in spatial and temporal resolution.
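
Of the methods named, panel regression is the most codifiable. A toy two-way fixed-effects sketch with entirely hypothetical data and variable names is shown below; the dissertation's actual specification is richer:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical borough-year panel: titles issued vs. election timing
    df = pd.DataFrame({
        "borough":       ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
        "year":          [2003, 2004, 2005] * 3,
        "election_year": [1, 0, 0, 0, 1, 0, 0, 0, 1],
        "titles":        [210, 120, 110, 90, 260, 100, 70, 80, 240],
    })
    # Borough and year dummies absorb time-invariant and common-shock confounders
    fit = smf.ols("titles ~ election_year + C(borough) + C(year)", data=df).fit()
    print(fit.params["election_year"])  # estimated pre-election titling bump
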
Contributors: Tellman, Elizabeth (Author) / Turner II, Billie L. (Thesis advisor) / Eakin, Hallie (Thesis advisor) / Janssen, Marco (Committee member) / Alba, Felipe de (Committee member) / Jain, Meha (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
Understanding the complexity of the temporal and spatial characteristics of gene expression over brain development is a crucial research topic in neuroscience. Accurately describing the locations and expression status of the relevant genes requires extensive experimental resources. The Allen Developing Mouse Brain Atlas provides a large number of in situ hybridization (ISH) images of gene expression across seven mouse brain developmental stages; studying mouse brain models helps us understand gene expression in human brains. The atlas covers thousands of genes, which are currently annotated manually by biologists. Given the high labor cost of manual annotation, an efficient approach to automated gene expression annotation on mouse brain images is needed. In this thesis, a novel, efficient approach based on a machine learning framework is proposed. Features are extracted from raw brain images, and both binary and multi-class classification models are built with supervised learning methods. One of the most widely adopted feature generation methods in current research is the bag-of-words (BoW) algorithm; however, neither its efficiency nor its accuracy is satisfactory on large-scale data. Thus, an augmented sparse coding method called Stochastic Coordinate Coding is adopted to generate high-level features in this thesis. In addition, a new multi-label classification model is proposed, with a label hierarchy built from the given brain ontology structure. Experiments conducted on the atlas show that the approach is efficient and classifies the images with higher accuracy.
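
Stochastic Coordinate Coding itself is a specialized solver, but the sparse-coding feature pipeline it accelerates can be illustrated with scikit-learn's generic dictionary learner, used here only as a stand-in for the thesis's implementation, with placeholder data:

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    # rows = flattened local patches from one ISH image (placeholder data)
    patches = np.random.default_rng(0).random((500, 64))
    dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                       batch_size=32, random_state=0)
    codes = dico.fit_transform(patches)  # sparse code per patch
    feature = codes.max(axis=0)          # max-pool patches into one image feature

The pooled feature vectors then feed the binary, multi-class, or hierarchy-aware multi-label classifiers described above.
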
Contributors: Zhao, Xinlin (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Thesis advisor) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2016