Matching Items (147)
Description
Communicating with computers through thought has become a remarkable achievement in recent years, made possible by electroencephalography (EEG). Brain-computer interfaces (BCIs) rely heavily on EEG signals for communication between humans and computers. With the advent of deep learning, many recent studies have applied these techniques to EEG data for tasks such as emotion recognition, motor imagery classification, sleep analysis, and many more. Despite the rising interest in EEG signal classification, very few studies have explored the MindBigData dataset, which collects EEG signals recorded while a subject sees a digit and thinks about it. This dataset takes us closer to realizing the idea of mind-reading or communication via thought, but classifying these signals into the digit the user is thinking of is a challenging task. This serves as the motivation to study this dataset with existing deep learning techniques. Given the recent success of the transformer architecture in domains such as computer vision and natural language processing, this thesis studies the transformer architecture for EEG signal classification and also explores other deep learning techniques for the task. The proposed classification pipeline achieves performance comparable with existing methods.
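Below is a minimal sketch of how a transformer encoder might be applied to multi-channel EEG epochs for digit classification, in the spirit of the pipeline described above. It is not the thesis's actual architecture; the channel count, epoch length, and model sizes are illustrative assumptions.

```python
# Hypothetical sketch of a transformer-based EEG classifier (PyTorch).
# Shapes and hyperparameters are illustrative, not those used in the thesis.
import torch
import torch.nn as nn

class EEGTransformer(nn.Module):
    def __init__(self, n_channels=14, n_classes=10, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Project each time step's channel vector into the model dimension.
        self.input_proj = nn.Linear(n_channels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):                       # x: (batch, time, channels)
        h = self.input_proj(x)                  # (batch, time, d_model)
        h = self.encoder(h)                     # self-attention over time steps
        return self.classifier(h.mean(dim=1))   # average-pool over time, then classify

# Example: a batch of 8 two-second epochs sampled at 128 Hz from 14 electrodes.
logits = EEGTransformer()(torch.randn(8, 256, 14))   # -> (8, 10)
```

Averaging the encoder outputs over time is one simple pooling choice; a learned classification token is a common alternative.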
Contributors: Muglikar, Omkar Dushyant (Author) / Wang, Yalin (Thesis advisor) / Liang, Jianming (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Statistical Shape Modeling is widely used to study the morphometrics of deformable objects in computer vision and biomedical studies. There are mainly two viewpoints from which to understand shapes. On one hand, the outer surface of the shape can be taken as a two-dimensional embedding in space. On the other hand, the outer surface along with its enclosed internal volume can be taken as a three-dimensional embedding of interest. Most studies focus on the surface-based perspective by leveraging intrinsic features on the tangent plane, but a two-dimensional model may fail to fully represent shapes that have both intrinsic and extrinsic properties. In this thesis, several Stochastic Partial Differential Equations (SPDEs) are thoroughly investigated, and several methods derived from these SPDEs are developed to address both two-dimensional and three-dimensional shape analysis. The unique physical meanings of these SPDEs inspired the discovery of features, shape descriptors, metrics, and kernels in this series of works. Initially, the generation of high-dimensional shape data, here tetrahedral meshes, is introduced: the cerebral cortex is taken as the study target, and an automatic pipeline for generating the gray matter tetrahedral mesh is presented. Then, a discretized Laplace-Beltrami operator (LBO) and a Hamiltonian operator (HO) in the tetrahedral domain are derived with the Finite Element Method (FEM). Two high-dimensional shape descriptors are defined based on the solutions of the heat equation and Schrödinger's equation. Considering that high-dimensional shape models usually contain massive redundancy, and that many applications demand effective landmarks, Gaussian process landmarking on tetrahedral meshes is further studied. A SIWKS-based metric space is used to define a geometry-aware Gaussian process. The study of the periodic potential diffusion process further inspired the idea of a new kernel called the geometry-aware convolutional kernel. A series of Bayesian learning methods are then introduced to tackle the problems of shape retrieval and classification. Experiments for each of these components are demonstrated. From popular SPDEs such as the heat equation and Schrödinger's equation to the general potential diffusion equation and the specific periodic potential diffusion equation, this work clearly shows that classical SPDEs play an important role in discovering new features, metrics, shape descriptors, and kernels. I hope this thesis can serve as an example of using interdisciplinary knowledge to solve problems.
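As one concrete illustration of a heat-equation-based descriptor of the kind mentioned above, the sketch below computes a heat-kernel-signature-style feature from the generalized eigenproblem S φ = λ M φ, where S and M are assumed to be precomputed FEM stiffness and mass matrices on the tetrahedral mesh. This is a generic construction, not the exact descriptor defined in the thesis.

```python
# Hypothetical sketch: a heat-kernel-based shape descriptor from the FEM
# Laplace-Beltrami eigenproblem S phi = lambda M phi on a tetrahedral mesh.
# S (stiffness) and M (mass) are assumed to be precomputed sparse matrices.
import numpy as np
from scipy.sparse.linalg import eigsh

def heat_kernel_signature(S, M, time_scales, k=100):
    """Return an (n_vertices, n_times) descriptor; k = number of eigenpairs."""
    # Smallest eigenpairs via shift-invert; the tiny negative shift avoids the
    # singular factorization caused by the zero eigenvalue of the LBO.
    eigvals, eigvecs = eigsh(S, k=k, M=M, sigma=-1e-8, which='LM')
    hks = np.zeros((eigvecs.shape[0], len(time_scales)))
    for j, t in enumerate(time_scales):
        # Heat kernel diagonal: sum_i exp(-lambda_i * t) * phi_i(x)^2
        hks[:, j] = (eigvecs ** 2) @ np.exp(-eigvals * t)
    return hks
```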
Contributors: Fan, Yonghui (Author) / Wang, Yalin (Thesis advisor) / Lepore, Natasha (Committee member) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The representation of a patient's characteristics as the parameters of a model is a key component in many studies of personalized medicine, where the underlying mathematical models are used to describe, explain, and forecast the course of treatment. In this context, clinical observations form the bridge between the mathematical frameworks and applications. However, the formulation and theoretical study of the models and the clinical studies are often not completely compatible, which is one of the main obstacles to the application of mathematical models in practice. The goal of my study is to extend a mathematical framework for modeling prostate cancer, based mainly on the concept of cell quota within an evolutionary setting, and to study the aspects relevant for the model to yield useful insights in practice. Specifically, the first aim is to construct a mathematical model that can explain and predict the observed clinical data under various treatment combinations. The second aim is to find a fundamental model structure that can capture the dynamics of cancer progression within a realistic set of data. Finally, relevant clinical aspects, such as how a patient's parameters change over the course of treatment and how to incorporate treatment optimization under uncertainty quantification, are examined to construct a framework that is useful in practice.
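For readers unfamiliar with the cell-quota concept, the sketch below shows a generic Droop-type formulation in which per-capita growth saturates as an intracellular quota Q approaches a minimum q_min. The equations and parameter values are illustrative assumptions, not the dissertation's prostate cancer model.

```python
# Hypothetical sketch of a Droop-type cell-quota model, the kind of structure
# the dissertation builds on; equations and parameters here are illustrative.
from scipy.integrate import solve_ivp

def cell_quota_rhs(t, y, mu_m=0.05, q_min=0.2, v_max=0.3, d=0.01, A=1.0):
    X, Q = y                                   # X: tumor cell population, Q: androgen cell quota
    growth = mu_m * (1.0 - q_min / Q)          # Droop growth law: growth stalls as Q -> q_min
    dX = growth * X - d * X                    # proliferation minus death
    dQ = v_max * A / (A + 1.0) - growth * Q    # uptake of serum androgen A, diluted by growth
    return [dX, dQ]

# Simulate one year from a small population with a moderate initial quota.
sol = solve_ivp(cell_quota_rhs, (0.0, 365.0), [1.0, 0.5], dense_output=True)
```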
Contributors: Phan, Tin (Author) / Kuang, Yang (Thesis advisor) / Kostelich, Eric J (Committee member) / Crook, Sharon (Committee member) / Maley, Carlo (Committee member) / Bryce, Alan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Synthetic biology (SB) has become an important field of science, focusing on designing and engineering new biological parts and systems, or re-designing existing biological systems for useful purposes. The dramatic growth of SB throughout the past two decades has not only provided numerous achievements but also brought more timely and underexplored problems. Throughout SB's history, mathematical modeling has been an indispensable approach for predicting experimental outcomes, improving experimental design, and obtaining a mechanistic understanding of biological systems. Escherichia coli (E. coli) is one of the most important experimental platforms, and its growth dynamics are the major research objective of this dissertation. Chapter 2 employs a reaction-diffusion model to predict E. coli colony growth on a semi-solid agar plate under multiple controls. In that chapter, a density-dependent diffusion model with non-monotonic growth is introduced to capture the colony's non-linear growth profile. The fits of the new model to experimental data are compared and contrasted with those from other proposed models. In addition, the cross-sectional profile of the colony is computed and compared with experimental data. The E. coli colony is also used to generate spatial patterns driven by designed gene circuits. In Chapter 3, a gene circuit (MINPAC) and its corresponding pattern formation results are presented. Specifically, a series of partial differential equation (PDE) models are developed to describe the pattern formation driven by the MINPAC circuit. Model simulations of the patterns under different experimental conditions, together with numerical analysis of the models, are performed and discussed to obtain a deeper understanding of the mechanisms. Mathematical analysis of the simplified models, including traveling wave analysis and local stability analysis, is also presented and used to explore control strategies for the pattern formation. The interaction between the gene circuit and the host E. coli may be crucial and can greatly affect the experimental outcomes. Chapter 4 focuses on the growth feedback between the circuit and the host cell under different nutrient conditions. Two ordinary differential equation (ODE) models are developed to describe such feedback with nutrient variation. Preliminary results on data fitting with both models and dynamical analysis of the models are included.
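A minimal one-dimensional sketch of a density-dependent reaction-diffusion model of the kind used in Chapter 2 is given below; the diffusion law, reaction term, and parameters are illustrative assumptions rather than the model fitted in the dissertation.

```python
# Hypothetical 1-D sketch of a density-dependent reaction-diffusion model,
# u_t = (D(u) u_x)_x + f(u), for colony growth on an agar plate.
# The diffusion law, reaction term, and parameter values are illustrative only.
import numpy as np

def simulate_colony(n=200, length=10.0, dt=1e-3, steps=20000,
                    d0=0.01, r=1.0, carrying_capacity=1.0):
    dx = length / n
    u = np.zeros(n)
    u[n // 2 - 2:n // 2 + 2] = 0.5                        # small inoculum at the plate center
    for _ in range(steps):
        diff = d0 * u                                     # density-dependent diffusivity D(u) = d0 * u
        d_face = 0.5 * (diff[:-1] + diff[1:])             # diffusivity at cell interfaces
        flux = d_face * np.diff(u) / dx                   # interior diffusive fluxes
        flux = np.concatenate(([0.0], flux, [0.0]))       # zero-flux plate boundaries
        reaction = r * u * (1.0 - u / carrying_capacity)  # logistic reaction (the thesis uses a non-monotonic form)
        u = u + dt * (np.diff(flux) / dx + reaction)
    return u
```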
Contributors: He, Changhan (Author) / Kuang, Yang (Thesis advisor) / Wang, Xiao (Committee member) / Kostelich, Eric (Committee member) / Tian, Xiaojun (Committee member) / Gumel, Abba (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Graph matching is a fundamental but notoriously difficult problem due to its NP-hard nature, and it serves as a cornerstone for a range of applications in machine learning and computer vision, such as image matching, dynamic routing, and drug design, to name a few. Although there has been extensive prior investigation into high-performance graph matching solvers, it remains challenging to tackle the matching problem in real-world scenarios with severe graph uncertainty (e.g., noise, outliers, misleading or ambiguous links). The main focus of this dissertation is to investigate the essence of, and propose solutions to, graph matching with higher reliability under such uncertainty. To this end, the proposed research was conducted from three perspectives related to reliable graph matching: modeling, optimization, and learning. For modeling, graph matching is extended from the typical quadratic assignment problem to a more generic mathematical model by introducing a specific family of separable functions, achieving higher capacity and reliability. In terms of optimization, a novel, highly gradient-efficient, determinant-based regularization technique is proposed, showing strong robustness against outliers. The learning paradigm for graph matching under its intrinsic combinatorial characteristics is then explored. First, a study is conducted on how to bridge the gap between the discrete problem and its continuous approximation under a deep learning framework. The dissertation then investigates the necessity of a more reliable latent graph topology for matching and proposes an effective and flexible framework to obtain it. Coherent findings in this dissertation include theoretical studies and several novel algorithms, with extensive experiments demonstrating their effectiveness.
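For context, the sketch below shows the classical quadratic-assignment formulation of graph matching, max_X tr(XᵀAXB) over permutation matrices, relaxed to doubly-stochastic matrices and solved by projected gradient ascent with Sinkhorn normalization. It is a generic baseline, not the separable-function model or determinant-based regularization proposed in the dissertation.

```python
# Hypothetical sketch of the Koopmans-Beckmann graph matching objective
# max_X tr(X^T A X B), relaxed to doubly-stochastic matrices and optimized by
# projected gradient ascent with Sinkhorn normalization. A generic baseline only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def sinkhorn(log_x, n_iter=50):
    x = np.exp(log_x - log_x.max())                  # positive matrix (stabilized)
    for _ in range(n_iter):
        x = x / x.sum(axis=1, keepdims=True)         # row normalization
        x = x / x.sum(axis=0, keepdims=True)         # column normalization
    return x

def match_graphs(a, b, steps=200, lr=0.1):
    """a, b: adjacency matrices of two graphs with the same number of nodes."""
    n = a.shape[0]
    x = np.full((n, n), 1.0 / n)                     # start at the barycenter
    for _ in range(steps):
        grad = a @ x @ b + a.T @ x @ b.T             # gradient of tr(X^T A X B)
        x = sinkhorn(np.log(x + 1e-12) + lr * grad)  # ascent step, then projection
    row, col = linear_sum_assignment(-x)             # round the relaxation to a permutation
    return col, x
```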
Contributors: Yu, Tianshu (Author) / Li, Baoxin (Thesis advisor) / Wang, Yalin (Committee member) / Yang, Yezhou (Committee member) / Yang, Yingzhen (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
The fast pace of global urbanization makes cities hotspots of population density and anthropogenic activity, leading to intensive emissions of heat and carbon dioxide (CO2), a primary greenhouse gas. Urban climate scientists have been actively seeking effective mitigation strategies over the past decades, aiming to improve the environmental quality for urban dwellers. Prior studies have identified the role of urban green spaces in relieving urban heat stress, yet little effort has been devoted to quantifying their contribution to the local and regional CO2 budget. In fact, urban biogenic CO2 fluxes from photosynthesis and respiration are influenced by the microclimate of the built environment and are sensitive to anthropogenic disturbance. The high complexity of the urban ecosystem poses an outstanding challenge for numerical urban models in disentangling and quantifying the interplay between heat and carbon dynamics. This dissertation aims to advance the simulation of thermal and carbon dynamics in urban land surface models, and to investigate the role of urban greening practices and urban system design in mitigating heat and CO2 emissions. The biogenic CO2 exchange in cities is parameterized by incorporating plant physiological functions into an advanced single-layer urban canopy model of the built environment. The simulation results replicate, with satisfactory accuracy, the microclimate and CO2 flux patterns measured by an eddy covariance system over a residential neighborhood in Phoenix, Arizona. Moreover, the model decomposes the total observed CO2 flux and identifies a significant CO2 efflux from soil respiration. The model is then applied to quantify the impact of urban greening practices on heat and biogenic CO2 exchange over designed scenarios. The results show that urban greenery is effective in mitigating both urban heat and carbon emissions, providing an environmental co-benefit in cities. Furthermore, to seek the optimal urban system design in terms of thermal comfort and CO2 reduction, a multi-objective optimization algorithm is applied to machine learning surrogates of the physical urban land surface model. There are manifest trade-offs among diverse urban environmental indicators despite the co-benefit of urban greening. The findings of this dissertation, along with their implications for urban planning and landscaping management, promote sustainable urban development strategies for achieving optimal environmental quality for policy makers, urban residents, and practitioners.
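As a simplified illustration of how biogenic CO2 exchange can be parameterized from microclimate drivers, the sketch below combines a rectangular-hyperbola light response for canopy uptake with Q10 temperature-dependent respiration. The functional forms and parameter values are assumptions for illustration and do not represent the single-layer urban canopy model developed in the dissertation.

```python
# Hypothetical sketch of a biogenic CO2 flux parameterization for urban
# vegetation: a rectangular-hyperbola light response for canopy uptake plus
# Q10 temperature-dependent respiration. Parameter values are illustrative.
def biogenic_co2_flux(par, t_soil, alpha=0.05, gpp_max=20.0,
                      r_base=2.0, q10=2.0, t_ref=25.0):
    """Net biogenic CO2 flux (umol m-2 s-1); positive = emission to atmosphere."""
    gpp = (alpha * par * gpp_max) / (alpha * par + gpp_max)   # canopy photosynthetic uptake
    respiration = r_base * q10 ** ((t_soil - t_ref) / 10.0)   # soil + plant respiration
    return respiration - gpp

# Midday example: bright sunlight and warm soil -> vegetation offsets respiration.
print(biogenic_co2_flux(par=1500.0, t_soil=30.0))
```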
Contributors: Li, Peiyuan (Author) / Wang, Zhihua (Thesis advisor) / Vivoni, Enrique (Committee member) / Huang, Huei-Ping (Committee member) / Myint, Soe (Committee member) / Xu, Tianfang (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Alzheimer's disease (AD) is a neurodegenerative disease that damages the cognitive abilities of a patient. It is critical to diagnose AD early so that treatment can begin as soon as possible, and this can be done through biomarkers. One such biomarker is the beta-amyloid (Aβ) peptide, which can be quantified using the centiloid (CL) scale. To identify the Aβ biomarker, a deep learning model that captures AD progression by predicting the CL value from brain magnetic resonance images (MRIs) is proposed. Brain MRIs can be obtained through the Alzheimer's Disease Neuroimaging Initiative (ADNI) and Open Access Series of Imaging Studies (OASIS) datasets; however, a single model cannot perform well on both datasets at once. Thus, a regularization-based continuous learning framework is also proposed to perform domain adaptation on the previous model, capturing the latent information about the relationship between Aβ and AD progression within both datasets.
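One common way to realize a regularization-based continual-learning objective like the one described above is an elastic-weight-consolidation-style quadratic penalty. The sketch below applies such a penalty to a centiloid regressor; it is an illustrative assumption, not necessarily the exact framework proposed in the thesis.

```python
# Hypothetical sketch of a regularization-based continual-learning loss in the
# style of elastic weight consolidation (EWC): when fine-tuning a centiloid (CL)
# regressor on a second dataset (e.g., OASIS after ADNI), a quadratic penalty
# anchors the parameters that were important for the first dataset.
import torch

def continual_loss(model, preds, targets, old_params, importance, lam=10.0):
    # Task loss: mean-squared error on the predicted centiloid values.
    task = torch.nn.functional.mse_loss(preds, targets)
    # Regularizer: penalize drift from the parameters learned on the old domain,
    # weighted by a per-parameter importance estimate (e.g., Fisher information).
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return task + lam * penalty
```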
Contributors: Trinh, Matthew Brian (Author) / Wang, Yalin (Thesis advisor) / Liang, Jianming (Committee member) / Su, Yi (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The Population Receptive Field (pRF) model is widely used to predict the location (retinotopy) and size of receptive fields in visual space. Doing so allows for the creation of a mapping from locations in the visual field to the associated groups of neurons in the visual cortex of the brain. However, using the pRF model is very time-consuming. Past research has focused on creating Convolutional Neural Networks (CNNs) to mimic the pRF model in a fraction of the time, and they have worked well under highly controlled conditions. However, these models have not been thoroughly tested on real human data. This thesis focused on adapting one of these CNNs to accurately predict the retinotopy of a real human subject using a dataset from the Human Connectome Project. The results show promise towards creating a fully functioning CNN, but they also expose new challenges that must be overcome before the model can be used to predict the retinotopy of new human subjects.
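For reference, the forward pRF model that such a CNN is meant to approximate predicts each voxel's response as the overlap between the stimulus aperture and an isotropic 2-D Gaussian receptive field. The sketch below implements that forward step (omitting hemodynamic convolution), with the grid extent and parameters as illustrative assumptions.

```python
# Hypothetical sketch of the forward pRF model: each voxel's response at time t
# is the overlap of the binary stimulus aperture with an isotropic 2-D Gaussian
# receptive field at (x0, y0) with size sigma (HRF convolution omitted here).
import numpy as np

def prf_timecourse(stimulus, x0, y0, sigma, extent=10.0):
    """stimulus: (n_frames, H, W) binary apertures covering +/- extent degrees."""
    n_frames, height, width = stimulus.shape
    xs = np.linspace(-extent, extent, width)
    ys = np.linspace(-extent, extent, height)
    X, Y = np.meshgrid(xs, ys)
    rf = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * sigma ** 2))
    rf /= rf.sum()                                         # normalize the receptive field
    return stimulus.reshape(n_frames, -1) @ rf.ravel()     # aperture/RF overlap per frame
```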
Contributors: Burgard, Braeden (Author) / Wang, Yalin (Thesis director) / Ta, Duyan (Committee member) / Barrett, The Honors College (Contributor) / School of International Letters and Cultures (Contributor) / Computer Science and Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2022-05
Description
Since the 20th century, Arizona has undergone shifts in agricultural practices driven by urban expansion and crop irrigation regulations. These changes present environmental challenges, altering atmospheric processes and influencing climate dynamics. Given the potential threats of climate change and drought to water availability for agriculture, further modifications in the agricultural landscape are expected. To understand these land use changes and their impact on carbon dynamics, our study quantified aboveground carbon storage in both cultivated and abandoned agricultural fields. To accomplish this, we employed Python and various geospatial libraries in Jupyter Notebook files for thorough dataset assembly and for visual and quantitative analysis. We focused on nine counties known for high cultivation levels, primarily located in the lower latitudes of Arizona. Our analysis investigated carbon dynamics across abandoned and actively cultivated croplands as well as neighboring uncultivated land, for which we estimated the extent, and we compared these trends with those observed in developed land areas. The findings revealed a hierarchy in aboveground carbon storage, with currently cultivated lands having the lowest levels, followed by abandoned croplands and then uncultivated wilderness; wilderness areas, however, exhibited greater county-to-county variation in carbon storage than cultivated and abandoned lands. Developed lands ranked highest in aboveground carbon storage, with the highest median value. Despite county-wide variations, abandoned croplands generally contained more carbon than currently cultivated areas, and adjacent wilderness lands contained even more than both. This trend suggests that cultivating croplands in the region reduces aboveground carbon stores, while abandonment allows for some replenishment, though only to a limited extent. Enhancing carbon stores in Arizona can be achieved through active restoration efforts on abandoned cropland. By promoting native plant regeneration and boosting aboveground carbon levels, such measures are crucial for improving carbon sequestration, and we strongly advocate implementing them to facilitate the regrowth of native plants and enhance overall carbon storage in the region.
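A minimal sketch of the kind of per-class summary used in such an analysis is shown below: given co-registered aboveground-carbon and land-use rasters, it computes the median carbon density for each land-use category with NumPy. The class codes, array names, and nodata value are illustrative assumptions rather than the study's actual workflow.

```python
# Hypothetical sketch of a zonal summary: median aboveground carbon per
# land-use class from two co-registered rasters. Class codes are illustrative.
import numpy as np

LAND_USE_CLASSES = {1: "cultivated", 2: "abandoned cropland",
                    3: "uncultivated wilderness", 4: "developed"}

def median_carbon_by_class(carbon, land_use, nodata=-9999):
    """carbon, land_use: 2-D arrays of equal shape on the same grid."""
    summary = {}
    for code, name in LAND_USE_CLASSES.items():
        mask = (land_use == code) & (carbon != nodata)    # pixels of this class with valid data
        if mask.any():
            summary[name] = float(np.median(carbon[mask]))
    return summary
```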
Contributors: Goodwin, Emily (Author) / Eikenberry, Steffen (Thesis director) / Kuang, Yang (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor)
Created: 2024-05