Matching Items (109)
Description
Infrastructure systems are facing non-stationary challenges that stem from climate change and the increasingly complex interactions among social, ecological, and technological systems (SETSs). It is crucial for transportation infrastructure—which enables residents to access opportunities and fosters prosperity, quality of life, and social connections—to be resilient under these non-stationary challenges. Vulnerability assessment (VA) examines the potential consequences a system is likely to experience due to exposure to perturbations or stressors and a lack of capacity to adapt. Post-fire debris flow and heat represent particularly challenging problems for infrastructure and users in the arid U.S. West. Post-fire debris flow, which emerges in the wake of heat and drought, produces powerful runoff that threatens physical transportation infrastructure, and heat waves have devastating health effects on transportation infrastructure users, including increased mortality rates. VA anticipates the potential consequences of these perturbations and enables infrastructure stakeholders to improve the system's resilience. Current transportation climate VA—which considers only a single direct climate stressor acting on the infrastructure—falls short of addressing the wildfire and heat challenges. This work proposes advanced transportation climate VA methods that address multiple, interacting climate stressors as well as the vulnerability of infrastructure users. Two specific regions were chosen to carry out the proposed transportation climate VA: 1) the California transportation networks' vulnerability to post-fire debris flows, and 2) transportation infrastructure users' vulnerability to heat exposure in Phoenix.
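To make the network-level notion of vulnerability concrete, here is a minimal, illustrative sketch (not the dissertation's method): it scores road links by a toy product of debris-flow exposure and network criticality. The graph, the hazard values, and the use of edge betweenness as a consequence proxy are all assumptions.

```python
# Minimal sketch, assuming a hypothetical five-link road network with
# made-up debris-flow hazard scores; edge betweenness centrality serves
# as a crude proxy for the consequence of losing a link.
import networkx as nx

G = nx.Graph()
edges = [("A", "B", 0.8), ("B", "C", 0.3), ("A", "C", 0.1),
         ("C", "D", 0.9), ("B", "D", 0.5)]
for u, v, hazard in edges:
    G.add_edge(u, v, hazard=hazard)

# Consequence proxy: how much shortest-path traffic each link carries.
crit = {frozenset(e): c
        for e, c in nx.edge_betweenness_centrality(G).items()}

# Toy vulnerability = exposure x consequence.
scores = {(u, v): G.edges[u, v]["hazard"] * crit[frozenset((u, v))]
          for u, v in G.edges}
for (u, v), s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"link {u}-{v}: vulnerability {s:.3f}")
```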
ContributorsLi, Rui (Author) / Chester, Mikhail V. (Thesis advisor) / Middel, Ariane (Committee member) / Hondula, David M. (Committee member) / Pendyala, Ram (Committee member) / Arizona State University (Publisher)
Created2022
Description
Structural Magnetic Resonance Imaging analysis is a vital component in the study of Alzheimer's Disease pathology, and several techniques have been developed in the existing research. In particular, volumetric approaches in this field are known to be beneficial due to their increased capability to express morphological characteristics compared to manifold methods. To advance the field, this paper proposes an intrinsic volumetric conic system that can be applied to bounded volumetric meshes to enable a more effective study of subjects. The computation of the metric involves the use of heat kernel theory and conformal parameterization on genus-0 surfaces, extended to a volumetric domain. Additionally, this paper explores the use of the 'TetCNN' architecture for the classification of hippocampal tetrahedral meshes to detect features that correspond to Alzheimer's indicators. The model tested achieved remarkable results, with a measured classification accuracy above 90% in the task of differentiating between subjects diagnosed with Alzheimer's and normal control subjects.
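As a hedged illustration of the heat-kernel machinery mentioned above: given a Laplacian and its low eigenpairs, a multi-scale heat-kernel descriptor can be computed per vertex. The sketch below uses a toy point-cloud Laplacian in place of the paper's volumetric FEM operator and conformal parameterization; all sizes and parameters are assumptions.

```python
# Minimal sketch, assuming a toy point-cloud Laplacian stands in for the
# paper's volumetric FEM operator. Heat-kernel descriptor per vertex:
# hks(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2.
import numpy as np
from scipy.sparse import csgraph
from scipy.linalg import eigh

rng = np.random.default_rng(0)
pts = rng.random((200, 3))                       # stand-in "mesh" vertices
d2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)   # pairwise squared distances
W = np.exp(-d2 / 0.05) * (d2 < 0.05)             # local Gaussian affinities
np.fill_diagonal(W, 0.0)
L = csgraph.laplacian(W)                         # toy Laplacian (dense here)

# Low eigenpairs carry the coarse geometry; large meshes would use a
# sparse solver (scipy.sparse.linalg.eigsh) instead of a dense eigh.
lam, phi = eigh(L)
lam, phi = lam[:20], phi[:, :20]

def hks(t):
    """Heat kernel signature at diffusion time t."""
    return (np.exp(-lam * t) * phi**2).sum(axis=1)

# Multi-scale descriptor: one column per diffusion time.
descriptor = np.stack([hks(t) for t in (0.1, 1.0, 10.0)], axis=1)
print(descriptor.shape)                          # (200, 3)
```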
ContributorsGeorge, John Varghese (Author) / Wang, Yalin (Thesis advisor) / Hansford, Dianne (Committee member) / Gupta, Vikash (Committee member) / Arizona State University (Publisher)
Created2023
Description
This work presents a thorough analysis of reconstruction of global wave fields (governed by the inhomogeneous wave equation and the Maxwell vector wave equation) from sensor time series data of the wave field. Three major problems are considered. First, an analysis of circumstances under which wave fields can be fully reconstructed from a network of fixed-location sensors is presented. It is proven that, in many cases, wave fields can be fully reconstructed from a single sensor, but that such reconstructions can be sensitive to small perturbations in sensor placement. Generally, multiple sensors are necessary. The next problem considered is how to obtain a global approximation of an electromagnetic wave field in the presence of an amplifying noisy current density from sensor time series data. This type of noise, described in terms of a cylindrical Wiener process, creates a nonequilibrium system, derived from Maxwell's equations, where variance increases with time. In this noisy system, longer observation times do not generally provide more accurate estimates of the field coefficients. The mean squared error of the estimates can be decomposed into a sum of the squared bias and the variance. As the observation time τ increases, the bias decreases as O(1/τ) but the variance increases as O(τ). The contrasting time scales imply the existence of an "optimal" observing time (the bias-variance tradeoff). An iterative algorithm is developed to construct global approximations of the electric field using the optimal observing times. Lastly, the effect of sensor acceleration is considered. When the sensor location is fixed, measurements of wave fields composed of plane waves are almost periodic and so can be written in terms of a standard Fourier basis. When the sensor is accelerating, the resulting time series is no longer almost periodic. This phenomenon is related to the Doppler effect, where a time transformation must be performed to obtain the frequency and amplitude information from the time series data. To obtain frequency and amplitude information from accelerating sensor time series data in a general inhomogeneous medium, a randomized algorithm is presented. The algorithm is analyzed and example wave fields are reconstructed.
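The stated O(1/τ) bias and O(τ) variance scalings make the tradeoff easy to illustrate: writing MSE(τ) = (a/τ)² + bτ for some constants a and b (hypothetical here; the abstract gives only the scalings), setting the derivative to zero yields τ* = (2a²/b)^(1/3). A minimal numeric check:

```python
# Minimal numeric check of the tradeoff, with hypothetical constants a, b
# (the abstract specifies only the O(1/tau) and O(tau) scalings).
from scipy.optimize import minimize_scalar

a, b = 2.0, 0.5                          # made-up bias/variance constants

def mse(tau):
    return (a / tau) ** 2 + b * tau      # squared bias + variance

# Closed form: -2 a^2 / tau^3 + b = 0  =>  tau* = (2 a^2 / b)^(1/3)
tau_star = (2 * a**2 / b) ** (1 / 3)
res = minimize_scalar(mse, bounds=(1e-3, 1e3), method="bounded")
print(f"analytic tau* = {tau_star:.4f}, numeric tau* = {res.x:.4f}")
```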
ContributorsBarclay, Bryce Matthew (Author) / Mahalov, Alex (Thesis advisor) / Kostelich, Eric J (Thesis advisor) / Moustaoui, Mohamed (Committee member) / Motsch, Sebastien (Committee member) / Platte, Rodrigo (Committee member) / Arizona State University (Publisher)
Created2023
Description

Climate is a critical determinant of agricultural productivity, and the ability to accurately predict this productivity is necessary to provide guidance regarding food security and agricultural management. Previous predictions vary in approach due to the myriad factors influencing agricultural productivity but generally suggest long-term declines in productivity and agricultural land suitability under climate change. In this paper, I relate predicted climate changes to yield for three major United States crops, namely corn, soybeans, and wheat, using a moderate emissions scenario. Adopting a data-driven approach, I used three machine learning methods: random forest (RF), extreme gradient boosting (XGB), and artificial neural networks (ANN), performing both a comparative analysis and an ensemble methodology. I omitted the western US due to the region's susceptibility to water stress and the prevalence of artificial irrigation as a means to compensate for dry conditions. Considering climate alone, the model's results suggest an ensemble-mean decline in crop yield of 23.4% for corn, 19.1% for soybeans, and 7.8% for wheat between 2017 and 2100. These results emphasize the potential negative impacts of climate change on the current agricultural industry as a result of shifting bioclimatic conditions.
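A minimal sketch of the comparative-plus-ensemble setup described above, on synthetic data rather than the paper's climate and yield records; sklearn's GradientBoostingRegressor stands in for XGBoost to keep the example dependency-light, and all hyperparameters are assumptions.

```python
# Minimal sketch of the RF/boosting/ANN ensemble on synthetic data;
# GradientBoostingRegressor stands in for XGBoost (xgboost.XGBRegressor)
# to keep the example dependency-light. Hyperparameters are assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=10.0,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [
    RandomForestRegressor(n_estimators=200, random_state=0),
    GradientBoostingRegressor(random_state=0),     # XGBoost stand-in
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
]
preds = np.stack([m.fit(X_tr, y_tr).predict(X_te) for m in models])

# Ensemble mean of the three predictors, as in the abstract's methodology.
ens = preds.mean(axis=0)
for name, p in zip(("RF", "GB", "ANN", "ensemble"), (*preds, ens)):
    rmse = np.sqrt(((p - y_te) ** 2).mean())
    print(f"{name}: RMSE {rmse:.2f}")
```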

ContributorsSwarup, Shray (Author) / Eikenberry, Steffen (Thesis director) / Mahalov, Alex (Committee member) / Barrett, The Honors College (Contributor) / Computer Science and Engineering Program (Contributor)
Created2023-05
Description
Communicating with computers through thought has been a remarkable achievement of recent years, made possible by the use of Electroencephalography (EEG). Brain-computer interfaces (BCIs) rely heavily on EEG signals for communication between humans and computers. With the advent of deep learning, many recent studies have applied these techniques to EEG data to perform various tasks such as emotion recognition, motor imagery classification, and sleep analysis. Despite the rise of interest in EEG signal classification, very few studies have explored the MindBigData dataset, which collects EEG signals recorded at the stimulus of seeing a digit and thinking about it. This dataset takes us closer to realizing the idea of mind reading, or communication via thought, yet classifying these signals into the digit the user is thinking about is a challenging task, which serves as the motivation to study this dataset and apply existing deep learning techniques to it. Given the recent success of the transformer architecture in domains like Computer Vision and Natural Language Processing, this thesis studies the transformer architecture for EEG signal classification and also explores other deep learning techniques for the same task. As a result, the proposed classification pipeline achieves performance comparable with existing methods.
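A hedged sketch of what a transformer encoder for EEG classification can look like, in the spirit of (not identical to) the thesis's pipeline; the channel count, window length, and absence of positional encodings are simplifying assumptions.

```python
# Minimal sketch, assuming 14-channel EEG windows of 256 samples and
# 10 digit classes; a real model would add positional encodings and
# train on MindBigData rather than random tensors.
import torch
import torch.nn as nn

class EEGTransformer(nn.Module):
    def __init__(self, n_channels=14, n_classes=10, d_model=64,
                 nhead=4, num_layers=2):
        super().__init__()
        # Project each time step's channel vector into the model dimension.
        self.embed = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        z = self.encoder(self.embed(x))
        return self.head(z.mean(dim=1))   # mean-pool over time steps

model = EEGTransformer()
dummy = torch.randn(8, 256, 14)           # 8 windows, 256 samples, 14 ch
print(model(dummy).shape)                 # torch.Size([8, 10])
```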
ContributorsMuglikar, Omkar Dushyant (Author) / Wang, Yalin (Thesis advisor) / Liang, Jianming (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created2021
Description
Statistical Shape Modeling is widely used to study the morphometrics of deformable objects in computer vision and biomedical studies. There are two main viewpoints from which to understand a shape: on one hand, the outer surface can be taken as a two-dimensional embedding in space; on the other hand, the outer surface along with its enclosed internal volume can be taken as a three-dimensional embedding of interest. Most studies focus on the surface-based perspective by leveraging the intrinsic features on the tangent plane, but a two-dimensional model may fail to fully represent the realistic properties of shapes with both intrinsic and extrinsic properties. In this thesis, several Stochastic Partial Differential Equations (SPDEs) are thoroughly investigated, and several methods originating from these SPDEs are proposed to address both two-dimensional and three-dimensional shape analyses. The unique physical meanings of these SPDEs inspired the features, shape descriptors, metrics, and kernels found in this series of works. Initially, the generation of high-dimensional shape data, here tetrahedral meshes, is introduced: the cerebral cortex is taken as the study target, and an automatic pipeline for generating the gray matter tetrahedral mesh is presented. Then, a discretized Laplace-Beltrami operator (LBO) and a Hamiltonian operator (HO) in the tetrahedral domain are derived with the Finite Element Method (FEM), and two high-dimensional shape descriptors are defined based on the solutions of the heat equation and Schrödinger's equation. Considering that high-dimensional shape models usually contain massive redundancy, and that many applications demand effective landmarks, Gaussian process landmarking on tetrahedral meshes is further studied, using a SIWKS-based metric space to define a geometry-aware Gaussian process. The study of the periodic potential diffusion process further inspired a new kernel called the geometry-aware convolutional kernel. A series of Bayesian learning methods are then introduced to tackle shape retrieval and classification, and experiments for each component are demonstrated. From popular SPDEs such as the heat equation and Schrödinger's equation to the general potential diffusion equation and the specific periodic potential diffusion equation, this work clearly shows that classical SPDEs play an important role in discovering new features, metrics, shape descriptors, and kernels. I hope this thesis can serve as an example of using interdisciplinary knowledge to solve problems.
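As a hedged illustration of the Gaussian-process landmarking idea, the sketch below greedily selects points of maximal GP posterior variance; a plain RBF kernel on toy 3-D points stands in for the thesis's SIWKS-based geometry-aware kernel, so this is the generic technique, not the thesis's method.

```python
# Minimal sketch, assuming a plain RBF kernel on toy 3-D points in place
# of the thesis's SIWKS-based geometry-aware kernel: landmarks are chosen
# greedily at the point of maximal Gaussian-process posterior variance.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((300, 3))                     # stand-in mesh vertices
d2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)
K = np.exp(-d2 / 0.1)                          # RBF kernel (SIWKS stand-in)

def gp_landmarks(K, n_landmarks, jitter=1e-8):
    """Greedy GP landmarking: repeatedly pick the max-variance point."""
    chosen = []
    var = K.diagonal().copy()                  # prior variance
    for _ in range(n_landmarks):
        chosen.append(int(np.argmax(var)))
        S = np.array(chosen)
        # Posterior variance diag(K - K_s^T K_ss^{-1} K_s) after
        # conditioning on the landmarks selected so far.
        Kss = K[np.ix_(S, S)] + jitter * np.eye(len(S))
        Ks = K[S]                              # (len(S), n)
        var = K.diagonal() - (Ks * np.linalg.solve(Kss, Ks)).sum(axis=0)
    return chosen

print(gp_landmarks(K, 10))
```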
ContributorsFan, Yonghui (Author) / Wang, Yalin (Thesis advisor) / Lepore, Natasha (Committee member) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2021
Description
Machine learning (ML) and deep learning (DL) have become an intrinsic part of multiple fields, and their ability to solve complex problems makes machine learning seem like a panacea. In the last few years, there has been an explosion of data generation, which has greatly improved machine learning models, but at the cost of heavy computation, which invariably increases power usage and hardware cost. In this thesis we explore applications of ML techniques, applied to two completely different fields - arts, media, and theater on the one hand, and urban climate research on the other - using low-cost and low-powered edge devices. The multi-modal chatbot uses different machine learning techniques, natural language processing (NLP) and computer vision (CV), to understand the user's inputs and accordingly perform in the play and interact with the audience. The system is also equipped with other interactive hardware setups, such as movable LED systems; together they provide an experiential theatrical play tailored to each user. I will discuss how I used edge devices to achieve this AI system, which has created a new genre of theatrical play. I will then discuss MaRTiny, an AI-based bio-meteorological system that calculates mean radiant temperature (MRT), an important parameter for urban climate research. It is also equipped with a vision system that performs machine learning tasks such as pedestrian and shade detection. The entire system costs around $200 and can potentially replace an existing setup worth $20,000. I will further discuss how I used machine learning methods to overcome the inaccuracies in the MRT values produced by the system. Although these projects belong to two very different fields, both are implemented using edge devices and use similar ML techniques. In this thesis I will detail the techniques shared between the two projects and how they can be used in several other applications on edge devices.
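A purely illustrative sketch of the kind of ML correction alluded to for MaRTiny's MRT values: learn a mapping from the low-cost system's raw estimate plus weather features to a reference MRT. The data, features, and model choice are hypothetical, not the thesis's actual pipeline.

```python
# Minimal sketch, assuming synthetic sensor/weather data; the real
# MaRTiny correction pipeline and its features may differ.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
air_temp = rng.uniform(25, 45, n)              # deg C
solar = rng.uniform(0, 1000, n)                # W/m^2
wind = rng.uniform(0, 5, n)                    # m/s

# Low-cost system's raw MRT estimate, with noise and a systematic bias
# relative to a hypothetical reference instrument.
raw_mrt = air_temp + 0.030 * solar - 2.0 * wind + rng.normal(0, 2, n)
ref_mrt = air_temp + 0.035 * solar - 1.5 * wind

# Learn the correction: raw estimate + weather features -> reference MRT.
X = np.column_stack([raw_mrt, air_temp, solar, wind])
model = RandomForestRegressor(n_estimators=200, random_state=0)
mae = -cross_val_score(model, X, ref_mrt, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE: {mae:.2f} deg C")
```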
ContributorsKulkarni, Karthik Kashinath (Author) / Jayasuriya, Suren (Thesis advisor) / Middel, Ariane (Thesis advisor) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created2021
Description
Graph matching is a fundamental but notoriously difficult problem due to its NP-hard nature, and serves as a cornerstone for a series of applications in machine learning and computer vision, such as image matching, dynamic routing, and drug design, to name a few. Although there has been massive prior investigation of high-performance graph matching solvers, it remains challenging to tackle the matching problem under real-world scenarios with severe graph uncertainty (e.g., noise, outliers, misleading or ambiguous links). In this dissertation, the main focus is to investigate the essence of, and propose solutions to, graph matching with higher reliability under such uncertainty. To this end, the proposed research was conducted from three perspectives related to reliable graph matching: modeling, optimization, and learning. For modeling, graph matching is extended from the typical quadratic assignment problem to a more generic mathematical model by introducing a specific family of separable functions, achieving higher capacity and reliability. In terms of optimization, a novel, highly gradient-efficient determinant-based regularization technique is proposed, showing high robustness against outliers. The learning paradigm for graph matching under intrinsic combinatorial characteristics is then explored. First, a study is conducted on filling the gap between the discrete problem and its continuous approximation under a deep learning framework. The dissertation then investigates the necessity of a more reliable latent topology of graphs for matching, and proposes an effective and flexible framework to obtain it. Coherent findings in this dissertation include theoretical studies and several novel algorithms, with rich experiments demonstrating their effectiveness.
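For context, a hedged sketch of the baseline quadratic-assignment formulation that the dissertation generalizes: maximize vec(X)ᵀ K vec(X) over permutation matrices, relaxed to doubly-stochastic matrices via Sinkhorn normalization and rounded with the Hungarian algorithm. The affinity matrix is random here; this is the textbook relaxation, not the thesis's solver.

```python
# Minimal sketch of the classic QAP relaxation (not the thesis's solver):
# maximize vec(X)^T K vec(X), relax X to doubly-stochastic via Sinkhorn,
# round with the Hungarian algorithm. K is a random symmetric affinity.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 6
K = rng.random((n * n, n * n))
K = (K + K.T) / 2                           # symmetric pairwise affinities

def sinkhorn(M, iters=20):
    """Alternately normalize rows/columns toward a doubly-stochastic matrix."""
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)
        M = M / M.sum(axis=0, keepdims=True)
    return M

X = np.full((n, n), 1.0 / n)                # uniform doubly-stochastic start
for _ in range(50):                         # projected power iteration
    grad = (K @ X.reshape(-1)).reshape(n, n)
    X = sinkhorn(np.exp(grad / grad.max())) # softassign-style update

row, col = linear_sum_assignment(-X)        # round to the nearest permutation
P = np.zeros((n, n))
P[row, col] = 1.0
print("QAP score:", P.reshape(-1) @ K @ P.reshape(-1))
print("matching:", dict(zip(row.tolist(), col.tolist())))
```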
ContributorsYu, Tianshu (Author) / Li, Baoxin (Thesis advisor) / Wang, Yalin (Committee member) / Yang, Yezhou (Committee member) / Yang, Yingzhen (Committee member) / Arizona State University (Publisher)
Created2021
Description
Alzheimer's disease (AD) is a neurodegenerative disease that damages the cognitive abilities of a patient. It is critical to diagnose AD early so that treatment can begin as soon as possible, which can be done through biomarkers. One such biomarker is the beta-amyloid (Aβ) peptide, which can be quantified using the centiloid (CL) scale. To identify the Aβ biomarker, a deep learning model is proposed that models AD progression by predicting the CL value for brain magnetic resonance images (MRIs). Brain MRI images can be obtained through the Alzheimer's Disease Neuroimaging Initiative (ADNI) and Open Access Series of Imaging Studies (OASIS) datasets; however, a single model cannot perform well on both datasets at once. Thus, a regularization-based continual learning framework is also proposed to perform domain adaptation on the previous model, capturing the latent information about the relationship between Aβ and AD progression within both datasets.
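A hedged sketch of regularization-based continual learning in the spirit of the proposed framework: when fine-tuning on the second dataset, add an L2 penalty pulling parameters back toward the weights learned on the first. The architecture, data, and penalty weight are toy assumptions.

```python
# Minimal sketch, assuming a toy regressor and random tensors in place of
# the thesis's CL-prediction network and the ADNI/OASIS MRIs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

# Frozen copy of the weights learned on the first dataset ("ADNI" here);
# this anchor is what the regularizer pulls the parameters back toward.
anchor = [p.detach().clone() for p in model.parameters()]

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1.0                                            # penalty strength
x, y = torch.randn(128, 32), torch.randn(128, 1)     # toy "OASIS" batch

for _ in range(100):
    opt.zero_grad()
    task_loss = nn.functional.mse_loss(model(x), y)
    # L2 drift penalty: retain dataset-1 knowledge while adapting.
    drift = sum(((p - a) ** 2).sum()
                for p, a in zip(model.parameters(), anchor))
    (task_loss + lam * drift).backward()
    opt.step()

print(float(task_loss))
```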
ContributorsTrinh, Matthew Brian (Author) / Wang, Yalin (Thesis advisor) / Liang, Jianming (Committee member) / Su, Yi (Committee member) / Arizona State University (Publisher)
Created2022