Matching Items (167)
Description
The retinotopic map, the mapping between visual inputs on the retina and neuronal activation in the visual areas of the brain, is one of the central topics in visual neuroscience. For human observers, the map is typically obtained by analyzing functional magnetic resonance imaging (fMRI) signals of cortical responses to slowly moving visual stimuli on the retina. Biological evidence shows that retinotopic mapping is topology-preserving (topological) within each visual region, i.e., neighboring relationships on the retina are preserved after processing in the brain. Unfortunately, due to the limited spatial resolution and signal-to-noise ratio of fMRI, state-of-the-art retinotopic maps are not topological. This work models the topology-preserving condition mathematically, corrects non-topological retinotopic maps with numerical methods, and thereby improves the quality of retinotopic maps. Imposing the topological condition benefits several applications: topological retinotopic maps offer better insight into human retinotopy, including more accurate quantification of the cortical magnification factor, more precise descriptions of retinotopic maps, and potentially better examination procedures in the ophthalmology clinic. (A short code sketch of the topology-preserving check follows this entry.)
ContributorsTu, Yanshuai (Author) / Wang, Yalin (Thesis advisor) / Lu, Zhong-Lin (Committee member) / Crook, Sharon (Committee member) / Yang, Yezhou (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created2022
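As a rough illustration of the topology-preserving condition discussed in the entry above: a discrete retinotopic map is fold-free when every triangle of the cortical mesh keeps the same orientation after being carried into visual-field coordinates. The sketch below is a minimal, hypothetical check along those lines, not the dissertation's actual model; the mesh, coordinates, and function names are illustrative.

```python
# Minimal sketch: a piecewise-linear map is topology-preserving (orientation-
# preserving, fold-free) if no triangle's signed area changes sign between the
# cortical domain and the visual-field image.
import numpy as np

def signed_areas(uv, triangles):
    """Signed area of each triangle, for 2-D vertex coordinates uv (n_vertices, 2)."""
    a, b, c = uv[triangles[:, 0]], uv[triangles[:, 1]], uv[triangles[:, 2]]
    e1, e2 = b - a, c - a
    return 0.5 * (e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0])

def is_topological(domain_uv, visual_uv, triangles, eps=1e-12):
    """True if the map domain_uv -> visual_uv flips or degenerates no triangle."""
    return bool(np.all(signed_areas(domain_uv, triangles) *
                       signed_areas(visual_uv, triangles) > eps))

# Toy example: a square split into two triangles; the second map folds one triangle.
tris = np.array([[0, 1, 2], [0, 2, 3]])
cortex = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
good = cortex.copy()                         # identity map: topological
bad = cortex.copy()
bad[3] = [0.9, 0.1]                          # vertex dragged across the diagonal: a fold
print(is_topological(cortex, good, tris))    # True
print(is_topological(cortex, bad, tris))     # False
```

On a measured map, triangles that fail this sign test are exactly the folded regions a topology-correcting method would need to repair.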
Description
Neural tissue is a delicate system composed of neurons and their synapses, glial cells for support, and vasculature for oxygen and nutrient delivery. This complexity ultimately gives rise to the human brain, a system researchers have become increasingly interested in replicating for artificial intelligence purposes. Some have even gone so far as to use neuronal cultures as computing hardware, but utilizing an environment closer to a living brain means having to grapple with the same issues faced by clinicians and researchers trying to treat brain disorders. Most outstanding among these are the problems that arise with invasive interfaces. Optical techniques that use fluorescent dyes and proteins have emerged as a solution for noninvasive imaging with single-cell resolution in vitro and in vivo, but feeding in information in the form of neuromodulation still requires implanted electrodes. The implantation process of these electrodes damages nearby neurons and their connections, causes hemorrhaging, and leads to scarring and gliosis that diminish efficacy. Here, a new approach for noninvasive neuromodulation with high spatial precision is described. It combines ultrasound (high-frequency acoustic energy that can be focused to submillimeter regions at significant depths) with electric fields (an effective tool for neuromodulation that lacks spatial precision when applied noninvasively). The hypothesis is that, when combined in a specific manner, these will lead to nonlinear effects at neuronal membranes that cause only cells in the region of overlap to be stimulated (a toy illustration of this mixing idea follows this entry). Computational modeling confirmed this combination to be uniquely stimulating, contingent on certain physical effects of ultrasound on cell membranes. Subsequent in vitro experiments led to inconclusive results, however, leaving the door open for future experimentation with modified configurations and approaches. The specific combination explored here is also not the only untested technique that may achieve a similar goal.
ContributorsNester, Elliot (Author) / Wang, Yalin (Thesis advisor) / Muthuswamy, Jitendran (Committee member) / Towe, Bruce (Committee member) / Arizona State University (Publisher)
Created2022
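The entry above hypothesizes that a nonlinear membrane response to overlapping ultrasound and electric-field drives can confine stimulation to the region of overlap. The script below is only a generic intermodulation toy under an assumed quadratic nonlinearity, not the dissertation's biophysical model: two high-frequency drives produce a low-frequency difference component only when both are present, which is the flavor of effect being exploited.

```python
# Toy demonstration (assumed quadratic nonlinearity, arbitrary frequencies):
# a nonlinear element mixes two nearby high-frequency drives into a
# low-frequency difference tone that appears only where both drives overlap.
import numpy as np

fs = 10_000_000                              # 10 MHz sampling rate
t = np.arange(0, 0.01, 1 / fs)               # 10 ms window
f_us, f_ef = 500_000.0, 501_000.0            # ultrasound-like and field-like carriers (Hz)
us = np.sin(2 * np.pi * f_us * t)
ef = np.sin(2 * np.pi * f_ef * t)

def low_freq_power(drive, cutoff=5_000.0):
    """Power of the below-cutoff part of a weakly quadratic response to `drive`."""
    response = drive + 0.1 * drive**2        # linear term plus weak quadratic term
    spectrum = np.fft.rfft(response)
    freqs = np.fft.rfftfreq(len(response), 1 / fs)
    band = (freqs > 0) & (freqs < cutoff)
    return np.sum(np.abs(spectrum[band]) ** 2)

print(low_freq_power(us))                    # ultrasound alone: essentially zero
print(low_freq_power(ef))                    # field alone: essentially zero
print(low_freq_power(us + ef))               # overlap: strong 1 kHz difference tone
```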
Description
Beta-amyloid (Aβ) plaques and tau protein tangles in the brain are now widely recognized as the defining hallmarks of Alzheimer's disease (AD), followed by structural atrophy detectable on brain magnetic resonance imaging (MRI) scans. However, current methods to detect Aβ/tau pathology are either invasive (lumbar puncture) or quite costly and not widely available (positron emission tomography (PET)). The hippocampus is one of the regions most affected by neurodegeneration, and the influence of Aβ/tau on it has been a central focus in research on AD pathophysiological progression. In this dissertation, I propose three novel machine learning and statistical models to examine subtle aspects of hippocampal morphometry from MRI that are associated with Aβ/tau burden in the brain, measured using PET images. The first is a novel unsupervised feature reduction model that generates a low-dimensional representation of hippocampal morphometry for each individual subject and has superior performance in predicting Aβ/tau burden in the brain. The second is an efficient federated group lasso model that identifies the hippocampal subregions where atrophy is strongly associated with abnormal Aβ/tau (a minimal sketch of the group-sparsity idea follows this entry). The last is a federated model for imaging genetics, which can identify genetic and transcriptomic influences on hippocampal morphometry. Finally, I present the results of these three models, which have been published in or submitted to peer-reviewed conferences and journals.
ContributorsWu, Jianfeng (Author) / Wang, Yalin (Thesis advisor) / Li, Baoxin (Committee member) / Liang, Jianming (Committee member) / Wang, Junwen (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created2022
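The second model in the entry above combines federated learning with a group lasso penalty. The sketch below is a minimal, assumed illustration of that combination on synthetic data, not the dissertation's algorithm: each simulated site shares only a local gradient, a coordinator averages the gradients, and the group-lasso proximal step zeroes out whole feature groups (standing in for hippocampal subregions unrelated to the target).

```python
# Minimal sketch: federated proximal-gradient group lasso on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_per_site, n_groups, group_size = 3, 60, 10, 5
p = n_groups * group_size
w_true = np.zeros(p)
w_true[:2 * group_size] = 1.0                       # only the first 2 groups matter

# Local datasets held at each simulated site (never pooled).
X = [rng.normal(size=(n_per_site, p)) for _ in range(n_sites)]
y = [Xi @ w_true + 0.1 * rng.normal(size=n_per_site) for Xi in X]

def block_soft_threshold(w, thr):
    """Proximal operator of the group lasso penalty (one L2 norm per group)."""
    blocks = w.reshape(n_groups, group_size)
    norms = np.linalg.norm(blocks, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thr / np.maximum(norms, 1e-12))
    return (scale * blocks).ravel()

w, step, lam = np.zeros(p), 0.1, 0.5
for _ in range(300):
    # Each site computes a gradient on its own data; only gradients are shared.
    grads = [Xi.T @ (Xi @ w - yi) / n_per_site for Xi, yi in zip(X, y)]
    w = block_soft_threshold(w - step * np.mean(grads, axis=0), step * lam)

active = [g for g in range(n_groups)
          if np.linalg.norm(w[g * group_size:(g + 1) * group_size]) > 1e-6]
print("active groups:", active)                     # expected: [0, 1]
```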
Description
Structural magnetic resonance imaging analysis is a vital component in the study of Alzheimer's disease pathology, and several techniques already exist in the literature. In particular, volumetric approaches in this field are known to be beneficial because of their increased capability to express morphological characteristics compared with manifold (surface-based) methods. To advance the field, this paper proposes an intrinsic volumetric conic system that can be applied to bounded volumetric meshes to enable more effective study of subjects. The computation of the metric uses heat kernel theory and conformal parameterization on genus-0 surfaces, extended to a volumetric domain (a small heat-kernel sketch follows this entry). Additionally, this paper explores the use of the 'TetCNN' architecture for classifying hippocampal tetrahedral meshes to detect features that correspond to Alzheimer's indicators. The tested model achieved a measured classification accuracy above 90% in differentiating between subjects diagnosed with Alzheimer's and normal control subjects.
ContributorsGeorge, John Varghese (Author) / Wang, Yalin (Thesis advisor) / Hansford, Dianne (Committee member) / Gupta, Vikash (Committee member) / Arizona State University (Publisher)
Created2023
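The entry above builds its metric from heat kernel theory on volumetric meshes. As a simplified stand-in (a combinatorial graph Laplacian rather than the paper's FEM operators and conformal parameterization), the sketch below computes a heat-kernel-signature-style descriptor, HKS(x, t) = sum_i exp(-lambda_i t) phi_i(x)^2, from Laplacian eigenpairs of a tiny tetrahedral mesh.

```python
# Minimal sketch: heat-kernel signature from the eigenpairs of a graph
# Laplacian built on tetrahedral-mesh vertex connectivity.
import numpy as np

def vertex_adjacency(tets, n_vertices):
    """Symmetric 0/1 adjacency from tetrahedra (each tet = 4 vertex indices)."""
    A = np.zeros((n_vertices, n_vertices))
    for tet in tets:
        for i in range(4):
            for j in range(i + 1, 4):
                A[tet[i], tet[j]] = A[tet[j], tet[i]] = 1.0
    return A

def heat_kernel_signature(tets, n_vertices, times):
    A = vertex_adjacency(tets, n_vertices)
    L = np.diag(A.sum(axis=1)) - A             # combinatorial graph Laplacian
    evals, evecs = np.linalg.eigh(L)           # dense solver: small meshes only
    # One row per vertex, one column per diffusion time t.
    return np.stack([(np.exp(-evals * t) * evecs**2).sum(axis=1) for t in times], axis=1)

# Toy volume: two tetrahedra glued along a shared face (5 vertices).
tets = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])
hks = heat_kernel_signature(tets, n_vertices=5, times=[0.1, 1.0, 10.0])
print(np.round(hks, 3))   # face vertices 1-3 share one signature; apices 0 and 4 another
```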
Description
A remarkable phenomenon in contemporary physics is quantum scarring in classically chaotic systems, where the wave functions tend to concentrate on classical periodic orbits. Quantum scarring has been studied for more than four decades, but efficiently detecting quantum scars has remained challenging, relying mostly on human visualization of wave function patterns. This paper develops a machine learning approach to detecting quantum scars in an automated and highly efficient manner. In particular, it exploits meta-learning. The first step is to construct a few-shot classification algorithm, under the requirement that the one-shot classification accuracy be larger than 90%; a scheme based on a combination of neural networks is then proposed to improve the accuracy (a minimal one-shot classification sketch follows this entry). This paper shows that the machine learning scheme can find the correct quantum scars from thousands of images of wave functions, without any human intervention, regardless of the symmetry of the underlying classical system. This is the first application of meta-learning to quantum systems. Interacting spin networks are fundamental to quantum computing. Data-based tomography of time-independent spin networks has been achieved, but an open challenge is to ascertain the structures of time-dependent spin networks using time series measurements taken locally from a small subset of the spins. Physically, the dynamical evolution of a spin network under time-dependent driving or perturbation is described by the Heisenberg equation of motion. Motivated by this basic fact, this paper articulates a physics-enhanced machine learning framework whose core is Heisenberg neural networks. It demonstrates that, from local measurements, not only can the local Hamiltonian be recovered, but the Hamiltonian reflecting the interacting structure of the whole system can also be faithfully reconstructed. The Heisenberg neural networks are tested on spin networks of a variety of structures; in the extreme case where measurements are taken from only one spin, the achieved tomography fidelity values can reach about 90%. The developed machine learning framework is applicable to any time-dependent system whose quantum dynamical evolution is governed by the Heisenberg equation of motion.
ContributorsHan, Chendi (Author) / Lai, Ying-Cheng (Thesis advisor) / Yu, Hongbin (Committee member) / Dasarathy, Gautam (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created2022
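The first part of the entry above constructs a few-shot classifier for wave-function images. The sketch below is a prototypical-network-style one-shot classifier on synthetic "scar-like" images, with a flatten-and-normalize stand-in for a trained embedding network; it is illustrative only and not the paper's meta-learning scheme.

```python
# Minimal sketch: one-shot classification by nearest class prototype.
import numpy as np

def embed(images):
    """Stand-in embedding: flatten and L2-normalize (a trained network in practice)."""
    x = images.reshape(len(images), -1).astype(float)
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def one_shot_classify(support_images, support_labels, query_images):
    """Assign each query to the class of the nearest support prototype."""
    z_s, z_q = embed(support_images), embed(query_images)
    classes = np.unique(support_labels)
    prototypes = np.stack([z_s[support_labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(z_q[:, None, :] - prototypes[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Synthetic stand-ins: "scarred" images concentrate intensity along a diagonal,
# "unscarred" images are diffuse noise.
rng = np.random.default_rng(1)

def scarred(n):
    imgs = 0.1 * rng.random((n, 16, 16))
    imgs[:, np.arange(16), np.arange(16)] += 1.0
    return imgs

def unscarred(n):
    return rng.random((n, 16, 16))

support = np.concatenate([scarred(1), unscarred(1)])         # one example per class
support_labels = np.array([1, 0])                            # 1 = scar, 0 = no scar
queries = np.concatenate([scarred(5), unscarred(5)])
print(one_shot_classify(support, support_labels, queries))   # expected: 1 1 1 1 1 0 0 0 0 0
```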
Description
Few-layer black phosphorus (FLBP) is one of the most important two-dimensional (2D) materials due to its strongly layer-dependent quantized band structure, which leads to wavelength-tunable optical and electrical properties. This thesis focuses on the preparation of stable, high-quality FLBP, the characterization of its optical properties, and device applications. Part I presents an approach to preparing high-quality, stable FLBP samples by combining O2 plasma etching, boron nitride (BN) sandwiching, and subsequent rapid thermal annealing (RTA). This strategy has produced FLBP samples with a record-long lifetime, with 80% of the photoluminescence (PL) intensity remaining after 7 months. The improved material quality of FLBP allows a more definitive relationship between layer number and PL energies to be established. Part II presents the study of oxygen incorporation in FLBP. Natural oxidation in an air environment is dominated by the formation of interstitial oxygen and dangling oxygen. Real-time PL and Raman spectroscopy show that continuous laser excitation breaks the bonds of interstitial oxygen, and the freed oxygen atoms can diffuse around or form dangling oxygen under low heat; RTA at 450 °C converts interstitial oxygen into dangling oxygen more thoroughly. Such oxygen-containing samples show optical properties similar to those of pristine BP samples, and their bandgap increases with the concentration of incorporated oxygen. Part III investigates the nature of the emission from the prepared samples. Power- and temperature-dependent measurements demonstrate that the PL emission is dominated by excitons and trions, with a combined contribution larger than 80% at room temperature. These measurements allow the determination of trion and exciton binding energies of 2-, 3-, and 4-layer BP, with values of roughly 33, 23, and 15 meV for trions and 297, 276, and 179 meV for excitons at 77 K, respectively (a small peak-fitting sketch follows this entry). Part IV presents an initial exploration of device applications of such FLBP samples. Coupling between photonic crystal cavity (PCC) modes and FLBP emission is realized by integrating the prepared sandwich structure onto a 2D PCC. Electroluminescence has also been achieved by integrating such materials onto interdigital electrodes driven by alternating electric fields.
ContributorsLi, Dongying (Author) / Ning, Cun-Zheng (Thesis advisor) / Vasileska, Dragica (Committee member) / Lai, Ying-Cheng (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created2022
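The trion binding energies quoted in the entry above come from careful power- and temperature-dependent PL measurements. As a purely illustrative stand-in (synthetic data, generic two-peak fitting, not the thesis's analysis pipeline), the sketch below decomposes a spectrum into exciton and trion Lorentzians and reads the trion binding energy off as the exciton-trion peak separation.

```python
# Minimal sketch: two-Lorentzian decomposition of a synthetic PL spectrum.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(E, A, E0, gamma):
    return A * gamma**2 / ((E - E0)**2 + gamma**2)

def two_peaks(E, A_x, E_x, g_x, A_t, E_t, g_t):
    return lorentzian(E, A_x, E_x, g_x) + lorentzian(E, A_t, E_t, g_t)

# Synthetic trilayer-like spectrum: exciton peak at 0.980 eV, trion 23 meV below.
E = np.linspace(0.90, 1.05, 600)
truth = two_peaks(E, 1.0, 0.980, 0.008, 0.45, 0.957, 0.010)
rng = np.random.default_rng(3)
spectrum = truth + 0.02 * rng.normal(size=E.size)

p0 = [1.0, 0.985, 0.01, 0.5, 0.95, 0.01]     # rough initial guesses
popt, _ = curve_fit(two_peaks, E, spectrum, p0=p0)
E_exciton, E_trion = popt[1], popt[4]
print(f"estimated trion binding energy: {1000 * (E_exciton - E_trion):.1f} meV")  # ~23 meV
```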
Description
Communicating with computers through thought has become a remarkable achievement in recent years, made possible by electroencephalography (EEG). Brain-computer interfaces (BCIs) rely heavily on EEG signals for communication between humans and computers. With the advent of deep learning, many recent studies have applied these techniques to EEG data to perform tasks such as emotion recognition, motor imagery classification, sleep analysis, and more. Despite the rise of interest in EEG signal classification, very few studies have explored the MindBigData dataset, which collects EEG signals recorded while a subject sees a digit and thinks about it. This dataset takes us closer to realizing the idea of mind reading, or communication via thought, and classifying these signals into the digit the user is thinking about is a challenging task. This serves as the motivation to study this dataset and apply existing deep learning techniques to it. Given the recent success of the transformer architecture in domains such as computer vision and natural language processing, this thesis studies the transformer architecture for EEG signal classification, along with other deep learning techniques (a minimal transformer-classifier sketch follows this entry). The proposed classification pipeline achieves performance comparable with existing methods.
ContributorsMuglikar, Omkar Dushyant (Author) / Wang, Yalin (Thesis advisor) / Liang, Jianming (Committee member) / Venkateswara, Hemanth (Committee member) / Arizona State University (Publisher)
Created2021
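The thesis above applies transformer architectures to EEG windows. The sketch below is an assumed, minimal PyTorch configuration along those lines (patch embedding, a small transformer encoder, a [CLS] classification head); the channel count, patch length, and layer sizes are illustrative assumptions, not the thesis's model, and positional encodings are omitted for brevity.

```python
# Minimal sketch: a small transformer encoder classifying multi-channel EEG
# windows into one of 10 digit classes.
import torch
import torch.nn as nn

class EEGTransformer(nn.Module):
    def __init__(self, n_channels=14, patch_len=16, d_model=64, n_heads=4,
                 n_layers=2, n_classes=10):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(n_channels * patch_len, d_model)   # time patch -> token
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))       # learned [CLS] token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                                  # x: (batch, channels, time)
        b, c, t = x.shape
        x = x[:, :, : t - t % self.patch_len]              # drop the remainder samples
        x = x.unfold(2, self.patch_len, self.patch_len)    # (b, c, n_patches, patch_len)
        x = x.permute(0, 2, 1, 3).reshape(b, -1, c * self.patch_len)
        tokens = torch.cat([self.cls.expand(b, -1, -1), self.embed(x)], dim=1)
        return self.head(self.encoder(tokens)[:, 0])       # logits from the [CLS] token

model = EEGTransformer()
fake_batch = torch.randn(8, 14, 256)      # 8 windows, 14 channels, 256 samples each
print(model(fake_batch).shape)            # torch.Size([8, 10])
```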
Description
Statistical shape modeling is widely used to study the morphometrics of deformable objects in computer vision and biomedical studies. There are mainly two viewpoints from which to understand shapes. On one hand, the outer surface of the shape can be taken as a two-dimensional embedding in space. On the other hand, the outer surface along with its enclosed internal volume can be taken as a three-dimensional embedding of interest. Most studies focus on the surface-based perspective by leveraging intrinsic features on the tangent plane, but a two-dimensional model may fail to fully represent the realistic properties of shapes that have both intrinsic and extrinsic properties. In this thesis, several stochastic partial differential equations (SPDEs) are thoroughly investigated, and several methods are derived from these SPDEs to address both two-dimensional and three-dimensional shape analysis. The unique physical meanings of these SPDEs inspired the features, shape descriptors, metrics, and kernels developed in this series of works. Initially, the generation of high-dimensional shape data, here tetrahedral meshes, is introduced: the cerebral cortex is taken as the study target, and an automatic pipeline for generating gray matter tetrahedral meshes is described. Then, a discretized Laplace-Beltrami operator (LBO) and a Hamiltonian operator (HO) in the tetrahedral domain are derived with the finite element method (FEM), and two high-dimensional shape descriptors are defined based on the solutions of the heat equation and Schrödinger's equation. Considering that high-dimensional shape models usually contain massive redundancy, and that many applications demand effective landmarks, Gaussian process landmarking on tetrahedral meshes is further studied, with a SIWKS-based metric space used to define a geometry-aware Gaussian process (a minimal landmarking sketch follows this entry). The study of the periodic potential diffusion process further inspired a new kernel, called the geometry-aware convolutional kernel. A series of Bayesian learning methods are then introduced to tackle shape retrieval and classification, and experiments are demonstrated for each of these contributions. From popular SPDEs such as the heat equation and Schrödinger's equation to the general potential diffusion equation and the specific periodic potential diffusion equation, classical SPDEs clearly play an important role in discovering new features, metrics, shape descriptors, and kernels. I hope this thesis can serve as an example of using interdisciplinary knowledge to solve problems.
ContributorsFan, Yonghui (Author) / Wang, Yalin (Thesis advisor) / Lepore, Natasha (Committee member) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created2021
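One component of the thesis above is Gaussian process landmarking on tetrahedral meshes with a geometry-aware (SIWKS-based) kernel. The sketch below shows the greedy maximum-posterior-variance selection idea with a generic RBF kernel on vertex coordinates standing in for that kernel; it is illustrative only, not the thesis's construction.

```python
# Minimal sketch: greedy Gaussian-process landmark selection. At each step, pick
# the vertex whose value is least explained by the landmarks chosen so far
# (largest GP posterior variance), then condition on it.
import numpy as np

def rbf_kernel(X, Y, length_scale=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_landmarks(points, n_landmarks, noise=1e-6):
    K = rbf_kernel(points, points)
    chosen = []
    for _ in range(n_landmarks):
        if chosen:
            K_ss = K[np.ix_(chosen, chosen)] + noise * np.eye(len(chosen))
            K_xs = K[:, chosen]
            var = np.diag(K) - np.einsum('ij,jk,ik->i', K_xs, np.linalg.inv(K_ss), K_xs)
        else:
            var = np.diag(K).copy()
        var[chosen] = -np.inf                   # never re-pick an existing landmark
        chosen.append(int(np.argmax(var)))
    return chosen

# Toy "mesh": random 3-D points standing in for tetrahedral-mesh vertices.
rng = np.random.default_rng(7)
vertices = rng.uniform(-1, 1, size=(200, 3))
print(gp_landmarks(vertices, n_landmarks=5))    # indices of 5 well-spread vertices
```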
Description
Graph matching is a fundamental but notoriously difficult problem due to its NP-hard nature, and it serves as a cornerstone for a series of applications in machine learning and computer vision, such as image matching, dynamic routing, and drug design, to name a few. Although there has been extensive previous investigation of high-performance graph matching solvers, it remains a challenging task to tackle the matching problem under real-world scenarios with severe graph uncertainty (e.g., noise, outliers, misleading or ambiguous links). The main focus of this dissertation is to investigate the essence of, and propose solutions to, graph matching with higher reliability under such uncertainty. To this end, the research was conducted from three perspectives related to reliable graph matching: modeling, optimization, and learning. For modeling, graph matching is extended from the typical quadratic assignment problem to a more generic mathematical model by introducing a specific family of separable functions, achieving higher capacity and reliability (a minimal quadratic-assignment sketch follows this entry). In terms of optimization, a novel, highly gradient-efficient determinant-based regularization technique is proposed, showing strong robustness against outliers. The learning paradigm for graph matching under intrinsic combinatorial characteristics is then explored. First, a study is conducted on how to fill the gap between the discrete problem and its continuous approximation within a deep learning framework. The dissertation then investigates the necessity of a more reliable latent topology of graphs for matching and proposes an effective and flexible framework to obtain it. Coherent findings in this dissertation include theoretical studies and several novel algorithms, with rich experiments demonstrating their effectiveness.
ContributorsYu, Tianshu (Author) / Li, Baoxin (Thesis advisor) / Wang, Yalin (Committee member) / Yang, Yezhou (Committee member) / Yang, Yingzhen (Committee member) / Arizona State University (Publisher)
Created2021
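The entry above frames graph matching as a generalization of the quadratic assignment problem. For orientation, the sketch below implements a classical baseline rather than the dissertation's solvers: a pairwise-affinity spectral relaxation whose leading eigenvector is rounded to a permutation with the Hungarian algorithm. On a noiseless toy instance this typically recovers the ground-truth correspondence; the dissertation's contributions target the far harder noisy and outlier-ridden settings.

```python
# Minimal sketch: spectral relaxation of quadratic-assignment graph matching,
# followed by Hungarian rounding.
import numpy as np
from scipy.optimize import linear_sum_assignment

def spectral_match(A, B, sigma=0.05):
    """Match nodes of two weighted graphs given adjacency matrices A and B."""
    n = A.shape[0]
    # Affinity between assignments i->a and j->b: how well edge (i,j) matches edge (a,b).
    diff = A[:, None, :, None] - B[None, :, None, :]          # indexed (i, a, j, b)
    M = np.exp(-diff**2 / sigma**2).reshape(n * n, n * n)
    evals, evecs = np.linalg.eigh(M)
    soft = np.abs(evecs[:, -1]).reshape(n, n)                 # leading eigenvector as soft assignment
    row, col = linear_sum_assignment(-soft)                   # round to a permutation
    return col                                                # col[i] = node of B matched to node i of A

# Toy test: B is A with its nodes shuffled; the matcher should recover the shuffle.
rng = np.random.default_rng(5)
n = 6
A = np.triu(rng.random((n, n)), 1)
A = A + A.T                                                   # random symmetric weights, zero diagonal
perm = rng.permutation(n)
B = A[np.ix_(perm, perm)]                                     # node k of B is node perm[k] of A
print(spectral_match(A, B))
print(np.argsort(perm))                                       # ground-truth correspondence
```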