Matching Items (33)
Description
Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, and medical image processing. Via l1-norm-based regularization penalties, structured sparse learning algorithms can produce highly accurate models while imposing various predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. I then present three real-world applications that can benefit from the group-structured sparse learning technique. In the first application, I study the Alzheimer's Disease diagnosis problem using multi-modality neuroimaging data. In this dataset, not every subject has all data sources available, exhibiting a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images. Combined with spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotate the developmental stage of Drosophila embryos in gene expression images. In addition, it provides a stage score that enables one to annotate each embryo more finely, dividing embryos into early and late periods of development within standard stage demarcations. Stage scores better illuminate global gene activities and changes, and more refined stage annotations improve our ability to interpret results when expression pattern matches are discovered between genes.
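The group-structured penalty described in this abstract can be sketched as a sum of l2 norms over (possibly overlapping) index groups; the groups and weights below are illustrative, not the thesis's actual formulation:

```python
import numpy as np

def overlapping_group_penalty(w, groups, weights=None):
    """Sum of weighted l2 norms over (possibly overlapping) index groups.

    Penalizing this term alongside a loss drives whole groups of
    coefficients to zero, imposing the predefined structure on the model.
    """
    if weights is None:
        weights = [1.0] * len(groups)
    return sum(wt * np.linalg.norm(w[list(g)]) for g, wt in zip(groups, weights))

# Two groups that share feature 2 (overlap is allowed):
w = np.array([0.0, 0.0, 3.0, 4.0])
groups = [[0, 1, 2], [2, 3]]
print(overlapping_group_penalty(w, groups))  # norm([0,0,3]) + norm([3,4]) = 3 + 5 = 8.0
```

Because a feature can belong to several groups, this penalty is non-separable, which is what makes the general overlapping case harder to optimize than the plain group lasso.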
Contributors: Yuan, Lei (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Xue, Guoliang (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In this thesis, we present the study of several physical properties of relativistic matter under extreme conditions. We start by deriving the rate of the nonleptonic weak processes and the bulk viscosity in several spin-one color superconducting phases of quark matter. We also calculate the bulk viscosity in the nonlinear and anharmonic regime in the normal phase of strange quark matter. We point out several qualitative effects due to the anharmonicity, although quantitatively they appear to be relatively small. In the corresponding study, we take into account the interplay between the nonleptonic and semileptonic weak processes. The results can be important for relating accessible observables of compact stars to their internal composition. We also use quantum field theoretical methods to study the transport properties of monolayer graphene in a strong magnetic field. The corresponding quasi-relativistic system reveals an anomalous quantum Hall effect, whose features are directly connected with the spontaneous flavor symmetry breaking. We study the microscopic origin of Faraday rotation and magneto-optical transmission in graphene and show that their main features are in agreement with the experimental data.
Contributors: Wang, Xinyang, Ph.D. (Author) / Shovkovy, Igor (Thesis advisor) / Belitsky, Andrei (Committee member) / Easson, Damien (Committee member) / Peng, Xihong (Committee member) / Vachaspati, Tanmay (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Numerical simulations are very helpful in understanding the physics of the formation of structure and galaxies. However, it is sometimes difficult to interpret model data with respect to observations, partly due to the difficulties and background noise inherent to observation. The goal here is to bridge this gap between simulation and observation by rendering the model output in image format, which is then processed by tools commonly used in observational astronomy. Images are synthesized in various filters by folding the output of cosmological simulations of gas dynamics with star formation and dark matter with the Bruzual-Charlot stellar population synthesis models. A variation of the Virgo-Gadget numerical simulation code is used with the hybrid gas and stellar formation models of Springel and Hernquist (2003). Outputs taken at various redshifts are stacked to create a synthetic view of the simulated star clusters. Source Extractor (SExtractor) is used to find groupings of stellar populations, which are considered galaxies or galaxy building blocks, and photometry is used to estimate the rest-frame luminosities and distribution functions. With further refinements, this is expected to provide support for missions such as JWST, as well as to probe what additional physics is needed to model the data. The results show good agreement in many respects with observed properties of the galaxy luminosity function (LF) over a wide range of high redshifts. In particular, the slope (alpha) when fitted to the standard Schechter function shows excellent agreement, both in value and in evolution with redshift, when compared with observation. Discrepancies of other properties with observation result from limitations of the simulation and from additional feedback mechanisms that still need to be modeled.
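The Schechter function fitted above has the standard form Phi(L) = (phi*/L*) (L/L*)^alpha exp(-L/L*); a minimal sketch of evaluating it (parameter values here are illustrative, not the fitted values from this work):

```python
import numpy as np

def schechter(L, phi_star, L_star, alpha):
    """Schechter luminosity function Phi(L) = (phi*/L*) (L/L*)^alpha exp(-L/L*).

    alpha sets the faint-end slope, L* the characteristic luminosity of the
    exponential cutoff, and phi* the overall normalization.
    """
    x = L / L_star
    return (phi_star / L_star) * x**alpha * np.exp(-x)

# At L = L* the power-law factor is 1, so the shape reduces to exp(-1):
phi = schechter(1.0, phi_star=1.0, L_star=1.0, alpha=-1.3)
print(phi)  # exp(-1) ≈ 0.3679
```

In practice one fits (phi*, L*, alpha) to binned galaxy counts, e.g. by least squares or maximum likelihood, and tracks the fitted alpha as a function of redshift.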
Contributors: Morgan, Robert (Author) / Windhorst, Rogier A. (Thesis advisor) / Scannapieco, Evan (Committee member) / Rhoads, James (Committee member) / Gardner, Carl (Committee member) / Belitsky, Andrei (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Understanding the temperature structure of protoplanetary disks (PPDs) is paramount to modeling disk evolution and future planet formation. PPDs around T Tauri stars have two primary heating sources: protostellar irradiation, which depends on the flaring of the disk, and accretional heating as viscous coupling between annuli dissipates energy. I have written a "1.5-D" radiative transfer code to calculate disk temperatures assuming hydrostatic and radiative equilibrium. The model solves for the temperature at all locations simultaneously using Rybicki's method, converges rapidly at high optical depth, and retains full frequency dependence. The likely cause of accretional heating in PPDs is the magnetorotational instability (MRI), which acts where gas ionization is sufficiently high for the gas to couple to the magnetic field. This occurs in the surface layers of the disk, leaving the interior of the disk inactive (a "dead zone"). I calculate temperatures in PPDs undergoing such "layered accretion." Since the accretional heating is concentrated far from the midplane, temperatures in the disk's interior are lower than in PPDs modeled with vertically uniform accretion. The method is used to study, for the first time, disks evolving via the MRI. I find that temperatures in layered accretion disks do not significantly differ from those of "passive disks," where no accretional heating exists. Emergent spectra are insensitive to active layer thickness, making it difficult to observationally distinguish disks undergoing layered vs. uniform accretion. I also calculate the ionization chemistry in PPDs, using an ionization network including multiple charge states of dust grains. Combined with a criterion for the onset of the MRI, I calculate where the MRI can be initiated and the extent of dead zones in PPDs.
After accounting for feedback between temperature and active layer thickness, I find that the surface density of the actively accreting layers falls rapidly with distance from the protostar, leading to a net outward flow of mass from ~0.1 to 3 AU. The clearing out of the innermost zones is possibly consistent with the observed behavior of recently discovered "transition disks."
Contributors: Lesniak, Michael V., III (Author) / Desch, Steven J. (Thesis advisor) / Scannapieco, Evan (Committee member) / Timmes, Francis (Committee member) / Starrfield, Sumner (Committee member) / Belitsky, Andrei (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This thesis deals with the first measurements done with a cold neutron beam at the Spallation Neutron Source at Oak Ridge National Laboratory. The experimental technique consisted of capturing polarized cold neutrons on nuclei to measure parity violation in the angular distribution of the gamma rays following neutron capture. The measurements presented here for chlorine (35Cl) and aluminum (27Al) are part of a program whose ultimate goal is to measure the asymmetry in the angular distribution of gamma rays emitted in the capture of neutrons on protons, with a precision better than 10^-8, in order to extract the weak hadronic coupling constant due to the pion-exchange interaction with isospin change equal to one (h_pi^1). Based on theoretical calculations, the asymmetry in the angular distribution of the gamma rays from neutron capture on protons has an estimated size of 5·10^-8. This implies that the Al parity-violation asymmetry and its uncertainty have to be known with a precision smaller than 4·10^-8, since the proton target is liquid hydrogen (H2) contained in an aluminum vessel. Results are presented for parity-violating and parity-conserving asymmetries in chlorine and aluminum. The systematic and statistical uncertainties in the calculation of the parity-violating and parity-conserving asymmetries are discussed.
Contributors: Balascuta, Septimiu (Author) / Alarcon, Ricardo (Thesis advisor) / Belitsky, Andrei (Committee member) / Doak, Bruce (Committee member) / Comfort, Joseph (Committee member) / Schmidt, Kevin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
As world energy demand increases, semiconductor devices with high energy conversion efficiency become more and more desirable. Energy conversion consists of two distinct processes, namely energy generation and energy usage. In this dissertation, novel multi-junction solar cells and light-emitting diodes (LEDs) are proposed and studied for high energy conversion efficiency in these two processes, respectively. The first half of this dissertation discusses the practically achievable energy conversion efficiency limit of solar cells. Since the demonstration of the Si solar cell in 1954, the performance of solar cells has improved tremendously, recently reaching 41.6% energy conversion efficiency. However, further increasing solar cell efficiency appears rather challenging. State-of-the-art triple-junction solar cells are analyzed to help understand the limiting factors. To address these issues, a monolithically integrated II-VI and III-V material system is proposed for solar cell applications. This material system covers the entire solar spectrum with a continuous selection of energy bandgaps and can be grown lattice-matched on a GaSb substrate. Moreover, six four-junction solar cells are designed for the AM0 and AM1.5D solar spectra based on this material system, and new design rules are proposed. The achievable conversion efficiencies for these designs are calculated using the commercial software package Silvaco with real material parameters. The second half of this dissertation studies semiconductor luminescence refrigeration, which corresponds to over 100% energy usage efficiency. Although cooling has been realized in rare-earth-doped glass by laser pumping, semiconductor-based cooling is yet to be realized. In this work, a device structure that monolithically integrates a GaAs hemisphere with an InGaAs/GaAs quantum-well thin-slab LED is proposed to realize cooling in a semiconductor. The device's electrical and optical performance is calculated.
The proposed device is then fabricated using nine photolithography steps and eight masks. The critical process steps, such as photoresist reflow and dry etch, are simulated to ensure successful processing. Optical testing is performed on the devices at various laser injection levels, and the internal quantum efficiency, external quantum efficiency, and extraction efficiency are measured.
Contributors: Wu, Songnan (Author) / Zhang, Yong-Hang (Thesis advisor) / Menéndez, Jose (Committee member) / Ponce, Fernando (Committee member) / Belitsky, Andrei (Committee member) / Schroder, Dieter (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
This dissertation constructs a new computational processing framework to robustly and precisely quantify retinotopic maps based on their angle-distortion properties. More generally, this framework solves the problem of how to robustly and precisely quantify (angle) distortions of noisy or incomplete (boundary-enclosed) 2-dimensional surface-to-surface mappings. The framework builds upon the Beltrami coefficient (BC) description of quasiconformal mappings, which directly quantifies local mapping distortions (circles to ellipses) between diffeomorphisms of boundary-enclosed plane domains homeomorphic to the unit disk. A new map called the Beltrami coefficient map (BCM) was constructed to describe distortions in retinotopic maps. The BCM can be used to fully reconstruct the original target surface (retinal visual field) of retinotopic maps. This dissertation also compares retinotopic maps in the visual processing cascade, a series of connected retinotopic maps responsible for processing the visual data of physical images captured by the eyes. By comparing the BCM results from a large Human Connectome Project (HCP) retinotopic dataset (N=181), a new computational quasiconformal description of the transformed retinal image as it passes through the cascade is proposed, one not present in the current literature. Applied to the HCP data, the description provides directly visible and quantifiable geometric properties of the cascade in a way that has not been observed before. Because retinotopic maps are generated from in vivo, noisy functional magnetic resonance imaging (fMRI), quantifying them comes with a certain degree of uncertainty. To quantify the uncertainties in the results, it is necessary to generate statistical models of retinotopic maps from their BCMs and raw fMRI signals.
Considering that estimating retinotopic maps from real noisy fMRI time-series data using the population receptive field (pRF) model is a time-consuming process, a convolutional neural network (CNN) was constructed and trained to predict pRF model parameters from real noisy fMRI data.
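The Beltrami coefficient underlying the BCM can be sketched numerically: for a planar map f, mu = f_zbar / f_z measures how infinitesimal circles are distorted into ellipses (|mu| = 0 for conformal maps, |mu| < 1 for orientation-preserving quasiconformal maps). The finite-difference implementation below is an illustrative stand-in, not the dissertation's pipeline:

```python
import numpy as np

def beltrami_coefficient(f, z, h=1e-6):
    """Numerical Beltrami coefficient mu = f_zbar / f_z of a planar map f.

    Uses the Wirtinger derivatives f_z = (f_x - i*f_y)/2 and
    f_zbar = (f_x + i*f_y)/2, with f_x, f_y approximated by
    central differences along the real/imaginary axes.
    """
    fx = (f(z + h) - f(z - h)) / (2 * h)            # derivative in x
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # derivative in y
    f_z = (fx - 1j * fy) / 2
    f_zbar = (fx + 1j * fy) / 2
    return f_zbar / f_z

# Affine quasiconformal map f(z) = z + 0.2*conj(z): constant mu = 0.2,
# so circles map to ellipses with axis ratio (1+|mu|)/(1-|mu|) = 1.5.
mu = beltrami_coefficient(lambda z: z + 0.2 * np.conj(z), 1.0 + 1.0j)
print(abs(mu))  # ≈ 0.2
```

On a triangulated cortical surface the same quantity is computed per triangle from the linear map between corresponding triangles, yielding the BCM as a piecewise-constant complex-valued function.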
Contributors: Ta, Duyan Nguyen (Author) / Wang, Yalin (Thesis advisor) / Lu, Zhong-Lin (Committee member) / Hansford, Dianne (Committee member) / Liu, Huan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The retinotopic map, the mapping between visual inputs on the retina and neuronal activation in the brain's visual areas, is one of the central topics in visual neuroscience. For human observers, the map is typically obtained by analyzing functional magnetic resonance imaging (fMRI) signals of cortical responses to visual stimuli moving slowly across the retina. Biological evidence shows that retinotopic mapping is topology-preserving (topological) within each visual region, i.e., it keeps the neighboring relationships of the retina after processing in the brain. Unfortunately, due to the limited spatial resolution and signal-to-noise ratio of fMRI, state-of-the-art retinotopic maps are not topological. The goal of this work was to model the topology-preserving condition mathematically, fix non-topological retinotopic maps with numerical methods, and thereby improve the quality of retinotopic maps. Imposing the topological condition benefits several applications. With topological retinotopic maps, one may gain better insight into human retinotopic maps, including better quantification of the cortical magnification factor, more precise descriptions of retinotopic maps, and potentially better examination methods in the ophthalmology clinic.
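The topology-preserving condition can be checked discretely: a mapped triangulation is fold-free when every triangle keeps a positive signed area (positive Jacobian determinant). A minimal sketch on an illustrative two-triangle mesh, not the thesis's actual algorithm:

```python
import numpy as np

def is_orientation_preserving(tris, uv):
    """Check that a discrete planar mapping is topological: every mapped
    triangle keeps positive signed area (positive Jacobian), i.e. no
    fold-overs anywhere in the triangulation."""
    for a, b, c in tris:
        e1, e2 = uv[b] - uv[a], uv[c] - uv[a]
        signed_area = e1[0] * e2[1] - e1[1] * e2[0]  # 2D cross product
        if signed_area <= 0:
            return False
    return True

# Mapped vertex positions of a unit square split into two triangles:
uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tris = [(0, 1, 2), (1, 3, 2)]
print(is_orientation_preserving(tris, uv))       # True: no folds

folded = uv.copy()
folded[3] = [0.2, 0.2]                           # pull vertex 3 inside
print(is_orientation_preserving(tris, folded))   # False: triangle (1,3,2) flips
```

Noisy fMRI-derived maps typically violate this condition in some triangles; repairing them means projecting the map back into the space of orientation-preserving (topological) mappings.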
Contributors: Tu, Yanshuai (Author) / Wang, Yalin (Thesis advisor) / Lu, Zhong-Lin (Committee member) / Crook, Sharon (Committee member) / Yang, Yezhou (Committee member) / Zhang, Yu (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Beta-amyloid (Aβ) plaques and tau protein tangles in the brain are now widely recognized as the defining hallmarks of Alzheimer's disease (AD), followed by structural atrophy detectable on brain magnetic resonance imaging (MRI) scans. However, current methods to detect Aβ/tau pathology are either invasive (lumbar puncture) or quite costly and not widely available (positron emission tomography (PET)). One particularly affected region is the hippocampus, and the influence of Aβ/tau on it has been a major focus of research on AD pathophysiological progression. In this dissertation, I propose three novel machine learning and statistical models to examine subtle aspects of hippocampal morphometry from MRI that are associated with Aβ/tau burden in the brain, measured using PET images. The first is a novel unsupervised feature reduction model that generates a low-dimensional representation of hippocampal morphometry for each individual subject and has superior performance in predicting Aβ/tau burden in the brain. The second is an efficient federated group lasso model that identifies the hippocampal subregions where atrophy is strongly associated with abnormal Aβ/tau. The last is a federated model for imaging genetics, which can identify genetic and transcriptomic influences on hippocampal morphometry. Finally, I present the results of these three models, which have been published in or submitted to peer-reviewed conferences and journals.
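The group lasso at the heart of the second model selects whole subregions at once via block soft-thresholding of non-overlapping coefficient groups. A minimal, non-federated sketch of its proximal operator, with illustrative groups standing in for hippocampal subregions:

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the group lasso penalty lam * sum_g ||w_g||_2.

    Each non-overlapping group is shrunk toward zero and set exactly to
    zero when its l2 norm falls below lam -- selecting or discarding
    whole groups (here: whole subregions) at once.
    """
    out = np.zeros_like(w)
    for g in groups:
        idx = list(g)
        norm = np.linalg.norm(w[idx])
        if norm > lam:
            out[idx] = (1 - lam / norm) * w[idx]
    return out

w = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]
print(group_soft_threshold(w, groups, lam=1.0))
# first group (norm 5) shrinks to [2.4, 3.2]; second (norm ~0.14) zeroes out
```

In a federated setting, each site applies steps like this locally and only aggregated model updates, never raw imaging data, are shared across sites.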
Contributors: Wu, Jianfeng (Author) / Wang, Yalin (Thesis advisor) / Li, Baoxin (Committee member) / Liang, Jianming (Committee member) / Wang, Junwen (Committee member) / Wu, Teresa (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Statistical shape modeling is widely used to study the morphometrics of deformable objects in computer vision and biomedical studies. There are mainly two viewpoints from which to understand shapes. On one hand, the outer surface of the shape can be taken as a two-dimensional embedding in space. On the other hand, the outer surface along with its enclosed internal volume can be taken as a three-dimensional embedding of interest. Most studies focus on the surface-based perspective by leveraging the intrinsic features on the tangent plane. But a two-dimensional model may fail to fully represent the realistic properties of shapes with both intrinsic and extrinsic properties. In this thesis, several stochastic partial differential equations (SPDEs) are thoroughly investigated, and several methods originating from these SPDEs are developed to address both two-dimensional and three-dimensional shape analyses. The unique physical meanings of these SPDEs inspired the findings of features, shape descriptors, metrics, and kernels in this series of works. Initially, the data generation of high-dimensional shapes, here tetrahedral meshes, is introduced. The cerebral cortex is taken as the study target, and an automatic pipeline for generating the gray matter tetrahedral mesh is introduced. Then, a discretized Laplace-Beltrami operator (LBO) and a Hamiltonian operator (HO) in the tetrahedral domain are derived with the Finite Element Method (FEM). Two high-dimensional shape descriptors are defined based on the solutions of the heat equation and Schrödinger's equation. Considering that high-dimensional shape models usually contain massive redundancies, and that many applications demand effective landmarks, a Gaussian process landmarking on tetrahedral meshes is further studied. A SIWKS-based metric space is used to define a geometry-aware Gaussian process.
The study of the periodic potential diffusion process further inspired the idea of a new kernel called the geometry-aware convolutional kernel. A series of Bayesian learning methods is then introduced to tackle the problems of shape retrieval and classification. Experiments for each method are demonstrated. From popular SPDEs such as the heat equation and Schrödinger's equation to the general potential diffusion equation and the specific periodic potential diffusion equation, this work clearly shows that classical SPDEs play an important role in discovering new features, metrics, shape descriptors, and kernels. I hope this thesis can serve as an example of using interdisciplinary knowledge to solve problems.
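Heat-equation-based descriptors of the kind defined in this thesis evaluate the heat kernel diagonal from Laplacian eigenpairs. The thesis works with an FEM discretization on tetrahedral meshes; the dense graph-Laplacian version below is a simplified stand-in for illustration:

```python
import numpy as np

def heat_kernel_signature(L, times):
    """Heat kernel diagonal k_t(x, x) = sum_i exp(-lambda_i t) phi_i(x)^2.

    L is a symmetric Laplacian matrix; the descriptor records, per vertex,
    how heat injected at that vertex dissipates over the given time scales.
    """
    lams, phis = np.linalg.eigh(L)
    # rows: vertices, columns: time samples
    return np.stack(
        [(np.exp(-lams * t) * phis**2).sum(axis=1) for t in times], axis=1
    )

# Laplacian of a 3-vertex path graph (symmetric, positive semi-definite):
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
hks = heat_kernel_signature(L, times=[0.0, 0.5, 1.0])
print(hks[:, 0])  # at t=0 each vertex gives sum_i phi_i(x)^2 = 1 (orthonormal basis)
```

The Schrödinger-equation descriptor follows the same recipe with the Hamiltonian operator's eigenpairs in place of the Laplacian's, and both extend to tetrahedral meshes by replacing the graph Laplacian with FEM mass and stiffness matrices.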
Contributors: Fan, Yonghui (Author) / Wang, Yalin (Thesis advisor) / Lepore, Natasha (Committee member) / Turaga, Pavan (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021