Matching Items (137)
Description
Type Ia supernovae are important but mysterious cosmological tools. Their standard brightnesses have enabled cosmologists to measure extreme distances and to discover dark energy. However, the nature of their progenitor mechanisms remains elusive, with many competing models offering only partial clues to their origins. Here, type Ia supernova delay times are explored using analytical models. Combined with a new observation technique, this model places new constraints on the characteristic time delay between the formation of stars and the first type Ia supernovae. This derived delay time (500 million years) implies low-mass companions for single-degenerate progenitor scenarios. In the latter portions of this dissertation, two progenitor mechanisms are simulated in detail: white dwarf collisions and mergers. From the first of these simulations, it is evident that white dwarf collisions offer a viable and unique pathway to producing type Ia supernovae. Many of the combinations of masses simulated produce sufficient quantities of 56Ni (up to 0.51 solar masses) to masquerade as normal type Ia supernovae. Other combinations of masses produce 56Ni yields that span the entire range of supernova brightnesses, from the very dim and underluminous, with 0.14 solar masses, to the over-bright and superluminous, with up to 1.71 solar masses. The 56Ni yield in the collision simulations depends non-linearly on total system mass, mass ratio, and impact parameter. Using the same numerical tools as in the collisions examination, white dwarf mergers are studied in detail. Nearly all of the simulations produce merger remnants consisting of a cold, degenerate core surrounded by a hot accretion disk. The properties of these disks have strong implications for various viscosity treatments that have attempted to pin down the accretion times. Some mass combinations produce super-Chandrasekhar cores on shorter time scales than viscosity-driven accretion.
A handful of simulations also exhibit helium detonations on the surface of the primary that bear a resemblance to helium novae. Finally, some of the preliminary groundwork that has been laid for constructing a new numerical tool is discussed. This new tool advances the merger simulations further than any research group has done before, and has the potential to answer some of the lingering questions that the merger study has uncovered. The results of thermal diffusion tests using this tool have a remarkable correspondence to analytical predictions.
Contributors: Raskin, Cody (Author) / Scannapieco, Evan (Thesis advisor) / Rhoads, James (Committee member) / Young, Patrick (Committee member) / McNamara, Allen (Committee member) / Timmes, Francis (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Galaxies with strong Lyman-alpha (Lya) emission lines (also called Lya galaxies or emitters) offer a unique probe of the epoch of reionization - one of the important phases when most of the neutral hydrogen in the universe was ionized. In addition, Lya galaxies at high redshifts are a powerful tool to study low-mass galaxy formation. Since current observations suggest that reionization is complete by redshift z ~ 6, it is necessary to discover galaxies at z > 6 to use their luminosity function (LF) as a probe of reionization. I found five z = 7.7 candidate Lya galaxies with line fluxes > 7x10^-18 erg/s/cm^2, from three different deep near-infrared (IR) narrowband (NB) imaging surveys in a volume > 4x10^4 Mpc^3. From the spectroscopic followup of four candidate galaxies, and with the current spectroscopic sensitivity, the detection of only the brightest candidate galaxy can be ruled out at the 5-sigma level. Moreover, these observations successfully demonstrate that the sensitivity necessary for both the NB imaging and the spectroscopic followup of z ~ 8 Lya galaxies can be reached with current instrumentation. While future, more sensitive spectroscopic observations are necessary, the observed Lya LF at z = 7.7 is consistent with the z = 6.6 LF, suggesting that the intergalactic medium (IGM) is relatively ionized even at z = 7.7, with neutral fraction x_HI <= 30%. On the theoretical front, while several models of Lya emitters have been developed, the physical nature of Lya emitters is not yet completely known. Moreover, the complexity of multi-parameter models necessitates a simpler model. I have developed a simple, single-parameter model to populate dark matter halos with Lya emitters. The central tenet of this model, different from many earlier models, is that the star-formation rate (SFR), and hence the Lya luminosity, is proportional to the mass accretion rate rather than the total halo mass.
This simple model successfully reproduces many observables, including LFs, stellar masses, SFRs, and the clustering of Lya emitters from z ~ 3 to z ~ 7. Finally, using this model, I find that the mass accretion, and hence the star formation, in > 30% of Lya emitters at z ~ 3 occurs through major mergers, and this fraction increases to ~ 50% at z ~ 7.
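The single-parameter ansatz described above (Lya luminosity tracing the halo mass accretion rate rather than the total halo mass) can be sketched as a one-line conversion chain. Every coefficient below (baryon fraction, star-formation efficiency, Lya escape fraction, Lya-per-SFR factor) is an illustrative assumption, not the dissertation's calibrated value.

```python
# Minimal sketch of the model: SFR, and hence Lya luminosity, is taken
# proportional to the halo mass accretion rate. All coefficients are
# illustrative placeholders, not the fitted values from the dissertation.

def lya_luminosity(mdot_halo_msun_yr, f_baryon=0.16, f_star=0.05,
                   f_lya_escape=0.3, l_lya_per_sfr=1.1e42):
    """Lya luminosity (erg/s) from a halo mass accretion rate (Msun/yr)."""
    sfr = f_baryon * f_star * mdot_halo_msun_yr      # Msun/yr of new stars
    return f_lya_escape * l_lya_per_sfr * sfr        # erg/s

# A halo accreting 100 Msun/yr in this toy calibration:
# lya_luminosity(100.0) -> 2.64e41 erg/s
```

Because luminosity is tied to the accretion rate, halos of the same mass but different growth histories get different Lya luminosities, which is the feature that distinguishes this ansatz from mass-based models.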
Contributors: Shet Tilvi, Vithal (Author) / Malhotra, Sangeeta (Thesis advisor) / Rhoads, James (Committee member) / Scannapieco, Evan (Committee member) / Young, Patrick (Committee member) / Jansen, Rolf (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Functional magnetic resonance imaging (fMRI) has been widely used to measure the retinotopic organization of early visual cortex in the human brain. Previous studies have identified multiple visual field maps (VFMs) based on statistical analysis of fMRI signals, but the resulting geometry has not been fully characterized with mathematical models. This thesis explores using concepts from computational conformal geometry to create a custom software framework for examining and generating quantitative mathematical models that characterize the geometry of early visual areas in the human brain. The software framework includes a graphical user interface built on top of a selected core conformal flattening algorithm, along with various software tools compiled specifically for processing and examining retinotopic data. Three conformal flattening algorithms were implemented and evaluated for speed and for how well they preserve the conformal metric. All three algorithms preserved the conformal metric well, but their speed and stability varied. The software framework performed correctly on actual retinotopic data collected using the standard travelling-wave experiment. Preliminary analysis of the Beltrami coefficient for the early data set shows that selected regions of V1 that contain reasonably smooth eccentricity and polar angle gradients do show significant local conformality, warranting further investigation of this approach for the analysis of early and higher visual cortex.
Contributors: Ta, Duyan (Author) / Wang, Yalin (Thesis advisor) / Maciejewski, Ross (Committee member) / Wonka, Peter (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In blindness research, the corpus callosum (CC) is the most frequently studied sub-cortical structure, due to its important involvement in visual processing. While most callosal analyses from brain structural magnetic resonance images (MRI) are limited to the 2D mid-sagittal slice, we propose a novel framework to capture a complete set of 3D morphological differences in the corpus callosum between two groups of subjects. The CCs are segmented from whole-brain T1-weighted MRI and modeled as 3D tetrahedral meshes. The callosal surface is divided into superior and inferior patches, on which we compute a volumetric harmonic field by solving Laplace's equation with Dirichlet boundary conditions. We adopt a refined tetrahedral mesh to compute the Laplacian operator, so our computation can achieve sub-voxel accuracy. Thickness is estimated by tracing the streamlines in the harmonic field. We combine areal changes found using surface tensor-based morphometry and thickness information into a vector at each vertex to be used as a metric for the statistical analysis. Group differences are assessed on this combined measure through Hotelling's T^2 test. The method is applied to statistically compare three groups consisting of congenitally blind (CB), late blind (LB; onset > 8 years old), and sighted (SC) subjects. Our results reveal significant differences in several regions of the CC between both blind groups and the sighted group, and to a lesser extent between the LB and CB groups. These results demonstrate the crucial role of visual deprivation during the developmental period in reshaping the structural architecture of the CC.
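The harmonic-field step described above can be illustrated with a toy 2D analogue: solve Laplace's equation with Dirichlet boundary values (0 on the "inferior" boundary, 1 on the "superior" one) by Jacobi iteration on a grid. The thesis itself works on refined tetrahedral meshes for sub-voxel accuracy; this grid sketch only shows the idea, and all sizes and iteration counts are arbitrary.

```python
import numpy as np

# Toy 2D analogue of the volumetric harmonic field: Laplace's equation
# with Dirichlet boundaries, solved by Jacobi iteration. Thickness would
# then follow by tracing streamlines of the gradient of u, as in the
# thesis; that step is omitted here.

def harmonic_field(n=33, iters=5000):
    u = np.zeros((n, n))
    u[-1, :] = 1.0                              # "superior" boundary = 1
    for _ in range(iters):
        # average of the four neighbors on interior points
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
        u[-1, :] = 1.0                          # re-impose Dirichlet data
        u[0, :] = 0.0                           # "inferior" boundary = 0
    return u
```

The converged field varies smoothly between the two boundary values, so its gradient streamlines connect the patches and their lengths give a well-defined thickness.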
Contributors: Xu, Liang (Author) / Wang, Yalin (Thesis advisor) / Maciejewski, Ross (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, and medical image processing. Via l1-norm-based regularization, structured sparse learning algorithms can produce highly accurate models while imposing various predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. Then, I present three real-world applications which can benefit from the group-structured sparse learning technique. In the first application, I study the Alzheimer's disease diagnosis problem using multi-modality neuroimaging data. In this dataset, not every subject has all data sources available, exhibiting a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images. Combined with spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotate the developmental stage of Drosophila embryos in gene expression images. In addition, it provides a stage score that enables one to more finely annotate each embryo, dividing them into early and late periods of development within standard stage demarcations. Stage scores help illuminate global gene activities and changes, and more refined stage annotations improve our ability to interpret results when expression pattern matches are discovered between genes.
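The group-sparsity building block behind the models above is block soft-thresholding: the proximal operator of the group-lasso penalty, which shrinks each predefined feature group toward zero as a unit. This sketch covers only the non-overlapping case; the overlapping groups treated in the thesis require extra machinery (e.g. variable duplication) that is not shown.

```python
import numpy as np

# Proximal operator of the (non-overlapping) group-lasso penalty:
# each index group is either zeroed out entirely or shrunk as a block.

def group_soft_threshold(w, groups, lam):
    """Block soft-thresholding of weight vector `w` over index `groups`."""
    w = np.asarray(w, dtype=float).copy()
    for idx in groups:
        norm = np.linalg.norm(w[idx])
        if norm <= lam:
            w[idx] = 0.0                      # weak group removed as a unit
        else:
            w[idx] = (1.0 - lam / norm) * w[idx]  # strong group only shrunk
    return w

# Example: the weak first group is eliminated, the strong second group
# is merely shrunk toward zero.
# group_soft_threshold([0.1, -0.1, 3.0, 4.0], [[0, 1], [2, 3]], 0.5)
# -> [0.0, 0.0, 2.7, 3.6]
```

Applying this operator inside a proximal-gradient loop is what lets the learned model select or discard whole feature groups rather than individual coordinates.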
Contributors: Yuan, Lei (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Xue, Guoliang (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Over 2 billion people use online social network services such as Facebook, Twitter, Google+, LinkedIn, and Pinterest. Users update their status, post photos, share information, and chat with others on these social network sites every day; however, not everyone shares the same amount of information. This thesis explores methods of linking publicly available data sources as a means of extrapolating missing information on Facebook. An application named "Visual Friends Income Map" was created on Facebook to collect social network data and to explore geodemographic properties for linking publicly available data, such as US census data. Multiple predictors are implemented to link the data sets and extrapolate missing information from Facebook with accurate predictions. The location-based predictor matches Facebook users' locations with census data at the city level for income and demographic predictions. Age- and relationship-based predictors are created to improve the accuracy of the proposed location-based predictor by utilizing social network link information. In the case where a user does not share any location information on their Facebook profile, a kernel density estimation location predictor is created. This predictor utilizes publicly available telephone record information for all people in the US with the same surname as the user to create a likelihood distribution of the user's location. This is combined with the user's IP-level information to narrow the probability estimation down to a local regional constraint.
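The kernel-density-estimation step described above can be sketched directly: surname-matched record locations are smoothed into a likelihood surface with a Gaussian kernel, and an IP-derived regional constraint would then restrict where that surface is evaluated. All coordinates and the bandwidth below are made-up illustrations, not values from the thesis.

```python
import numpy as np

# Gaussian kernel density estimate over hypothetical (x, y) locations of
# same-surname records; the query with the highest value is the most
# likely location. An IP-level bounding box would restrict the queries.

def kde_likelihood(points, query, bandwidth=1.0):
    """Gaussian KDE evaluated at `query`; `points` is an (n, 2) array."""
    points = np.asarray(points, dtype=float)
    d2 = np.sum((points - np.asarray(query, dtype=float)) ** 2, axis=1)
    weights = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return np.mean(weights) / (2.0 * np.pi * bandwidth ** 2)

# A query near the cluster of records scores far higher than a remote one:
near = kde_likelihood([[0, 0], [0.5, 0], [0, 0.5]], (0, 0))
far = kde_likelihood([[0, 0], [0.5, 0], [0, 0.5]], (10, 10))
```

In practice the bandwidth controls how strongly a surname's geographic spread is smoothed, which trades off precision against robustness to sparse records.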
Contributors: Mao, Jingxian (Author) / Maciejewski, Ross (Thesis advisor) / Farin, Gerald (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In this thesis, we present the study of several physical properties of relativistic matter under extreme conditions. We start by deriving the rate of the nonleptonic weak processes and the bulk viscosity in several spin-one color superconducting phases of quark matter. We also calculate the bulk viscosity in the nonlinear and anharmonic regime in the normal phase of strange quark matter. We point out several qualitative effects due to the anharmonicity, although quantitatively they appear to be relatively small. In the corresponding study, we take into account the interplay between the nonleptonic and semileptonic weak processes. The results can be important in order to relate accessible observables of compact stars to their internal composition. We also use quantum field theoretical methods to study the transport properties of monolayer graphene in a strong magnetic field. The corresponding quasi-relativistic system reveals an anomalous quantum Hall effect, whose features are directly connected with the spontaneous flavor symmetry breaking. We study the microscopic origin of Faraday rotation and magneto-optical transmission in graphene and show that their main features are in agreement with the experimental data.
Contributors: Wang, Xinyang, Ph.D. (Author) / Shovkovy, Igor (Thesis advisor) / Belitsky, Andrei (Committee member) / Easson, Damien (Committee member) / Peng, Xihong (Committee member) / Vachaspati, Tanmay (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Quasars, the visible phenomena associated with the active accretion phase of supermassive black holes found in the centers of galaxies, represent one of the most energetic processes in the Universe. As matter falls into the central black hole, it is accelerated and collisionally heated, and the radiation emitted can outshine the combined light of all the stars in the host galaxy. Studies of quasar host galaxies at ultraviolet to near-infrared wavelengths are fundamentally limited by the precision with which the light from the central quasar accretion can be disentangled from the light of stars in the surrounding host galaxy. In this Dissertation, I discuss direct imaging of quasar host galaxies at redshifts z ≃ 2 and z ≃ 6 using new data obtained with the Hubble Space Telescope. I describe a new method for removing the point source flux using Markov Chain Monte Carlo parameter estimation and simultaneous modeling of the point source and host galaxy. I then discuss applications of this method to understanding the physical properties of high-redshift quasar host galaxies including their structures, luminosities, sizes, and colors, and inferred stellar population properties such as age, mass, and dust content.
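The simultaneous point-source/host modeling described above can be illustrated with a toy 1D version: a Metropolis sampler fits the amplitude of a known PSF plus a flat "host" level to simulated pixels. Real quasar-host fitting operates on 2D images with structured host models and instrument PSFs; every number here (PSF shape, noise level, proposal scale) is an illustrative assumption.

```python
import numpy as np

# Toy Metropolis MCMC: jointly fit a point-source amplitude and a flat
# host level to noisy 1D "pixels". The true values are amp=10, host=2.

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 101)
psf = np.exp(-x ** 2 / 2)                       # known point-spread profile
data = 10.0 * psf + 2.0 + rng.normal(0.0, 0.1, x.size)

def log_like(params):
    amp, host = params
    resid = data - (amp * psf + host)
    return -0.5 * np.sum(resid ** 2) / 0.1 ** 2  # Gaussian noise, sigma=0.1

params = np.array([5.0, 0.0])                    # deliberately poor start
lp = log_like(params)
chain = []
for _ in range(20000):
    proposal = params + rng.normal(0.0, 0.05, 2)
    lp_prop = log_like(proposal)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
        params, lp = proposal, lp_prop
    chain.append(params)
post = np.array(chain[5000:])                    # discard burn-in
amp_mean, host_mean = post.mean(axis=0)          # recover ~10 and ~2
```

The posterior samples also give the joint uncertainty on the point-source and host fluxes, which is the quantity that limits host-galaxy photometry in the imaging study.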
Contributors: Mechtley, Matt R. (Author) / Windhorst, Rogier A. (Thesis advisor) / Butler, Nathaniel (Committee member) / Jansen, Rolf A. (Committee member) / Rhoads, James (Committee member) / Scowen, Paul (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This document presents a new implementation of the Smoothed Particle Hydrodynamics algorithm using DirectX 11 and DirectCompute. The main goal of this document is to present to the reader an alternative solution to the widely studied problem of fluid simulation. Most other solutions have been implemented using the NVIDIA CUDA framework; however, the solution proposed in this document uses the Microsoft general-purpose computing on graphics processing units API. The implementation allows for the simulation of a large number of particles in a real-time scenario. The solution presented here uses the Smoothed Particle Hydrodynamics algorithm to calculate the forces within the fluid; this algorithm provides a Lagrangian approach that discretizes the Navier-Stokes equations into a set of particles. Our solution uses DirectCompute compute shaders to evaluate each particle using the multithreading and multi-core capabilities of the GPU, increasing overall performance. The solution then describes a method for extracting the fluid surface using the Marching Cubes method and the programmable interfaces exposed by the DirectX pipeline. In particular, this document presents a method for using the Geometry Shader stage to generate the triangle mesh defined by the Marching Cubes method. The implementation results show the ability to simulate over 64K particles at 900 frames per second when surface reconstruction is excluded, and at 400 frames per second when the Marching Cubes steps are included.
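The per-particle density step that the DirectCompute shaders evaluate can be sketched on the CPU: each particle's density is a kernel-weighted sum over its neighbors, here with the standard poly6 smoothing kernel and its usual 3D normalization. This is a minimal O(n^2) sketch; the smoothing length and mass are arbitrary, and GPU implementations replace the brute-force neighbor loop with spatial hashing.

```python
import numpy as np

# SPH density estimate with the poly6 kernel:
#   W(r, h) = 315 / (64 pi h^9) * (h^2 - r^2)^3   for r < h, else 0
# Each particle sums kernel weights over all particles within radius h.

def sph_density(positions, h=0.1, mass=1.0):
    """O(n^2) density estimate; shader versions use spatial hashing."""
    positions = np.asarray(positions, dtype=float)
    n = positions.shape[0]
    coef = 315.0 / (64.0 * np.pi * h ** 9)
    rho = np.zeros(n)
    for i in range(n):
        r2 = np.sum((positions - positions[i]) ** 2, axis=1)
        w = np.where(r2 < h * h, (h * h - r2) ** 3, 0.0)
        rho[i] = mass * coef * np.sum(w)
    return rho
```

Pressure and viscosity forces follow the same pattern with the spiky and viscosity kernels, which is why the compute-shader dispatch is essentially one thread per particle performing such neighbor sums.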
Contributors: Figueroa, Gustavo (Author) / Farin, Gerald (Thesis advisor) / Maciejewski, Ross (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Numerical simulations are very helpful in understanding the physics of the formation of structure and galaxies. However, it is sometimes difficult to interpret model data with respect to observations, partly due to the difficulties and background noise inherent to observation. The goal here is to attempt to bridge this gap between simulation and observation by rendering the model output in image format, which is then processed by tools commonly used in observational astronomy. Images are synthesized in various filters by folding the output of cosmological simulations of gas dynamics with star formation and dark matter with the Bruzual-Charlot stellar population synthesis models. A variation of the Virgo-Gadget numerical simulation code is used with the hybrid gas and stellar formation models of Springel and Hernquist (2003). Outputs taken at various redshifts are stacked to create a synthetic view of the simulated star clusters. Source Extractor (SExtractor) is used to find groupings of stellar populations, which are considered as galaxies or galaxy building blocks, and photometry is used to estimate the rest-frame luminosities and distribution functions. With further refinements, this is expected to provide support for missions such as JWST, as well as to probe what additional physics is needed to model the data. The results show good agreement in many respects with observed properties of the galaxy luminosity function (LF) over a wide range of high redshifts. In particular, the slope (alpha), when fitted to the standard Schechter function, shows excellent agreement with observation, both in value and in its evolution with redshift. Discrepancies between other properties and observation are seen to result from limitations of the simulation and from additional feedback mechanisms that are still needed.
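The luminosity-function fits mentioned above use the standard Schechter form; this sketch evaluates it per unit magnitude. The parameter values are illustrative placeholders, not the fitted values from the study.

```python
import numpy as np

# Schechter luminosity function in magnitudes:
#   phi(M) dM = 0.4 ln(10) * phi* * x^(alpha+1) * exp(-x) dM,
# where x = 10^(0.4 (M* - M)). alpha sets the faint-end slope that the
# synthetic-image fits are compared against observations.

def schechter_mag(M, phi_star=1e-3, M_star=-21.0, alpha=-1.7):
    """Number density per magnitude at absolute magnitude M."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)
```

With a faint-end slope steeper than -1 (as here), counts keep rising faintward of M*, which is why the fitted alpha and its redshift evolution are the most robust points of comparison with observed high-redshift LFs.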
Contributors: Morgan, Robert (Author) / Windhorst, Rogier A. (Thesis advisor) / Scannapieco, Evan (Committee member) / Rhoads, James (Committee member) / Gardner, Carl (Committee member) / Belitsky, Andrei (Committee member) / Arizona State University (Publisher)
Created: 2012