Matching Items (95)
150206-Thumbnail Image.png
Description
Proteins are a fundamental unit in biology. Although proteins have been extensively studied, there is still much to investigate. The mechanism by which proteins fold into their native state, how evolution shapes structural dynamics, and the dynamic mechanisms of many diseases are not well understood. In this thesis, protein folding is explored using a multi-scale modeling method comprising (i) geometric constraint-based simulations that efficiently search for native-like topologies and (ii) reservoir replica exchange molecular dynamics, which identifies the low-free-energy structures and refines them toward the native conformation. A test set of eight proteins and three ancestral steroid receptor proteins are folded to within 2.7 Å all-atom RMSD of their experimental crystal structures. Protein evolution and disease-associated mutations (DAMs) are most commonly studied by in silico multiple sequence alignment methods. Here, however, structural dynamics are incorporated to give insight into the evolution of three ancestral proteins and the mechanism of several diseases in the human ferritin protein. The differences in conformational dynamics of these evolutionarily related, functionally diverged ancestral steroid receptor proteins are investigated by obtaining the most collective motion through essential dynamics. Strikingly, this analysis shows that evolutionarily diverged proteins of the same family do not share the same dynamic subspace. Rather, those sharing the same function cluster together while remaining distant from their functionally diverged homologs. This dynamics analysis also identifies 77% of the mutations (functional and permissive) necessary to evolve a new function. In silico methods for the prediction of DAMs rely on differences in evolution rate due to purifying selection, and therefore the accuracy of DAM prediction decreases at fast- and slow-evolving sites.
Here, we investigate structural dynamics by computing the contribution of each residue to the biologically relevant fluctuations and from this define a metric: the dynamic stability index (DSI). Using the DSI we study the mechanism of three diseases observed in the human ferritin protein. The T30I and R40G DAMs show a loss of dynamic stability at the C-terminal helix and the nearby regulatory loop, agreeing with experimental results implicating the same regulatory loop as a cause of cataract syndrome.
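The general idea of a per-residue fluctuation contribution can be sketched as follows. This is not the thesis's actual DSI formula; the function name, covariance input, and normalization are illustrative assumptions only.

```python
import numpy as np

def fluctuation_contributions(cov):
    """Per-residue share of the total mean-square fluctuation.

    `cov` is a 3N x 3N covariance matrix of Cartesian residue
    fluctuations (e.g. from an elastic network model or an MD run).
    Hypothetical normalized-contribution score, not the thesis's DSI.
    """
    n = cov.shape[0] // 3
    # mean-square fluctuation of residue i = trace of its 3x3 diagonal block
    msf = np.array([np.trace(cov[3 * i:3 * i + 3, 3 * i:3 * i + 3])
                    for i in range(n)])
    return msf / msf.sum()  # contributions sum to 1

# toy example: 4-residue "protein" with a random positive-definite covariance
rng = np.random.default_rng(0)
a = rng.normal(size=(12, 12))
scores = fluctuation_contributions(a @ a.T)
```

Residues with an unusually small share of the biologically relevant fluctuations would, under this reading, be the dynamically stable ones.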
ContributorsGlembo, Tyler J (Author) / Ozkan, Sefika B (Thesis advisor) / Thorpe, Michael F (Committee member) / Ros, Robert (Committee member) / Kumar, Sudhir (Committee member) / Shumway, John (Committee member) / Arizona State University (Publisher)
Created2011
150095-Thumbnail Image.png
Description
Multi-task learning (MTL) aims to improve the generalization performance (of the resulting classifiers) by learning multiple related tasks simultaneously. Specifically, MTL exploits the intrinsic task relatedness, based on which the informative domain knowledge from each task can be shared across multiple tasks and thus facilitate individual task learning. It is particularly desirable to share the domain knowledge (among the tasks) when there are a number of related tasks but only limited training data is available for each task. Modeling the relationship of multiple tasks is critical to the generalization performance of the MTL algorithms. In this dissertation, I propose a series of MTL approaches which assume that multiple tasks are intrinsically related via a shared low-dimensional feature space. The proposed MTL approaches are developed to deal with different scenarios and settings; they are respectively formulated as mathematical optimization problems of minimizing the empirical loss regularized by different structures. For all proposed MTL formulations, I develop the associated optimization algorithms to find their globally optimal solution efficiently. I also conduct theoretical analysis for certain MTL approaches by deriving the globally optimal solution recovery condition and the performance bound. To demonstrate the practical performance, I apply the proposed MTL approaches to different real-world applications: (1) Automated annotation of the Drosophila gene expression pattern images; (2) Categorization of the Yahoo web pages. Our experimental results demonstrate the efficiency and effectiveness of the proposed algorithms.
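One common way to encode a shared low-dimensional feature space is trace-norm (nuclear-norm) regularization of the stacked weight matrix, solved by proximal gradient descent. The sketch below is illustrative only; the dissertation's actual formulations, losses, and solvers differ.

```python
import numpy as np

def nuclear_prox(W, tau):
    # Singular-value soft-thresholding: proximal operator of tau * ||W||_*
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def mtl_trace_norm(Xs, ys, lam=0.01, step=0.1, iters=300):
    """Least-squares MTL with a shared low-rank structure on W (d x T).

    Minimizes sum_t ||X_t w_t - y_t||^2 / n_t + lam * ||W||_* by
    proximal gradient; a sketch, not the author's algorithm.
    """
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(iters):
        G = np.zeros_like(W)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            G[:, t] = X.T @ (X @ W[:, t] - y) / len(y)  # per-task gradient
        W = nuclear_prox(W - step * G, step * lam)       # prox step
    return W

# toy tasks whose true weights share a rank-1 subspace
rng = np.random.default_rng(1)
shared = rng.normal(size=5)
Xs = [rng.normal(size=(50, 5)) for _ in range(3)]
ys = [X @ (shared * (t + 1)) for t, X in enumerate(Xs)]
W = mtl_trace_norm(Xs, ys)
```

Because the three true weight vectors are multiples of one direction, the trace-norm penalty recovers them with an essentially rank-1 W.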
ContributorsChen, Jianhui (Author) / Ye, Jieping (Thesis advisor) / Kumar, Sudhir (Committee member) / Liu, Huan (Committee member) / Xue, Guoliang (Committee member) / Arizona State University (Publisher)
Created2011
152370-Thumbnail Image.png
Description
Functional magnetic resonance imaging (fMRI) has been widely used to measure the retinotopic organization of early visual cortex in the human brain. Previous studies have identified multiple visual field maps (VFMs) based on statistical analysis of fMRI signals, but the resulting geometry has not been fully characterized with mathematical models. This thesis explores using concepts from computational conformal geometry to create a custom software framework for examining and generating quantitative mathematical models for characterizing the geometry of early visual areas in the human brain. The software framework includes a graphical user interface built on top of a selected core conformal flattening algorithm and various software tools compiled specifically for processing and examining retinotopic data. Three conformal flattening algorithms were implemented and evaluated for speed and how well they preserve the conformal metric. All three algorithms performed well in preserving the conformal metric but the speed and stability of the algorithms varied. The software framework performed correctly on actual retinotopic data collected using the standard travelling-wave experiment. Preliminary analysis of the Beltrami coefficient for the early data set shows that selected regions of V1 that contain reasonably smooth eccentricity and polar angle gradients do show significant local conformality, warranting further investigation of this approach for analysis of early and higher visual cortex.
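The Beltrami coefficient mentioned above measures local deviation from conformality: mu = f_zbar / f_z, with |mu| near zero where the map is conformal. A toy finite-difference sketch on a grid (the thesis works on triangle meshes, so this is illustrative only):

```python
import numpy as np

def beltrami_coefficient(f, h):
    """Finite-difference Beltrami coefficient of a planar map.

    `f`: complex-valued array sampling the map on a square grid with
    spacing `h`. Returns mu = f_zbar / f_z at each grid point.
    """
    fy, fx = np.gradient(f, h)       # rows vary in y, columns in x
    fz = 0.5 * (fx - 1j * fy)        # Wirtinger derivative d/dz
    fzbar = 0.5 * (fx + 1j * fy)     # d/dzbar
    return fzbar / fz

# f(z) = z^2 is conformal away from the origin, so |mu| should vanish
x = np.linspace(1.0, 2.0, 64)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
mu = beltrami_coefficient(Z ** 2, x[1] - x[0])
```

For a quasiconformal map, |mu| < 1 everywhere bounds the local angular distortion, which is why it is a natural diagnostic for flattened retinotopic maps.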
ContributorsTa, Duyan (Author) / Wang, Yalin (Thesis advisor) / Maciejewski, Ross (Committee member) / Wonka, Peter (Committee member) / Arizona State University (Publisher)
Created2013
152300-Thumbnail Image.png
Description
In blindness research, the corpus callosum (CC) is the most frequently studied sub-cortical structure, due to its important involvement in visual processing. While most callosal analyses from brain structural magnetic resonance images (MRI) are limited to the 2D mid-sagittal slice, we propose a novel framework to capture a complete set of 3D morphological differences in the corpus callosum between two groups of subjects. The CCs are segmented from whole brain T1-weighted MRI and modeled as 3D tetrahedral meshes. The callosal surface is divided into superior and inferior patches on which we compute a volumetric harmonic field by solving Laplace's equation with Dirichlet boundary conditions. We adopt a refined tetrahedral mesh to compute the Laplacian operator, so our computation can achieve sub-voxel accuracy. Thickness is estimated by tracing the streamlines in the harmonic field. We combine areal changes found using surface tensor-based morphometry and thickness information into a vector at each vertex to be used as a metric for the statistical analysis. Group differences are assessed on this combined measure through Hotelling's T2 test. The method is applied to statistically compare three groups: congenitally blind (CB), late blind (LB; onset > 8 years old) and sighted (SC) subjects. Our results reveal significant differences in several regions of the CC between both blind groups and the sighted group, and to a lesser extent between the LB and CB groups. These results demonstrate the crucial role of visual deprivation during the developmental period in reshaping the structural architecture of the CC.
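The two-sample Hotelling's T2 test used for the per-vertex group comparison has a standard closed form; a minimal sketch (the vertex-wise feature assembly and multiple-comparison handling from the thesis are omitted):

```python
import numpy as np
from scipy import stats

def hotelling_t2(A, B):
    """Two-sample Hotelling's T^2 test.

    A, B: (n_subjects, p) arrays for the two groups at one vertex
    (p would combine, e.g., the thickness and areal measures).
    Returns (T2, p_value) under the usual normality assumptions.
    """
    n1, p = A.shape
    n2 = B.shape[0]
    diff = A.mean(0) - B.mean(0)
    # pooled within-group covariance
    S = ((n1 - 1) * np.cov(A, rowvar=False) +
         (n2 - 1) * np.cov(B, rowvar=False)) / (n1 + n2 - 2)
    t2 = n1 * n2 / (n1 + n2) * diff @ np.linalg.solve(S, diff)
    # convert to an F statistic for the p-value
    f = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    return t2, stats.f.sf(f, p, n1 + n2 - p - 1)

# toy check: two clearly separated 2D groups
rng = np.random.default_rng(3)
A = rng.normal(size=(30, 2))
B = rng.normal(size=(30, 2)) + 2.0
t2, pval = hotelling_t2(A, B)
```

The T2 statistic is the multivariate generalization of the squared t statistic, which is why it suits the combined thickness-plus-area vector at each vertex.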
ContributorsXu, Liang (Author) / Wang, Yalin (Thesis advisor) / Maciejewski, Ross (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created2013
151689-Thumbnail Image.png
Description
Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, and medical image processing. Via ℓ1-norm-based regularization, structured sparse learning algorithms can produce highly accurate models while imposing various predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. Then, I present three real-world applications which can benefit from the group-structured sparse learning technique. In the first application, I study the Alzheimer's disease diagnosis problem using multi-modality neuroimaging data. In this dataset, not every subject has all data sources available, exhibiting a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images. Combined with the spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotate the developmental stage of Drosophila embryos in gene expression images. In addition, it provides a stage score that enables one to more finely annotate each embryo, dividing them into early and late periods of development within standard stage demarcations. Stage scores help illuminate global gene activities and changes, and more refined stage annotations improve our ability to interpret results when expression pattern matches are discovered between genes.
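The core operation in group-structured sparse learning is the group (block) soft-thresholding proximal operator. It has a closed form for disjoint groups; one common device for the overlapping case this thesis targets is to duplicate shared variables so the groups become disjoint and then apply the same operator. The group indices below are illustrative.

```python
import numpy as np

def group_prox(v, groups, tau):
    """Block soft-thresholding: prox of tau * sum_g ||v_g||_2.

    Closed form for non-overlapping `groups` (lists of indices into v);
    overlapping groups can be reduced to this via variable duplication.
    """
    out = v.copy()
    for idx in groups:
        norm = np.linalg.norm(v[idx])
        # shrink the whole group toward zero; kill it if its norm <= tau
        out[idx] = 0.0 if norm <= tau else (1 - tau / norm) * v[idx]
    return out

v = np.array([3.0, 4.0, 0.1, 0.1])
shrunk = group_prox(v, [[0, 1], [2, 3]], tau=1.0)
```

The first group (norm 5) is scaled by 0.8 while the second (norm ~0.14) is zeroed out entirely, which is exactly the group-level feature selection the penalty is designed to produce.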
ContributorsYuan, Lei (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Xue, Guoliang (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created2013
151291-Thumbnail Image.png
Description
The contemporary architectural pedagogy is far removed from its ancestry: the classical Beaux-Arts and polytechnic schools of the 19th century and the Bauhaus and Vkhutemas models of the modern period. Today, the "digital" has invaded the academy and shapes pedagogical practices, epistemologies, and ontologies within it, and this invasion is reflected in teaching practices, principles, and tools. Much of this digital integration goes unremarked and may not even be explicitly taught. In this qualitative research project, interviews with 18 leading architecture lecturers, professors, and deans from programs across the United States were conducted. These interviews focused on advanced practices of digital architecture, such as the use of digital tools, and how these practices are viewed. These interviews yielded a wealth of information about the uses (and abuses) of advanced digital technologies within the architectural academy, and the results were analyzed using the methods of phenomenology and grounded theory. Most schools use digital technologies to some extent, although this extent varies greatly. While some schools have abandoned hand-drawing and other hand-based craft almost entirely, others have retained traditional techniques and use digital technologies sparingly. Reasons for using digital design processes include industry pressure as well as the increased ability to solve problems and the speed with which they could be solved. Despite the prevalence of digital design, most programs did not teach related design software explicitly, if at all, instead requiring students (especially graduate students) to learn to use them outside the design studio. Some of the problems with digital design identified in the interviews include social problems such as alienation as well as issues like understanding scale and embodiment of skill.
ContributorsAlqabandy, Hamad (Author) / Brandt, Beverly (Thesis advisor) / Mesch, Claudia (Committee member) / Newton, David (Committee member) / Arizona State University (Publisher)
Created2012
151336-Thumbnail Image.png
Description
Over 2 billion people are using online social network services, such as Facebook, Twitter, Google+, LinkedIn, and Pinterest. Users update their status, post their photos, share their information, and chat with others on these social network sites every day; however, not everyone shares the same amount of information. This thesis explores methods of linking publicly available data sources as a means of extrapolating missing Facebook information. An application named "Visual Friends Income Map" was created on Facebook to collect social network data and explore geodemographic properties by linking publicly available data, such as US census data. Multiple predictors are implemented to link the data sets and accurately extrapolate missing information from Facebook. The location-based predictor matches Facebook users' locations with census data at the city level for income and demographic predictions. Age- and relationship-based predictors are created to improve the accuracy of the proposed location-based predictor by utilizing social network link information. In the case where a user does not share any location information on their Facebook profile, a kernel density estimation location predictor is created. This predictor utilizes publicly available telephone records of all people with the same surname as the user in the US to create a likelihood distribution of the user's location. This is combined with the user's IP-level information to narrow the probability estimate down to a local region.
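The kernel density estimation step can be sketched in a few lines: fit a 2D KDE over the coordinates of same-surname records and evaluate it at candidate locations. The coordinates and clustering below are made up for illustration, and the thesis additionally folds in the IP-derived regional constraint.

```python
import numpy as np
from scipy.stats import gaussian_kde

def location_likelihood(known_points, query_points):
    """KDE over (lon, lat) coordinates of same-surname phone records.

    known_points: (n, 2) array of record coordinates.
    Returns the estimated density at each row of query_points.
    """
    kde = gaussian_kde(known_points.T)   # gaussian_kde wants (dims, n)
    return kde(query_points.T)

# toy records clustered around two cities (coordinates are made up)
rng = np.random.default_rng(2)
city_a = rng.normal([-112.0, 33.4], 0.3, size=(200, 2))
city_b = rng.normal([-74.0, 40.7], 0.3, size=(50, 2))
records = np.vstack([city_a, city_b])
density = location_likelihood(records,
                              np.array([[-112.0, 33.4], [-90.0, 37.0]]))
```

The density is highest where the surname is concentrated, so the center of the larger cluster scores far above an arbitrary mid-country point.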
ContributorsMao, Jingxian (Author) / Maciejewski, Ross (Thesis advisor) / Farin, Gerald (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created2012
151929-Thumbnail Image.png
Description
The entire history of HIV-1 is hidden in its ten thousand bases, where information regarding its evolutionary traversal through the human population can only be unlocked with fine-scale sequence analysis. Measurable footprints of mutation and recombination have imparted upon us a wealth of knowledge, from multiple chimpanzee-to-human transmissions to patterns of neutralizing antibody and drug resistance. Extracting maximum understanding from such diverse data can only be accomplished by analyzing the viral population from many angles. This body of work explores two primary aspects of HIV sequence evolution, point mutation and recombination, through cross-sectional (inter-individual) and longitudinal (intra-individual) investigations, respectively. Cross-sectional Analysis: The role of Haiti in the subtype B pandemic has been hotly debated for years; while there have been many studies, up to this point, no one has incorporated the well-known mechanism of retroviral recombination into their biological model. Prior to the use of recombination detection, multiple analyses produced trees where subtype B appears to have first entered Haiti, followed by a jump into the rest of the world. The results presented here contest the Haiti-first theory of the pandemic and instead suggest simultaneous entries of subtype B into Haiti and the rest of the world. Longitudinal Analysis: Potential N-linked glycosylation sites (PNGS) are the most evolutionarily dynamic component of one of the most evolutionarily dynamic proteins known to date. While the number of mutations associated with the increase or decrease of PNGS frequency over time is high, there are a set of relatively stable sites that persist within and between longitudinally sampled individuals. Here, I identify the most conserved stable PNGSs and suggest their potential roles in host-virus interplay. 
In addition, I have identified, for the first time, what may be a gp120-based environmental preference for N-linked glycosylation sites.
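Scanning for PNGSs rests on the well-known N-X-[S/T] sequon, where X is any residue except proline. A minimal overlapping-regex sketch (the thesis's alignment-aware bookkeeping is omitted):

```python
import re

def find_pngs(seq):
    """Potential N-linked glycosylation sites in a protein sequence.

    Matches the N-X-[S/T] sequon (X != P) with a lookahead so that
    overlapping sequons are all reported. Returns 0-based positions
    of the asparagine.
    """
    return [m.start() for m in re.finditer(r"N(?=[^P][ST])", seq)]

# toy sequence: one plain sequon, one blocked by proline, two overlapping
sites = find_pngs("MNGTANPSXNNST")
```

The lookahead matters: in "...NNST" the two asparagines form overlapping sequons, and a plain three-character match would miss the second one.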
ContributorsHepp, Crystal Marie, 1981- (Author) / Rosenberg, Michael S. (Thesis advisor) / Hedrick, Philip (Committee member) / Escalante, Ananias (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created2013
151278-Thumbnail Image.png
Description
This document presents a new implementation of the Smoothed Particle Hydrodynamics (SPH) algorithm using DirectX 11 and DirectCompute. The main goal of this document is to present to the reader an alternative solution to the widely studied and researched problem of fluid simulation. Most other solutions have been implemented using the NVIDIA CUDA framework; the solution proposed in this document instead uses Microsoft's API for general-purpose computing on graphics processing units. The implementation allows for the simulation of a large number of particles in a real-time scenario. The solution presented here uses the SPH algorithm to calculate the forces within the fluid; this algorithm provides a Lagrangian approach that discretizes the Navier-Stokes equations into a set of particles. Our solution uses DirectCompute compute shaders to evaluate each particle, exploiting the multithreading and multi-core capabilities of the GPU to increase overall performance. The solution then describes a method for extracting the fluid surface using the Marching Cubes method and the programmable interfaces exposed by the DirectX pipeline. In particular, this document presents a method for using the Geometry Shader stage to generate the triangle mesh defined by the Marching Cubes method. The implementation simulates over 64K particles at 900 frames per second without the surface-reconstruction steps and at 400 frames per second with the Marching Cubes steps included.
ContributorsFigueroa, Gustavo (Author) / Farin, Gerald (Thesis advisor) / Maciejewski, Ross (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created2012
150829-Thumbnail Image.png
Description
In the middle of the 20th century, juried annuals of Native American painting in art museums were unique opportunities because of their select focus on two-dimensional art as opposed to "craft" objects and their inclusion of artists from across the United States. Their first fifteen years were critical for patronage and widespread acceptance of modern easel painting. Held at the Philbrook Art Center in Tulsa (1946-1979), the Denver Art Museum (1951-1954), and the Museum of New Mexico Art Gallery in Santa Fe (1956-1965), they were significant not only for the accolades and prestige they garnered for award winners, but also for setting standards of quality and style at the time. During the early years of the annuals, the art was changing, some moving away from conventional forms derived from the early art training of the 1920s and 30s in the Southwest and Oklahoma, and incorporating modern themes and styles acquired through expanded opportunities for travel and education. The competitions reinforced and reflected a variety of attitudes about contemporary art which ranged from preserving the authenticity of the traditional style to encouraging experimentation. Ultimately becoming sites of conflict, the museums that hosted annuals contested the directions in which artists were working. Exhibition catalogs, archived documents, and newspaper and magazine articles about the annuals provide details on the exhibits and the changes that occurred over time. The museums' guidelines and motivations, and the statistics on the award winners reveal attitudes toward the art. The institutions' reactions in the face of controversy and their adjustments to the annuals' guidelines impart the compromises each made as they adapted to new trends that occurred in Native American painting over a fifteen-year period.
This thesis compares the approaches of three museums to their juried annuals and establishes the existence of a variety of attitudes on contemporary Native American painting from 1946-1960. Through this collection of institutional views, the competitions maintained a patronage base for traditional style painting while providing opportunities for experimentation, paving the way for the great variety and artistic progress of Native American painting today.
ContributorsPeters, Stephanie (Author) / Duncan, Kate (Thesis advisor) / Fahlman, Betsy (Thesis advisor) / Mesch, Claudia (Committee member) / Arizona State University (Publisher)
Created2012