Description
A synbody is a newly developed protein-binding peptide that can be rapidly produced by chemical methods. The advantages of the synbody production process make it a potential binding reagent for the human proteome. Most synbodies are designed to bind specific proteins, and the peptides incorporated in a synbody are discovered with peptide microarray technology. Nevertheless, the targets of uncharacterized synbodies can also be discovered by searching through a protein mixture. The first part of this thesis focuses on this target-searching process, which was performed with immunoprecipitation assays and mass spectrometry analysis. Proteins are pulled down from cell lysate by a given synbody and then identified using mass spectrometry. After excluding non-specific binders, the interaction between a synbody and its true target(s) can be verified with affinity measurements. As a specific example, the binding between the 1-4-KCap synbody and actin was discovered. This result demonstrated the feasibility of the mass-spectrometry-based method and suggested that a high-throughput synbody discovery platform for the human proteome could be developed. Beyond synbody development, peptide microarray technology can also be used for immunosignaturing. The composition of the antibodies present in a person's blood is related to that individual's health condition, and a method called immunosignaturing has been developed for early disease diagnosis based on this principle. CIM10K microarray slides serve as the platform for blood antibody detection in immunosignaturing. During the analysis of an immunosignature, the data from these slides needs to be validated using landing light peptides. The second part of this thesis focuses on this validation. A biotinylated peptide was used as a landing light on the new CIM10K slides. Data collected over several rounds of tests indicated that the variation among landing lights was significantly reduced by the newly prepared biotinylated peptide compared with the old peptide mixture. Several suggestions for further landing light improvement are proposed based on these results.
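The exclusion of non-specific binders reduces to a simple set operation on the pull-down results. Below is a minimal Python sketch under that reading; it is not the thesis's actual pipeline, and the function name and toy data are hypothetical:

```python
# Hypothetical sketch: treat any protein that also appears in a control
# pull-down (no synbody, or an unrelated synbody) as non-specific.
def candidate_targets(synbody_hits, control_hit_lists):
    """Return proteins identified with the synbody but absent from all controls."""
    nonspecific = set().union(*control_hit_lists)
    return set(synbody_hits) - nonspecific

# Toy data: actin survives the filter and becomes a candidate target for
# verification by affinity measurement (as done for the 1-4-KCap synbody).
hits = ["actin", "tubulin", "keratin"]
controls = [["tubulin", "keratin"], ["keratin", "hsp70"]]
print(candidate_targets(hits, controls))  # -> {'actin'}
```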
Contributors: Sun, Minyao (Author) / Johnston, Stephen Albert (Thesis advisor) / Diehnelt, Chris Wayne (Committee member) / Stafford, Phillip (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Immunosignaturing is a new immunodiagnostic technology that uses random-sequence peptide microarrays to profile the humoral immune response. Though the peptides have little sequence homology to any known protein, binding of serum antibodies can be detected and the pattern correlated with disease states. The aim of my dissertation is to analyze the factors affecting the binding patterns using monoclonal antibodies and to determine how much information can be extracted from the sequences. Specifically, I examined the effects of antibody concentration, competition, peptide density, and antibody valence. Peptide binding could be detected at the low concentrations relevant to immunosignaturing, and a monoclonal's signature could be detected even in the presence of a 100-fold excess of naive IgG. I also found that peptide density was important, but this effect was not due to bivalent binding. Next, I examined in more detail how a polyreactive antibody binds to random-sequence peptides compared with protein-sequence-derived peptides, and found that it bound many peptides from both sets, but with low apparent affinity. An in-depth look at the peptides' physicochemical properties and sequence complexity revealed some correlations with binding, but they were generally small and varied greatly between antibodies. However, on a larger but less diverse peptide library, I found that sequence complexity was important for antibody binding. The redundancy in that library enabled the identification of specific sub-sequences recognized by an antibody. The current immunosignaturing platform has little repetition of sub-sequences, so I evaluated several methods to infer antibody epitopes. I found two methods with modest prediction accuracy, and I developed a software application called GuiTope to facilitate the epitope prediction analysis. None of the methods had sufficient accuracy to identify an unknown antigen from a database. In conclusion, the characteristics of the immunosignaturing platform observed through monoclonal antibody experiments demonstrate its promise as a new diagnostic technology. A major limitation, however, is the difficulty of connecting a signature back to the original antigen; larger peptide libraries could facilitate these predictions.
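To make the sub-sequence idea concrete, the sketch below scores k-mers by their enrichment among an antibody's top-binding peptides relative to the whole library. This is an illustrative Python sketch, not GuiTope's actual algorithm; the scoring rule, names, and parameters are assumptions:

```python
from collections import Counter

def kmer_counts(peptides, k):
    """Count every length-k sub-sequence across a list of peptides."""
    counts = Counter()
    for pep in peptides:
        for i in range(len(pep) - k + 1):
            counts[pep[i:i + k]] += 1
    return counts

def enriched_kmers(top_peptides, library, k=4, min_ratio=3.0):
    """Keep k-mers over-represented among the top binders (add-one smoothing)."""
    top, bg = kmer_counts(top_peptides, k), kmer_counts(library, k)
    n_top, n_bg = max(sum(top.values()), 1), max(sum(bg.values()), 1)
    scores = {
        kmer: (c / n_top) / ((bg.get(kmer, 0) + 1) / n_bg)
        for kmer, c in top.items()
    }
    return {kmer: s for kmer, s in scores.items() if s >= min_ratio}
```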
Contributors: Halperin, Rebecca (Author) / Johnston, Stephen A. (Thesis advisor) / Bordner, Andrew (Committee member) / Taylor, Thomas (Committee member) / Stafford, Phillip (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
African Swine Fever (ASF), endemic in many African countries, is now spreading to other continents. Though ASF can cause serious economic losses in affected countries, no vaccine exists to provide immunity to animals. Disease control relies largely on rapid diagnosis and the implementation of movement restrictions and strict eradication programs; developing a scalable, accurate, and low-cost diagnostic for ASF would therefore be of great value. CIM's 10K random peptide microarray is a new high-throughput platform that allows systematic investigation of immune responses associated with disease and shows promise as a diagnostic tool. In this study, this new technology was applied to characterize the immune responses of ASF virus (ASFV) infections and immunizations. Six serum sets from ASFV antigen-immunized pigs, six sera from infected pigs, and 20 sera from unexposed pigs were tested and analyzed statistically. The results show that both ASFV antigen-immunized pigs and ASFV-infected pigs can be distinguished from unexposed pigs. Since immune responses to other viral infections also appear distinguishable on this platform, it holds potential for the development of a new ASF diagnostic. The ability of the platform to identify specific ASFV antibody epitopes was also explored. A subtle motif was found to be shared among the peptides displaying the highest reactivity for an antigen-specific antibody; however, this motif does not appear to match any epitope predicted by linear antibody epitope prediction methods.
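As an illustration of how the exposed and unexposed groups might be separated statistically on such an array, here is a minimal Python sketch using a per-peptide two-sample t-test with a Bonferroni correction; the test choice, effect sizes, and synthetic data are assumptions, not the study's actual analysis:

```python
import numpy as np
from scipy.stats import ttest_ind

def discriminating_peptides(exposed, unexposed, alpha=0.05):
    """Rows are sera, columns are array peptides; return the column indices
    whose intensities differ between groups (Bonferroni-corrected t-test)."""
    _, pvals = ttest_ind(exposed, unexposed, axis=0)
    return np.where(pvals < alpha / exposed.shape[1])[0]

rng = np.random.default_rng(0)
exposed = rng.normal(1.0, 0.3, size=(6, 10000))     # e.g., 6 immunized sera
exposed[:, :50] += 2.0                              # 50 truly reactive peptides
unexposed = rng.normal(1.0, 0.3, size=(20, 10000))  # 20 unexposed sera
print(len(discriminating_peptides(exposed, unexposed)))  # close to 50
```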
Contributors: Xiao, Liang (Author) / Sykes, Kathryn (Thesis advisor) / Zhao, Zhan-Gong (Committee member) / Stafford, Phillip (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Functional magnetic resonance imaging (fMRI) has been widely used to measure the retinotopic organization of early visual cortex in the human brain. Previous studies have identified multiple visual field maps (VFMs) based on statistical analysis of fMRI signals, but the resulting geometry has not been fully characterized with mathematical models. This thesis explores concepts from computational conformal geometry to create a custom software framework for examining and generating quantitative mathematical models that characterize the geometry of early visual areas in the human brain. The framework includes a graphical user interface built on top of a selected core conformal flattening algorithm, along with software tools compiled specifically for processing and examining retinotopic data. Three conformal flattening algorithms were implemented and evaluated for speed and for how well they preserve the conformal metric. All three performed well in preserving the conformal metric, but their speed and stability varied. The framework performed correctly on actual retinotopic data collected using the standard travelling-wave experiment. Preliminary analysis of the Beltrami coefficient for this early data set shows that selected regions of V1 containing reasonably smooth eccentricity and polar-angle gradients exhibit significant local conformality, warranting further investigation of this approach for the analysis of early and higher visual cortex.
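For reference, the Beltrami coefficient mentioned above has a standard definition in quasiconformal theory (assumed here; the abstract does not spell it out). A map f is conformal at a point exactly when the coefficient vanishes there, so |mu_f| near zero over a region indicates local conformality:

```latex
\[
  \mu_f(z) \;=\; \frac{\partial f / \partial \bar{z}}{\partial f / \partial z},
  \qquad
  \frac{\partial}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right),
  \qquad
  \frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right).
\]
```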
Contributors: Ta, Duyan (Author) / Wang, Yalin (Thesis advisor) / Maciejewski, Ross (Committee member) / Wonka, Peter (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In blindness research, the corpus callosum (CC) is the most frequently studied sub-cortical structure because of its important involvement in visual processing. While most callosal analyses from brain structural magnetic resonance images (MRI) are limited to the 2D mid-sagittal slice, we propose a novel framework to capture a complete set of 3D morphological differences in the corpus callosum between two groups of subjects. The CCs are segmented from whole-brain T1-weighted MRI and modeled as 3D tetrahedral meshes. The callosal surface is divided into superior and inferior patches, on which we compute a volumetric harmonic field by solving Laplace's equation with Dirichlet boundary conditions. We adopt a refined tetrahedral mesh to compute the Laplacian operator, so our computation achieves sub-voxel accuracy. Thickness is estimated by tracing streamlines in the harmonic field. We combine areal changes found using surface tensor-based morphometry with the thickness information into a vector at each vertex, which serves as the metric for statistical analysis. Group differences are assessed on this combined measure through Hotelling's T² test. The method is applied to statistically compare three groups: congenitally blind (CB), late blind (LB; onset > 8 years old), and sighted (SC) subjects. Our results reveal significant differences in several regions of the CC between both blind groups and the sighted group, and to a lesser extent between the LB and CB groups. These results demonstrate the crucial role of visual deprivation during the developmental period in reshaping the structural architecture of the CC.
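The two key computations above can be written compactly. The 0/1 boundary values are an assumed convention (the abstract specifies only Dirichlet conditions on the two patches), and the second formula is the standard two-sample Hotelling statistic with pooled covariance estimate:

```latex
\[
  \Delta u = 0 \ \text{in the CC volume } \Omega,
  \qquad
  u\big|_{\text{superior patch}} = 1,
  \qquad
  u\big|_{\text{inferior patch}} = 0,
\]
\[
  T^2 \;=\; \frac{n_1 n_2}{n_1 + n_2}\,
  (\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2)^{\top}
  \hat{\Sigma}^{-1}
  (\bar{\mathbf{x}}_1 - \bar{\mathbf{x}}_2).
\]
```

Thickness is then read off along streamlines of the gradient field of u running between the two patches.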
Contributors: Xu, Liang (Author) / Wang, Yalin (Thesis advisor) / Maciejewski, Ross (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, and medical image processing. Via ℓ1-norm-based regularization penalties, structured sparse learning algorithms can produce highly accurate models while imposing various predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. Then, I present three real-world applications that can benefit from group-structured sparse learning. In the first application, I study the Alzheimer's disease diagnosis problem using multi-modality neuroimaging data; in this dataset, not every subject has all data sources available, exhibiting a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images; combined with spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotate the developmental stage of Drosophila embryos in gene expression images. It also provides a stage score that enables finer annotation of each embryo, dividing embryos into early and late periods of development within standard stage demarcations. Stage scores help illuminate global gene activities and changes, and more refined stage annotations improve the interpretation of results when expression-pattern matches are discovered between genes.
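The overlapping-group model proposed above is typically written as the following penalized problem (a standard form; the per-group weights are an assumed detail):

```latex
\[
  \min_{\mathbf{w}} \;\; \ell(\mathbf{w})
  \;+\; \lambda \sum_{g \in \mathcal{G}} \eta_g\, \|\mathbf{w}_g\|_2
\]
```

where ℓ is a smooth loss (e.g., least squares or logistic), the groups g may share features, and w_g restricts w to the features in group g; the non-smooth sum of ℓ2 norms drives entire groups of weights to zero simultaneously.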
Contributors: Yuan, Lei (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Xue, Guoliang (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Contemporary architectural pedagogy is far removed from its ancestry: the classical Beaux-Arts and polytechnic schools of the 19th century and the Bauhaus and Vkhutemas models of the modern period. Today, the "digital" has invaded the academy and shapes the pedagogical practices, epistemologies, and ontologies within it, an invasion reflected in teaching practices, principles, and tools. Much of this digital integration goes unremarked and may not even be explicitly taught. In this qualitative research project, interviews were conducted with 18 leading architecture lecturers, professors, and deans from programs across the United States. The interviews focused on advanced practices of digital architecture, such as the use of digital tools, and on how these practices are viewed. They yielded a wealth of information about the uses (and abuses) of advanced digital technologies within the architectural academy, and the results were analyzed using the methods of phenomenology and grounded theory. Most schools use digital technologies to some extent, although that extent varies greatly: while some have abandoned hand-drawing and other hand-based craft almost entirely, others have retained traditional techniques and use digital technologies sparingly. Reasons for adopting digital design processes include industry pressure as well as an increased ability to solve problems and the speed with which they can be solved. Despite the prevalence of digital design, most programs did not teach the related design software explicitly, if at all, instead requiring students (especially graduate students) to learn these tools outside the design studio. Problems with digital design identified in the interviews include social issues such as alienation, as well as difficulties with understanding scale and with the embodiment of skill.
Contributors: Alqabandy, Hamad (Author) / Brandt, Beverly (Thesis advisor) / Mesch, Claudia (Committee member) / Newton, David (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Over 2 billion people use online social network services such as Facebook, Twitter, Google+, LinkedIn, and Pinterest. Users update their status, post photos, share information, and chat with others on these sites every day; however, not everyone shares the same amount of information. This thesis explores methods of linking publicly available data sources as a means of extrapolating missing information from Facebook. An application named "Visual Friends Income Map" was created on Facebook to collect social network data and explore geodemographic properties for linking publicly available data such as US census data. Multiple predictors are implemented to link the data sets and extrapolate the missing Facebook information with accurate predictions. The location-based predictor matches Facebook users' locations with census data at the city level for income and demographic predictions. Age- and relationship-based predictors are created to improve the accuracy of the location-based predictor by utilizing social network link information. For the case where a user shares no location information on their Facebook profile, a kernel density estimation (KDE) location predictor is created. This predictor utilizes publicly available telephone records of all people in the US with the same surname as the user to build a likelihood distribution of the user's location, which is then combined with the user's IP-level information to narrow the probability estimate down to a local region.
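A minimal Python sketch of the surname-based predictor described above, assuming coordinates harvested from telephone records for the user's surname and a boolean mask encoding the IP-derived regional constraint; the function name and toy data are hypothetical, not the thesis's actual implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde

def location_likelihood(surname_coords, candidate_cities, ip_region_mask):
    """surname_coords: 2 x N (lon, lat) of same-surname phone records;
    candidate_cities: 2 x M city coordinates; ip_region_mask: boolean M-vector
    that is True only where the user's IP geolocation allows."""
    kde = gaussian_kde(surname_coords)                # smooth surname density
    density = kde(candidate_cities) * ip_region_mask  # zero out disallowed cities
    return density / density.sum()                    # normalized likelihood per city

rng = np.random.default_rng(1)
coords = rng.normal([[-112.0], [33.4]], 0.5, size=(2, 200))  # e.g., Phoenix area
cities = np.array([[-112.07, -111.93, -73.97], [33.45, 33.42, 40.78]])
mask = np.array([True, True, False])  # IP information rules out the NYC candidate
print(location_likelihood(coords, cities, mask))
```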
Contributors: Mao, Jingxian (Author) / Maciejewski, Ross (Thesis advisor) / Farin, Gerald (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This document presents a new implementation of the Smoothed Particle Hydrodynamics (SPH) algorithm using DirectX 11 and DirectCompute. Its main goal is to present the reader with an alternative solution to the widely studied problem of fluid simulation. Most other solutions have been implemented using the NVIDIA CUDA framework; the solution proposed here instead uses Microsoft's API for general-purpose computing on graphics processing units. The implementation allows a large number of particles to be simulated in real time. The solution uses the SPH algorithm to calculate the forces within the fluid; this algorithm provides a Lagrangian approach that discretizes the Navier-Stokes equations into a set of particles. Our solution uses DirectCompute compute shaders to evaluate each particle, exploiting the multithreading and multi-core capabilities of the GPU to increase overall performance. The document then describes a method for extracting the fluid surface using the Marching Cubes method and the programmable interfaces exposed by the DirectX pipeline. In particular, it presents a method for using the Geometry Shader stage to generate the triangle mesh defined by the Marching Cubes method. The implementation results show the ability to simulate over 64K particles at 900 frames per second without the surface reconstruction steps, and at 400 frames per second with the Marching Cubes steps included.
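For context, the Lagrangian discretization referred to above is the standard SPH interpolation (assumed here; the document's particular kernels and constants are not shown). A field quantity A at position r is a kernel-weighted sum over neighboring particles j within smoothing radius h:

```latex
\[
  A(\mathbf{r}) \;\approx\; \sum_j m_j \,\frac{A_j}{\rho_j}\,
  W(\mathbf{r} - \mathbf{r}_j,\, h),
  \qquad
  \rho_i \;=\; \sum_j m_j\, W(\mathbf{r}_i - \mathbf{r}_j,\, h).
\]
```

The pressure and viscosity terms of the Navier-Stokes equations are then evaluated per particle by applying the gradient and Laplacian to the kernel W.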
Contributors: Figueroa, Gustavo (Author) / Farin, Gerald (Thesis advisor) / Maciejewski, Ross (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In the middle of the 20th century, juried annuals of Native American painting in art museums were unique opportunities because of their select focus on two-dimensional art, as opposed to "craft" objects, and their inclusion of artists from across the United States. Their first fifteen years were critical for patronage and widespread acceptance of modern easel painting. Held at the Philbrook Art Center in Tulsa (1946-1979), the Denver Art Museum (1951-1954), and the Museum of New Mexico Art Gallery in Santa Fe (1956-1965), they were significant not only for the accolades and prestige they garnered for award winners, but also for setting standards of quality and style at the time. During the early years of the annuals, the art was changing, with some artists moving away from conventional forms derived from the early art training of the 1920s and 30s in the Southwest and Oklahoma and incorporating modern themes and styles acquired through expanded opportunities for travel and education. The competitions reinforced and reflected a variety of attitudes about contemporary art, ranging from preserving the authenticity of the traditional style to encouraging experimentation. The annuals ultimately became sites of conflict, as the host museums contested the directions in which artists were working. Exhibition catalogs, archived documents, and newspaper and magazine articles about the annuals provide details on the exhibits and the changes that occurred over time. The museums' guidelines and motivations, along with statistics on the award winners, reveal attitudes toward the art; the institutions' reactions in the face of controversy and their adjustments to the annuals' guidelines convey the compromises each made as it adapted to new trends in Native American painting over a fifteen-year period. This thesis compares the approaches of the three museums to their juried annuals and establishes the existence of a variety of attitudes toward contemporary Native American painting from 1946 to 1960. Through this collection of institutional views, the competitions maintained a patronage base for traditional-style painting while providing opportunities for experimentation, paving the way for the great variety and artistic progress of Native American painting today.
Contributors: Peters, Stephanie (Author) / Duncan, Kate (Thesis advisor) / Fahlman, Betsy (Thesis advisor) / Mesch, Claudia (Committee member) / Arizona State University (Publisher)
Created: 2012