Description

Immunosignaturing is a new immunodiagnostic technology that uses random-sequence peptide microarrays to profile the humoral immune response. Though the peptides have little sequence homology to any known protein, binding of serum antibodies may be detected, and the pattern correlated with disease states. The aim of my dissertation is to analyze the factors affecting the binding patterns using monoclonal antibodies and determine how much information may be extracted from the sequences. Specifically, I examined the effects of antibody concentration, competition, peptide density, and antibody valence. Peptide binding could be detected at the low concentrations relevant to immunosignaturing, and a monoclonal's signature could even be detected in the presence of a 100-fold excess of naive IgG. I also found that peptide density was important, but this effect was not due to bivalent binding. Next, I examined in more detail how a polyreactive antibody binds to the random-sequence peptides compared to protein-sequence-derived peptides, and found that it bound to many peptides from both sets, but with low apparent affinity. An in-depth look at how the peptide physicochemical properties and sequence complexity relate to binding revealed some correlations with properties, but they were generally small and varied greatly between antibodies. However, on a larger but less diverse peptide library, I found that sequence complexity was important for antibody binding. The redundancy in that library did enable the identification of specific sub-sequences recognized by an antibody. The current immunosignaturing platform has little repetition of sub-sequences, so I evaluated several methods to infer antibody epitopes. I found two methods that had modest prediction accuracy, and I developed a software application called GuiTope to facilitate the epitope prediction analysis. None of the methods had sufficient accuracy to identify an unknown antigen from a database. In conclusion, the characteristics of the immunosignaturing platform observed through monoclonal antibody experiments demonstrate its promise as a new diagnostic technology. However, a major limitation is the difficulty in connecting the signature back to the original antigen, though larger peptide libraries could facilitate these predictions.
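The dissertation's own analysis code is not shown here; the sketch below is only a generic illustration of how peptide sequence complexity (Shannon entropy of amino acid composition) and one physicochemical property (Kyte-Doolittle hydropathy) might be correlated with array binding intensities. The peptide sequences and intensity values are hypothetical placeholders, not data from this work.

```python
# Illustrative sketch (not the dissertation's code): correlate peptide sequence
# complexity and a physicochemical property with array binding intensities.
import math
from statistics import mean

# Hypothetical peptide sequences and binding intensities (placeholders).
peptides = ["GRKQWLAMNST", "AAAAAGGGGGA", "FFWYLIVMCPH", "QQNNSSTTGGA"]
intensities = [1520.0, 240.0, 3100.0, 410.0]

# Excerpt of the Kyte-Doolittle hydropathy scale (standard published values).
KD = {"A": 1.8, "G": -0.4, "R": -4.5, "K": -3.9, "Q": -3.5, "W": -0.9,
      "L": 3.8, "M": 1.9, "N": -3.5, "S": -0.8, "T": -0.7, "F": 2.8,
      "Y": -1.3, "I": 4.5, "V": 4.2, "C": 2.5, "P": -1.6, "H": -3.2}

def shannon_entropy(seq):
    """Sequence complexity as Shannon entropy of amino acid composition (bits)."""
    counts = {aa: seq.count(aa) for aa in set(seq)}
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mean_hydropathy(seq):
    """Average Kyte-Doolittle hydropathy of the peptide."""
    return mean(KD[aa] for aa in seq)

def spearman(x, y):
    """Spearman rank correlation (no tied values in this toy example)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

complexity = [shannon_entropy(p) for p in peptides]
hydropathy = [mean_hydropathy(p) for p in peptides]
print("rho(complexity, intensity) =", round(spearman(complexity, intensities), 3))
print("rho(hydropathy, intensity) =", round(spearman(hydropathy, intensities), 3))
```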
Contributors: Halperin, Rebecca (Author) / Johnston, Stephen A. (Thesis advisor) / Bordner, Andrew (Committee member) / Taylor, Thomas (Committee member) / Stafford, Phillip (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

In an effort to begin validating the large number of discovered candidate biomarkers, proteomics is beginning to shift from shotgun proteomic experiments towards targeted proteomic approaches that provide solutions to automation and economic concerns. Such approaches to validate biomarkers necessitate the mass spectrometric analysis of hundreds to thousands of human samples. As this takes place, a serendipitous opportunity has become evident: as one narrows the focus towards “single” protein targets (instead of entire proteomes) using pan-antibody-based enrichment techniques, a discovery science has emerged, so to speak. This is due to the largely unknown context in which “single” proteins exist in blood (i.e., polymorphisms, transcript variants, and posttranslational modifications), and hence targeted proteomics has applications for established biomarkers. Furthermore, besides protein heterogeneity accounting for interferences with conventional immunometric platforms, it is becoming evident that this formerly hidden dimension of structural information also contains rich pathobiological information. Consequently, targeted proteomics studies that aim to ascertain a protein's genuine presentation within disease-stratified populations and serve as a stepping-stone within a biomarker translational pipeline are of clinical interest. Roughly 128 million Americans are pre-diabetic, diabetic, and/or have kidney disease, and public and private spending for treating these diseases is in the hundreds of billions of dollars. In an effort to create new solutions for the early detection and management of these conditions, described herein is the design, development, and translation of mass spectrometric immunoassays targeted towards diabetes and kidney disease. Population proteomics experiments were performed for the following clinically relevant proteins: insulin, C-peptide, RANTES, and parathyroid hormone. At least thirty-eight protein isoforms were detected. Besides the numerous disease correlations confronted within the disease-stratified cohorts, certain isoforms also appeared to be causally related to the underlying pathophysiology and/or have therapeutic implications. Technical advancements include multiplexed isoform quantification as well as a “dual-extraction” methodology for eliminating non-specific proteins while simultaneously validating isoforms. Industrial efforts towards widespread clinical adoption are also described. Consequently, this work lays a foundation for the translation of mass spectrometric immunoassays into the clinical arena and simultaneously presents the most recent advancements concerning the mass spectrometric immunoassay approach.
Contributors: Oran, Paul (Author) / Nelson, Randall (Thesis advisor) / Hayes, Mark (Thesis advisor) / Ros, Alexandra (Committee member) / Williams, Peter (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Woody plant encroachment is a worldwide phenomenon linked to water availability in semiarid systems. Nevertheless, the implications of woody plant encroachment for the hydrologic cycle are poorly understood, especially at the catchment scale. This study takes place in a pair of small semiarid rangeland basins undergoing the encroachment of Prosopis velutina Woot., or velvet mesquite tree. The similarly-sized basins are in close proximity, leading to equivalent meteorological and soil conditions. One basin was treated for mesquite in 1974, while the other represents the encroachment process. A sensor network was installed to measure ecohydrological states and fluxes, including precipitation, runoff, soil moisture, and evapotranspiration. Observations from June 1, 2011 through September 30, 2012 are presented to describe the seasonality and spatial variability of ecohydrological conditions during the North American Monsoon (NAM). Runoff observations are linked to historical changes in runoff production in each watershed. Observations indicate that the mesquite-treated basin generates more runoff pulses and greater runoff volume for small rainfall events, while the mesquite-encroached basin generates more runoff volume for large rainfall events. A distributed hydrologic model is applied to both basins to investigate the runoff threshold processes experienced during the NAM. Vegetation in the two basins is classified into grass, mesquite, or bare soil using high-resolution imagery. Model predictions are used to investigate the vegetation controls on soil moisture, evapotranspiration, and runoff generation. The distributed model shows that grass and mesquite sites retain the highest levels of soil moisture. The model also captures the runoff generation differences between the two watersheds that have been observed over the past decade. Generally, grass sites in the mesquite-treated basin have less plant interception and evapotranspiration, leading to higher soil moisture that supports greater runoff for small rainfall events. For large rainfall events, the mesquite-encroached basin produces greater runoff due to its higher fraction of bare soil. The results of this study show that a distributed hydrologic model can be used to explain runoff threshold processes linked to woody plant encroachment at the catchment scale and provides useful interpretations for rangeland management in semiarid areas.
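The study's threshold analysis relies on observed hydrographs and a distributed model; purely as a hypothetical illustration of the small-event versus large-event comparison described above, the sketch below bins rainfall events by depth and compares mean runoff ratios for two basins. The event values and the 15 mm threshold are invented, not data from these watersheds.

```python
# Illustrative sketch (hypothetical numbers, not the study's data): compare
# event runoff ratios in two basins for small vs. large rainfall events.

# Each event: (rainfall depth in mm, runoff depth in mm).
treated_events    = [(5, 0.4), (8, 0.9), (12, 1.1), (30, 4.5), (45, 7.0)]
encroached_events = [(5, 0.1), (8, 0.3), (12, 0.6), (30, 5.8), (45, 9.6)]

SMALL_EVENT_MAX_MM = 15.0  # assumed threshold separating small and large events

def runoff_ratio(events, small):
    """Mean runoff/rainfall ratio over events below (small) or above the threshold."""
    subset = [(p, q) for p, q in events if (p <= SMALL_EVENT_MAX_MM) == small]
    return sum(q / p for p, q in subset) / len(subset)

for label, events in [("mesquite-treated", treated_events),
                      ("mesquite-encroached", encroached_events)]:
    print(f"{label:>20}: small-event ratio = {runoff_ratio(events, True):.2f}, "
          f"large-event ratio = {runoff_ratio(events, False):.2f}")
```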
Contributors: Pierini, Nicole A. (Author) / Vivoni, Enrique R. (Thesis advisor) / Wang, Zhi-Hua (Committee member) / Mays, Larry W. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Signal processing techniques have been used extensively in many engineering problems, and in recent years their application has extended to non-traditional research fields such as biological systems. Many of these applications require extraction of a signal or parameter of interest from degraded measurements. One such application is mass spectrometry immunoassay (MSIA), which has become one of the primary biomarker discovery techniques. MSIA analyzes protein molecules as potential biomarkers using time-of-flight mass spectrometry (TOF-MS). Peak detection in TOF-MS is important for biomarker analysis and many other MS-related applications. Though many peak detection algorithms exist, most of them are based on heuristic models. One way of detecting signal peaks is by deploying stochastic models of the signal and noise observations. The likelihood ratio test (LRT) detector, based on the Neyman-Pearson (NP) lemma, provides a uniformly most powerful test for decision making in the form of a hypothesis test. The primary goal of this dissertation is to develop signal and noise models for electrospray ionization (ESI) TOF-MS data. A new method is proposed for developing the signal model by employing first-principles calculations based on device physics and molecular properties. The noise model is developed by analyzing MS data from careful experiments in the ESI mass spectrometer. A non-flat baseline in MS data is common, and the reasons behind its formation have not been fully understood. A new signal model explaining the presence of this baseline is proposed, though detailed experiments are needed to further substantiate the model assumptions. Signal detection schemes based on these signal and noise models are proposed. A maximum likelihood (ML) method is introduced for estimating the signal peak amplitudes. The performance of the detection methods and ML estimation is evaluated with Monte Carlo simulations, which show promising results. An application of these methods is proposed for fractional abundance calculation for biomarker analysis, which is mathematically robust and fundamentally different from the current algorithms. Biomarker panels for type 2 diabetes and cardiovascular disease are analyzed using existing MS analysis algorithms. Finally, a support vector machine based multi-classification algorithm is developed for evaluating the biomarkers' effectiveness in discriminating between type 2 diabetes and cardiovascular disease, and is shown to perform better than a linear discriminant analysis based classifier.
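The dissertation's signal and noise models are derived from ESI-TOF device physics and are not reproduced here; the sketch below illustrates only the generic Neyman-Pearson/GLRT idea for detecting a peak of known shape in white Gaussian noise, together with the least-squares (ML) amplitude estimate. The Gaussian peak template, noise level, and detection threshold are assumptions made for the example.

```python
# Generic sketch of Neyman-Pearson (GLRT) peak detection in white Gaussian
# noise with a known peak template, plus ML amplitude estimation.
import numpy as np

rng = np.random.default_rng(0)

n = 256
t = np.arange(n)
template = np.exp(-0.5 * ((t - 128) / 4.0) ** 2)   # assumed Gaussian peak shape
sigma = 1.0                                        # assumed known noise std dev

# Simulated observation: peak of amplitude 3 plus white Gaussian noise.
x = 3.0 * template + rng.normal(0.0, sigma, n)

# For H1: x = a*s + w vs. H0: x = w (amplitude a unknown), the GLRT statistic
# reduces to the normalized matched-filter output (s^T x)^2 / (sigma^2 * s^T s).
corr = template @ x
glrt = corr ** 2 / (sigma ** 2 * (template @ template))

# ML estimate of the peak amplitude under H1 (least-squares projection).
a_ml = corr / (template @ template)

threshold = 10.83  # chi-square(1) threshold for ~0.001 false-alarm probability
print(f"GLRT statistic = {glrt:.1f}, detect peak: {glrt > threshold}")
print(f"ML amplitude estimate = {a_ml:.2f}")
```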
Contributors: Buddi, Sai (Author) / Taylor, Thomas (Thesis advisor) / Cochran, Douglas (Thesis advisor) / Nelson, Randall (Committee member) / Duman, Tolga (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

This thesis examines the application of statistical signal processing approaches to data arising from surveys intended to measure psychological and sociological phenomena underpinning human social dynamics. The use of signal processing methods for analysis of signals arising from measurement of social, biological, and other non-traditional phenomena has been an important and growing area of signal processing research over the past decade. Here, we explore the application of statistical modeling and signal processing concepts to data obtained from the Global Group Relations Project, specifically to understand and quantify the effects and interactions of social psychological factors related to intergroup conflicts. We use Bayesian networks to specify prospective models of conditional dependence between social psychological factors and conflict variables; the networks are represented by directed acyclic graphs, while the significant interactions are modeled as conditional probabilities. Since the data are sparse and multi-dimensional, we regress Gaussian mixture models (GMMs) against the data to estimate the conditional probabilities of interest. The parameters of the GMMs are estimated using the expectation-maximization (EM) algorithm. However, the EM algorithm may suffer from over-fitting due to the high dimensionality and limited number of observations in this data set; therefore, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are used for GMM order estimation. To aid intuitive understanding of the interactions between social variables and intergroup conflicts, we introduce a color-based visualization scheme in which the intensities of colors are proportional to the conditional probabilities observed.
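As a minimal sketch of the GMM-with-order-selection step (not the thesis code, and not the survey data), the example below fits Gaussian mixtures of increasing order with scikit-learn's EM implementation, selects the order by BIC, and reads off a conditional component-membership probability for a new observation. The two-component synthetic data are placeholders.

```python
# Sketch of GMM order selection with EM and BIC/AIC on synthetic data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic 2-D data drawn from two well-separated Gaussian components.
data = np.vstack([rng.normal([0, 0], 1.0, size=(150, 2)),
                  rng.normal([4, 4], 1.0, size=(150, 2))])

candidates = range(1, 6)
models = [GaussianMixture(n_components=k, random_state=0).fit(data)
          for k in candidates]

bic = [m.bic(data) for m in models]
aic = [m.aic(data) for m in models]
best = models[int(np.argmin(bic))]
print("BIC by order:", dict(zip(candidates, np.round(bic, 1))))
print("Selected order (BIC):", best.n_components)

# Conditional probability of component membership for a new observation,
# analogous to reading a conditional probability off the fitted mixture.
print("P(component | x=[2, 2]) =", np.round(best.predict_proba([[2.0, 2.0]])[0], 3))
```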
Contributors: Liu, Hui (Author) / Taylor, Thomas (Thesis advisor) / Cochran, Douglas (Thesis advisor) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

This dissertation describes a novel, low-cost strategy of using particle streak (track) images for accurate micro-channel velocity field mapping. It is shown that 2-dimensional, 2-component fields can be efficiently obtained using the spatial variation of particle track lengths in micro-channels. The velocity field is a critical performance feature of many microfluidic devices. Since it is often the case that un-modeled micro-scale physics frustrates principled design methodologies, particle-based velocity field estimation is an essential design and validation tool. Current technologies that achieve this goal use particle constellation correlation strategies and rely heavily on costly, high-speed imaging hardware. The proposed image/video processing based method achieves comparable accuracy for a fraction of the cost. In the context of micro-channel velocimetry, the usability of particle streaks has been poorly studied so far; their use has remained restricted mostly to bulk flow measurements and occasional ad-hoc uses in microfluidics. A second look at the usability of particle streak lengths in this work reveals that they can be used efficiently, approximately 15 years after their first use for micro-channel velocimetry. Particle tracks in steady, smooth microfluidic flows are mathematically modeled, and a framework for using experimentally observed particle track lengths for local velocity field estimation is introduced here, followed by algorithm implementation and quantitative verification. Further, experimental considerations and image processing techniques that can facilitate the proposed methods are also discussed in this dissertation. The unavailability of benchmarked particle track image data motivated the implementation of a simulation framework with the capability to generate exposure-time-controlled particle track image sequences for velocity vector fields. This dissertation also describes this work and shows that arbitrary velocity fields designed in computational fluid dynamics software tools can be used to obtain such images. Apart from aiding gold-standard data generation, such images would find use for quick microfluidic flow field visualization and help improve device designs.
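The dissertation's estimation framework is considerably more involved; the sketch below shows only the basic kinematic relation commonly used in streak velocimetry, in which local speed is approximated by (streak length minus particle image diameter) divided by exposure time. The pixel size, exposure time, particle image diameter, and streak lengths are assumed placeholder values, not measurements from this work.

```python
# Illustrative sketch of the basic streak-velocimetry relation:
# speed ~ (streak length - particle image diameter) / exposure time.

PIXEL_SIZE_UM = 0.65        # assumed micrometers per pixel after magnification
EXPOSURE_TIME_S = 0.020     # assumed camera exposure time (20 ms)
PARTICLE_DIAMETER_PX = 3.0  # assumed particle image diameter in pixels

# Hypothetical measured streak lengths (pixels) at different channel positions.
streaks = {"near wall": 8.0, "quarter depth": 18.0, "centerline": 26.0}

def streak_speed_um_per_s(length_px):
    """Convert a streak length in pixels to an estimated local speed (um/s)."""
    travel_px = max(length_px - PARTICLE_DIAMETER_PX, 0.0)
    return travel_px * PIXEL_SIZE_UM / EXPOSURE_TIME_S

for location, length in streaks.items():
    print(f"{location:>14}: {streak_speed_um_per_s(length):7.1f} um/s")
```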
Contributors: Mahanti, Prasun (Author) / Cochran, Douglas (Thesis advisor) / Taylor, Thomas (Thesis advisor) / Hayes, Mark (Committee member) / Zhang, Junshan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Cancer claims hundreds of thousands of lives every year in the US alone. Finding ways for early detection of cancer onset is crucial for better management and treatment of cancer. Thus, biomarkers, especially protein biomarkers, which as functional units reflect dynamic physiological changes, need to be discovered. Though important, only a few protein cancer biomarkers have been approved to date. To accelerate this process, fast, comprehensive, and affordable assays are required which can be applied to large population studies. For this, these assays should be able to comprehensively characterize and explore the molecular diversity of nominally “single” proteins across populations. This information is usually unavailable with commonly used immunoassays such as ELISA (enzyme-linked immunosorbent assay), which either ignore protein microheterogeneity or are confounded by it. To this end, mass spectrometric immunoassays (MSIA) for three different human plasma proteins have been developed. These proteins, viz. IGF-1, hemopexin, and tetranectin, have been reported in the literature to correlate with many diseases, including several carcinomas. The developed assays were used to extract entire proteins from plasma samples, which were subsequently analyzed on mass spectrometric platforms. Matrix-assisted laser desorption ionization (MALDI) and electrospray ionization (ESI) mass spectrometric techniques were used due to their availability and suitability for the analysis. This revealed different structural forms of these proteins, showing their structural micro-heterogeneity, which is invisible to commonly used immunoassays. These assays are fast, comprehensive, and can be applied in large sample studies to analyze proteins for biomarker discovery.
Contributors: Rai, Samita (Author) / Nelson, Randall (Thesis advisor) / Hayes, Mark (Thesis advisor) / Borges, Chad (Committee member) / Ros, Alexandra (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

This study examines the impact of spatial landscape configuration (e.g., clustered, dispersed) on land-surface temperatures (LST) over Phoenix, Arizona, and Las Vegas, Nevada, USA. We classified detailed land-cover types via object-based image analysis (OBIA) using Geoeye-1 at 3-m resolution (Las Vegas) and QuickBird at 2.4-m resolution (Phoenix). Spatial autocorrelation (local Moran’s I) was then used to test for spatial dependence and to determine how clustered or dispersed points were arranged. Next, we used Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data acquired over Phoenix (daytime on 10 June and nighttime on 17 October 2011) and Las Vegas (daytime on 6 July and nighttime on 27 August 2005) to examine day- and nighttime LST with regard to the spatial arrangement of anthropogenic and vegetation features. Local Moran’s I values of each land-cover type were spatially correlated to surface temperature. The spatial configuration of grass and trees shows strong negative correlations with LST, implying that clustered vegetation lowers surface temperatures more effectively. In contrast, clustered spatial arrangements of anthropogenic land-cover types, especially impervious surfaces and open soil, elevate LST. These findings suggest that city planners and managers should, where possible, incorporate clustered grass and trees, disperse unmanaged soil and paved surfaces, and fill open unmanaged soil with vegetation. Our findings are in line with national efforts to augment and strengthen green infrastructure, complete streets, parking management, and transit-oriented development practices, and to reduce sprawling, unwalkable housing development.
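The paper computes local Moran’s I on classified OBIA land-cover maps and correlates it with ASTER-derived LST; the sketch below is only a toy NumPy version of that step, computing local Moran’s I on a small raster with rook-contiguity weights and correlating the result with a co-registered temperature grid. Both 4x4 grids are synthetic placeholders, not the imagery used in the study.

```python
# Minimal sketch of local Moran's I with row-standardized rook weights,
# correlated against a co-registered temperature grid (toy data only).
import numpy as np

rng = np.random.default_rng(2)
veg = rng.random((4, 4))                                 # toy vegetation-fraction surface
lst = 45.0 - 8.0 * veg + rng.normal(0.0, 0.5, (4, 4))    # toy LST surface

def local_morans_i(grid):
    """Local Moran's I with row-standardized rook (4-neighbour) weights."""
    z = grid - grid.mean()
    m2 = (z ** 2).mean()
    rows, cols = grid.shape
    I = np.zeros_like(grid, dtype=float)
    for r in range(rows):
        for c in range(cols):
            nbrs = [z[rr, cc] for rr, cc in
                    [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                    if 0 <= rr < rows and 0 <= cc < cols]
            I[r, c] = z[r, c] * np.mean(nbrs) / m2
    return I

I_veg = local_morans_i(veg)
corr = np.corrcoef(I_veg.ravel(), lst.ravel())[0, 1]
print("Pearson r between local Moran's I of vegetation and LST:", round(corr, 3))
```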

Contributors: Myint, Soe Win (Author) / Zheng, Baojuan (Author) / Talen, Emily (Author) / Fan, Chao (Author) / Kaplan, Shari (Author) / Middel, Ariane (Author) / Smith, Martin (Author) / Huang, Huei-Ping (Author) / Brazel, Anthony J. (Author)
Created: 2015-06-29
Description

Engineered pavements cover a large fraction of cities and offer significant potential for urban heat island mitigation. Though rapidly increasing research efforts have been devoted to the study of pavement materials, thermal interactions between buildings and the ambient environment are mostly neglected. In this study, numerical models featuring a realistic representation of building-environment thermal interactions were applied to quantify the effect of pavements on the urban thermal environment at multiple scales. It was found that the performance of pavements inside the canyon was largely determined by the canyon geometry. In a high-density residential area, modifying pavements had an insignificant effect on wall temperature and building energy consumption. At a regional scale, various pavement types were also found to have a limited cooling effect on land surface temperature and 2-m air temperature for metropolitan Phoenix. In the context of global climate change, the effect of pavements was evaluated in terms of equivalent CO2 emissions. The equivalent CO2 emission offset by reflective pavements in urban canyons was only about 13.9–46.6% of that without building canopies, depending on the canyon geometry. This study revealed the importance of building-environment thermal interactions in determining thermal conditions inside the urban canopy.

Contributors: Yang, Jiachuan (Author) / Wang, Zhi-Hua (Author) / Kaloush, Kamil (Author) / Dylla, Heather (Author)
Created: 2016-08-22
Description

Peptides offer great promise as targeted affinity ligands, but the space of possible peptide sequences is vast, making experimental identification of lead candidates expensive, difficult, and uncertain. Computational modeling can narrow the search by estimating the affinity and specificity of a given peptide in relation to a predetermined protein target. The predictive performance of computational models of interactions of intermediate-length peptides with proteins can be improved by taking into account the stochastic nature of the encounter and binding dynamics. A theoretical case is made for the hypothesis that, because of the flexibility of the peptide and the structural complexity of the target protein, interactions are best characterized by an ensemble of possible bound configurations rather than a single “lock and key” fit. A model incorporating these factors is proposed and evaluated. A comprehensive dataset of 3,924 peptide-protein interface structures was extracted from the Protein Data Bank (PDB) and descriptors were computed characterizing the geometry and energetics of each interface. The characteristics of these interfaces are shown to be generally consistent with the proposed model, and heuristics for design and selection of peptide ligands are derived. The curated and energy-minimized interface structure dataset and a relational database containing the detailed results of analysis and energy modeling are made publicly available via a web repository. A novel analytical technique based on the proposed theoretical model, Virtual Scanning Probe Mapping (VSPM), is implemented in software to analyze the interaction between a target protein of known structure and a peptide of specified sequence, producing a spatial map indicating the most likely peptide binding regions on the protein target. The resulting predictions are shown to be superior to those of two other published methods, and support the validity of the stochastic binding model.
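To make the ensemble-binding idea concrete (this is not the VSPM implementation), the sketch below contrasts scoring a candidate binding region by its single best pose with a Boltzmann-weighted combination of an ensemble of sampled configuration energies. The energies and region labels are invented for illustration only.

```python
# Illustration of the ensemble-of-configurations idea (not the VSPM code):
# instead of scoring only the single best "lock and key" pose, combine an
# ensemble of sampled bound-configuration energies into an effective free
# energy via a Boltzmann-weighted sum. Energies below are invented.
import math

RT = 0.593  # kcal/mol at ~298 K

# Hypothetical interaction energies (kcal/mol) for sampled peptide poses
# at two candidate regions on a protein surface.
region_energies = {
    "region A": [-6.1, -5.8, -5.5, -2.0, -1.5],   # several comparable poses
    "region B": [-6.3, -1.0, -0.5, -0.2, 0.1],    # one good pose, rest poor
}

def ensemble_free_energy(energies):
    """Effective free energy: -RT * ln( mean Boltzmann weight over poses )."""
    weights = [math.exp(-e / RT) for e in energies]
    return -RT * math.log(sum(weights) / len(weights))

for region, energies in region_energies.items():
    print(f"{region}: best pose = {min(energies):.1f} kcal/mol, "
          f"ensemble estimate = {ensemble_free_energy(energies):.1f} kcal/mol")
```

In this toy example, region A scores better under the ensemble estimate even though region B contains the single lowest-energy pose, which is the distinction the stochastic binding model draws against a pure "lock and key" fit.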
Contributors: Emery, Jack Scott (Author) / Pizziconi, Vincent B. (Thesis advisor) / Woodbury, Neal W. (Thesis advisor) / Guilbeau, Eric J. (Committee member) / Stafford, Phillip (Committee member) / Taylor, Thomas (Committee member) / Towe, Bruce C. (Committee member) / Arizona State University (Publisher)
Created: 2010