Matching Items (4)

Description
Random peptide microarrays are a powerful tool for both the treatment and diagnosis of infectious diseases. On the treatment side, selected random peptides on the microarray have either binding or lytic potency against certain pathogen cells, so they can be synthesized into new antimicrobial agents, denoted synbodies (synthetic antibodies). On the diagnostic side, serum containing specific infection-related antibodies creates a unique "pathogen-immunosignature" on the random peptide microarray, distinct from that of healthy control serum, and this different mode of binding can serve as a more precise measurement than traditional ELISA tests. My thesis project is divided into these two parts: the first falls on the treatment side and the second focuses on the diagnostic side. My first chapter shows that a substitution amino acid peptide library helps improve the activity of a recently reported synthetic antimicrobial peptide selected by the random peptide microarray. By substituting one or two amino acids of the original lead peptide, the new substitutes show changed hemolytic effects against mouse red blood cells and changed potency against two pathogens: Staphylococcus aureus and Pseudomonas aeruginosa. Two new substitutes are then combined to form the synbody, which shows significant antimicrobial potency against Staphylococcus aureus (<0.5 µM). In the second chapter, I explore the possibility of using the 10K Ver.2 random peptide microarray to monitor the humoral immune response to dengue. Over 2.5 billion people (40% of the world's population) live in dengue-transmitting areas, yet there is currently no efficient dengue treatment or vaccine.
Here, with limited dengue patient serum samples, we show that the immunosignature has the potential to distinguish not only dengue infection from non-infected people, but also primary from secondary dengue infection, dengue infection from West Nile virus (WNV) infection, and even between different dengue serotypes. Through further bioinformatic analysis, we demonstrate that the significant peptides selected to distinguish dengue-infected and normal samples may indicate the epitopes responsible for the immune response.
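Selecting "significant" peptides that separate two serum groups is, at its core, a per-peptide hypothesis test with multiple-testing correction. The following is a minimal sketch of that idea; the array sizes, intensity values, and the choice of Welch's t-test with Bonferroni correction are illustrative assumptions, not the thesis's actual method or data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_peptides = 1000

# Simulated microarray intensities: rows = serum samples, cols = peptides.
dengue = rng.normal(loc=5.0, scale=1.0, size=(12, n_peptides))
normal = rng.normal(loc=5.0, scale=1.0, size=(12, n_peptides))
dengue[:, :25] += 3.0  # spike in a few truly discriminating peptides

# Welch's t-test per peptide, with a Bonferroni multiple-testing correction.
t, p = stats.ttest_ind(dengue, normal, axis=0, equal_var=False)
significant = np.flatnonzero(p * n_peptides < 0.05)
```

In a real immunosignature analysis the selected peptides would then be mapped back to candidate epitopes; here they simply recover the spiked-in columns.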
Contributors: Wang, Xiao (Author) / Johnston, Stephen Albert (Thesis advisor) / Blattman, Joseph (Committee member) / Arntzen, Charles (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The past decade has seen a drastic increase in collaboration between Computer Science (CS) and Molecular Biology (MB). Current foci in CS such as deep learning require very large amounts of data, and MB research can often be rapidly advanced by analysis and models from CS. One of the places where CS can aid MB is in the analysis of sequences to find binding sites and the prediction of protein folding patterns. Stem-like cells can be maintained and replicated over long periods, as well as differentiated into various tissue types; these behaviors are made possible by controlling the expression of specific genes. These genes then cascade into a network effect by either promoting or repressing downstream gene expression. The expression level of all gene transcripts within a single cell can be analyzed using single-cell RNA sequencing (scRNA-seq), which utilizes next-generation sequencing to measure genome-scale gene expression levels at single-cell resolution. A significant portion of the noise in scRNA-seq data results from extrinsic factors and can only be removed by a customized scRNA-seq analysis pipeline.

Almost every step during analysis and quantification requires an often empirically determined threshold, which makes quantification of noise less accurate. In addition, each research group often develops its own data analysis pipeline, making it impossible to compare data across groups. To remedy this problem, a streamlined and standardized scRNA-seq data analysis and normalization protocol was designed and developed. After analyzing multiple experiments, we identified the necessary pipeline stages and tools. Our pipeline is capable of handling data with adapters and barcodes, which was not the case for the pipelines of some experiments. It can be used to analyze single-experiment scRNA-seq data and also to compare scRNA-seq data across experiments. Various processes such as data gathering, file conversion, and data merging were automated in the pipeline. The main focus was to standardize and normalize single-cell RNA-seq data to minimize technical noise introduced by disparate platforms.
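One common normalization step in such pipelines is library-size scaling, which makes cells sequenced to different depths comparable. The sketch below shows counts-per-million (CPM) normalization with a log transform; the tiny count matrix and the specific choice of CPM are illustrative assumptions, not the pipeline's documented implementation.

```python
import numpy as np

def cpm_log_normalize(counts: np.ndarray) -> np.ndarray:
    """Rows = cells, columns = genes. Returns log1p(counts-per-million)."""
    library_sizes = counts.sum(axis=1, keepdims=True)  # total reads per cell
    cpm = counts / library_sizes * 1e6                 # scale to a common depth
    return np.log1p(cpm)                               # compress dynamic range

counts = np.array([[10, 0, 90],      # shallowly sequenced cell
                   [100, 0, 900]])   # same expression profile, 10x deeper
norm = cpm_log_normalize(counts)
# After normalization the two cells have identical profiles,
# so the depth difference no longer looks like biological signal.
```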
Contributors: Balachandran, Parithi (Author) / Wang, Xiao (Thesis advisor) / Brafman, David (Committee member) / Lockhart, Thurmon (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Fusion proteins that specifically interact with biochemical marks on chromosomes represent a new class of synthetic transcriptional regulators that decode cell state information rather than deoxyribonucleic acid (DNA) sequences. In multicellular organisms, information relevant to cell state, tissue identity, and oncogenesis is often encoded as biochemical modifications of histones, which are bound to DNA in eukaryotic nuclei and regulate gene expression states. In 2011, Haynes et al. showed that a synthetic regulator called the Polycomb chromatin Transcription Factor (PcTF), a fusion protein that binds methylated histones, reactivated an artificially silenced luciferase reporter gene. These synthetic transcription activators are derived from the polycomb repressive complex (PRC) and associate with the epigenetic silencing mark H3K27me3 to reactivate the expression of silenced genes. It is demonstrated here that the duration of epigenetic silencing does not perturb reactivation via PcTF fusion proteins; after 96 hours, PcTF shows the strongest reactivation activity. A variant called Pc2TF, which has roughly double the affinity for H3K27me3 in vitro, reactivated the silenced luciferase gene by at least 2-fold in living cells.
Contributors: Vargas, Daniel A. (Author) / Haynes, Karmella (Thesis advisor) / Wang, Xiao (Committee member) / Mills, Jeremy (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
The human transcriptional regulatory machinery utilizes hundreds of transcription factors, which bind to specific genic sites and result in either activation or repression of targeted genes. Networks composed of nodes and edges can be constructed to model the relationships between regulators and their targets. Within these biological networks, small enriched structural patterns containing at least three nodes can be identified as potential building blocks from which a network is organized. A first-iteration computational pipeline was designed to generate disease-specific gene regulatory networks for motif detection using established computational tools. The first goal was to identify motifs that can express themselves in a state that results in differential patient survival in one of the 32 cancer types studied. This study identified issues in detecting strongly correlated motifs that also affect patient survival, yielding preliminary results on possible drivers of cancer etiology. Second, the topology of network motifs was compared across multiple data types to identify possible divergence from a conserved enrichment pattern in network-perturbing diseases. The topology of enriched motifs across all the datasets converged on a single conserved pattern reported in a previous study, which did not appear to diverge depending on the type of disease. This report highlights possible methods to improve detection of disease-driving motifs that can aid in identifying possible treatment targets in cancer. Finally, networks were only minimally perturbed, suggesting that regulatory programs were run from evolved circuits within a cancer context.
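The three-node patterns described above can be tallied with a triadic census, which counts how often each of the 16 possible directed three-node connection patterns occurs in a network. A minimal sketch using NetworkX follows; the regulator-target edges are hypothetical examples, not data from the study, and a real motif-enrichment analysis would compare these counts against an ensemble of randomized networks.

```python
import networkx as nx

# Toy directed regulatory network: edges point regulator -> target.
G = nx.DiGraph()
G.add_edges_from([
    ("TF_A", "gene1"), ("TF_A", "TF_B"),   # TF_A regulates gene1 and TF_B
    ("TF_B", "gene1"),                     # completes a feed-forward loop
    ("TF_B", "gene2"), ("TF_C", "gene2"),
])

# Count all 16 possible 3-node connection patterns (triad types).
census = nx.triadic_census(G)

# "030T" is the transitive triad, i.e. the classic feed-forward loop motif;
# the TF_A -> TF_B -> gene1 / TF_A -> gene1 triple is the only one here.
ffl_count = census["030T"]
```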
Contributors: Striker, Shawn Scott (Author) / Plaisier, Christopher (Thesis advisor) / Brafman, David (Committee member) / Wang, Xiao (Committee member) / Arizona State University (Publisher)
Created: 2020