Matching Items (68)

Description
Sparsity has become an important modeling tool in areas such as genetics, signal and audio processing, and medical image processing. Via l1-norm-based regularization penalties, structured sparse learning algorithms can produce highly accurate models while imposing various predefined structures on the data, such as feature groups or graphs. In this thesis, I first propose to solve a sparse learning model with a general group structure, where the predefined groups may overlap with each other. Then, I present three real-world applications which can benefit from the group-structured sparse learning technique. In the first application, I study the Alzheimer's Disease diagnosis problem using multi-modality neuroimaging data. In this dataset, not every subject has all data sources available, exhibiting a unique and challenging block-wise missing pattern. In the second application, I study the automatic annotation and retrieval of fruit-fly gene expression pattern images. Combined with spatial information, sparse learning techniques can be used to construct effective representations of the expression images. In the third application, I present a new computational approach to annotate the developmental stage of Drosophila embryos in gene expression images. In addition, it provides a stage score that enables one to annotate each embryo more finely, so that embryos are divided into early and late periods of development within standard stage demarcations. Stage scores help illuminate global gene activities and changes, and more refined stage annotations improve our ability to interpret results when expression pattern matches are discovered between genes.
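The group-structured penalty at the heart of this work can be illustrated with the proximal operator of the group lasso. The overlapping-group case the thesis actually addresses requires a more involved solver, so the sketch below covers only the simpler non-overlapping case, with invented example values:

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the non-overlapping group-lasso penalty
    lam * sum_g ||w_g||_2: each group is shrunk toward zero as a block,
    so whole feature groups drop out of the model together."""
    out = np.zeros_like(w)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:
            # shrink the group's magnitude by lam while keeping its direction
            out[g] = (1.0 - lam / norm) * w[g]
    return out

w = np.array([3.0, 4.0, 0.1, 0.1])
# the strong group survives (rescaled); the weak group is zeroed out entirely
print(group_soft_threshold(w, [[0, 1], [2, 3]], lam=1.0))
```

This blockwise shrinkage is what produces structured sparsity: a group is either kept (with reduced magnitude) or eliminated as a whole.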
ContributorsYuan, Lei (Author) / Ye, Jieping (Thesis advisor) / Wang, Yalin (Committee member) / Xue, Guoliang (Committee member) / Kumar, Sudhir (Committee member) / Arizona State University (Publisher)
Created2013
Description
Biological systems are complex in many dimensions as endless transportation and communication networks all function simultaneously. Our ability to intervene within both healthy and diseased systems is tied directly to our ability to understand and model core functionality. The progress in increasingly accurate and thorough high-throughput measurement technologies has provided a deluge of data from which we may attempt to infer a representation of the true genetic regulatory system. A gene regulatory network model, if accurate enough, may allow us to perform hypothesis testing in the form of computational experiments. Of great importance to modeling accuracy is the acknowledgment of biological contexts within the models -- i.e. recognizing the heterogeneous nature of the true biological system and the data it generates. This marriage of engineering, mathematics and computer science with systems biology creates a cycle of progress between computer simulation and lab experimentation, rapidly translating interventions and treatments for patients from the bench to the bedside. This dissertation will first discuss the landscape for modeling the biological system, explore the identification of targets for intervention in Boolean network models of biological interactions, and explore context specificity both in new graphical depictions of models embodying context-specific genomic regulation and in novel analysis approaches designed to reveal embedded contextual information. Overall, the dissertation will explore a spectrum of biological modeling with a goal towards therapeutic intervention, with both formal and informal notions of biological context, in such a way that will enable future work to have an even greater impact in terms of direct patient benefit on an individualized level.
ContributorsVerdicchio, Michael (Author) / Kim, Seungchan (Thesis advisor) / Baral, Chitta (Committee member) / Stolovitzky, Gustavo (Committee member) / Collofello, James (Committee member) / Arizona State University (Publisher)
Created2013
Description
Surgery as a profession requires significant training to improve both clinical decision making and psychomotor proficiency. In the medical knowledge domain, tools have been developed, validated, and accepted for evaluation of surgeons' competencies. However, assessment of psychomotor skills still relies on the Halstedian model of apprenticeship, wherein surgeons are observed during residency for judgment of their skills. Although the value of this method of skills assessment cannot be ignored, novel methodologies of objective skills assessment need to be designed, developed, and evaluated to augment the traditional approach. Several sensor-based systems have been developed to measure a user's skill quantitatively, but the use of sensors could interfere with skill execution and thus limit the potential for evaluating real-life surgery. However, having a method to judge skills automatically in real-life conditions should be the ultimate goal, since only with such features would a system be widely adopted. This research proposes a novel video-based approach for observing surgeons' hand and surgical tool movements in minimally invasive surgical training exercises as well as during laparoscopic surgery. Because our system does not require surgeons to wear special sensors, it has the distinct advantage over alternatives of offering skills assessment in both learning and real-life environments. The system automatically detects major skill-measuring features from surgical task videos using a computing system composed of a series of computer vision algorithms and provides on-screen real-time performance feedback for more efficient skill learning. Finally, a machine-learning approach is used to develop an observer-independent composite scoring model through objective and quantitative measurement of surgical skills. To increase the effectiveness and usability of the developed system, it is integrated with a cloud-based tool, which automatically assesses surgical videos uploaded to the cloud.
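Typical skill-measuring features derived from tracked tool motion include path length and economy of motion. A minimal sketch follows; these metrics are standard in the surgical-skill literature, but this is not the dissertation's actual vision pipeline, and the trajectories are made up:

```python
import numpy as np

def motion_metrics(track):
    """Compute path length and economy of motion from a tracked
    instrument-tip trajectory given as per-frame (x, y) positions."""
    xy = np.asarray(track, dtype=float)
    steps = np.diff(xy, axis=0)                      # frame-to-frame displacements
    path_length = float(np.linalg.norm(steps, axis=1).sum())
    net = float(np.linalg.norm(xy[-1] - xy[0]))      # straight-line displacement
    economy = net / path_length if path_length else 0.0
    return path_length, economy

# a direct move scores perfect economy; a wandering one scores lower
print(motion_metrics([(0, 0), (1, 0), (2, 0)]))
print(motion_metrics([(0, 0), (1, 1), (0, 2), (2, 0)]))
```

A composite scoring model would combine several such features, which is where the observer-independent machine-learning step comes in.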
ContributorsIslam, Gazi (Author) / Li, Baoxin (Thesis advisor) / Liang, Jianming (Thesis advisor) / Dinu, Valentin (Committee member) / Greenes, Robert (Committee member) / Smith, Marshall (Committee member) / Kahol, Kanav (Committee member) / Patel, Vimla L. (Committee member) / Arizona State University (Publisher)
Created2013
Description
Genomic structural variation (SV) is defined as gross alterations in the genome, broadly classified as insertions/duplications, deletions, inversions and translocations. DNA sequencing ushered structural variant discovery beyond laboratory detection techniques to high-resolution informatics approaches. Bioinformatics tools for computational discovery of SVs, however, are still missing variants in the complex cancer genome. This study aimed to define the genomic context leading to tool failure and to design a novel algorithm addressing this context. Methods: The study tested the widely held but unproven hypothesis that tools fail to detect variants which lie in repeat regions. The publicly available 1000-Genomes dataset with experimentally validated variants was tested with the SVDetect tool for the presence of true positive (TP) SVs versus false negative (FN) SVs, expecting that FNs would be overrepresented in repeat regions. Further, the novel algorithm, designed to informatically capture the biological etiology of translocations (non-allelic homologous recombination and 3-D placement of chromosomes in cells as context), was tested using a simulated dataset. Translocations were created in known translocation hotspots and the novel-algorithm tool compared with SVDetect and BreakDancer. Results: 53% of false negative (FN) deletions were within repeat structure compared to 81% of true positive (TP) deletions. Similarly, 33% of FN insertions versus 42% of TP, 26% of FN duplications versus 57% of TP, and 54% of FN novel sequences versus 62% of TP were within repeats. Repeat structure was not driving the tools' inability to detect variants and could not be used as context. The novel algorithm with a redefined context, when tested against SVDetect and BreakDancer, detected 10/10 simulated translocations with the 30X coverage dataset at 100% allele frequency, while SVDetect captured 4/10 and BreakDancer detected 6/10. For the 15X coverage dataset with 100% allele frequency, the novel algorithm detected all ten translocations, albeit with fewer supporting reads; BreakDancer detected 4/10 and SVDetect detected 2/10. Conclusion: This study showed that the presence of repetitive elements within a structural variant did not in general influence a tool's ability to capture it. The context-based algorithm proved better than current tools even with half the genome coverage of the accepted protocol, and provides an important first step for novel translocation discovery in the cancer genome.
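The read-pair clustering idea underlying translocation callers like SVDetect and BreakDancer can be sketched as follows. This is a deliberate simplification for illustration, not the thesis's context-based algorithm; the fixed-window binning scheme and thresholds are assumptions:

```python
from collections import defaultdict

def candidate_translocations(read_pairs, min_support=3, window=500):
    """Bin inter-chromosomal read pairs into genomic windows and keep
    windows with enough supporting pairs as candidate translocations.
    Each read pair is (chrom_a, pos_a, chrom_b, pos_b)."""
    clusters = defaultdict(list)
    for chrom_a, pos_a, chrom_b, pos_b in read_pairs:
        if chrom_a == chrom_b:
            continue  # intra-chromosomal pair: not a translocation signal
        key = (chrom_a, pos_a // window, chrom_b, pos_b // window)
        clusters[key].append((pos_a, pos_b))
    return {k: v for k, v in clusters.items() if len(v) >= min_support}

# four pairs supporting one chr1/chr5 junction, plus one stray pair
pairs = [("chr1", 1000 + i, "chr5", 2000 + i) for i in range(4)]
pairs.append(("chr2", 10, "chr3", 10))  # lone pair, filtered by min_support
print(candidate_translocations(pairs))
```

The `min_support` threshold is what makes detection sensitive to coverage: at 15X, fewer discordant pairs span each junction, which is why supporting read counts drop even when all events are still found.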
ContributorsShetty, Sheetal (Author) / Dinu, Valentin (Thesis advisor) / Bussey, Kimberly (Committee member) / Scotch, Matthew (Committee member) / Wallstrom, Garrick (Committee member) / Arizona State University (Publisher)
Created2014
Description
Photosynthesis is the primary source of energy for most living organisms. Light harvesting complexes (LHC) play a vital role in harvesting sunlight and passing it on to the protein complexes of the electron transfer chain, which create the electrochemical potential across the membrane that drives ATP synthesis. Phycobilisomes (PBS) are the most important LHCs in cyanobacteria. The PBS is a complex of three light harvesting proteins: phycoerythrin (PE), phycocyanin (PC) and allophycocyanin (APC). This work was done on a newly discovered cyanobacterium called Leptolyngbya Heron Island (L.HI). This study has three important goals: 1) Sequencing, assembly and annotation of the L.HI genome - Since this is a newly discovered cyanobacterium, its genome was not previously elucidated. Illumina sequencing, a type of next generation sequencing (NGS) technology, was employed to sequence the genome. Unfortunately, the natural isolate contained other contaminating and potentially symbiotic bacterial populations. A novel bioinformatics strategy for separating the DNA of contaminating bacterial populations from that of L.HI was devised, involving a combination of tetranucleotide frequency, %(G+C), BLAST analysis and gene annotation. 2) Structural elucidation of phycoerythrin - Phycoerythrin is the most important protein in the PBS assembly because it is one of the few light harvesting proteins which absorb green light. The protein was crystallized and its structure solved to a resolution of 2 Å. This protein contains two chemically distinct types of chromophores: phycourobilin and phycoerythrobilin. Energy transfer calculations indicate that there is unidirectional flow of energy from phycourobilin to phycoerythrobilin. Energy transfer time constants computed using Förster energy transfer theory have been found to be consistent with experimental data available in the literature.
3) Effect of chromatic acclimation on photosystems - Chromatic acclimation is a phenomenon in which an organism modulates its PE/PC ratio with changing light conditions. Our investigation of L.HI has revealed that PE is expressed more strongly in green light than PC is in red light. This leads to unequal harvesting of light in the two states. Accordingly, photosystem II expression is increased in red-light-acclimatized cells, coupled with an increase in the number of PBS.
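The donor-to-acceptor transfer in goal 2 follows Förster theory, in which efficiency falls off with the sixth power of chromophore separation. A minimal sketch of the standard efficiency relation, with placeholder distances rather than values measured from the structure:

```python
def fret_efficiency(r, r0):
    """Forster resonance energy transfer efficiency for donor-acceptor
    separation r and Forster radius r0 (same units):
    E = r0^6 / (r0^6 + r^6)."""
    return r0**6 / (r0**6 + r**6)

# at r = r0, exactly half the excitation energy is transferred
print(fret_efficiency(5.0, 5.0))   # 0.5
# efficiency collapses quickly once r exceeds r0
print(fret_efficiency(10.0, 5.0))
```

The steep r^6 dependence is why energy flow between closely packed bilins in the PBS can be effectively unidirectional: small differences in pairwise geometry translate into large differences in transfer rate.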
ContributorsPaul, Robin (Author) / Fromme, Petra (Thesis advisor) / Ros, Alexandra (Committee member) / Roberson, Robert (Committee member) / Arizona State University (Publisher)
Created2014
Description
Peptide microarrays are to proteomics as sequencing is to genomics. As microarrays become more content-rich, higher resolution proteomic studies will parallel deep sequencing of nucleic acids. Antigen-antibody interactions can be studied at a much higher resolution using microarrays than was possible only a decade ago. My dissertation focuses on testing the feasibility of using either the Immunosignature platform, based on non-natural peptide sequences, or a pathogen peptide microarray, which uses bioinformatically selected peptides from pathogens, for creating sensitive diagnostics. Both diagnostic applications use relatively little serum from infected individuals, but each approaches disease diagnosis differently. The first project compares pathogen epitope peptide (life-space) and non-natural (random-space) peptide microarrays while using them for the early detection of Coccidioidomycosis (Valley Fever). The second project uses NIAID category A, B and C priority pathogen epitope peptides in a multiplexed microarray platform to assess the feasibility of using epitope peptides to diagnose multiple exposures simultaneously in a single assay. Cross-reactivity is a consistent feature of several antigen-antibody based immunodiagnostics. This work utilizes microarray optimization and bioinformatic approaches to distill the underlying disease-specific antibody signature pattern. Circumventing the inherent cross-reactivity observed in antibody binding to peptides was crucial to achieving this work's goal of accurately distinguishing multiple exposures simultaneously.
ContributorsNavalkar, Krupa Arun (Author) / Johnston, Stephen A. (Thesis advisor) / Stafford, Phillip (Thesis advisor) / Sykes, Kathryn (Committee member) / Jacobs, Bertram (Committee member) / Arizona State University (Publisher)
Created2014
Description
Protein-surface interactions, whether structured or unstructured, are important in both biological and man-made systems. Unstructured interactions are more difficult to study with conventional techniques due to the lack of a specific binding structure. In this dissertation, a novel approach is employed to study the unstructured interactions between proteins and heterogeneous surfaces, by looking at a large number of different binding partners at surfaces and using the binding information to understand the chemistry of binding. In this regard, surface-bound peptide arrays are used as a model for the study. Specifically, in Chapter 2, the effects of charge, hydrophobicity and length of surface-bound peptides on binding affinity for specific globular proteins (β-galactosidase and α1-antitrypsin), and on the relative binding of different proteins, were examined with the LC Sciences peptide array platform. While the general charge and hydrophobicity of the peptides are certainly important, more surprising is that β-galactosidase affinity for the surface does not simply increase with the length of the peptide. Another interesting observation, which leads to the next part of the study, is that even very short surface-bound peptides can have both strong and selective interactions with proteins. Hence, in Chapter 3, selected tetrapeptide sequences with known binding characteristics to β-galactosidase are used as building blocks to create longer sequences, to see if their binding functions can be added together. The conclusion is that while adding two component sequences together can either greatly increase or decrease overall binding and specificity, the contribution of the individual binding components to binding affinity and specificity is strongly dependent on their position in the peptide. Finally, in Chapter 4, another array platform is utilized to overcome the limitations associated with LC Sciences. It is found that the effects of peptide sequence properties on IgG binding with the HealthTell array are quite similar to what was observed with β-galactosidase on the LC Sciences array surface. In summary, the approach presented in this dissertation can provide binding information for both structured and unstructured interactions taking place at complex surfaces, and has the potential to help develop surfaces covered with specific short peptide sequences with relatively specific protein interaction profiles.
ContributorsWang, Wei (Author) / Woodbury, Neal W (Thesis advisor) / Liu, Yan (Committee member) / Chaput, John (Committee member) / Arizona State University (Publisher)
Created2014
Description
In many fields one needs to build predictive models for a set of related machine learning tasks, such as information retrieval, computer vision and biomedical informatics. Traditionally these tasks are treated independently and the inference is done separately for each task, which ignores important connections among the tasks. Multi-task learning aims at simultaneously building models for all tasks in order to improve generalization performance, leveraging the inherent relatedness of these tasks. In this thesis, I first propose a clustered multi-task learning (CMTL) formulation, which simultaneously learns task models and performs task clustering. I provide theoretical analysis to establish the equivalence between the CMTL formulation and alternating structure optimization, which learns a shared low-dimensional hypothesis space for different tasks. Then I present two real-world biomedical informatics applications which can benefit from multi-task learning. In the first application, I study the disease progression problem and present multi-task learning formulations for disease progression. In these formulations, the prediction at each time point is a regression task, and multiple tasks at different time points are learned simultaneously, leveraging the temporal smoothness among the tasks. The proposed formulations have been tested extensively on predicting the progression of Alzheimer's disease, and experimental results demonstrate the effectiveness of the proposed models. In the second application, I present a novel data-driven framework for densifying electronic medical records (EMR) to overcome the sparsity problem in predictive modeling using EMR. The densification of each patient is a learning task, and the proposed algorithm densifies all patients simultaneously, so that the densification of one patient leverages useful information from other patients.
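The temporal-smoothness idea in the disease-progression formulations can be sketched as per-time-point ridge regressions coupled by a squared-difference penalty between adjacent time points, solved by plain gradient descent. This is a simplified stand-in for the thesis's formulations; the toy data and hyperparameters are invented:

```python
import numpy as np

def temporal_mtl(Xs, ys, lam=1.0, rho=10.0, lr=0.01, iters=2000):
    """Fit one linear model per time point, coupling adjacent models via
    a temporal-smoothness penalty (rho/2) * sum_t ||w_t - w_{t+1}||^2."""
    T, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((T, d))
    for _ in range(iters):
        G = np.zeros_like(W)
        for t in range(T):
            # squared-loss + ridge gradient for time point t
            G[t] = Xs[t].T @ (Xs[t] @ W[t] - ys[t]) / len(ys[t]) + lam * W[t]
        # smoothness gradient couples each model to its temporal neighbors
        G[:-1] += rho * (W[:-1] - W[1:])
        G[1:] += rho * (W[1:] - W[:-1])
        W -= lr * G
    return W

# two time points with conflicting targets: smoothness pulls the models together
Xs = [np.eye(2), np.eye(2)]
ys = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
W = temporal_mtl(Xs, ys)
print(W)
```

With a large `rho` the two models end up nearly identical, which is the intended effect: a patient's predicted trajectory should change gradually between adjacent time points rather than jump.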
ContributorsZhou, Jiayu (Author) / Ye, Jieping (Thesis advisor) / Mittelmann, Hans (Committee member) / Li, Baoxin (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created2014
Description
The processes of a human somatic cell are very complex, with various genetic mechanisms governing its fate. Such cells undergo various genetic mutations, which translate to the genetic aberrations that we see in cancer. There are more than 100 types of cancer, each having many more subtypes with aberrations unique to each. In the past two decades, the widespread application of high-throughput genomic technologies, such as micro-arrays and next-generation sequencing, has led to the revelation of many such aberrations. Known types and subtypes can be readily identified using gene-expression profiling and, more importantly, high-throughput genomic datasets have helped identify novel subtypes with distinct signatures. Recent studies showing the use of gene-expression profiling in clinical decision making for breast cancer patients underscore the utility of high-throughput datasets. Beyond prognosis, understanding the underlying cellular processes is essential for effective cancer treatment. Various high-throughput techniques are now available to look at a particular aspect of a genetic mechanism in cancer tissue. To look at these mechanisms individually is akin to looking at a broken watch: taking apart each of its parts, looking at them individually and finally making a list of all the faulty ones. Integrative approaches are needed to transform one-dimensional cancer signatures into multi-dimensional interaction and regulatory networks, consequently bettering our understanding of cellular processes in cancer. Here, I attempt to (i) address ways to effectively identify high-quality variants when multiple assays on the same sample are available, through two novel tools, snpSniffer and NGSPE; and (ii) glean new biological insight into multiple myeloma through two novel integrative analysis approaches making use of disparate high-throughput datasets.
While these methods focus on multiple myeloma datasets, the informatics approaches are applicable to all cancer datasets and will thus help advance cancer genomics.
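The cross-assay consistency check in (i) can be illustrated with a set-based sketch: comparing variant calls from two assays run on the same sample. This mirrors the general idea of such concordance tools, not snpSniffer's actual implementation, and the call sets are invented:

```python
def variant_concordance(calls_a, calls_b):
    """Jaccard concordance between two variant call sets, each variant a
    (chrom, pos, ref, alt) tuple; low concordance flags sample swaps or
    assay-specific artifacts."""
    a, b = set(calls_a), set(calls_b)
    if not a | b:
        return 1.0  # two empty call sets are trivially concordant
    return len(a & b) / len(a | b)

exome = [("1", 100, "A", "T"), ("1", 200, "C", "G"), ("2", 50, "G", "A")]
genome = [("1", 100, "A", "T"), ("2", 50, "G", "A"), ("3", 7, "T", "C")]
print(variant_concordance(exome, genome))
```

In practice such a score would be computed over well-genotyped SNP positions, with a threshold below which two assays are flagged as likely not originating from the same sample.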
ContributorsYellapantula, Venkata (Author) / Dinu, Valentin (Thesis advisor) / Scotch, Matthew (Committee member) / Wallstrom, Garrick (Committee member) / Keats, Jonathan (Committee member) / Arizona State University (Publisher)
Created2014
Description
Telomerase is a truly remarkable enzyme specialized for the addition of short, highly repetitive DNA sequences onto linear eukaryotic chromosome ends. The telomerase enzyme functions as a ribonucleoprotein, minimally composed of the highly conserved catalytic telomerase reverse transcriptase and the essential telomerase RNA component, which contains an internalized short template region within the vastly larger non-coding RNA. Even among closely related groups of species, telomerase RNA is astonishingly divergent in sequence, length, and secondary structure. This massive disparity is highly prohibitive for telomerase RNA identification from previously unexplored groups of species, which is fundamental for secondary structure determination. Combined biochemical enrichment and computational screening methods were employed for the discovery of numerous telomerase RNAs from the poorly characterized echinoderm lineage. This resulted in the revelation that--while closely related to the vertebrate lineage and grossly resembling vertebrate telomerase RNA--the echinoderm telomerase RNA central domain varies extensively in structure and sequence, diverging even within echinoderms between sea urchins and brittle stars. Furthermore, the origins of telomerase RNA within the eukaryotic lineage have remained a persistent mystery. The ancient Trypanosoma telomerase RNA was previously identified; however, a functionally verified secondary structure remained elusive. Synthetic Trypanosoma telomerase was generated for molecular dissection of Trypanosoma telomerase RNA, revealing two RNA domains functionally equivalent to those found in known telomerase RNAs, yet structurally distinct. This work demonstrates that telomerase RNA is uncommonly divergent in gross architecture, while retaining critical universal elements.
ContributorsPodlevsky, Joshua (Author) / Chen, Julian (Thesis advisor) / Mangone, Marco (Committee member) / Kusumi, Kenro (Committee member) / Wilson-Rawls, Norma (Committee member) / Arizona State University (Publisher)
Created2015