Matching Items (19)

Description
In many classification problems data samples cannot be collected easily, for example in drug trials, biological experiments, and studies of cancer patients. In many situations the data set size is small and there are many outliers. When classifying such data, for example cancer versus normal patients, the consequences of misclassification are arguably more serious than for other data types, because the data point could be a cancer patient, or the classification decision could help determine which gene might be over-expressed and perhaps a cause of cancer. These misclassifications are typically more frequent in the presence of outlier data points. The aim of this thesis is to develop a maximum-margin classifier suited to addressing the lack of robustness of discriminant-based classifiers (such as the Support Vector Machine (SVM)) to noise and outliers. The underlying notion is to adopt and develop a natural loss function that is more robust to outliers and more representative of the true loss function of the data. It is demonstrated experimentally that SVMs are indeed susceptible to outliers and that the new classifier developed here, coined the Robust-SVM (RSVM), is superior to all studied classifiers on the synthetic datasets. It is superior to the SVM on both the synthetic data and experimental data from biomedical studies, and is competitive with a classifier derived along similar lines when real-life data examples are considered.
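The abstract does not spell out the RSVM loss function, so the following is only an editorial sketch of the general idea it describes: bounding the influence of outliers on a maximum-margin objective. It contrasts the standard (unbounded) hinge loss with a hypothetical truncated, ramp-style variant; the function names, cap value, and toy margins are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def hinge_loss(margins):
    """Standard SVM hinge loss: unbounded, so a single far-off outlier
    can dominate the training objective."""
    return np.maximum(0.0, 1.0 - margins)

def truncated_hinge_loss(margins, cap=2.0):
    """Bounded ('ramp'-style) hinge loss: penalties are clipped at `cap`,
    so a point with a very negative margin contributes only a constant."""
    return np.minimum(np.maximum(0.0, 1.0 - margins), cap)

# margins = y * (w @ x + b) for labels y in {-1, +1}; the last point is an extreme outlier
margins = np.array([2.0, 0.5, -0.5, -8.0])
print(hinge_loss(margins))            # [0.  0.5 1.5 9. ]  -> outlier dominates
print(truncated_hinge_loss(margins))  # [0.  0.5 1.5 2. ]  -> outlier influence capped
```

Because the truncated loss saturates, a grossly misclassified outlier contributes only a bounded penalty, which is the kind of robustness the abstract describes.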
Contributors: Gupta, Sidharth (Author) / Kim, Seungchan (Thesis advisor) / Welfert, Bruno (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Surgery as a profession requires significant training to improve both clinical decision making and psychomotor proficiency. In the medical knowledge domain, tools have been developed, validated, and accepted for the evaluation of surgeons' competencies. However, assessment of psychomotor skills still relies on the Halstedian model of apprenticeship, wherein surgeons are observed during residency for judgment of their skills. Although the value of this method of skills assessment cannot be ignored, novel methodologies of objective skills assessment need to be designed, developed, and evaluated to augment the traditional approach. Several sensor-based systems have been developed to measure a user's skill quantitatively, but the use of sensors could interfere with skill execution and thus limit the potential for evaluating real-life surgery. However, having a method to judge skills automatically in real-life conditions should be the ultimate goal, since only with such features would a system be widely adopted. This research proposes a novel video-based approach for observing surgeons' hand and surgical tool movements in minimally invasive surgical training exercises as well as during laparoscopic surgery. Because our system does not require surgeons to wear special sensors, it has the distinct advantage over alternatives of offering skills assessment in both learning and real-life environments. The system automatically detects major skill-measuring features from surgical task videos using a computing system composed of a series of computer vision algorithms, and provides on-screen real-time performance feedback for more efficient skill learning. Finally, a machine-learning approach is used to develop an observer-independent composite scoring model through objective and quantitative measurement of surgical skills. To increase the effectiveness and usability of the developed system, it is integrated with a cloud-based tool that automatically assesses surgical videos uploaded to the cloud.
Contributors: Islam, Gazi (Author) / Li, Baoxin (Thesis advisor) / Liang, Jianming (Thesis advisor) / Dinu, Valentin (Committee member) / Greenes, Robert (Committee member) / Smith, Marshall (Committee member) / Kahol, Kanav (Committee member) / Patel, Vimla L. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Vertebrate genomes demonstrate a remarkable range of sizes, from 0.3 to 133 gigabase pairs. The proliferation of repeat elements is a major source of genomic expansion. In particular, long interspersed nuclear elements (LINEs) are autonomous retrotransposons that have the ability to "copy and paste" themselves into a host genome through a mechanism called target-primed reverse transcription. LINEs have been called "junk DNA," "viral DNA," and "selfish" DNA, and were once thought to be parasitic elements. However, LINEs, which diversified before the emergence of many early vertebrates, have strongly shaped the evolution of eukaryotic genomes. This thesis will evaluate LINE abundance, diversity, and activity in four anole lizards. An intrageneric analysis will be conducted using comparative phylogenetics and bioinformatics. Comparisons within the Anolis genus, which derives from a single lineage of an adaptive radiation, will be conducted to explore the relationship between LINE retrotransposon activity and causal changes in genomic size and composition.
Contributors: May, Catherine (Author) / Kusumi, Kenro (Thesis advisor) / Gadau, Juergen (Committee member) / Rawls, Jeffery A (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Continuous advancements in biomedical research have resulted in the production of vast amounts of scientific data and literature discussing them. The ultimate goal of computational biology is to translate these large amounts of data into actual knowledge of the complex biological processes and accurate life science models. The ability to rapidly and effectively survey the literature is necessary for the creation of large scale models of the relationships among biomedical entities as well as hypothesis generation to guide biomedical research. To reduce the effort and time spent in performing these activities, an intelligent search system is required. Even though many systems aid in navigating through this wide collection of documents, the vastness and depth of this information overload can be overwhelming. An automated extraction system coupled with a cognitive search and navigation service over these document collections would not only save time and effort, but also facilitate discovery of the unknown information implicitly conveyed in the texts. This thesis presents the different approaches used for large scale biomedical named entity recognition, and the challenges faced in each. It also proposes BioEve: an integrative framework to fuse a faceted search with information extraction to provide a search service that addresses the user's desire for "completeness" of the query results, not just the top-ranked ones. This information extraction system enables discovery of important semantic relationships between entities such as genes, diseases, drugs, and cell lines and events from biomedical text on MEDLINE, which is the largest publicly available database of the world's biomedical journal literature. It is an innovative search and discovery service that makes it easier to search, navigate, and discover knowledge hidden in life sciences literature. To demonstrate the utility of this system, this thesis also details a prototype enterprise quality search and discovery service that helps researchers with a guided step-by-step query refinement, by suggesting concepts enriched in intermediate results, and thereby facilitating the "discover more as you search" paradigm.
Contributors: Kanwar, Pradeep (Author) / Davulcu, Hasan (Thesis advisor) / Dinu, Valentin (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
Cardiovascular disease (CVD) is the leading cause of mortality yet is largely preventable; the key to prevention is to identify at-risk individuals before adverse events occur. For predicting individual CVD risk, carotid intima-media thickness (CIMT), a noninvasive ultrasound method, has proven to be valuable, offering several advantages over the CT coronary artery calcium score. However, each CIMT examination includes several ultrasound videos, and interpreting each of these CIMT videos involves three operations: (1) select three end-diastolic ultrasound frames (EUF) in the video, (2) localize a region of interest (ROI) in each selected frame, and (3) trace the lumen-intima interface and the media-adventitia interface in each ROI to measure CIMT. These operations are tedious, laborious, and time-consuming, a serious limitation that hinders the widespread utilization of CIMT in clinical practice. To overcome this limitation, this work presents a new system to automate CIMT video interpretation. Our extensive experiments demonstrate that the suggested system significantly outperforms the state-of-the-art methods. The superior performance is attributable to our unified framework based on convolutional neural networks (CNNs) coupled with our informative image representation and effective post-processing of the CNN outputs, which are uniquely designed for each of the above three operations.
Contributors: Shin, Jaeyul (Author) / Liang, Jianming (Thesis advisor) / Maciejewski, Ross (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
Rapid advancements in genomic technologies have increased our understanding of rare human disease. Generation of multiple types of biological data, including genetic variation from the genome or exome, expression from the transcriptome, methylation patterns from the epigenome, protein complexity from the proteome, and metabolite information from the metabolome, is feasible. "Omics" tools provide a comprehensive view into the biological mechanisms that impact disease trait and risk. In spite of the available data types and the ability to collect them simultaneously from patients, researchers still rely on analyzing each independently. Combining information from multiple biological data types can reduce missing information, increase confidence in single-data-type findings, and provide a more complete view of genotype-phenotype correlations. Although rare disease genetics has been greatly improved by exome sequencing, a substantial portion of clinical patients remain undiagnosed. Multiple frameworks for integrative analysis of genomic and transcriptomic data are presented, with a focus on identifying functional genetic variations in patients with undiagnosed, rare childhood conditions. Direct quantitation of the X inactivation ratio was developed from genomic and transcriptomic data, using allele-specific expression and segregation analysis to determine the magnitude and inheritance mode of X inactivation. This approach was applied in two families, revealing non-random X inactivation in female patients. Expression-based analysis of X inactivation showed high correlation with the standard clinical assay. These findings improved understanding of the molecular mechanisms underlying X-linked disorders. In addition, multivariate outlier analysis of gene- and exon-level data from RNA-seq using Mahalanobis distance, and integration of the distance scores with genomic data, identified genotype-phenotype correlations during the variant prioritization process in 25 families. Mahalanobis distance scores revealed variants with large transcriptional impact in patients. In this dataset, frameshift variants were more likely than other types of functional variants to result in outlier expression signatures. Integration of outlier estimates with genetic variants corroborated previously identified, presumed causal variants and highlighted a new candidate in a previously undiagnosed case. Integrative genomic approaches in easily attainable tissue will facilitate the search for biomarkers that impact disease traits, uncover pharmacogenomic targets, provide novel insight into the molecular underpinnings of uncharacterized conditions, and help improve analytical approaches that use large datasets.
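The abstract names Mahalanobis distance as the multivariate outlier statistic but does not detail the computation. The sketch below is an editorial illustration, under assumed inputs, of how per-sample distance scores might be computed from an expression matrix; the function name, toy data, and use of a pseudo-inverse for the covariance are assumptions, not the thesis's implementation.

```python
import numpy as np

def mahalanobis_outlier_scores(expr):
    """Multivariate outlier scores for an (n_samples x n_genes) expression
    matrix: distance of each sample from the cohort mean, accounting for
    gene-gene covariance. Larger scores suggest outlier expression profiles."""
    mu = expr.mean(axis=0)
    cov = np.cov(expr, rowvar=False)
    cov_inv = np.linalg.pinv(cov)       # pseudo-inverse guards against a singular covariance
    centered = expr - mu
    # d_i^2 = (x_i - mu)^T Sigma^{-1} (x_i - mu), computed for every sample i
    d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return np.sqrt(d2)

# Toy example: 6 samples x 4 genes; the last sample is an expression outlier
rng = np.random.default_rng(0)
expr = rng.normal(loc=10.0, scale=1.0, size=(6, 4))
expr[-1] += 8.0
print(mahalanobis_outlier_scores(expr))
```

Samples with unusually large scores would then be the candidates whose expression signatures are weighed against the genomic variant data during prioritization.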
Contributors: Szelinger, Szabolcs (Author) / Craig, David W. (Thesis advisor) / Kusumi, Kenro (Thesis advisor) / Narayan, Vinodh (Committee member) / Rosenberg, Michael S. (Committee member) / Huentelman, Matthew J (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
In many fields one needs to build predictive models for a set of related machine learning tasks, such as information retrieval, computer vision, and biomedical informatics. Traditionally these tasks are treated independently and the inference is done separately for each task, which ignores important connections among the tasks. Multi-task learning aims at simultaneously building models for all tasks in order to improve the generalization performance, leveraging the inherent relatedness of these tasks. In this thesis, I first propose a clustered multi-task learning (CMTL) formulation, which simultaneously learns task models and performs task clustering. I provide theoretical analysis to establish the equivalence between the CMTL formulation and alternating structure optimization, which learns a shared low-dimensional hypothesis space for different tasks. I then present two real-world biomedical informatics applications that can benefit from multi-task learning. In the first application, I study the disease progression problem and present multi-task learning formulations for disease progression. In these formulations, the prediction at each time point is a regression task, and multiple tasks at different time points are learned simultaneously, leveraging the temporal smoothness among the tasks. The proposed formulations have been tested extensively on predicting the progression of Alzheimer's disease, and experimental results demonstrate the effectiveness of the proposed models. In the second application, I present a novel data-driven framework for densifying electronic medical records (EMR) to overcome the sparsity problem in predictive modeling using EMR. The densification of each patient's record is a learning task, and the proposed algorithm densifies all patients' records simultaneously. As such, the densification of one patient's record leverages useful information from other patients.
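The disease-progression formulation is described only at a high level; as an editorial illustration under assumed notation, the sketch below fits one least-squares model per time point while penalizing differences between temporally adjacent models, which is one common way to encode temporal smoothness. The objective, hyperparameters, and toy data are assumptions, not necessarily the thesis's exact formulation.

```python
import numpy as np

def temporal_mtl(X, Y, lam=0.1, theta=1.0, lr=1e-3, iters=2000):
    """Multi-task least-squares regression with a temporal-smoothness penalty.

    Each column of Y is one task (one time point); W[:, t] is the model for
    time point t. Assumed objective:
        sum_t ||X W[:, t] - Y[:, t]||^2 + lam * ||W||_F^2
              + theta * sum_t ||W[:, t+1] - W[:, t]||^2
    minimized here by plain gradient descent.
    """
    d, T = X.shape[1], Y.shape[1]
    W = np.zeros((d, T))
    for _ in range(iters):
        grad = 2 * X.T @ (X @ W - Y) + 2 * lam * W
        diff = W[:, 1:] - W[:, :-1]          # differences between adjacent time points
        grad[:, :-1] -= 2 * theta * diff     # smoothness-term gradient w.r.t. W[:, t]
        grad[:, 1:] += 2 * theta * diff      # smoothness-term gradient w.r.t. W[:, t+1]
        W -= lr * grad
    return W

# Toy data: 50 patients, 5 features, 4 time points (all values made up)
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
Y = rng.normal(size=(50, 4))
print(temporal_mtl(X, Y).shape)   # (5, 4)
```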
Contributors: Zhou, Jiayu (Author) / Ye, Jieping (Thesis advisor) / Mittelmann, Hans (Committee member) / Li, Baoxin (Committee member) / Wang, Yalin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Telomerase is a truly remarkable enzyme specialized for the addition of short, highly repetitive DNA sequences onto linear eukaryotic chromosome ends. The telomerase enzyme functions as a ribonucleoprotein, minimally composed of the highly conserved catalytic telomerase reverse transcriptase and the essential telomerase RNA component, which contains an internalized short template region within the vastly larger non-coding RNA. Even among closely related groups of species, telomerase RNA is astonishingly divergent in sequence, length, and secondary structure. This massive disparity greatly hinders telomerase RNA identification from previously unexplored groups of species, which is fundamental for secondary structure determination. Combined biochemical enrichment and computational screening methods were employed for the discovery of numerous telomerase RNAs from the poorly characterized echinoderm lineage. This resulted in the revelation that, while closely related to the vertebrate lineage and grossly resembling vertebrate telomerase RNA, the echinoderm telomerase RNA central domain varies extensively in structure and sequence, diverging even within echinoderms amongst sea urchins and brittle stars. Furthermore, the origins of telomerase RNA within the eukaryotic lineage have remained a persistent mystery. The ancient Trypanosoma telomerase RNA was previously identified; however, a functionally verified secondary structure remained elusive. Synthetic Trypanosoma telomerase was generated for molecular dissection of Trypanosoma telomerase RNA, revealing two RNA domains functionally equivalent to those found in known telomerase RNAs, yet structurally distinct. This work demonstrates that telomerase RNA is uncommonly divergent in gross architecture, while retaining critical universal elements.
Contributors: Podlevsky, Joshua (Author) / Chen, Julian (Thesis advisor) / Mangone, Marco (Committee member) / Kusumi, Kenro (Committee member) / Wilson-Rawls, Norma (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
As a promising solution to the problem of acquiring and storing large amounts of image and video data, spatial-multiplexing camera architectures have received a lot of attention in the recent past. Such architectures have the attractive feature of combining the two-step process of acquisition and compression of pixel measurements in a conventional camera into a single step. A popular variant is the single-pixel camera, which obtains measurements of the scene using a pseudo-random measurement matrix. Advances in compressive sensing (CS) theory in the past decade have supplied the tools that, in theory, allow near-perfect reconstruction of an image from these measurements even for sub-Nyquist sampling rates. However, current state-of-the-art reconstruction algorithms suffer from two drawbacks: they are (1) computationally very expensive and (2) incapable of yielding high-fidelity reconstructions at high compression ratios. In computer vision, the final goal is usually to perform an inference task using the acquired images, not signal recovery. With this motivation, this thesis considers the possibility of inference directly from compressed measurements, thereby obviating the need for expensive reconstruction algorithms. Non-linear features are often used for inference tasks in computer vision; however, it is currently unclear how to extract such features from compressed measurements. Instead, using the theoretical basis provided by the Johnson-Lindenstrauss lemma, discriminative features based on smashed correlation filters are derived, and it is shown that it is indeed possible to perform reconstruction-free inference at high compression ratios with only a marginal loss in accuracy. As a specific inference problem in computer vision, face recognition is considered, mainly beyond the visible spectrum, such as in the short-wave infrared (SWIR) region, where sensors are expensive.
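The smashed-filter machinery is only summarized in the abstract; the toy sketch below is an editorial illustration of the Johnson-Lindenstrauss intuition it relies on, namely that pseudo-random measurements approximately preserve inner products, so a correlation-filter response can be evaluated directly on compressed measurements without reconstruction. The dimensions, measurement matrix, and signals are made-up assumptions, not the thesis's actual filters or data.

```python
import numpy as np

rng = np.random.default_rng(2)

d, m = 4096, 256                             # pixels vs. number of compressive measurements
Phi = rng.normal(size=(m, d)) / np.sqrt(m)   # pseudo-random measurement matrix

template = rng.normal(size=d)                       # a correlation-filter "template"
scene = 0.8 * template + 0.2 * rng.normal(size=d)   # a scene partially matching it

# Correlation in the original pixel domain ...
full_corr = scene @ template

# ... versus correlation computed directly on compressed measurements.
# Johnson-Lindenstrauss-style arguments say random projections approximately
# preserve inner products, so the two responses should be close.
compressed_corr = (Phi @ scene) @ (Phi @ template)

print(full_corr, compressed_corr)
```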
Contributors: Lohit, Suhas Anand (Author) / Turaga, Pavan (Thesis advisor) / Spanias, Andreas (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Damage to the central nervous system due to spinal cord or traumatic brain injury, as well as degenerative musculoskeletal disorders such as arthritis, drastically impacts quality of life. Regeneration of complex structures is quite limited in mammals, though other vertebrates possess this ability. Lizards are the most closely related organisms to humans that can regenerate de novo skeletal muscle, hyaline cartilage, spinal cord, vasculature, and skin. Progress in studying the cellular and molecular mechanisms of lizard regeneration has previously been limited by a lack of genomic resources. Building on the release of the genome of the green anole, Anolis carolinensis, we developed a second-generation, robust RNA-Seq-based genome annotation and performed the first transcriptomic analysis of tail regeneration in this species. In order to investigate gene expression in regenerating tissue, we performed whole-transcriptome and microRNA transcriptome analysis of the regenerating tail tip and base and associated tissues, identifying key genetic targets in the regenerative process. These studies have identified components of a genetic program for regeneration in the lizard that includes both developmental and adult repair mechanisms shared with mammals, indicating value in the translation of these findings to future regenerative therapies.
Contributors: Hutchins, Elizabeth (Author) / Kusumi, Kenro (Thesis advisor) / Rawls, Jeffrey A. (Committee member) / Denardo, Dale F. (Committee member) / Huentelman, Matthew J. (Committee member) / Arizona State University (Publisher)
Created: 2015