Matching Items (84)


Combining thickness information with surface tensor-based morphometry for the 3D statistical analysis of the corpus callosum

Description

In blindness research, the corpus callosum (CC) is the most frequently studied sub-cortical structure, due to its important involvement in visual processing. While most callosal analyses from brain structural magnetic resonance images (MRI) are limited to the 2D mid-sagittal slice, we propose a novel framework to capture a complete set of 3D morphological differences in the corpus callosum between two groups of subjects. The CCs are segmented from whole-brain T1-weighted MRI and modeled as 3D tetrahedral meshes. The callosal surface is divided into superior and inferior patches, on which we compute a volumetric harmonic field by solving Laplace's equation with Dirichlet boundary conditions. We adopt a refined tetrahedral mesh to compute the Laplacian operator, so our computation can achieve sub-voxel accuracy. Thickness is estimated by tracing streamlines in the harmonic field. We combine the areal changes found using surface tensor-based morphometry and the thickness information into a vector at each vertex, to be used as a metric for the statistical analysis. Group differences are assessed on this combined measure through Hotelling's T² test. The method is applied to statistically compare three groups: congenitally blind (CB), late blind (LB; onset > 8 years old) and sighted (SC) subjects. Our results reveal significant differences in several regions of the CC between both blind groups and the sighted group, and, to a lesser extent, between the LB and CB groups. These results demonstrate the crucial role of visual deprivation during the developmental period in reshaping the structural architecture of the CC.
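
The per-vertex statistic described above can be sketched in a few lines. Below is a minimal, hypothetical example of a two-sample Hotelling's T² test on 2D feature vectors (thickness plus areal change); it is not the authors' implementation, and the F-distribution conversion assumes equal group covariances.

```python
import numpy as np
from scipy.stats import f

def hotelling_t2(x, y):
    """Two-sample Hotelling's T^2 test on multivariate group means.

    x, y: (n_subjects, p) arrays of per-vertex feature vectors,
    e.g. p = 2 for (thickness, areal change). Returns (T2, p-value).
    """
    nx, ny = len(x), len(y)
    p = x.shape[1]
    dmean = x.mean(axis=0) - y.mean(axis=0)
    # Pooled covariance under the equal-covariance assumption.
    s_pooled = ((nx - 1) * np.cov(x, rowvar=False) +
                (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
    t2 = (nx * ny) / (nx + ny) * dmean @ np.linalg.solve(s_pooled, dmean)
    # T^2 maps to an F statistic with (p, nx + ny - p - 1) degrees of freedom.
    f_stat = (nx + ny - p - 1) / (p * (nx + ny - 2)) * t2
    return t2, f.sf(f_stat, p, nx + ny - p - 1)

# Hypothetical per-vertex data: 20 blind vs. 25 sighted subjects.
rng = np.random.default_rng(0)
blind = rng.normal([2.0, 1.1], 0.3, size=(20, 2))
sighted = rng.normal([2.4, 1.0], 0.3, size=(25, 2))
print(hotelling_t2(blind, sighted))
```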

Date Created: 2013

Genomic diversity and abundance of LINE retrotransposons in 4 anole lizards

Description

Vertebrate genomes demonstrate a remarkable range of sizes, from 0.3 to 133 gigabase pairs. The proliferation of repeat elements is a major source of genomic expansion. In particular, long interspersed nuclear elements (LINEs) are autonomous retrotransposons that have the ability to "copy and paste" themselves into a host genome through a mechanism called target-primed reverse transcription. LINEs have been called "junk DNA," "viral DNA," and "selfish" DNA, and were once thought to be parasitic elements. However, LINEs, which diversified before the emergence of many early vertebrates, have strongly shaped the evolution of eukaryotic genomes. This thesis will evaluate LINE abundance, diversity and activity in four anole lizards. An intrageneric analysis will be conducted using comparative phylogenetics and bioinformatics: comparisons within the Anolis genus, which derives from a single lineage of an adaptive radiation, will explore the relationship between LINE retrotransposon activity and associated changes in genome size and composition.

Date Created: 2013

Contextual computing: tracking healthcare providers in the Emergency Department via Bluetooth beacons

Description

Hospital Emergency Departments (EDs) are frequently crowded. The Centers for Medicare and Medicaid Services (CMS) collects performance measurements from EDs, such as the door-to-clinician time: the time at which a patient is first seen by a clinician. Current methods for documenting the door-to-clinician time are in written form and may contain inaccuracies. The goal of this thesis is to provide a method for automatic and accurate retrieval and documentation of the door-to-clinician time. To automatically collect door-to-clinician times, single-board computers were installed in patient rooms that logged the time whenever they detected a specific Bluetooth emission from a device carried by the clinician. The Bluetooth signal is used to estimate the distance of the clinician from the single-board computer. The logged time and distance estimate are then sent to a server, which determines whether the clinician was in the room seeing the patient at the logged time. The times collected automatically were compared with the handwritten times recorded by clinicians and shown to be accurate to the minute.
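
A common way to turn a received Bluetooth signal into a distance estimate is the log-distance path-loss model; the sketch below is an illustrative stand-in for the calculation described above, and the TX_POWER_DBM and PATH_LOSS_EXPONENT constants are assumptions that would need per-room calibration.

```python
# Log-distance path-loss model: RSSI(d) = TxPower - 10 * n * log10(d),
# so d = 10 ** ((TxPower - RSSI) / (10 * n)).
TX_POWER_DBM = -59          # assumed RSSI at 1 m; device-specific
PATH_LOSS_EXPONENT = 2.0    # ~2 in free space, typically higher indoors

def rssi_to_distance(rssi_dbm: float) -> float:
    """Estimate distance in meters from a Bluetooth RSSI reading."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def clinician_in_room(rssi_dbm: float, threshold_m: float = 2.0) -> bool:
    """Decide whether the badge is close enough to count as 'in the room'."""
    return rssi_to_distance(rssi_dbm) <= threshold_m

print(rssi_to_distance(-65))  # ~2 m for these assumed constants
```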

Date Created: 2015

Specific amino acid substitutions improve the activity and specificity of an antimicrobial peptide & serodiagnosis by immunosignature: a multiplexing tool for monitoring the humoral immune response to dengue

Description

Random peptide microarrays are a powerful tool for both the treatment and diagnosis of infectious diseases. On the treatment side, selected random peptides on the microarray have either binding or lytic potency against certain pathogen cells, so they can be synthesized into new antimicrobial agents, denoted synbodies (synthetic antibodies). On the diagnostic side, serum containing specific infection-related antibodies creates unique "pathogen-immunosignatures" on the random peptide microarray that are distinct from those of healthy control serum, and this difference in binding pattern can be used as a more precise measurement than traditional ELISA tests. My thesis project is divided into these two parts: the first falls on the treatment side and the second focuses on the diagnostic side. My first chapter shows that an amino acid substitution peptide library helps to improve the activity of a recently reported synthetic antimicrobial peptide selected by the random peptide microarray. By substituting one or two amino acids of the original lead peptide, the new substitutes show altered hemolytic effects against mouse red blood cells and altered potency against two pathogens: Staphylococcus aureus and Pseudomonas aeruginosa. Two new substitutes are then combined to form a synbody, which shows significant antimicrobial potency against Staphylococcus aureus (<0.5 µM). In the second chapter, I explore the possibility of using the 10K Ver.2 random peptide microarray to monitor the humoral immune response to dengue. Over 2.5 billion people (40% of the world's population) live in dengue-transmitting areas. However, there is currently no efficient dengue treatment or vaccine. Here, with a limited set of dengue patient serum samples, we show that the immunosignature has the potential to distinguish not only dengue infection from non-infected controls, but also primary from secondary dengue infections, dengue infection from West Nile Virus (WNV) infection, and even different dengue serotypes from one another. By further bioinformatic analysis, we demonstrate that the significant peptides selected to distinguish dengue-infected from normal samples may indicate the epitopes responsible for the immune response.
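
As an illustration of how discriminating peptides might be pulled from such an array, the sketch below ranks peptide features by a two-sample t-test between infected and control intensity matrices. This is a generic feature-selection pattern under assumed data shapes, not the thesis's exact pipeline.

```python
import numpy as np
from scipy.stats import ttest_ind

def top_discriminating_peptides(infected, control, k=50):
    """Rank peptides by two-sample t-test p-value.

    infected, control: (n_samples, n_peptides) log-intensity matrices.
    Returns indices of the k peptides that best separate the groups.
    """
    _, pvals = ttest_ind(infected, control, axis=0, equal_var=False)
    return np.argsort(pvals)[:k]

# Hypothetical 10K-peptide array data for a handful of sera.
rng = np.random.default_rng(1)
infected = rng.normal(0.0, 1.0, size=(12, 10000))
infected[:, :30] += 1.5           # a few peptides shifted by infection
control = rng.normal(0.0, 1.0, size=(15, 10000))
print(top_discriminating_peptides(infected, control, k=10))
```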

Date Created: 2013

Gene regulatory networks: modeling, intervention and context

Description

Biological systems are complex in many dimensions, as endless transportation and communication networks all function simultaneously. Our ability to intervene within both healthy and diseased systems is tied directly to our ability to understand and model core functionality. The progress in increasingly accurate and thorough high-throughput measurement technologies has provided a deluge of data from which we may attempt to infer a representation of the true genetic regulatory system. A gene regulatory network model, if accurate enough, may allow us to perform hypothesis testing in the form of computational experiments. Of great importance to modeling accuracy is the acknowledgment of biological contexts within the models -- i.e., recognizing the heterogeneous nature of the true biological system and the data it generates. This marriage of engineering, mathematics and computer science with systems biology creates a cycle of progress between computer simulation and lab experimentation, rapidly translating interventions and treatments for patients from the bench to the bedside. This dissertation first discusses the landscape for modeling the biological system, then explores the identification of targets for intervention in Boolean network models of biological interactions, and finally examines context specificity, both in new graphical depictions of models embodying context-specific genomic regulation and in novel analysis approaches designed to reveal embedded contextual information. Overall, the dissertation explores a spectrum of biological modeling with a goal of therapeutic intervention, with both formal and informal notions of biological context, in such a way as to enable future work to have an even greater impact in terms of direct patient benefit at an individualized level.
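
For readers unfamiliar with the Boolean network formalism the dissertation builds on, the toy sketch below shows the basic machinery: genes as binary variables, synchronous update rules, and a gene "knockout" intervention that pins a node to 0. The three-gene rules here are invented purely for illustration.

```python
# Toy 3-gene Boolean network; each rule maps the current state to a bit.
RULES = {
    "A": lambda s: s["B"] and not s["C"],
    "B": lambda s: s["A"] or s["C"],
    "C": lambda s: not s["A"],
}

def step(state, knockout=None):
    """Synchronously update all genes; a knocked-out gene is pinned to 0."""
    nxt = {g: int(rule(state)) for g, rule in RULES.items()}
    if knockout is not None:
        nxt[knockout] = 0
    return nxt

def attractor(state, knockout=None):
    """Iterate until a previously seen state recurs, returning the cycle."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state, knockout)
    return seen[seen.index(state):]

start = {"A": 1, "B": 0, "C": 1}
print("wild type:  ", attractor(start))
print("knock out C:", attractor(start, knockout="C"))
```

Intervention questions then become searches over such models: which node, when pinned, steers the network away from a disease-associated attractor.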

Date Created: 2013

Computational methods for knowledge integration in the analysis of large-scale biological networks

Description

As we migrate into an era of personalized medicine, understanding how bio-molecules interact with one another to form cellular systems is one of the key focus areas of systems biology. Several challenges, such as the dynamic nature of cellular systems, uncertainty due to environmental influences, and the heterogeneity between individual patients, render this a difficult task. In the last decade, several algorithms have been proposed to elucidate cellular systems from data, resulting in numerous data-driven hypotheses. However, due to the large number of variables involved in the process, many of which are unknown or not measurable, such computational approaches often lead to a high proportion of false positives. This renders interpretation of the data-driven hypotheses extremely difficult. Consequently, only a small proportion of these hypotheses are subjected to further experimental validation, ultimately limiting their potential to augment existing biological knowledge. This dissertation develops a framework of computational methods for the analysis of such data-driven hypotheses, leveraging existing biological knowledge. Specifically, I show how biological knowledge can be mapped onto these hypotheses and subsequently augmented through novel hypotheses. Biological hypotheses are learned at three levels of abstraction -- individual interactions, functional modules and relationships between pathways -- corresponding to three complementary aspects of biological systems. The computational methods developed in this dissertation are applied to high-throughput cancer data, resulting in novel hypotheses with potentially significant biological impact.

Date Created: 2012

The development of a validated clinically meaningful endpoint for the evaluation of tear film stability as a measure of ocular surface protection for use in the diagnosis and evaluation of dry eye disease

Description

This dissertation presents methods for the evaluation of ocular surface protection during natural blink function. The evaluation of ocular surface protection is especially important in the diagnosis of dry eye and the evaluation of dry eye severity in clinical trials. Dry eye is a highly prevalent disease, affecting a large portion (between 11% and 22%) of an aging population. There is only one approved therapy, with limited efficacy, which results in a huge unmet need. The reason so few drugs have reached approval is the lack of a recognized therapeutic pathway with reproducible endpoints. While the interplay between blink function and ocular surface protection has long been recognized, all currently used evaluation techniques have addressed blink function in isolation from tear film stability, the gold standard of which is Tear Film Break-Up Time (TFBUT). In the first part of this research, a manual technique for calculating ocular surface protection during natural blink function through the use of video analysis is developed and evaluated for its ability to differentiate between dry eye and normal subjects; the results are compared with those of TFBUT. In the second part of this research, the technique is improved in precision and automated through the use of video analysis algorithms. This software, called the OPI 2.0 System, is evaluated for accuracy and precision, and comparisons are made between the OPI 2.0 System and other currently recognized dry eye diagnostic techniques (e.g., TFBUT). In the third part of this research, the OPI 2.0 System is deployed for the evaluation of subjects before, immediately after, and 30 minutes after exposure to a controlled adverse environment (CAE); once again, the results are compared and contrasted against commonly used dry eye endpoints. The results demonstrate that the evaluation of ocular surface protection using the OPI 2.0 System offers superior accuracy to the current standard, TFBUT.
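
One way to combine tear film stability with blink function, in the spirit of the ocular protection concept above, is to compare break-up time against each interblink interval. The sketch below computes a simple per-interval protection ratio (TFBUT divided by interblink interval, with values below 1 indicating an unprotected surface); it is an illustrative simplification, not the OPI 2.0 System's algorithm.

```python
def protection_ratios(blink_times_s, tfbut_s):
    """Ocular-protection-style ratio for each interblink interval.

    blink_times_s: sorted blink timestamps (seconds) from video analysis.
    tfbut_s: tear film break-up time in seconds.
    A ratio < 1 means the tear film breaks up before the next blink.
    """
    intervals = [b - a for a, b in zip(blink_times_s, blink_times_s[1:])]
    return [tfbut_s / ibi for ibi in intervals]

blinks = [0.0, 4.2, 10.5, 13.1]       # hypothetical blink timestamps
print(protection_ratios(blinks, tfbut_s=5.0))
# -> [1.19, 0.79, 1.92]: the second interval leaves the surface exposed
```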

Date Created: 2012

A study on home based Parkinson's disease monitoring and evaluation: design, development, and evaluation

Description

Parkinson's disease, the most prevalent movement disorder of the central nervous system, is a chronic condition that affects more than 1,000,000 U.S. residents and about 3% of the population over the age of 65. The characteristic symptoms include tremors, bradykinesia, rigidity and impaired postural stability. Current therapy based on augmentation or replacement of dopamine is designed to improve patients' motor performance but often leads to levodopa-induced complications, such as dyskinesia and motor fluctuation. As the disease progresses, clinicians must closely monitor patients in order to identify any complications or decline in motor function as soon as possible. Unfortunately, current clinical assessment for Parkinson's is subjective and mostly influenced by brief observations during patient visits, so improvement or decline in patients' motor function between visits is extremely difficult to assess. This may hamper clinicians in making informed decisions about the course of therapy for Parkinson's patients and could negatively impact clinical care. In this study, we explored new approaches that aim to provide home-based PD assessment and monitoring. By extending disease assessment to the home, the healthcare burden on patients and their families can be reduced, and disease progress can be more closely monitored by physicians. To achieve these aims, two novel approaches were designed, developed and validated. The first is a questionnaire-based self-evaluation metric, which estimates PD severity from self-evaluation scores on pre-designed questions. Building on the results of the first approach, a smartphone-based approach was developed, which takes advantage of mobile computing technology and clinical decision support to evaluate motor performance during patients' daily activities and to provide longitudinal disease assessment and monitoring. Both approaches were validated on PD patients recruited at the movement disorder program of Barrow Neurological Clinic (BNC) at St. Joseph's Hospital and Medical Center. The validation tests showed favorable accuracy in detecting and assessing critical symptoms of PD, and point to a promising future for mobile-platform-based PD evaluation and monitoring tools in facilitating PD management.
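
The smartphone approach is described only at a high level above. As one plausible ingredient of such a system, the sketch below estimates the dominant tremor frequency from accelerometer samples with an FFT (PD rest tremor typically falls in the 4-6 Hz band); this is an illustrative assumption, not the system described in the thesis.

```python
import numpy as np

def dominant_tremor_freq(accel, fs=50.0, band=(3.0, 8.0)):
    """Return the dominant frequency (Hz) of an accelerometer trace.

    accel: 1D acceleration samples; fs: sampling rate in Hz.
    Only peaks inside `band` are considered (PD rest tremor ~4-6 Hz).
    """
    accel = np.asarray(accel) - np.mean(accel)   # remove gravity/DC offset
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]

# Hypothetical trace: a 5 Hz tremor plus noise, sampled at 50 Hz for 10 s.
t = np.arange(0, 10, 1 / 50.0)
trace = np.sin(2 * np.pi * 5.0 * t) + 0.3 * np.random.randn(t.size)
print(dominant_tremor_freq(trace))  # ~5.0
```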

Date Created: 2013

Characterization and analysis of a novel platform for profiling the antibody response

Description

Immunosignaturing is a new immunodiagnostic technology that uses random-sequence peptide microarrays to profile the humoral immune response. Though the peptides have little sequence homology to any known protein, binding of serum antibodies can be detected, and the pattern correlated to disease states. The aim of my dissertation is to analyze the factors affecting the binding patterns using monoclonal antibodies and to determine how much information can be extracted from the sequences. Specifically, I examined the effects of antibody concentration, competition, peptide density, and antibody valence. Peptide binding could be detected at the low concentrations relevant to immunosignaturing, and a monoclonal's signature could be detected even in the presence of a 100-fold excess of naive IgG. I also found that peptide density was important, but this effect was not due to bivalent binding. Next, I examined in more detail how a polyreactive antibody binds to random-sequence peptides compared to protein-sequence-derived peptides, and found that it bound to many peptides from both sets, but with low apparent affinity. An in-depth look at peptide physicochemical properties and sequence complexity revealed some correlations with binding, but they were generally small and varied greatly between antibodies. However, on a larger but less diverse peptide library, I found that sequence complexity was important for antibody binding. The redundancy of that library enabled the identification of specific sub-sequences recognized by an antibody. The current immunosignaturing platform has little repetition of sub-sequences, so I evaluated several methods to infer antibody epitopes. I found two methods with modest prediction accuracy, and I developed a software application called GuiTope to facilitate the epitope prediction analysis. None of the methods had sufficient accuracy to identify an unknown antigen from a database. In conclusion, the characteristics of the immunosignaturing platform observed through monoclonal antibody experiments demonstrate its promise as a new diagnostic technology. A major limitation, however, is the difficulty of connecting the signature back to the original antigen, though larger peptide libraries could facilitate these predictions.
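
The epitope-inference problem can be framed as scoring each window of a candidate antigen by its overlap with the antibody-bound peptides. The sliding-window, shared-k-mer scorer below is a simplified illustration of that idea under invented sequences, not the GuiTope algorithm itself.

```python
from collections import Counter

def kmers(seq, k=4):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def epitope_scores(antigen, bound_peptides, k=4, window=15):
    """Score each antigen window by k-mers shared with bound peptides.

    Returns (start_index, score) per window; high-scoring windows are
    candidate epitopes recognized by the antibody.
    """
    peptide_kmers = Counter()
    for pep in bound_peptides:
        peptide_kmers.update(set(kmers(pep, k)))
    scores = []
    for i in range(len(antigen) - window + 1):
        win = antigen[i:i + window]
        scores.append((i, sum(peptide_kmers[km] for km in set(kmers(win, k)))))
    return scores

antigen = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # hypothetical sequence
bound = ["QRQISFVK", "ISFVKSHF", "AYIAKQRQ"]  # array peptides that bound
print(max(epitope_scores(antigen, bound), key=lambda s: s[1]))
```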

Date Created: 2011

Integrative analyses of diverse biological data sources

Description

The technology expansion seen in the last decade for genomics research has permitted the generation of large-scale data sources pertaining to molecular biological assays, genomics, proteomics, transcriptomics and other modern omics catalogs. New methods to analyze, integrate and visualize these data types are essential to unveil relevant disease mechanisms. Towards these objectives, this research focuses on data integration within two scenarios: (1) transcriptomic, proteomic and functional information, and (2) real-time sensor-based measurements motivated by single-cell technology. To assess relationships between protein abundance and transcriptomic and functional data, a nonlinear model was explored at both static and temporal levels. The successful integration of these heterogeneous data sources through a stochastic gradient boosted tree approach, and its improved predictability, are highlights of this work. Through the development of an innovative validation subroutine based on a permutation approach and the use of external information (i.e., operons), the lack of a priori knowledge for undetected proteins was overcome. The integrative methodologies allowed the identification of undetected proteins in Desulfovibrio vulgaris and Shewanella oneidensis for further exploration in the laboratory towards finding functional relationships. In an effort to better understand diseases such as cancer at different developmental stages, the Microscale Life Science Center headquartered at Arizona State University is pursuing single-cell studies by developing novel technologies. This research assembled and applied a statistical framework that tackled the following challenges: random noise, heterogeneous dynamic systems with multiple states, and understanding cell behavior within and across different Barrett's esophageal epithelial cell lines using oxygen consumption curves. These curves were characterized with good empirical fit using nonlinear models with simple structures, which allowed the extraction of a large number of features. Application of a supervised classification model to these features, together with the integration of experimental factors, allowed the identification of subtle patterns among different cell types, visualized through multidimensional scaling. Motivated by the challenges of analyzing real-time measurements, we further explored a unique two-dimensional representation of multiple time series using a wavelet approach, which showed promising results towards less complex approximations. The benefits of external information were also explored to improve the image representation.
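
The stochastic gradient boosted tree integration can be illustrated with scikit-learn's GradientBoostingClassifier, here predicting whether a protein is detected from transcriptomic and functional features. The feature set and data below are invented stand-ins, and `subsample < 1` is what makes the boosting "stochastic".

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: one row per gene, with transcript level,
# codon-usage bias, and an operon-membership flag as predictors of
# whether the protein was detected by mass spectrometry.
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.normal(8, 2, 500),          # log2 mRNA abundance
    rng.uniform(0, 1, 500),         # codon adaptation index
    rng.integers(0, 2, 500),        # shares an operon with a detected protein
])
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(0, 2, 500) > 9).astype(int)

# subsample < 1.0 draws a random fraction of rows for each tree,
# i.e. *stochastic* gradient boosting.
model = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                   subsample=0.7, learning_rate=0.05)
print(cross_val_score(model, X, y, cv=5).mean())
```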

Date Created: 2011