Matching Items (82)
Description
Clinical Decision Support (CDS) is primarily associated with alerts, reminders, order entry, rule-based invocation, diagnostic aids, and on-demand information retrieval. While valuable, these foci have been in production use for decades and do not provide a broader, interoperable means of plugging structured clinical knowledge into live electronic health record (EHR) ecosystems for purposes of orchestrating the user experiences of patients and clinicians. To date, the gap between knowledge representation and user-facing EHR integration has been considered an “implementation concern” requiring unscalable manual human effort and governance coordination. Drafting a questionnaire engineered to conform to the HL7 CDS Knowledge Artifact specification, for example, carries no reasonable expectation that it may be imported and deployed into a live system without significant burden. Dramatic reduction of the time and effort gap in the research and application cycle could be revolutionary. Doing so, however, requires both a floor-to-ceiling precoordination of functional boundaries in the knowledge management lifecycle and formalization of the human processes by which this occurs.

This research introduces ARTAKA: Architecture for Real-Time Application of Knowledge Artifacts, as a concrete floor-to-ceiling technological blueprint enabling both provider health IT (HIT) and vendor organizations to incrementally and dynamically introduce value into existing systems. This is made possible by the service-ization of curated knowledge artifacts, which are then injected into a highly scalable backend infrastructure by automated orchestration through public marketplaces. Supplementary examples of client app integration are also provided. Compilation of knowledge into platform-specific form has been left flexible, insofar as implementations comply with ARTAKA’s Context Event Service (CES) communication and Health Services Platform (HSP) Marketplace service packaging standards.

Towards the goal of interoperable human processes, ARTAKA’s treatment of knowledge artifacts as a specialized form of software allows knowledge engineering to operate as a type of software engineering practice. Thus, nearly a century of software development processes, tools, policies, and lessons offers immediate benefit: in some cases, with remarkable parity. Analyses of experimentation are provided, with guidelines on how choice aspects of software development life cycles (SDLCs) apply to knowledge artifact development in an ARTAKA environment.

Portions of this culminating document have been further initiated with Standards Developing Organizations (SDOs) with the intent of ultimately producing normative standards, and active relationships with other bodies have been established.
Contributors: Lee, Preston Victor (Author) / Dinu, Valentin (Thesis advisor) / Sottara, Davide (Committee member) / Greenes, Robert (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Study of canine cancer’s molecular underpinnings holds great potential for informing veterinary and human oncology. Sporadic canine cancers are highly abundant (~4 million diagnoses/year in the United States), and the dog’s unique genomic architecture due to selective inbreeding, alongside the high similarity between dog and human genomes, confers power for improving understanding of cancer genes. However, characterization of canine cancer genome landscapes has been limited, hindered by a lack of canine-specific tools and resources. To enable robust and reproducible comparative genomic analysis of canine cancers, I have developed a workflow for somatic and germline variant calling in canine cancer genomic data. I first adapted a human cancer genomics pipeline to create a semi-automated canine pipeline used to map genomic landscapes of canine melanoma, lung adenocarcinoma, osteosarcoma, and lymphoma. This pipeline also forms the backbone of my novel comparative genomics workflow.

Practical impediments to comparative genomic analysis of dog and human include challenges in identifying similarities in mutation type and function across species. For example, canine genes could have evolved different functions, and their human orthologs may perform different functions. Hence, I undertook a systematic statistical evaluation of dog and human cancer genes and assessed functional similarities and differences between orthologs to improve understanding of the roles of these genes in cancer across species. I tested this pipeline on canine and human Diffuse Large B-Cell Lymphoma (DLBCL), given that canine DLBCL is the most comprehensively genomically characterized canine cancer. Logistic regression with genes bearing somatic coding mutations in each cancer was used to determine if conservation metrics (sequence identity, network placement, etc.) could explain co-mutation of genes in both species. Using this model, I identified 25 co-mutated and evolutionarily similar genes that may be compelling cross-species cancer genes. For example, PCLO was identified as a co-mutated conserved gene, PCLO having been previously identified as recurrently mutated in human DLBCL but with an unclear role in oncogenesis. Further investigation of these genes might shed new light on the biology of lymphoma in dogs and humans, and this approach may more broadly serve to prioritize new genes for comparative cancer biology studies.
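The co-mutation model described above can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the dissertation's code: the data are simulated, and the feature names and effect sizes are hypothetical; only the overall shape (logistic regression of co-mutation status on conservation metrics) follows the text.

```python
# Hedged sketch: regress co-mutation status on conservation metrics.
# All data here are simulated; feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_genes = 500

# Hypothetical per-gene conservation metrics: ortholog sequence identity
# and a network-placement score.
seq_identity = rng.uniform(0.5, 1.0, n_genes)
network_score = rng.uniform(0.0, 1.0, n_genes)
X = np.column_stack([seq_identity, network_score])

# Simulated label: 1 if the gene bears somatic coding mutations in both the
# dog and human cohorts (generated so that identity is predictive).
logits = 6 * (seq_identity - 0.75)
co_mutated = (rng.random(n_genes) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, co_mutated)
coef = dict(zip(["seq_identity", "network_score"], model.coef_[0]))
print(coef)  # a positive seq_identity coefficient: conservation predicts co-mutation
```

In an analysis of this shape, genes with both observed co-mutation and high conservation-driven predicted probability would then be ranked to produce a shortlist of cross-species candidates.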
Contributors: Sivaprakasam, Karthigayini (Author) / Dinu, Valentin (Thesis advisor) / Trent, Jeffrey (Thesis advisor) / Hendricks, William (Committee member) / Runger, George C. (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Breast cancer is the most common cancer and currently the second leading cause of death among women in the United States. Patients’ five-year relative survival rate decreases from 99% to 25% when breast cancer is diagnosed late. Immune checkpoint blockade has been shown to be a promising therapy for improving patient outcomes in many other cancers. However, due to the lack of early diagnosis, the treatment is normally given in the later stages. An early diagnosis system for breast cancer could potentially revolutionize current treatment strategies, improve patients’ outcomes, and even eradicate the disease. Current breast cancer diagnostic methods cannot meet this demand; a simple, effective, noninvasive, and inexpensive early diagnostic technology is needed. Immunosignature technology leverages the power of the immune system to find cancer early. Antibodies targeting tumor antigens in the blood are probed on a high-throughput random peptide array and generate a specific binding pattern called the immunosignature.

In this dissertation, I propose a scenario for using immunosignature technology to detect breast cancer early and to implement an early treatment strategy using a PD-L1 immune checkpoint inhibitor. I develop a methodology to describe the early diagnosis and treatment of breast cancer in the FVB/N neuN breast cancer mouse model. By comparing FVB/N neuN transgenic mice and age-matched wild-type controls, I have found and validated specific immunosignatures at multiple time points before tumors are palpable. Immunosignatures change along with tumor development: using a late-stage immunosignature to predict early samples, or vice versa, cannot achieve high prediction performance. By using the immunosignature of early breast cancer, I show that at the time of diagnosis, early treatment with the checkpoint blockade anti-PD-L1 inhibits tumor growth in the FVB/N neuN transgenic mouse model. mRNA analysis of the PD-L1 level in mouse mammary glands suggests that earlier treatment is more effective.

Novel discoveries are changing our understanding of breast cancer and improving clinical treatment strategies. Researchers and healthcare professionals are actively working in the fields of early diagnosis and early treatment. This dissertation provides a step along the road toward better diagnosis and treatment of breast cancer.
Contributors: Duan, Hu (Author) / Johnston, Stephen Albert (Thesis advisor) / Hartwell, Leland Harrison (Committee member) / Dinu, Valentin (Committee member) / Chang, Yung (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
No two cancers are alike. Cancer is a dynamic and heterogeneous disease; such heterogeneity arises among patients with the same cancer type, among cancer cells within the same individual’s tumor, and even among cells within the same sub-clone over time. The recent application of next-generation sequencing and precision medicine techniques is the driving force behind uncovering the complexity of cancer and informing best clinical practice. The core concept of precision medicine is to move away from crowd-based, best-for-most treatment and take individual variability into account when optimizing prevention and treatment strategies. Next-generation sequencing is the method to sift through the entire 3 billion letters of each patient’s DNA genetic code in a massively parallel fashion.

The deluge of next-generation sequencing data has shifted the bottleneck of cancer research from multiple “-omics” data collection to integrative analysis and data interpretation. In this dissertation, I attempt to address two distinct, but dependent, challenges. The first is to design specific computational algorithms and tools that can process and extract useful information from raw data in an efficient, robust, and reproducible manner. The second is to develop high-level computational methods and data frameworks for integrating and interpreting these data. Specifically, Chapter 2 presents a tool called Snipea (SNV Integration, Prioritization, Ensemble, and Annotation) to further identify, prioritize, and annotate somatic SNVs (Single Nucleotide Variants) called from multiple variant callers. Chapter 3 describes a novel alignment-based algorithm to accurately and losslessly classify sequencing reads from xenograft models. Chapter 4 describes a direct and biologically motivated framework and associated methods for identifying putative aberrations causing survival differences in GBM patients by integrating whole-genome sequencing, exome sequencing, RNA-Sequencing, methylation array, and clinical data. Lastly, Chapter 5 explores longitudinal and intratumor heterogeneity studies to reveal the temporal and spatial context of tumor evolution. The long-term goal is to help patients with cancer, particularly those who are in front of us today: genome-based analysis of a patient’s tumor can identify genomic alterations unique to that tumor that are candidate therapeutic targets to decrease therapy resistance and improve clinical outcomes.
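The ensemble idea behind a tool like Snipea can be illustrated with a minimal sketch. The caller names and variants below are hypothetical, and the real tool performs prioritization and annotation well beyond this simple consensus count:

```python
# Minimal sketch of ensemble SNV prioritization (hypothetical data): variants
# reported by more independent callers are ranked above those seen by one.
from collections import Counter

# Each caller reports somatic SNVs as (chromosome, position, ref, alt) tuples.
calls = {
    "caller_a": {("chr1", 100, "A", "T"), ("chr2", 200, "G", "C")},
    "caller_b": {("chr1", 100, "A", "T"), ("chr3", 300, "C", "G")},
    "caller_c": {("chr1", 100, "A", "T"), ("chr2", 200, "G", "C")},
}

# Count how many callers support each variant.
support = Counter(v for caller_calls in calls.values() for v in caller_calls)

# Rank variants by the number of supporting callers (descending).
prioritized = sorted(support, key=lambda v: -support[v])
print(prioritized[0], support[prioritized[0]])  # ('chr1', 100, 'A', 'T') 3
```

A production tool would additionally layer annotation (gene context, functional impact) onto the ranked list before reporting.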
Contributors: Peng, Sen (Author) / Dinu, Valentin (Thesis advisor) / Scotch, Matthew (Committee member) / Wallstrom, Garrick (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The processes of a human somatic cell are very complex, with various genetic mechanisms governing its fate. Such cells undergo various genetic mutations, which translate to the genetic aberrations that we see in cancer. There are more than 100 types of cancer, each having many more subtypes, with aberrations unique to each. In the past two decades, the widespread application of high-throughput genomic technologies, such as micro-arrays and next-generation sequencing, has led to the revelation of many such aberrations. Known types and subtypes can be readily identified using gene-expression profiling and, more importantly, high-throughput genomic datasets have helped identify novel subtypes with distinct signatures. Recent studies showing the use of gene-expression profiling in clinical decision making for breast cancer patients underscore the utility of high-throughput datasets. Beyond prognosis, understanding the underlying cellular processes is essential for effective cancer treatment. Various high-throughput techniques are now available to look at a particular aspect of a genetic mechanism in cancer tissue. To look at these mechanisms individually is akin to looking at a broken watch: taking apart each of its parts, looking at them individually, and finally making a list of all the faulty ones. Integrative approaches are needed to transform one-dimensional cancer signatures into multi-dimensional interaction and regulatory networks, consequently bettering our understanding of cellular processes in cancer. Here, I attempt to (i) address ways to effectively identify high-quality variants when multiple assays on the same sample are available, through two novel tools, snpSniffer and NGSPE; and (ii) glean new biological insight into multiple myeloma through two novel integrative analysis approaches making use of disparate high-throughput datasets. While these methods focus on multiple myeloma datasets, the informatics approaches are applicable to all cancer datasets and will thus help advance cancer genomics.
Contributors: Yellapantula, Venkata (Author) / Dinu, Valentin (Thesis advisor) / Scotch, Matthew (Committee member) / Wallstrom, Garrick (Committee member) / Keats, Jonathan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Genomic structural variation (SV) is defined as gross alterations in the genome, broadly classified as insertions/duplications, deletions, inversions, and translocations. DNA sequencing ushered structural variant discovery beyond laboratory detection techniques to high-resolution informatics approaches. Bioinformatics tools for computational discovery of SVs, however, are still missing variants in the complex cancer genome. This study aimed to define the genomic context leading to tool failure and to design a novel algorithm addressing this context. Methods: The study tested the widely held but unproven hypothesis that tools fail to detect variants which lie in repeat regions. The publicly available 1000 Genomes dataset with experimentally validated variants was tested with the SVDetect tool for presence of true positive (TP) SVs versus false negative (FN) SVs, expecting that FNs would be overrepresented in repeat regions. Further, the novel algorithm, designed to informatically capture the biological etiology of translocations (non-allelic homologous recombination and 3-D placement of chromosomes in cells as context), was tested using a simulated dataset. Translocations were created in known translocation hotspots, and the novel-algorithm tool was compared with SVDetect and BreakDancer. Results: 53% of false negative (FN) deletions were within repeat structure compared to 81% of true positive (TP) deletions. Similarly, 33% of FN insertions versus 42% of TP, 26% of FN duplications versus 57% of TP, and 54% of FN novel sequences versus 62% of TP were within repeats. Repeat structure was not driving the tools' inability to detect variants and could not be used as context. The novel algorithm with a redefined context, when tested against SVDetect and BreakDancer, was able to detect 10/10 simulated translocations with a 30X coverage dataset and 100% allele frequency, while SVDetect captured 4/10 and BreakDancer detected 6/10. For the 15X coverage dataset with 100% allele frequency, the novel algorithm was able to detect all ten translocations, albeit with fewer supporting reads; BreakDancer detected 4/10 and SVDetect detected 2/10. Conclusion: This study showed that the presence of repetitive elements within a structural variant did not, in general, influence a tool's ability to capture it. The context-based algorithm proved better than current tools even with half the genome coverage of the accepted protocol, and it provides an important first step for novel translocation discovery in the cancer genome.
Contributors: Shetty, Sheetal (Author) / Dinu, Valentin (Thesis advisor) / Bussey, Kimberly (Committee member) / Scotch, Matthew (Committee member) / Wallstrom, Garrick (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This dissertation consists of three substantive chapters. The first substantive chapter investigates the premature harvesting problem in fisheries. Traditionally, yield-per-recruit analysis has been used to both assess and address the premature harvesting of fish stocks. However, the fact that fish size often affects the unit price suggests that this approach may be inadequate. In this chapter, I first synthesize the conventional yield-per-recruit analysis, and then extend this conventional approach by incorporating a size-price function for a revenue-per-recruit analysis. An optimal control approach is then used to derive a general bioeconomic solution for the optimal harvesting of a short-lived single cohort. This approach prevents economically premature harvesting and provides an "optimal economic yield". By comparing the yield- and revenue-per-recruit management strategies with the bioeconomic management strategy, I am able to test the economic efficiency of the conventional yield-per-recruit approach. This is illustrated with a numerical study. It shows that a bioeconomic strategy can significantly improve economic welfare compared with the yield-per-recruit strategy, particularly in the face of high natural mortality. Nevertheless, I find that harvesting on a revenue-per-recruit basis improves management policy and can generate a rent that is close to that from bioeconomic analysis, in particular when the natural mortality is relatively low.
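The contrast between the two per-recruit criteria can be illustrated numerically. The parameter values and size-price function below are assumptions chosen for demonstration, not figures from the chapter; the point they show is structural: whenever unit price rises with fish size, the revenue-per-recruit optimum delays harvest relative to the classical yield-per-recruit optimum.

```python
# Illustrative single-cohort harvest timing under assumed parameters.
import math

M = 0.3               # instantaneous natural mortality rate (assumed)
k, W_inf = 0.5, 2.0   # von Bertalanffy growth parameters, weight in kg (assumed)

def weight(t):
    """Individual weight at age t (von Bertalanffy weight curve)."""
    return W_inf * (1 - math.exp(-k * t)) ** 3

def price(w):
    """Hypothetical size-price function: unit price increases with weight."""
    return 1.0 + w

# Grid search over harvest ages for each criterion.
ts = [i / 100 for i in range(1, 1501)]
t_yield = max(ts, key=lambda t: math.exp(-M * t) * weight(t))
t_rev = max(ts, key=lambda t: math.exp(-M * t) * weight(t) * price(weight(t)))

print(f"yield-per-recruit optimum:   t = {t_yield:.2f}")
print(f"revenue-per-recruit optimum: t = {t_rev:.2f}")  # later than t_yield
```

Because the revenue objective equals the yield objective times an increasing function of size, its maximizer can never come earlier than the yield maximizer, which is why a pure yield-per-recruit rule harvests economically prematurely here.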

The second substantive chapter explores the conservation potential of a whale permit market under bounded economic uncertainty. Pro- and anti-whaling stakeholders are concerned about a recently proposed "cap and trade" system for managing the global harvest of whales. Supporters argue that such an approach represents a novel solution to the current gridlock in international whale management. In addition to ethical objections, opponents worry that uncertainty about demand for whale-based products and the environmental benefits of conservation may make it difficult to predict the outcome of a whale share market. In this study, I use population and economic data for minke whales to examine the potential ecological consequences of the establishment of a whale permit market in Norway under bounded but significant economic uncertainty. A bioeconomic model is developed to evaluate the influence of economic uncertainties associated with pro- and anti-whaling demands on long-run steady state whale population size, harvest, and potential allocation. The results indicate that these economic uncertainties, in particular on the conservation demand side, play an important role in determining the steady state ecological outcome of a whale share market. A key finding is that while a whale share market has the potential to yield a wide range of allocations between conservation and whaling interests, outcomes in which conservationists effectively "buy out" the whaling industry seem most likely.

The third substantive chapter examines the sea lice externality between farmed fisheries and wild fisheries. A central issue in the debate over the effect of fish farming on wild fisheries is the nature of sea lice population dynamics and the wild juvenile mortality rate induced by sea lice infection. This study develops a bioeconomic model that integrates sea lice population dynamics, fish population dynamics, and aquaculture and wild-capture salmon fisheries in an optimal control framework. It provides a tool to investigate sea lice control policy from the standpoints of both private aquaculture producers and wild fishery managers by considering the sea lice infection externality between farmed and wild fisheries. Numerical results suggest that the state trajectory paths may be quite different under different management regimes but approach the same steady state. Although the difference in economic benefits is not significant in the particular case considered, due to the low value of the wild fishery, I investigate the possibility of levying a tax on aquaculture production to correct the sea lice externality generated by fish farms.
Contributors: Huang, Biao (Author) / Abbott, Joshua K (Thesis advisor) / Perrings, Charles (Thesis advisor) / Gerber, Leah R. (Committee member) / Muneepeerakul, Rachata (Committee member) / Schoon, Michael (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Skeletal muscle (SM) mitochondria generate the majority of adenosine triphosphate (ATP) in SM, and help regulate whole-body energy expenditure. Obesity is associated with alterations in SM mitochondria, which are unique with respect to their arrangement within cells; some mitochondria are located directly beneath the sarcolemma (i.e., subsarcolemmal (SS) mitochondria), while others are nested between the myofibrils (i.e., intermyofibrillar (IMF) mitochondria). Functional and proteome differences specific to SS versus IMF mitochondria in obese individuals may contribute to the reduced capacity for muscle ATP production seen in obesity. The overall goals of this work were to (1) isolate functional muscle SS and IMF mitochondria from lean and obese individuals, (2) assess enzyme activities associated with the electron transport chain and ATP production, (3) determine if elevated plasma amino acids enhance SS and IMF mitochondrial respiration and ATP production rates in SM of obese humans, and (4) determine differences in the mitochondrial proteome regulating energy metabolism and key biological processes associated with SS and IMF mitochondria between lean and obese humans.

Polarography was used to determine functional differences in isolated SS and IMF mitochondria between lean (37 ± 3 yrs; n = 10) and obese (35 ± 3 yrs; n = 11) subjects during either saline (control) or amino acid (AA) infusions. AA infusion increased ADP-stimulated respiration (i.e., coupled respiration), non-ADP stimulated respiration (i.e., uncoupled respiration), and ATP production rates in SS, but not IMF mitochondria in lean (n = 10; P < 0.05). Neither infusion increased any of the above parameters in muscle SS or IMF mitochondria of the obese subjects.

Using label-free quantitative mass spectrometry, we determined differences in the proteomes of SM SS and IMF mitochondria between lean (33 ± 3 yrs; n = 16) and obese (32 ± 3 yrs; n = 17) subjects. Differentially-expressed mitochondrial proteins in SS versus IMF mitochondria of obese subjects were associated with biological processes that regulate the electron transport chain (P<0.0001), citric acid cycle (P<0.0001), oxidative phosphorylation (P<0.001), branched-chain amino acid degradation (P<0.0001), and fatty acid degradation (P<0.001). Overall, these findings show that obesity is associated with a redistribution of key biological processes within the mitochondrial reticulum responsible for regulating energy metabolism in human skeletal muscle.
Contributors: Kras, Katon Anthony (Author) / Katsanos, Christos (Thesis advisor) / Chandler, Douglas (Committee member) / Dinu, Valentin (Committee member) / Mor, Tsafrir S. (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Random forest (RF) is a popular and powerful technique. It can be used for classification, regression, and unsupervised clustering. In its original form, introduced by Leo Breiman, RF is used as a predictive model to generate predictions for new observations. Recent research has proposed several methods based on RF for feature selection and for generating prediction intervals; however, these are limited in their applicability and accuracy. In this dissertation, RF is applied to build a predictive model for a complex dataset and is used as the basis for two novel methods: one for biomarker discovery and one for generating prediction intervals.

First, a biodosimetry model is developed using RF to determine absorbed radiation dose from gene expression measured in blood samples of potentially exposed individuals. To improve the prediction accuracy of the biodosimetry, day-specific models were built to deal with the day interaction effect, and a technique of nested modeling was proposed. The nested models can fit this complex data with large variability and non-linear relationships.

Second, a panel of biomarkers was selected using a data-driven feature selection method as well as hand-picking, considering prior knowledge and other constraints. To incorporate domain knowledge, a method called Know-GRRF was developed based on guided regularized RF. This method incorporates domain knowledge as a penalty term to regulate the selection of candidate features in RF. It adds more flexibility to data-driven feature selection and can improve the interpretability of models. Know-GRRF showed significant improvement in cross-species prediction when cross-species correlation was used to guide the selection of biomarkers. The method can also compete with existing methods by using intrinsic data characteristics as an alternative to domain knowledge in simulated datasets.
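The guided-regularization idea can be sketched as follows. The mixing formula, scores, and gene names below are assumptions for illustration, not the published Know-GRRF definition: the sketch only conveys the general mechanism of blending a base penalty with a domain-knowledge score so that well-supported features pay a smaller price for entering the forest.

```python
# Illustrative per-feature penalty weighting in the spirit of guided
# regularized RF (hypothetical formula and scores, not the published method).
def penalty_coefficients(domain_scores, gamma=0.5, base=1.0):
    """Return a multiplier in (0, 1] per feature; a higher domain score
    gives a coefficient closer to 1, i.e., a weaker selection penalty."""
    max_s = max(domain_scores.values())
    return {
        f: (1 - gamma) * base + gamma * (s / max_s)
        for f, s in domain_scores.items()
    }

# Hypothetical cross-species correlation used as the domain-knowledge score.
scores = {"geneA": 0.9, "geneB": 0.2, "geneC": 0.6}
coef = penalty_coefficients(scores, gamma=0.8)

# In a regularized forest, a split on feature f would be accepted only if
# its information gain, scaled by coef[f], beats the best gain among
# features already used elsewhere in the forest.
print(coef)
```

Raising `gamma` shifts the balance from purely data-driven regularization toward the domain-knowledge scores, which is the flexibility the text describes.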

Lastly, a novel non-parametric method, RFerr, was developed to generate prediction intervals using RF regression. This method is widely applicable to any predictive model and was shown to have better coverage and precision than existing methods on a real-world radiation dataset, as well as on benchmark and simulated datasets.
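One generic way to wrap a prediction interval around an RF point prediction is a residual-quantile construction, sketched below on simulated data. This is a stand-in illustration of the general idea, not the RFerr algorithm itself, whose details differ.

```python
# Generic residual-quantile prediction interval around an RF regression
# (simulated data; offered as an illustration, not the RFerr method).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(600, 2))
y = X[:, 0] ** 2 + rng.normal(0, 0.5, size=600)

# Split: fit the forest on one half, learn the error distribution on the other.
X_fit, y_fit = X[:300], y[:300]
X_cal, y_cal = X[300:], y[300:]

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_fit, y_fit)
residuals = np.abs(y_cal - rf.predict(X_cal))
q = np.quantile(residuals, 0.9)  # 90% absolute-error quantile

# Interval for a new observation: point prediction plus/minus the quantile.
x_new = np.array([[1.0, 0.0]])
center = rf.predict(x_new)[0]
print(f"90% interval: [{center - q:.2f}, {center + q:.2f}]")
```

By construction, about 90% of calibration-set predictions fall within this band; a method like RFerr aims to sharpen such intervals while keeping coverage honest.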
Contributors: Guan, Xin (Author) / Liu, Li (Thesis advisor) / Runger, George C. (Thesis advisor) / Dinu, Valentin (Committee member) / Arizona State University (Publisher)
Created: 2017
Description
Obesity and its underlying insulin resistance are caused by environmental and genetic factors. DNA methylation provides a mechanism by which environmental factors can regulate transcriptional activity. The overall goal of the work herein was to (1) identify alterations in DNA methylation in human skeletal muscle with obesity and its underlying insulin resistance, (2) determine if these changes in methylation can be altered through weight loss induced by bariatric surgery, and (3) identify DNA methylation biomarkers in whole blood that can be used as a surrogate for skeletal muscle.

Assessment of DNA methylation was performed on human skeletal muscle and blood using reduced representation bisulfite sequencing (RRBS) for high-throughput identification and pyrosequencing for site-specific confirmation. Sorbin and SH3 homology domain 3 (SORBS3) was identified in skeletal muscle as increased in methylation (+5.0 to +24.4%) in the promoter and 5' untranslated region (UTR) in the obese participants (n=10) compared to lean (n=12), and this finding corresponded with a decrease in gene expression (fold change: -1.9, P=0.0001). Furthermore, SORBS3 was demonstrated in a separate cohort of morbidly obese participants (n=7) undergoing surgery-induced weight loss to decrease in methylation (-5.6 to -24.2%) and increase in gene expression (fold change: +1.7; P=0.05) post-surgery. Moreover, SORBS3 promoter methylation was demonstrated in vitro to inhibit transcriptional activity (P=0.000003). The methylation and transcriptional changes for SORBS3 were significantly (P≤0.05) correlated with obesity measures and fasting insulin levels. SORBS3 was not identified in the blood methylation analysis of lean (n=10) and obese (n=10) participants, suggesting that it is a muscle-specific marker. However, solute carrier family 19 member 1 (SLC19A1) was identified in both blood and skeletal muscle as having decreased 5' UTR methylation in obese participants, and this was significantly (P≤0.05) predicted by insulin sensitivity.

These findings suggest SLC19A1 as a potential blood-based biomarker for obese, insulin-resistant states. The collective findings of SORBS3 DNA methylation and gene expression present an exciting novel target in skeletal muscle for further understanding obesity and its underlying insulin resistance. Moreover, the dynamic changes to SORBS3 in response to metabolic improvements and weight loss induced by surgery further underscore its relevance to these processes.
Contributors: Day, Samantha Elaine (Author) / Coletta, Dawn K. (Thesis advisor) / Katsanos, Christos (Committee member) / Mandarino, Lawrence J. (Committee member) / Shaibi, Gabriel Q. (Committee member) / Dinu, Valentin (Committee member) / Arizona State University (Publisher)
Created: 2017