This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Displaying 1 - 10 of 27

Description

Background: Most excess deaths that occur during extreme hot weather events do not have natural heat recorded as an underlying or contributing cause. This study aims to identify the specific individuals who died because of hot weather using only secondary data. A novel approach was developed in which the expected number of deaths was repeatedly sampled from all deaths that occurred during a hot weather event, and compared with deaths during a control period. The deaths were compared with respect to five factors known to be associated with hot weather mortality. Individuals were ranked by their presence in significant models over 100 trials of 10,000 repetitions. Those with the highest rankings were identified as probable excess deaths. Sensitivity analyses were performed on a range of model combinations. These methods were applied to a 2009 hot weather event in greater Vancouver, Canada.
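
The repeated-sampling comparison described above can be illustrated with a small permutation-style sketch. This is not the authors' procedure — it ignores the five risk factors, the model selection, and the 100-trial ranking — but it shows the core idea of comparing a risk factor's prevalence among event-period deaths against a control period by repeated resampling. The function name and demo data are hypothetical.

```python
import random

def factor_excess_score(event_flags, control_flags, repetitions=10000, seed=1):
    """Permutation-style check of whether a binary risk factor is
    over-represented among deaths during a hot weather event relative
    to a control period.  Returns the observed difference in prevalence
    and a one-sided p-value estimated by repeated resampling."""
    rng = random.Random(seed)
    n_event, n_control = len(event_flags), len(control_flags)
    observed = sum(event_flags) / n_event - sum(control_flags) / n_control
    pooled = list(event_flags) + list(control_flags)
    hits = 0
    for _ in range(repetitions):
        rng.shuffle(pooled)  # re-assign deaths to 'event' vs 'control' at random
        diff = sum(pooled[:n_event]) / n_event - sum(pooled[n_event:]) / n_control
        if diff >= observed:
            hits += 1
    return observed, hits / repetitions

# Demo: a factor present in 15 of 20 event deaths but only 5 of 20 control deaths.
obs_diff, p_value = factor_excess_score([1] * 15 + [0] * 5, [1] * 5 + [0] * 15)
```

A small p-value here would flag the factor as associated with event-period mortality; the study's ranking step then asks which individuals carry the flagged factors across many such comparisons.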

Results: The excess deaths identified were sensitive to differences in model combinations, particularly between univariate and multivariate approaches. One multivariate and one univariate combination were chosen as the best models for further analyses. The individuals identified by multiple combinations suggest that marginalized populations in greater Vancouver are at higher risk of death during hot weather.

Conclusions: This study proposes novel methods for classifying specific deaths as expected or excess during a hot weather event. Further work is needed to evaluate the performance of these methods in simulation studies and against clinically identified cases. If confirmed, these methods could be applied to a wide range of populations and events of interest.

Created: 2016-11-15
Description

Background: Immunosignaturing is a new peptide microarray-based technology for profiling humoral immune responses. Despite new challenges, immunosignaturing gives us the opportunity to explore new and fundamentally different research questions. In addition to classifying samples based on disease status, the complex patterns and latent factors underlying immunosignatures, which we attempt to model, may have a diverse range of applications.

Methods: We investigate the utility of a number of statistical methods to determine model performance and to address challenges inherent in analyzing immunosignatures. These methods include exploratory and confirmatory factor analyses, classical significance testing, structural equation modeling, and mixture modeling.
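
As a rough illustration of the latent-factor side of these analyses, the sketch below extracts leading factors from a samples-by-peptides intensity matrix via SVD. This is a PCA-style stand-in, not the confirmatory factor analysis or structural equation models used in the study; the function name and simulated data are hypothetical.

```python
import numpy as np

def leading_latent_factors(intensity_matrix, n_factors=2):
    """Extract leading latent factors from a (samples x peptides) intensity
    matrix via SVD of the column-centred data -- a PCA-style stand-in for
    the factor-analytic models described above."""
    X = np.asarray(intensity_matrix, dtype=float)
    Xc = X - X.mean(axis=0)                      # centre each peptide column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_factors] * s[:n_factors]    # per-sample factor scores
    loadings = Vt[:n_factors].T                  # per-peptide loadings
    explained = (s[:n_factors] ** 2) / (s ** 2).sum()
    return scores, loadings, explained

# Demo on simulated data with one strong latent factor plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 1)) @ rng.normal(size=(1, 50)) + 0.01 * rng.normal(size=(20, 50))
scores, loadings, explained = leading_latent_factors(X, n_factors=2)
```

Factor scores of this kind are what could serve as candidate biomarkers downstream, though real immunosignature analyses would add rotation, model fit testing, and confirmatory steps.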

Results: We demonstrate an ability to classify samples based on disease status and show that immunosignaturing is a very promising technology for disease screening, including presymptomatic screening. In addition, we are able to model complex patterns and latent factors underlying immunosignatures. These latent factors may serve as biomarkers for disease and may play a key role in a bioinformatic method for antibody discovery.

Conclusion: Based on this research, we lay out an analytic framework illustrating how immunosignatures may be useful as a general method for disease screening, including presymptomatic screening, as well as for antibody discovery.

Contributors: Brown, Justin (Author) / Stafford, Phillip (Author) / Johnston, Stephen (Author) / Dinu, Valentin (Author) / College of Health Solutions (Contributor)
Created: 2011-08-19
Description

Background: Microarray image analysis processes scanned digital images of hybridized arrays to produce the input spot-level data for downstream analysis, so it can have a potentially large impact on all subsequent analyses. Signal saturation is an optical effect that occurs when some pixel values for highly expressed genes or peptides exceed the upper detection threshold of the scanner software (2^16 − 1 = 65,535 for 16-bit images). In practice, spots with a sizable number of saturated pixels are often flagged and discarded. Alternatively, the saturated values are used without adjustment when estimating spot intensities. The resulting expression data tend to be biased downwards and can distort high-level analyses that rely on these data. Hence, it is crucial to correct effectively for signal saturation.

Results: We developed a flexible mixture model-based segmentation and spot intensity estimation procedure that accounts for saturated pixels by incorporating a censored component in the mixture model. As demonstrated with biological data and simulation, our method extends the dynamic range of expression data beyond the saturation threshold and is effective in correcting saturation-induced bias when the lost information is not tremendous. We further illustrate the impact of image processing on downstream classification, showing that the proposed method can increase diagnostic accuracy using data from a lymphoma cancer diagnosis study.
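
The role of a censored component can be sketched with a toy one-component example: pixels at the 16-bit cap are treated as right-censored, and the spot mean is recovered by maximizing a censored-normal log-likelihood. This is a drastic simplification of the paper's method (no segmentation, a single component, a coarse grid search rather than EM within a mixture); all names and simulated values are hypothetical.

```python
import math
import random

SATURATION = 2 ** 16 - 1  # 65,535: the cap for 16-bit scanner images

def censored_normal_mean(pixels, cap=SATURATION):
    """Estimate a spot's mean intensity when pixel values at the cap are
    right-censored, by maximising a censored-normal log-likelihood over a
    coarse (mu, sigma) grid.  Assumes at least some pixels are unsaturated."""
    obs = [p for p in pixels if p < cap]
    n_cens = len(pixels) - len(obs)

    def loglik(mu, sigma):
        ll = sum(-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) for x in obs)
        if n_cens:  # censored pixels contribute the normal upper-tail probability
            surv = 0.5 * math.erfc((cap - mu) / (sigma * math.sqrt(2)))
            ll += n_cens * math.log(max(surv, 1e-300))
        return ll

    m0 = int(sum(obs) / len(obs))  # start the mu grid at the naive observed mean
    mu_hat, _ = max(((mu, s) for mu in range(m0, int(cap * 1.5), 250)
                     for s in range(500, 20000, 500)),
                    key=lambda p: loglik(*p))
    return mu_hat

# Demo: true mean above the cap, so naive averaging is biased low.
sim = random.Random(0)
pixels = [min(sim.gauss(66000, 4000), SATURATION) for _ in range(400)]
naive_mean = sum(pixels) / len(pixels)
corrected_mean = censored_normal_mean(pixels)
```

The corrected estimate can exceed the saturation threshold, which is the sense in which such a model "extends the dynamic range" of the expression data.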

Conclusions: The presented method adjusts for signal saturation at the segmentation stage, which identifies each pixel as part of the foreground, the background, or other. A pixel's cluster membership can therefore differ from what it would be if the saturated values were treated as truly observed. Thus, the resulting spot intensity estimates may be more accurate than those obtained from existing methods that correct for saturation only after segmentation. As a model-based segmentation method, our procedure is able to identify inner holes, fuzzy edges and blank spots that are common in microarray images. The approach is independent of microarray platform and applicable to both single- and dual-channel microarrays.

Contributors: Yang, Yan (Author) / Stafford, Phillip (Author) / Kim, YoonJoo (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2011-11-30
Description

Background: High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and other conditions. Typically one trains a classification system by gathering large amounts of probe-level data and selecting informative features, and then classifies test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic is the assumption of independence, both at the probe level and again at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways, where co-regulation of transcriptional units may make many genes appear completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable for technologies with different binding characteristics. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random-sequence peptides. It relies on many-to-many binding of antibodies to the random-sequence peptides: each peptide can bind multiple antibodies, and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear which classification algorithm is optimal for analyzing this new type of data.

Results: We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy.
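
A minimal from-scratch Gaussian Naïve Bayes — the family of classifier the comparison favored — can be sketched as follows. This is a generic textbook implementation, not the study's code, and the toy two-peptide samples are hypothetical.

```python
import math
from collections import defaultdict

class GaussianNaiveBayes:
    """Minimal Gaussian Naive Bayes: per-peptide intensities are treated
    as conditionally independent normals within each class."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for xi, yi in zip(X, y):
            groups[yi].append(xi)
        self.priors = {c: len(rows) / len(y) for c, rows in groups.items()}
        self.stats = {}
        for c, rows in groups.items():
            params = []
            for col in zip(*rows):  # per-feature mean and (floored) variance
                mu = sum(col) / len(col)
                var = max(sum((v - mu) ** 2 for v in col) / len(col), 1e-9)
                params.append((mu, var))
            self.stats[c] = params
        return self

    def predict(self, X):
        def log_posterior(c, xi):
            return math.log(self.priors[c]) + sum(
                -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
                for v, (mu, var) in zip(xi, self.stats[c]))
        return [max(self.stats, key=lambda c: log_posterior(c, xi)) for xi in X]

# Demo: two well-separated 'immunosignature' classes with two peptide features.
model = GaussianNaiveBayes().fit(
    [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]],
    ["control", "control", "control", "disease", "disease", "disease"])
predictions = model.predict([[0.05, 0.1], [5.0, 5.0]])
```

The simplicity is the point made in the conclusion: the model has only a mean, a variance, and a prior per class and feature, which keeps it fast and robust on wide peptide arrays.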

Conclusions: The ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties.

Contributors: Kukreja, Muskan (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created: 2012-06-21
Description

Introduction: The ketogenic diet (KD) is a high-fat, low-carbohydrate diet that alters metabolism by increasing the level of ketone bodies in the blood. KetoCal® (KC) is a nutritionally complete, commercially available 4:1 (fat : carbohydrate + protein) ketogenic formula that is an effective non-pharmacologic treatment for the management of refractory pediatric epilepsy. Diet-induced ketosis causes changes to brain homeostasis that have potential for the treatment of other neurological diseases such as malignant gliomas.

Methods: We used an intracranial bioluminescent mouse model of malignant glioma. Following implantation, animals were maintained on a standard diet (SD) or KC. The mice received 2×4 Gy of whole-brain radiation, and tumor growth was followed by in vivo imaging.

Results: Animals fed KC had elevated levels of β-hydroxybutyrate (p = 0.0173) and an increased median survival of approximately 5 days relative to animals maintained on SD. The effects of KC plus radiation treatment were more than additive, and in 9 of 11 irradiated animals maintained on KC the bioluminescent signal from the tumor cells diminished below the level of detection (p<0.0001). Animals were switched to SD 101 days after implantation and no signs of tumor recurrence were seen for over 200 days.

Conclusions: KC significantly enhances the anti-tumor effect of radiation. This suggests that cellular metabolic alterations induced through KC may be useful as an adjuvant to the current standard of care for the treatment of human malignant gliomas.

Contributors: Abdelwahab, Mohammed G. (Author) / Fenton, Kathryn E. (Author) / Preul, Mark C. (Author) / Rho, Jong M. (Author) / Lynch, Andrew (Author) / Stafford, Phillip (Author) / Scheck, Adrienne C. (Author) / Biodesign Institute (Contributor)
Created: 2012-05-01
Description

Immunosignaturing shows promise as a general approach to diagnosis. It has been shown to detect immunological signs of infection early during the course of disease and to distinguish Alzheimer’s disease from healthy controls. Here we test whether immunosignatures correspond to clinical classifications of disease, using samples from people with brain tumors. Blood samples from patients undergoing craniotomies for therapeutically naïve brain tumors with diagnoses of astrocytoma (23 samples), glioblastoma multiforme (22 samples), mixed oligodendroglioma/astrocytoma (16 samples), and oligodendroglioma (18 samples), along with 34 otherwise healthy controls, were tested by immunosignature. Because samples were taken prior to adjuvant therapy, they are unlikely to be perturbed by non-cancer-related effects. The immunosignaturing platform distinguished not only brain cancer from controls, but also pathologically important features of the tumor, including type, grade, and the presence or absence of methylation of the O6-methylguanine-DNA methyltransferase (MGMT) promoter, an important biomarker that predicts response to temozolomide in glioblastoma multiforme patients.

Contributors: Hughes, Alexa (Author) / Cichacz, Zbigniew (Author) / Scheck, Adrienne (Author) / Coons, Stephen W. (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created: 2012-07-16
Description

Background: Malignant brain tumors affect people of all ages and are the second leading cause of cancer deaths in children. While current treatments are effective and improve survival, there remains a substantial need for more efficacious therapeutic modalities. The ketogenic diet (KD) - a high-fat, low-carbohydrate treatment for medically refractory epilepsy - has been suggested as an alternative strategy to inhibit tumor growth by altering intrinsic metabolism, especially by inducing glycopenia.

Methods: Here, we examined the effects of an experimental KD on a mouse model of glioma, and compared patterns of gene expression in tumors vs. normal brain from animals fed either a KD or a standard diet.

Results: Animals received intracranial injections of bioluminescent GL261-luc cells and tumor growth was followed in vivo. KD treatment significantly reduced the rate of tumor growth and prolonged survival. Further, the KD reduced reactive oxygen species (ROS) production in tumor cells. Gene expression profiling demonstrated that the KD induces an overall reversion to expression patterns seen in non-tumor specimens. Notably, genes involved in modulating ROS levels and oxidative stress were altered, including those encoding cyclooxygenase 2, glutathione peroxidases 3 and 7, and peroxiredoxin 4.

Conclusions: Our data demonstrate that the KD improves survivability in our mouse model of glioma, and suggest that the mechanisms accounting for this protective effect likely involve complex alterations in cellular metabolism beyond simply a reduction in glucose.

Contributors: Stafford, Phillip (Author) / Abdelwahab, Mohammed G. (Author) / Kim, Do Young (Author) / Preul, Mark C. (Author) / Rho, Jong M. (Author) / Scheck, Adrienne C. (Author) / Biodesign Institute (Contributor)
Created: 2010-09-10
Description

The sensitivity of Earth’s wetlands to observed shifts in global precipitation and temperature patterns, and their ability to produce large quantities of methane gas, are key global change questions. We present a microwave satellite-based approach for mapping fractional surface water (FW) globally at 25-km resolution. The approach employs a land-cover-supported, atmospherically corrected dynamic mixture model applied to 20+ years (1992–2013) of combined, daily, passive/active microwave remote sensing data. The resulting product, known as the Surface Water Microwave Product Series (SWAMPS), shows strong microwave sensitivity to sub-grid-scale open water and inundated wetlands comprising open plant canopies. SWAMPS’ FW compares favorably (R² = 91%–94%) with higher-resolution, global-scale maps of open water from MODIS and SRTM-MOD44W. Correspondence of SWAMPS with open water and wetland products from satellite SAR in Alaska and the Amazon deteriorates when exposed wetlands or inundated forests captured by the SAR products are added to the open water fraction, reflecting SWAMPS’ inability to detect water beneath the soil surface or beneath closed forest canopies. Except for a brief period of drying during the first 4 years of observation, the inundation extent for the global domain excluding the coast was largely stable. Regionally, inundation is advancing in North America while it is on the retreat in Tropical Africa and North Eurasia. SWAMPS provides a consistent, long-term global record of daily FW dynamics, with documented accuracies suitable for hydrologic assessment and global change-related investigations.
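
The mixture-model idea behind FW retrieval can be sketched as a two-endmember inversion: the observed brightness temperature of a grid cell is modeled as a linear blend of land and open-water endmembers and solved for the water fraction. This is only a schematic of the general approach — the actual SWAMPS model is land-cover-supported and atmospherically corrected — and the endmember values in the example are illustrative, not calibrated.

```python
def fractional_water(tb_obs, tb_land, tb_water):
    """Invert a two-endmember linear mixture: the observed brightness
    temperature is a blend of land and open-water endmembers, so
    FW = (Tb_land - Tb_obs) / (Tb_land - Tb_water)."""
    fw = (tb_land - tb_obs) / (tb_land - tb_water)
    return min(max(fw, 0.0), 1.0)  # clamp to the physical range [0, 1]

# Demo with illustrative endmembers (K): a cell one-fifth covered by water.
fw_example = fractional_water(tb_obs=270.0, tb_land=280.0, tb_water=230.0)
```

Because open water is radiometrically much colder than land at these microwave frequencies, even sub-grid-scale water shifts the cell-average brightness temperature enough to be inverted this way.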

Contributors: Schroeder, Ronny (Author) / McDonald, Kyle C. (Author) / Chapman, Bruce D. (Author) / Jensen, Katherine (Author) / Podest, Erika (Author) / Tessler, Zachary D. (Author) / Bohn, Theodore (Author) / Zimmermann, Reiner (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2015-12-09
Description

The Arctic, even more so than other parts of the world, has warmed substantially over the past few decades. Temperature and humidity influence the rate of development, survival and reproduction of pathogens, and thus the incidence and prevalence of many infectious diseases. Higher temperatures may also allow infected host species to survive winters in larger numbers, increase their population size and expand their habitat range. The impact of these changes on human disease in the Arctic has not been fully evaluated. There is concern that climate change may shift the geographic and temporal distribution of a range of infectious diseases. Many infectious diseases are climate sensitive, in that their emergence in a region depends on climate-related ecological changes. Most are zoonotic diseases, and can be spread between humans and animals by arthropod vectors, water, soil, or wild or domestic animals. Potentially climate-sensitive zoonotic pathogens of circumpolar concern include Brucella spp., Toxoplasma gondii, Trichinella spp., Clostridium botulinum, Francisella tularensis, Borrelia burgdorferi, Bacillus anthracis, Echinococcus spp., Leptospira spp., Giardia spp., Cryptosporidium spp., Coxiella burnetii, rabies virus, West Nile virus, hantaviruses, and tick-borne encephalitis viruses.

Contributors: Parkinson, Alan J. (Author) / Evengard, Birgitta (Author) / Semenza, Jan C. (Author) / Ogden, Nicholas (Author) / Borresen, Malene L. (Author) / Berner, Jim (Author) / Brubaker, Michael (Author) / Sjostedt, Anders (Author) / Evander, Magnus (Author) / Hondula, David M. (Author) / Menne, Bettina (Author) / Pshenichnaya, Natalia (Author) / Gounder, Prabhu (Author) / Larose, Tricia (Author) / Revich, Boris (Author) / Hueffer, Karsten (Author) / Albihn, Ann (Author) / College of Public Service and Community Solutions (Contributor)
Created: 2014-09-30
Description

A warming climate is altering land-atmosphere exchanges of carbon, with a potential for increased vegetation productivity as well as the mobilization of permafrost soil carbon stores. Here we investigate land-atmosphere carbon dioxide (CO2) cycling through analysis of net ecosystem productivity (NEP) and its component fluxes of gross primary productivity (GPP) and ecosystem respiration (ER), as well as soil carbon residence time, simulated by a set of land surface models (LSMs) over a region spanning the drainage basin of Northern Eurasia. The retrospective simulations cover the period 1960–2009 at 0.5° resolution, a scale common among many global carbon and climate model simulations. Model performance benchmarks were drawn from comparisons against both observed CO2 fluxes derived from site-based eddy covariance measurements and regional-scale GPP estimates based on satellite remote-sensing data.

The site-based comparisons reveal a tendency for several of the models to overestimate GPP and ER, particularly at the two southern sites. For several models the spatial pattern in GPP explains less than half the variance in the MODIS MOD17 GPP product. Across the models, NEP increases by as little as 0.01 to as much as 0.79 g C m⁻² yr⁻², equivalent to 3% to 340% of the respective model means, over the analysis period. For the multimodel average, the increase is 135% of the mean from the first to the last 10 years of record (1960–1969 vs. 2000–2009), with a weakening CO2 sink over the latter decades. Vegetation net primary productivity increased by 8% to 30% from the first to the last 10 years, contributing to soil carbon storage gains. The range in regional mean NEP among the group is twice the multimodel mean, indicative of the uncertainty in CO2 sink strength.
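
The reported NEP trends follow from the identity NEP = GPP − ER and an ordinary least-squares slope of annual NEP against year. A minimal sketch, using made-up annual values rather than model output:

```python
def nep_trend(gpp, er, years):
    """NEP is the balance GPP - ER; its trend (g C per m^2 per yr^2) is the
    ordinary least-squares slope of annual NEP against year."""
    nep = [g - e for g, e in zip(gpp, er)]
    ybar = sum(years) / len(years)
    nbar = sum(nep) / len(nep)
    return (sum((t - ybar) * (v - nbar) for t, v in zip(years, nep))
            / sum((t - ybar) ** 2 for t in years))

# Demo: if GPP rises 2 and ER rises 1 g C per m^2 per yr each year,
# the NEP trend is 1 g C per m^2 per yr^2.
years = list(range(1960, 1970))
gpp = [600.0 + 2.0 * i for i in range(10)]
er = [500.0 + 1.0 * i for i in range(10)]
trend = nep_trend(gpp, er, years)
```

This also illustrates why the sink can strengthen on average yet weaken late in the record: the slope summarizes the whole period, while decadal means (1960–1969 vs. 2000–2009) can diverge from it.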

The models simulate that inputs to the soil carbon pool exceeded losses, resulting in a net soil carbon gain amid a decrease in residence time. Our analysis points to improvements in model elements controlling vegetation productivity and soil respiration as being needed for reducing uncertainty in land-atmosphere CO2 exchange. These advances will require collection of new field data on vegetation and soil dynamics, the development of benchmarking data sets from measurements and remote-sensing observations, and investments in future model development and intercomparison studies.

Contributors: Rawlins, M. A. (Author) / McGuire, A. D. (Author) / Kimball, J. S. (Author) / Dass, P. (Author) / Lawrence, D. (Author) / Burke, E. (Author) / Chen, X. (Author) / Delire, C. (Author) / Koven, C. (Author) / MacDougall, A. (Author) / Peng, S. (Author) / Rinke, A. (Author) / Saito, K. (Author) / Zhang, W. (Author) / Alkama, R. (Author) / Bohn, Theodore (Author) / Ciais, P. (Author) / Decharme, B. (Author) / Gouttevin, I. (Author) / Hajima, T. (Author) / Ji, D. (Author) / Krinner, G. (Author) / Lettenmaier, D. P. (Author) / Miller, P. (Author) / Moore, J. C. (Author) / Smith, B. (Author) / Sueyoshi, T. (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2015-07-28