This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

A globally integrated carbon observation and analysis system is needed to improve the fundamental understanding of the global carbon cycle, to improve our ability to project future changes, and to verify the effectiveness of policies aiming to reduce greenhouse gas emissions and increase carbon sequestration. Building an integrated carbon observation system requires transformational advances from the existing sparse, exploratory framework towards a dense, robust, and sustained system in all components: anthropogenic emissions, the atmosphere, the ocean, and the terrestrial biosphere. The paper is addressed to scientists, policymakers, and funding agencies who need to have a global picture of the current state of the (diverse) carbon observations.

We identify the current state of carbon observations, and the needs and notional requirements for a global integrated carbon observation system that can be built in the next decade. A key conclusion is that a substantial expansion of the ground-based observation networks is required to reach the high spatial resolution needed for CO2 and CH4 fluxes and for carbon stocks, both to address policy-relevant objectives and to attribute flux changes to underlying processes in each region. In order to establish flux and stock diagnostics over areas such as the southern oceans, tropical forests, and the Arctic, in situ observations will have to be complemented with remote-sensing measurements. Remote sensing offers the advantage of dense spatial coverage and frequent revisit. A key challenge is to bring remote-sensing measurements to a level of long-term consistency and accuracy so that they can be efficiently combined in models to reduce uncertainties, in synergy with ground-based data.

Bringing tight observational constraints on fossil fuel and land use change emissions will be the biggest challenge for deployment of a policy-relevant integrated carbon observation system. This will require in situ and remotely sensed data at much higher resolution and density than currently achieved for natural fluxes, although over a small land area (cities, industrial sites, power plants), as well as the inclusion of fossil fuel CO2 proxy measurements such as radiocarbon in CO2 and carbon-fuel combustion tracers. Additionally, a policy-relevant carbon monitoring system should provide mechanisms for reconciling regional top-down (atmosphere-based) and bottom-up (surface-based) flux estimates across the range of spatial and temporal scales relevant to mitigation policies. In addition, uncertainties for each observation data stream should be assessed. The success of the system will rely on long-term commitments to monitoring, on improved international collaboration to fill gaps in the current observations, on sustained efforts to improve access to the different data streams and make databases interoperable, and on the calibration of each component of the system to agreed-upon international scales.

Contributors: Ciais, P. (Author) / Dolman, A. J. (Author) / Bombelli, A. (Author) / Duren, R. (Author) / Peregon, A. (Author) / Rayner, P. J. (Author) / Miller, C. (Author) / Gobron, N. (Author) / Kinderman, G. (Author) / Marland, G. (Author) / Gruber, N. (Author) / Chevallier, F. (Author) / Andres, R. J. (Author) / Balsamo, G. (Author) / Bopp, L. (Author) / Breon, F. -M. (Author) / Broquet, G. (Author) / Dargaville, R. (Author) / Battin, T. J. (Author) / Borges, A. (Author) / Bovensmann, H. (Author) / Buchwitz, M. (Author) / Butler, J. (Author) / Canadell, J. G. (Author) / Cook, R. B. (Author) / DeFries, R. (Author) / Engelen, R. (Author) / Gurney, Kevin (Author) / Heinze, C. (Author) / Heimann, M. (Author) / Held, A. (Author) / Henry, M. (Author) / Law, B. (Author) / Luyssaert, S. (Author) / Miller, J. (Author) / Moriyama, T. (Author) / Moulin, C. (Author) / Myneni, R. (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2013-11-30
Description

We develop a general framework to analyze the controllability of multiplex networks using multiple-relation networks and multiple-layer networks with interlayer couplings as two classes of prototypical systems. In the former, networks associated with different physical variables share the same set of nodes and in the latter, diffusion processes take place. We find that, for a multiple-relation network, a layer exists that dominantly determines the controllability of the whole network and, for a multiple-layer network, a small fraction of the interconnections can enhance the controllability remarkably. Our theory is generally applicable to other types of multiplex networks as well, leading to significant insights into the control of complex network systems with diverse structures and interacting patterns.
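
Structural controllability of a single directed layer is commonly assessed with the maximum-matching criterion: the minimum number of driver nodes equals the number of unmatched nodes (at least one). The sketch below is a generic textbook implementation of that criterion, not the authors' multiplex framework; it uses Kuhn's augmenting-path matching on the bipartite out-copy/in-copy representation of the network.

```python
def min_driver_nodes(edges, n):
    """Minimum driver nodes for structural controllability of one directed
    layer, via the maximum-matching criterion: unmatched nodes must be
    driven directly (Kuhn's augmenting-path matching on the bipartite
    out-copy/in-copy representation of the network)."""
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)                    # out-copy i can match in-copy j
    match_to = [-1] * n                     # in-copy j -> matched out-copy

    def augment(i, seen):
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match_to[j] == -1 or augment(match_to[j], seen):
                    match_to[j] = i
                    return True
        return False

    matched = sum(augment(i, set()) for i in range(n))
    return max(1, n - matched)
```

Applied layer by layer to a multiple-relation network, a routine like this would let one check which layer's matching dominates the driver-node count of the whole system, in the spirit of the abstract's conclusion.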

Contributors: Yuan, Zhengzhong (Author) / Zhao, Chen (Author) / Wang, Wen-Xu (Author) / Di, Zengru (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-10-24
Description

Errors in the specification or utilization of fossil fuel CO2 emissions within carbon budget or atmospheric CO2 inverse studies can alias the estimation of biospheric and oceanic carbon exchange. A key component in the simulation of CO2 concentrations arising from fossil fuel emissions is the spatial distribution of the emission near coastlines. Regridding of fossil fuel CO2 emissions (FFCO2) from fine to coarse grids to enable atmospheric transport simulations can give rise to mismatches between the emissions and simulated atmospheric dynamics which differ over land or water. For example, emissions originally emanating from the land are emitted from a grid cell for which the vertical mixing reflects the roughness and/or surface energy exchange of an ocean surface. We test this potential "dynamical inconsistency" by examining simulated global atmospheric CO2 concentration driven by two different approaches to regridding fossil fuel CO2 emissions. The two approaches are as follows: (1) a commonly used method that allocates emissions to grid cells with no attempt to ensure dynamical consistency with atmospheric transport and (2) an improved method that reallocates emissions to grid cells to ensure dynamically consistent results. Results show large spatial and temporal differences in the simulated CO2 concentration when comparing these two approaches. The emissions difference ranges from −30.3 TgC grid cell^-1 yr^-1 (−3.39 kgC m^-2 yr^-1) to +30.0 TgC grid cell^-1 yr^-1 (+2.6 kgC m^-2 yr^-1) along coastal margins. Maximum simulated annual mean CO2 concentration differences at the surface exceed ±6 ppm at various locations and times. Examination of the current CO2 monitoring locations during the local afternoon, consistent with inversion modeling system sampling and measurement protocols, finds maximum hourly differences at 38 stations exceed ±0.10 ppm with individual station differences exceeding −32 ppm.
The differences implied by not accounting for this dynamical consistency problem are largest at monitoring sites proximal to large coastal urban areas and point sources. These results suggest that studies comparing simulated to observed atmospheric CO2 concentration, such as atmospheric CO2 inversions, must take measures to correct for this potential problem and ensure flux and dynamical consistency.
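
The abstract describes the improved method (2) only at the level of reallocating emissions to dynamically consistent grid cells. A minimal sketch of that idea, assuming a hypothetical land mask and a simple nearest-land reallocation rule (the paper's actual allocation scheme may differ):

```python
import numpy as np

def reallocate_coastal_emissions(emissions, is_land):
    """Move emissions that the regridding placed in water-classified cells
    onto the nearest land cell, so that coastal fluxes are released into
    grid cells whose simulated vertical mixing reflects a land surface."""
    out = np.where(is_land, emissions, 0.0)
    land_idx = np.argwhere(is_land)
    for r, c in np.argwhere(~is_land & (emissions > 0)):
        d = np.abs(land_idx - [r, c]).sum(axis=1)   # Manhattan distance
        rr, cc = land_idx[d.argmin()]
        out[rr, cc] += emissions[r, c]
    return out
```

The key invariant is that total emissions are conserved while every nonzero flux ends up in a cell whose surface type matches the transport model's dynamics.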

Contributors: Zhang, X. (Author) / Gurney, Kevin (Author) / Rayner, P. (Author) / Liu, Y. (Author) / Asefi-Najafabady, Salvi (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2013-11-30
Description

High-resolution, global quantification of fossil fuel CO2 emissions is emerging as a critical need in carbon cycle science and climate policy. We build upon a previously developed fossil fuel data assimilation system (FFDAS) for estimating global high-resolution fossil fuel CO2 emissions. We have improved the underlying observationally based data sources, expanded the approach through treatment of separate emitting sectors including a new pointwise database of global power plants, and extended the results to cover a 1997 to 2010 time series at a spatial resolution of 0.1°. Long-term trend analysis of the resulting global emissions shows subnational spatial structure in large active economies such as the United States, China, and India. These three countries, in particular, show different long-term trends, and exploration of the trends in nighttime lights and population reveals a decoupling of population and emissions at the subnational level. Analysis of shorter-term variations reveals the impact of the 2008–2009 global financial crisis with widespread negative emission anomalies across the U.S. and Europe. We have used a center of mass (CM) calculation as a compact metric to express the time evolution of spatial patterns in fossil fuel CO2 emissions. The global emission CM has moved toward the east and somewhat south between 1997 and 2010, driven by the increase in emissions in China and South Asia over this time period. Analysis at the level of individual countries reveals per capita CO2 emission migration in both Russia and India. The per capita emission CM holds potential as a way to succinctly analyze subnational shifts in carbon intensity over time. Uncertainties are generally lower than the previous version of FFDAS due mainly to an improved nightlight data set.
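
The center-of-mass metric can be illustrated as an emission-weighted mean location. The sketch below uses a flat (planar) average of latitude and longitude, which is only an approximation away from the poles and the date line; the paper's exact CM definition may differ:

```python
import numpy as np

def emission_center_of_mass(lats, lons, emissions):
    """Emission-weighted mean location of a flux field: a compact metric
    for tracking how the spatial pattern of emissions shifts over time.
    Planar approximation: latitude/longitude are averaged directly,
    which is only reasonable away from the poles and the date line."""
    w = emissions / emissions.sum()
    return float((w * lats).sum()), float((w * lons).sum())
```

Evaluating this on gridded emissions for 1997 and 2010 and differencing the two locations would reproduce the kind of eastward/southward drift the abstract reports.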

Contributors: Asefi-Najafabady, Salvi (Author) / Rayner, P. J. (Author) / Gurney, Kevin (Author) / McRobert, A. (Author) / Song, Y. (Author) / Coltin, K. (Author) / Huang, J. (Author) / Elvidge, C. (Author) / Baugh, K. (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2014-09-16
Description

Background: While prior studies have quantified the mortality burden of the 1957 H2N2 influenza pandemic at broad geographic regions in the United States, little is known about the pandemic impact at a local level. Here we focus on analyzing the transmissibility and mortality burden of this pandemic in Arizona, a setting where the dry climate was promoted as reducing respiratory illness transmission yet tuberculosis prevalence was high.

Methods: Using archival death certificates from 1954 to 1961, we quantified the age-specific seasonal patterns, excess-mortality rates, and transmissibility patterns of the 1957 H2N2 pandemic in Maricopa County, Arizona. By applying cyclical Serfling linear regression models to weekly mortality rates, the excess-mortality rates due to respiratory and all-causes were estimated for each age group during the pandemic period. The reproduction number was quantified from weekly data using a simple growth rate method and assumed generation intervals of 3 and 4 days. Local newspaper articles published during 1957–1958 were also examined.
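
The "simple growth rate method" for the reproduction number typically fits an exponential to the ascending phase of the epidemic curve and converts the growth rate r using a generation-interval assumption: R = exp(rT) for a fixed interval of length T and R = 1 + rT for an exponentially distributed one. A minimal sketch (illustrative, not the authors' code):

```python
import math

def reproduction_number(weekly_cases, gen_interval_days, dist="fixed"):
    """Estimate R from the exponential growth rate of early weekly counts.

    Fits log(cases) ~ a + slope*week by least squares over the supplied
    ascending phase, converts to a per-day rate r, then applies a
    generation interval T: R = exp(r*T) for a fixed-length interval,
    R = 1 + r*T for an exponentially distributed one."""
    logs = [math.log(c) for c in weekly_cases]
    n = len(logs)
    xbar = (n - 1) / 2
    ybar = sum(logs) / n
    slope = sum((i - xbar) * (y - ybar) for i, y in enumerate(logs))
    slope /= sum((i - xbar) ** 2 for i in range(n))
    r = slope / 7.0                     # weekly slope -> per-day growth rate
    T = gen_interval_days
    return math.exp(r * T) if dist == "fixed" else 1.0 + r * T
```

Since exp(x) > 1 + x for x > 0, the fixed-interval assumption always yields the slightly larger estimate, consistent with reporting a range such as 1.08–1.11 across assumptions.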

Results: Excess-mortality rates varied between waves, age groups, and causes of death, but overall remained low. From October 1959 to June 1960, the most severe wave of the pandemic, the absolute excess-mortality rate based on respiratory deaths per 10,000 population was 16.59 in the elderly (≥65 years). All other age groups exhibited very low excess-mortality and the typical U-shaped age-pattern was absent. However, the standardized mortality ratio was greatest (4.06) among children and young adolescents (5–14 years) from October 1957 to March 1958, based on mortality rates of respiratory deaths. Transmissibility was greatest during the same 1957–1958 period, when the mean reproduction number was estimated at 1.08–1.11, assuming 3- or 4-day generation intervals with exponential or fixed distributions.

Conclusions: Maricopa County exhibited very low mortality impact associated with the 1957 influenza pandemic. Understanding the relatively low excess-mortality rates and transmissibility in Maricopa County during this historic pandemic may help public health officials prepare for and mitigate future outbreaks of influenza.

Contributors: Cobos, April (Author) / Nelson, Clinton (Author) / Jehn, Megan (Author) / Viboud, Cecile (Author) / Chowell-Puente, Gerardo (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2016-08-11
Description

Background: On 31 March 2013, the first human infections with the novel influenza A/H7N9 virus were reported in Eastern China. The outbreak expanded rapidly in geographic scope and size, with a total of 132 laboratory-confirmed cases reported by 3 June 2013, in 10 Chinese provinces and Taiwan. The incidence of A/H7N9 cases has stalled in recent weeks, presumably as a consequence of live bird market closures in the most heavily affected areas. Here we compare the transmission potential of influenza A/H7N9 with that of other emerging pathogens and evaluate the impact of intervention measures in an effort to guide pandemic preparedness.

Methods: We used a Bayesian approach combined with a SEIR (Susceptible-Exposed-Infectious-Removed) transmission model fitted to daily case data to assess the reproduction number (R) of A/H7N9 by province and to evaluate the impact of live bird market closures in April and May 2013. Simulation studies helped quantify the performance of our approach in the context of an emerging pathogen, where human-to-human transmission is limited and most cases arise from spillover events. We also used alternative approaches to estimate R based on individual-level information on prior exposure and compared the transmission potential of influenza A/H7N9 with that of other recent zoonoses.
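
The transmission setting described here — limited human-to-human spread on top of frequent animal spillover events — can be caricatured with a stochastic discrete-day SEIR. All rates below are illustrative placeholders, and the Bayesian fitting step is omitted entirely; this is only a sketch of the forward model such an approach would simulate:

```python
import numpy as np

def seir_spillover(n_days, pop, R0, sigma=0.5, gamma=0.25, spill=2.0, seed=0):
    """Stochastic discrete-day SEIR with a constant zoonotic spillover
    rate. R0 governs the (limited) human-to-human transmission; sigma and
    gamma are daily E->I and I->removed rates; `spill` is the mean number
    of spillover infections per day (Poisson). Returns daily new cases."""
    rng = np.random.default_rng(seed)
    S, E, I = pop, 0, 0
    beta = R0 * gamma                       # per-infectious transmission rate
    cases = []
    for _ in range(n_days):
        p_inf = min(1.0, beta * I / pop)
        new_h2h = rng.binomial(S, p_inf)    # human-to-human infections
        new_spill = min(rng.poisson(spill), S - new_h2h)
        new_inf = rng.binomial(E, sigma)    # E -> I
        rec = rng.binomial(I, gamma)        # I -> removed
        S -= new_h2h + new_spill
        E += new_h2h + new_spill - new_inf
        I += new_inf - rec
        cases.append(int(new_h2h + new_spill))
    return cases
```

With R0 near 0.1, as estimated for A/H7N9, almost all simulated cases come from the spillover term — which is exactly why the case data alone constrain R so weakly and the posterior is dominated by the prior.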

Results: Estimates of R for the A/H7N9 outbreak were below the epidemic threshold required for sustained human-to-human transmission and remained near 0.1 throughout the study period, with broad 95% credible intervals by the Bayesian method (0.01 to 0.49). The Bayesian estimation approach was dominated by the prior distribution, however, due to relatively little information contained in the case data. We observe a statistically significant deceleration in growth rate after 6 April 2013, which is consistent with a reduction in A/H7N9 transmission associated with the preemptive closure of live bird markets. Although confidence intervals are broad, the estimated transmission potential of A/H7N9 appears lower than that of recent zoonotic threats, including avian influenza A/H5N1, swine influenza H3N2sw and Nipah virus.

Conclusion: Although uncertainty remains high in R estimates for H7N9 due to limited epidemiological information, all available evidence points to a low transmission potential. Continued monitoring of the transmission potential of A/H7N9 is critical in the coming months as intervention measures may be relaxed and seasonal factors could promote disease transmission in colder months.

Created2013-10-02
Description

Background: Immunosignaturing is a new peptide microarray based technology for profiling of humoral immune responses. Despite new challenges, immunosignaturing gives us the opportunity to explore new and fundamentally different research questions. In addition to classifying samples based on disease status, the complex patterns and latent factors underlying immunosignatures, which we attempt to model, may have a diverse range of applications.

Methods: We investigate the utility of a number of statistical methods to determine model performance and address challenges inherent in analyzing immunosignatures. Some of these methods include exploratory and confirmatory factor analyses, classical significance testing, structural equation and mixture modeling.
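
As a rough illustration of exploratory latent-factor extraction on a samples-by-peptides intensity matrix, the sketch below uses a plain SVD/PCA decomposition as a simple stand-in for the factor-analysis and structural-equation machinery the authors actually apply:

```python
import numpy as np

def exploratory_factors(X, k):
    """Extract k candidate latent factors from a samples-by-peptides
    intensity matrix: center each peptide column and take the top-k
    singular directions (PCA as a simple stand-in for exploratory
    factor analysis)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]      # sample scores on each factor
    loadings = Vt[:k].T            # peptide loadings per factor
    return scores, loadings
```

The sample scores could then feed a downstream classifier, while large loadings flag the peptides that define each latent factor — the sense in which such factors might serve as biomarkers.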

Results: We demonstrate an ability to classify samples based on disease status and show that immunosignaturing is a very promising technology for screening and presymptomatic screening of disease. In addition, we are able to model complex patterns and latent factors underlying immunosignatures. These latent factors may serve as biomarkers for disease and may play a key role in a bioinformatic method for antibody discovery.

Conclusion: Based on this research, we lay out an analytic framework illustrating how immunosignatures may be useful as a general method for screening and presymptomatic screening of disease as well as antibody discovery.

Contributors: Brown, Justin (Author) / Stafford, Phillip (Author) / Johnston, Stephen (Author) / Dinu, Valentin (Author) / College of Health Solutions (Contributor)
Created: 2011-08-19
Description

Background: The impact of socio-demographic factors and baseline health on the mortality burden of seasonal and pandemic influenza remains debated. Here we analyzed the spatial-temporal mortality patterns of the 1918 influenza pandemic in Spain, one of the countries of Europe that experienced the highest mortality burden.

Methods: We analyzed monthly death rates from respiratory diseases and all-causes across 49 provinces of Spain, including the Canary and Balearic Islands, during the period January-1915 to June-1919. We estimated the influenza-related excess death rates and risk of death relative to baseline mortality by pandemic wave and province. We then explored the association between pandemic excess mortality rates and health and socio-demographic factors, which included population size and age structure, population density, infant mortality rates, baseline death rates, and urbanization.
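
A cyclical Serfling baseline of the kind used here regresses mortality on a linear trend plus annual harmonics, fit only to non-epidemic periods, with excess mortality read off as observed minus baseline. A minimal weekly-resolution sketch with one harmonic (the study itself used monthly provincial rates, and operational models often add more harmonics):

```python
import numpy as np

def serfling_excess(rates, period=52.0, exclude=()):
    """Cyclical Serfling baseline: regress weekly death rates on a linear
    trend plus one annual sine/cosine harmonic, fitting only the weeks not
    listed in `exclude` (the epidemic weeks), then return observed minus
    predicted baseline as the excess-mortality series."""
    y = np.asarray(rates, dtype=float)
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    mask = np.ones(len(y), dtype=bool)
    mask[list(exclude)] = False
    coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    return y - X @ coef
```

Excluding the pandemic waves from the fit is essential: otherwise the epidemic itself inflates the baseline and the excess is underestimated.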

Results: Our analysis revealed high geographic heterogeneity in pandemic mortality impact. We identified 3 pandemic waves of varying timing and intensity covering the period from Jan-1918 to Jun-1919, with the highest pandemic-related excess mortality rates occurring during the months of October-November 1918 across all Spanish provinces. Cumulative excess mortality rates followed a south–north gradient after controlling for demographic factors, with the North experiencing the highest excess mortality rates. A model that included latitude, population density, and the proportion of children living in provinces explained about 40% of the geographic variability in cumulative excess death rates during 1918–19, but different factors explained mortality variation in each wave.

Conclusions: A substantial fraction of the variability in excess mortality rates across Spanish provinces remained unexplained, which suggests that other unidentified factors such as comorbidities, climate and background immunity may have affected the 1918-19 pandemic mortality rates. Further archeo-epidemiological research should concentrate on identifying settings with combined availability of local historical mortality records and information on the prevalence of underlying risk factors, or patient-level clinical data, to further clarify the drivers of 1918 pandemic influenza mortality.

Created: 2014-07-05
Description

Background: Microarray image analysis processes scanned digital images of hybridized arrays to produce the input spot-level data for downstream analysis, so it can have a potentially large impact on that and all subsequent analyses. Signal saturation is an optical effect that occurs when some pixel values for highly expressed genes or peptides exceed the upper detection threshold of the scanner software (2^16 − 1 = 65,535 for 16-bit images). In practice, spots with a sizable number of saturated pixels are often flagged and discarded. Alternatively, the saturated values are used without adjustments for estimating spot intensities. The resulting expression data tend to be biased downwards and can distort high-level analysis that relies on these data. Hence, it is crucial to effectively correct for signal saturation.

Results: We developed a flexible mixture model-based segmentation and spot intensity estimation procedure that accounts for saturated pixels by incorporating a censored component in the mixture model. As demonstrated with biological data and simulation, our method extends the dynamic range of expression data beyond the saturation threshold and is effective in correcting saturation-induced bias when the lost information is not tremendous. We further illustrate the impact of image processing on downstream classification, showing that the proposed method can increase diagnostic accuracy using data from a lymphoma cancer diagnosis study.
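
The core of the censoring idea can be sketched in isolation: treat pixel values at the scanner ceiling (65,535 for 16-bit scans) as right-censored draws from a normal and iteratively impute their conditional mean. This simplified EM-style loop is illustrative only — the paper's method embeds the censored component in a full mixture-model segmentation, and a complete EM would also correct the variance update with a second-moment term:

```python
import math

def censored_normal_mean(values, cutoff, iters=50):
    """EM-style sketch: treat readings at `cutoff` (e.g. 65535 on 16-bit
    scans) as right-censored draws from a normal and iteratively impute
    their conditional mean E[X | X > cutoff]. Simplified: the variance
    update ignores the second-moment correction a full EM would apply."""
    obs = [v for v in values if v < cutoff]
    n_cens = len(values) - len(obs)
    mu = sum(values) / len(values)                # naive (biased-low) start
    sd = max(1.0, (sum((v - mu) ** 2 for v in values) / len(values)) ** 0.5)
    for _ in range(iters):
        a = (cutoff - mu) / sd
        phi = math.exp(-a * a / 2) / math.sqrt(2 * math.pi)
        tail = 0.5 * math.erfc(a / math.sqrt(2))  # P(X > cutoff)
        ez = mu + sd * phi / max(tail, 1e-12)     # E[X | X > cutoff]
        filled = obs + [ez] * n_cens
        mu = sum(filled) / len(filled)
        sd = max(1.0, (sum((v - mu) ** 2 for v in filled) / len(filled)) ** 0.5)
    return mu
```

Because the imputed value always exceeds the cutoff, the corrected mean sits above the naive mean of the clipped data — the direction of the bias correction the abstract describes.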

Conclusions: The presented method adjusts for signal saturation at the segmentation stage that identifies a pixel as part of the foreground, background or other. The cluster membership of a pixel can be altered versus treating saturated values as truly observed. Thus, the resulting spot intensity estimates may be more accurate than those obtained from existing methods that correct for saturation based on already segmented data. As a model-based segmentation method, our procedure is able to identify inner holes, fuzzy edges and blank spots that are common in microarray images. The approach is independent of microarray platform and applicable to both single- and dual-channel microarrays.

Contributors: Yang, Yan (Author) / Stafford, Phillip (Author) / Kim, YoonJoo (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2011-11-30
Description

Background: High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and others. Typically one trains a classification system by gathering large amounts of probe-level data and selecting informative features, and then classifies test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic is the assumption of independence, both at the probe level and again at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways where co-regulation of transcriptional units may make many genes appear as being completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable when other technologies with different binding characteristics exist. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random sequence peptides. It relies on many-to-many binding of antibodies to the random sequence peptides. Each peptide can bind multiple antibodies and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear which classification algorithm is optimal for analyzing this new type of data.

Results: We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy.
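
A Gaussian naive Bayes classifier of the kind favored here is small enough to sketch from scratch: per class, model each peptide intensity as an independent normal and score test samples by log-posterior. This is a generic textbook implementation, not the authors' pipeline:

```python
import math

class GaussianNB:
    """Minimal Gaussian naive Bayes: per class, model each feature
    (peptide intensity) as an independent normal and classify by the
    largest log-posterior. Generic textbook version, for illustration."""

    def fit(self, X, y):
        self.priors, self.stats = {}, {}
        for c in set(y):
            rows = [x for x, label in zip(X, y) if label == c]
            self.priors[c] = len(rows) / len(X)
            stats = []
            for col in zip(*rows):                 # iterate features
                m = sum(col) / len(col)
                var = max(1e-6, sum((v - m) ** 2 for v in col) / len(col))
                stats.append((m, var))
            self.stats[c] = stats
        return self

    def predict(self, x):
        def log_posterior(c):
            lp = math.log(self.priors[c])
            for v, (m, var) in zip(x, self.stats[c]):
                lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            return lp
        return max(self.stats, key=log_posterior)
```

The simplicity is the point: the per-feature independence assumption the model makes is exactly the property the Conclusions credit for its robustness on immunosignature data.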

Conclusions: ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties.

Contributors: Kukreja, Muskan (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created: 2012-06-21