This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

A globally integrated carbon observation and analysis system is needed to improve the fundamental understanding of the global carbon cycle, to improve our ability to project future changes, and to verify the effectiveness of policies aiming to reduce greenhouse gas emissions and increase carbon sequestration. Building an integrated carbon observation system requires transformational advances from the existing sparse, exploratory framework towards a dense, robust, and sustained system in all components: anthropogenic emissions, the atmosphere, the ocean, and the terrestrial biosphere. The paper is addressed to scientists, policymakers, and funding agencies who need to have a global picture of the current state of the (diverse) carbon observations.

We identify the current state of carbon observations and the needs and notional requirements for a global integrated carbon observation system that can be built in the next decade. A key conclusion is that a substantial expansion of the ground-based observation networks is required to reach the high spatial resolution needed for CO2 and CH4 fluxes and for carbon stocks, in order to address policy-relevant objectives and attribute flux changes to underlying processes in each region. To establish flux and stock diagnostics over areas such as the southern oceans, tropical forests, and the Arctic, in situ observations will have to be complemented with remote-sensing measurements. Remote sensing offers the advantage of dense spatial coverage and frequent revisits. A key challenge is to bring remote-sensing measurements to a level of long-term consistency and accuracy that allows them to be efficiently combined in models to reduce uncertainties, in synergy with ground-based data.

Placing tight observational constraints on fossil fuel and land use change emissions will be the biggest challenge for deployment of a policy-relevant integrated carbon observation system. This will require in situ and remotely sensed data at much higher resolution and density than currently achieved for natural fluxes, although over smaller land areas (cities, industrial sites, power plants), as well as the inclusion of fossil fuel CO2 proxy measurements such as radiocarbon in CO2 and carbon-fuel combustion tracers. A policy-relevant carbon monitoring system should also provide mechanisms for reconciling regional top-down (atmosphere-based) and bottom-up (surface-based) flux estimates across the range of spatial and temporal scales relevant to mitigation policies. In addition, uncertainties for each observation data stream should be assessed. The success of the system will rely on long-term commitments to monitoring, on improved international collaboration to fill gaps in the current observations, on sustained efforts to improve access to the different data streams and make databases interoperable, and on the calibration of each component of the system to agreed-upon international scales.

Contributors: Ciais, P. (Author) / Dolman, A. J. (Author) / Bombelli, A. (Author) / Duren, R. (Author) / Peregon, A. (Author) / Rayner, P. J. (Author) / Miller, C. (Author) / Gobron, N. (Author) / Kinderman, G. (Author) / Marland, G. (Author) / Gruber, N. (Author) / Chevallier, F. (Author) / Andres, R. J. (Author) / Balsamo, G. (Author) / Bopp, L. (Author) / Breon, F. -M. (Author) / Broquet, G. (Author) / Dargaville, R. (Author) / Battin, T. J. (Author) / Borges, A. (Author) / Bovensmann, H. (Author) / Buchwitz, M. (Author) / Butler, J. (Author) / Canadell, J. G. (Author) / Cook, R. B. (Author) / DeFries, R. (Author) / Engelen, R. (Author) / Gurney, Kevin (Author) / Heinze, C. (Author) / Heimann, M. (Author) / Held, A. (Author) / Henry, M. (Author) / Law, B. (Author) / Luyssaert, S. (Author) / Miller, J. (Author) / Moriyama, T. (Author) / Moulin, C. (Author) / Myneni, R. (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2013-11-30
Description

Errors in the specification or utilization of fossil fuel CO2 emissions within carbon budget or atmospheric CO2 inverse studies can alias the estimation of biospheric and oceanic carbon exchange. A key component in the simulation of CO2 concentrations arising from fossil fuel emissions is the spatial distribution of the emission near coastlines. Regridding of fossil fuel CO2 emissions (FFCO2) from fine to coarse grids to enable atmospheric transport simulations can give rise to mismatches between the emissions and simulated atmospheric dynamics which differ over land or water. For example, emissions originally emanating from the land are emitted from a grid cell for which the vertical mixing reflects the roughness and/or surface energy exchange of an ocean surface. We test this potential "dynamical inconsistency" by examining simulated global atmospheric CO2 concentration driven by two different approaches to regridding fossil fuel CO2 emissions. The two approaches are as follows: (1) a commonly used method that allocates emissions to grid cells with no attempt to ensure dynamical consistency with atmospheric transport and (2) an improved method that reallocates emissions to grid cells to ensure dynamically consistent results. Results show large spatial and temporal differences in the simulated CO2 concentration when comparing these two approaches. The emissions difference ranges from −30.3 TgC grid cell^−1 yr^−1 (−3.39 kgC m^−2 yr^−1) to +30.0 TgC grid cell^−1 yr^−1 (+2.6 kgC m^−2 yr^−1) along coastal margins. Maximum simulated annual mean CO2 concentration differences at the surface exceed ±6 ppm at various locations and times. Examination of the current CO2 monitoring locations during the local afternoon, consistent with inversion modeling system sampling and measurement protocols, finds maximum hourly differences at 38 stations exceed ±0.10 ppm with individual station differences exceeding −32 ppm. The differences implied by not accounting for this dynamical consistency problem are largest at monitoring sites proximal to large coastal urban areas and point sources. These results suggest that studies comparing simulated to observed atmospheric CO2 concentration, such as atmospheric CO2 inversions, must take measures to correct for this potential problem and ensure flux and dynamical consistency.
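
As a rough illustration of the regridding problem, the sketch below (hypothetical function and array names, not the study's code) aggregates a fine emissions grid to a coarse transport grid and reallocates land-origin emissions that would otherwise be released from ocean grid cells to the nearest coarse land cell:

```python
import numpy as np

def reallocate_coastal_emissions(emis, fine_is_land, coarse_is_land, factor):
    """Aggregate fine-grid emissions to a coarse transport grid, moving
    land-origin emissions that fall into coarse ocean cells to the nearest
    coarse land cell, so vertical mixing reflects the right surface type."""
    ny, nx = coarse_is_land.shape
    coarse = np.zeros((ny, nx))
    land_cells = np.argwhere(coarse_is_land)  # (row, col) of land cells
    for j in range(ny):
        for i in range(nx):
            block = emis[j*factor:(j+1)*factor, i*factor:(i+1)*factor]
            mask = fine_is_land[j*factor:(j+1)*factor, i*factor:(i+1)*factor]
            land_part, ocean_part = block[mask].sum(), block[~mask].sum()
            if coarse_is_land[j, i]:
                coarse[j, i] += land_part + ocean_part
            else:
                coarse[j, i] += ocean_part  # genuinely marine emissions stay
                if land_part > 0 and len(land_cells) > 0:
                    # Land-origin emissions in an ocean cell: reassign to the
                    # nearest coarse land cell for dynamical consistency.
                    d = ((land_cells - np.array([j, i])) ** 2).sum(axis=1)
                    jj, ii = land_cells[d.argmin()]
                    coarse[jj, ii] += land_part
    return coarse
```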

Contributors: Zhang, X. (Author) / Gurney, Kevin (Author) / Rayner, P. (Author) / Liu, Y. (Author) / Asefi-Najafabady, Salvi (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2013-11-30
Description

Studies on the urban heat island (UHI) span more than a century, the phenomenon having first been identified in the early 1800s. The UHI is a source of many urban environmental problems and degrades the living environment in cities. Under the challenges of increasing urbanization and future climate change, there is a pressing need for sustainable adaptation/mitigation strategies for UHI effects, one popular option being the use of reflective materials. While reflective materials have been introduced as an effective means of reducing temperature and energy consumption in cities, their impacts on environmental sustainability and their large-scale non-local effects are inadequately explored. This paper provides a synthetic overview of the potential environmental impacts of reflective materials at a variety of scales, ranging from the energy load on a single building to regional hydroclimate. The review shows that the mitigation potential of reflective materials depends on a set of factors, including building characteristics, the urban environment, and meteorological and geographical conditions. City planners and policy makers should exercise caution before large-scale deployment of reflective materials until their environmental impacts, especially on regional hydroclimates, are better understood. In general, the optimal UHI strategy should be determined on a city-by-city basis, rather than by adopting a “one-solution-fits-all” strategy.

Contributors: Yang, Jiachuan (Author) / Wang, Zhi-Hua (Author) / Kaloush, Kamil (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-07-01
Description

Land surface energy balance in the built environment is widely modelled using urban canopy models that represent building arrays as big street canyons. Modification of this simplified geometric representation, however, leads to challenging numerical difficulties in improving physical parameterization schemes that are deterministic in nature. In this paper, we develop a stochastic algorithm, based on Monte Carlo simulation, to estimate view factors between canyon facets in the presence of shade trees, where an analytical formulation is inhibited by the complex geometry. The model is validated against analytical solutions of benchmark radiative problems as well as field measurements in real street canyons. In conjunction with the matrix method resolving an infinite number of reflections, the proposed model is capable of predicting the radiative exchange inside the street canyon with good accuracy. Modeling the transient evolution of the thermal field inside the street canyon using the proposed method demonstrates the potential of shade trees for mitigating canyon surface temperatures and saving building energy. This new numerical framework also deepens our insight into the fundamental physics of radiative heat transfer and surface energy balance for urban climate modeling.
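
The core of such a Monte Carlo view-factor estimate can be sketched as follows, for a deliberately simplified geometry (a single wall facing a ground strip, with no trees or opposite wall); the function name and dimensions are illustrative, not the paper's implementation. In a full canyon model, each ray would additionally be tested for interception by tree crowns and the other facets:

```python
import numpy as np

rng = np.random.default_rng(0)

def view_factor_wall_to_ground(H=10.0, W=10.0, L=100.0, n=200_000):
    """Monte Carlo estimate of the view factor from a vertical wall
    (0<=x<=L, 0<=z<=H, at y=0) to the ground strip (0<=y<=W, z=0)."""
    x0 = rng.uniform(0, L, n)            # random emission points on the wall
    z0 = rng.uniform(0, H, n)
    # Cosine-weighted directions about the wall normal (+y): diffuse emission.
    u1, u2 = rng.uniform(size=n), rng.uniform(size=n)
    theta = np.arcsin(np.sqrt(u1))       # angle from the normal
    phi = 2 * np.pi * u2
    dx = np.sin(theta) * np.cos(phi)
    dy = np.cos(theta)                   # component along the wall normal
    dz = np.sin(theta) * np.sin(phi)
    down = dz < 0                        # only downward rays can hit ground
    t = -z0[down] / dz[down]             # ray parameter where z = 0
    xh = x0[down] + t * dx[down]
    yh = t * dy[down]
    inside = (xh >= 0) & (xh <= L) & (yh >= 0) & (yh <= W)
    return inside.sum() / n              # fraction of rays hitting the strip

print(view_factor_wall_to_ground())
```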

Contributors: Wang, Zhi-Hua (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-12-01
Description

High-resolution, global quantification of fossil fuel CO2 emissions is emerging as a critical need in carbon cycle science and climate policy. We build upon a previously developed fossil fuel data assimilation system (FFDAS) for estimating global high-resolution fossil fuel CO2 emissions. We have improved the underlying observationally based data sources, expanded the approach through treatment of separate emitting sectors including a new pointwise database of global power plants, and extended the results to cover a 1997 to 2010 time series at a spatial resolution of 0.1°. Long-term trend analysis of the resulting global emissions shows subnational spatial structure in large active economies such as the United States, China, and India. These three countries, in particular, show different long-term trends, and exploration of the trends in nighttime lights and population reveals a decoupling of population and emissions at the subnational level. Analysis of shorter-term variations reveals the impact of the 2008–2009 global financial crisis with widespread negative emission anomalies across the U.S. and Europe. We have used a center of mass (CM) calculation as a compact metric to express the time evolution of spatial patterns in fossil fuel CO2 emissions. The global emission CM has moved toward the east and somewhat south between 1997 and 2010, driven by the increase in emissions in China and South Asia over this time period. Analysis at the level of individual countries reveals per capita CO2 emission migration in both Russia and India. The per capita emission CM holds potential as a way to succinctly analyze subnational shifts in carbon intensity over time. Uncertainties are generally lower than in the previous version of FFDAS due mainly to an improved nightlight data set.
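
A minimal sketch of such a center-of-mass metric, assuming a simple emissions-weighted mean of cell-center coordinates (this ignores spherical geometry and cell-area variation, which a production calculation would need to handle):

```python
import numpy as np

def emission_center_of_mass(emis, lats, lons):
    """Emissions-weighted mean position of a gridded annual FFCO2 field.
    emis: 2-D emissions grid; lats/lons: 1-D cell-center coordinates."""
    lon_grid, lat_grid = np.meshgrid(lons, lats)
    w = emis / emis.sum()                     # normalized weights
    return (w * lat_grid).sum(), (w * lon_grid).sum()

# Tracking the CM across a 1997-2010 series of annual grids would reveal the
# eastward and somewhat southward drift described above, e.g.:
# cms = [emission_center_of_mass(e, lats, lons) for e in annual_grids]
```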

Contributors: Asefi-Najafabady, Salvi (Author) / Rayner, P. J. (Author) / Gurney, Kevin (Author) / McRobert, A. (Author) / Song, Y. (Author) / Coltin, K. (Author) / Huang, J. (Author) / Elvidge, C. (Author) / Baugh, K. (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2014-09-16
Description

Background: The Nike + Fuelband is a commercially available, wrist-worn accelerometer used to track physical activity energy expenditure (PAEE) during exercise. However, validation studies assessing the accuracy of this device for estimating PAEE are lacking. Therefore, this study examined the validity and reliability of the Nike + Fuelband for estimating PAEE during physical activity in young adults. Secondarily, we compared PAEE estimation of the Nike + Fuelband with the previously validated SenseWear Armband (SWA).

Methods: Twenty-four participants (n = 24) completed two 60-min semi-structured routines consisting of sedentary/light-intensity, moderate-intensity, and vigorous-intensity physical activity. Participants wore a Nike + Fuelband and SWA, while oxygen uptake was measured continuously with an Oxycon Mobile (OM) metabolic measurement system (criterion).

Results: The Nike + Fuelband (ICC = 0.77) and SWA (ICC = 0.61) both demonstrated moderate to good validity. PAEE estimates provided by the Nike + Fuelband (246 ± 67 kcal) and SWA (238 ± 57 kcal) were not statistically different from OM (243 ± 67 kcal). Both devices also displayed similar mean absolute percent errors for PAEE estimates (Nike + Fuelband = 16 ± 13 %; SWA = 18 ± 18 %). Test-retest reliability for PAEE indicated good stability for the Nike + Fuelband (ICC = 0.96) and SWA (ICC = 0.90).
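
For reference, the mean absolute percent error reported above can be computed as in this sketch (variable names are illustrative, not from the study's analysis code):

```python
import numpy as np

def mape(device_kcal, criterion_kcal):
    """Mean absolute percent error of device PAEE estimates relative to the
    criterion measure (indirect calorimetry), in percent."""
    device_kcal = np.asarray(device_kcal, dtype=float)
    criterion_kcal = np.asarray(criterion_kcal, dtype=float)
    return 100 * np.mean(np.abs(device_kcal - criterion_kcal) / criterion_kcal)

# e.g., mape(fuelband_estimates, oxycon_estimates) -> roughly 16 % here
```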

Conclusion: The Nike + Fuelband provided valid and reliable estimates of PAEE, similar to those of the previously validated SWA, during a routine that included approximately equal amounts of sedentary/light-, moderate-, and vigorous-intensity physical activity.

Contributors: Tucker, Wesley (Author) / Bhammar, Dharini M. (Author) / Sawyer, Brandon J. (Author) / Buman, Matthew (Author) / Gaesser, Glenn (Author) / College of Health Solutions (Contributor)
Created: 2015-06-30
Description

Background: Immunosignaturing is a new peptide microarray-based technology for profiling humoral immune responses. Despite new challenges, immunosignaturing gives us the opportunity to explore new and fundamentally different research questions. In addition to classifying samples based on disease status, the complex patterns and latent factors underlying immunosignatures, which we attempt to model, may have a diverse range of applications.

Methods: We investigate the utility of a number of statistical methods to determine model performance and address challenges inherent in analyzing immunosignatures. These methods include exploratory and confirmatory factor analyses, classical significance testing, and structural equation and mixture modeling.
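
As one illustration of the listed methods, exploratory factor analysis of an immunosignature matrix might look like the following sketch (synthetic placeholder data and an assumed number of latent factors, not values from the paper):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 500))        # placeholder: 80 sera x 500 peptides
fa = FactorAnalysis(n_components=5)   # assume 5 latent antibody factors
scores = fa.fit_transform(X)          # per-sample factor scores
loadings = fa.components_             # per-factor peptide loadings
print(scores.shape, loadings.shape)   # (80, 5), (5, 500)
```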

Results: We demonstrate an ability to classify samples based on disease status and show that immunosignaturing is a very promising technology for disease screening, including presymptomatic screening. In addition, we are able to model complex patterns and latent factors underlying immunosignatures. These latent factors may serve as biomarkers for disease and may play a key role in a bioinformatic method for antibody discovery.

Conclusion: Based on this research, we lay out an analytic framework illustrating how immunosignatures may be useful as a general method for disease screening, including presymptomatic screening, as well as for antibody discovery.

Contributors: Brown, Justin (Author) / Stafford, Phillip (Author) / Johnston, Stephen (Author) / Dinu, Valentin (Author) / College of Health Solutions (Contributor)
Created: 2011-08-19
Description

Background: Microarray image analysis processes scanned digital images of hybridized arrays to produce the input spot-level data for downstream analysis, so it can have a potentially large impact on that and all subsequent analyses. Signal saturation is an optical effect that occurs when some pixel values for highly expressed genes or peptides exceed the upper detection threshold of the scanner software (2^16 − 1 = 65,535 for 16-bit images). In practice, spots with a sizable number of saturated pixels are often flagged and discarded. Alternatively, the saturated values are used without adjustment for estimating spot intensities. The resulting expression data tend to be biased downwards and can distort high-level analyses that rely on these data. Hence, it is crucial to effectively correct for signal saturation.

Results: We developed a flexible mixture model-based segmentation and spot intensity estimation procedure that accounts for saturated pixels by incorporating a censored component in the mixture model. As demonstrated with biological data and simulation, our method extends the dynamic range of expression data beyond the saturation threshold and is effective in correcting saturation-induced bias when the lost information is not tremendous. We further illustrate the impact of image processing on downstream classification, showing that the proposed method can increase diagnostic accuracy using data from a lymphoma cancer diagnosis study.
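
The censoring idea can be sketched as follows: a saturated pixel contributes the tail probability P(X ≥ c) to its component's likelihood rather than a density value. This toy log-likelihood (illustrative names; the paper's full mixture/EM machinery is omitted) shows the key term:

```python
import numpy as np
from scipy.stats import norm

SATURATION = 2**16 - 1  # upper detection threshold for 16-bit images

def censored_loglik(pixels, mu, sigma, c=SATURATION):
    """Log-likelihood of one normal component when pixels at c are censored:
    a saturated pixel only tells us its true value is >= c."""
    pixels = np.asarray(pixels, dtype=float)
    observed = pixels < c
    ll = norm.logpdf(pixels[observed], mu, sigma).sum()
    ll += norm.logsf(c, mu, sigma) * (~observed).sum()  # P(X >= c) per pixel
    return ll
```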

Conclusions: The presented method adjusts for signal saturation at the segmentation stage, which identifies a pixel as part of the foreground, background, or other. The cluster membership of a pixel can therefore differ from what it would be if saturated values were treated as truly observed. Thus, the resulting spot intensity estimates may be more accurate than those obtained from existing methods that correct for saturation based on already-segmented data. As a model-based segmentation method, our procedure is able to identify inner holes, fuzzy edges, and blank spots that are common in microarray images. The approach is independent of microarray platform and applicable to both single- and dual-channel microarrays.

Contributors: Yang, Yan (Author) / Stafford, Phillip (Author) / Kim, YoonJoo (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2011-11-30
Description

Background: Little research has explored who responds better to an automated vs. human advisor for health behaviors in general, and for physical activity (PA) promotion in particular. The purpose of this study was to explore baseline factors (i.e., demographics, motivation, interpersonal style, and external resources) that moderate the efficacy of an intervention delivered by either a human or an automated advisor.

Methods: Data were from the CHAT Trial, a 12-month randomized controlled trial to increase PA among underactive older adults (full trial N = 218) via a human advisor or an automated interactive voice response advisor. Trial results indicated significant increases in PA in both interventions by 12 months that were maintained at 18 months. Regression was used to explore moderation of the two interventions.
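
A moderation analysis of this kind is commonly implemented as a linear model with a treatment-by-baseline-factor interaction; the sketch below (hypothetical column and file names, not the trial's analysis code) illustrates the idea with the statsmodels formula API:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("chat_trial.csv")  # hypothetical analysis file
# 'arm' codes human vs. automated advisor; a significant arm:amotivation
# coefficient would indicate that baseline amotivation moderates the effect.
model = smf.ols("pa_12mo ~ arm * amotivation + pa_baseline", data=df).fit()
print(model.summary())
```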

Results: Results indicated amotivation (i.e., lack of intent in PA) moderated 12-month PA (d = 0.55, p < 0.01) and private self-consciousness (i.e., tendency to attune to one’s own inner thoughts and emotions) moderated 18-month PA (d = 0.34, p < 0.05) but a variety of other factors (e.g., demographics) did not (p > 0.12).

Conclusions: Results provide preliminary evidence for generating hypotheses about pathways for supporting later clinical decision-making with regard to the use of human- vs. computer-delivered interventions for PA promotion.

Contributors: Hekler, Eric (Author) / Buman, Matthew (Author) / Otten, Jennifer (Author) / Castro, Cynthia (Author) / Grieco, Lauren (Author) / Marcus, Bess (Author) / Friedman, Robert H. (Author) / Napolitano, Melissa A. (Author) / King, Abby C. (Author) / College of Health Solutions (Contributor)
Created: 2013-09-22
Description

Background: High-throughput technologies such as DNA, RNA, protein, antibody, and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and others. Typically one trains a classification system by gathering large amounts of probe-level data, selecting informative features, and then classifying test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic assumptions is independence, both at the probe level and again at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways, where co-regulation of transcriptional units may make many genes appear completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable when other technologies with different binding characteristics exist. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random-sequence peptides. It relies on many-to-many binding of antibodies to the random-sequence peptides: each peptide can bind multiple antibodies and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear which classification algorithm is optimal for analyzing this new type of data.

Results: We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy.
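
A minimal sketch of the winning approach, using Gaussian Naive Bayes from scikit-learn on a synthetic placeholder matrix (this is not the study's pipeline; real immunosignature intensities and labels would replace the random data):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 1000))  # placeholder: 60 sera x 1000 peptides
y = rng.integers(0, 2, size=60)  # placeholder disease labels
# Each peptide is treated as conditionally independent given the class --
# the simplifying assumption the comparison found robust for these data.
scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(scores.mean())
```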

Conclusions: The ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties.

Contributors: Kukreja, Muskan (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created: 2012-06-21