This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

In the digital humanities, there is a constant need to turn images and PDF files into plain text to apply analyses such as topic modelling, named entity recognition, and other techniques. However, although there exist different solutions to extract text embedded in PDF files or run OCR on images, they typically require additional training (for example, scholars have to learn how to use the command line) or are difficult to automate without programming skills. The Giles Ecosystem is a distributed system based on Apache Kafka that allows users to upload documents for text and image extraction. The system components are implemented using Java and the Spring Framework and are available under an Open Source license on GitHub (https://github.com/diging/).
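The abstract includes no code, but a minimal Python sketch may help illustrate the kind of message-driven upload flow a Kafka-based extraction system like Giles implies. The topic name, message fields, and broker address below are assumptions for illustration, not the Giles API (the actual components are written in Java with the Spring Framework):

```python
# Hypothetical sketch: submit a document-extraction request to a Kafka topic
# so that OCR/text-extraction workers can process it asynchronously.
# Topic name and message schema are illustrative assumptions, not Giles code.
import json
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda m: json.dumps(m).encode("utf-8"),
)

request = {
    "documentId": "doc-0001",                 # hypothetical identifier
    "storagePath": "/uploads/doc-0001.pdf",   # hypothetical storage location
    "tasks": ["extract_embedded_text", "ocr_images"],
}

# Workers subscribed to the topic pick up the request and publish their
# results to a separate topic when extraction completes.
producer.send("extraction.requests", value=request)
producer.flush()
```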
Contributors: Lessios-Damerow, Julia (Contributor) / Peirson, Erick (Contributor) / Laubichler, Manfred (Contributor) / ASU-SFI Center for Biosocial Complex Systems (Contributor)
Created: 2017-09-28
Description

Land-use mapping is critical for global change research. In Central Arizona, U.S.A., the spatial distribution of land use is important for sustainable land management decisions. The objective of this study was to create a land-use map that serves as a model for the city of Maricopa, an expanding urban region in the Sun Corridor of Arizona. We use object-based image analysis to map six land-use types from ASTER imagery, and then compare this with two per-pixel classifications. Our results show that a single segmentation, combined with intermediary classifications and merging, morphing, and growing image-objects, can lead to an accurate land-use map that is capable of utilizing both spatial and spectral information. We also employ a moving-window diversity assessment to help with analysis and improve post-classification modifications.
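As a rough illustration of what a moving-window diversity assessment over a classified land-use raster can look like, here is a short Python sketch; the 5x5 window, the toy six-class array, and the use of Shannon diversity are assumptions for illustration, not the authors' implementation:

```python
# Hypothetical sketch: Shannon diversity of land-use classes within a moving
# window over a classified raster. The toy array and 5x5 window are
# illustrative assumptions only.
import numpy as np
from scipy.ndimage import generic_filter

def shannon_diversity(window_values):
    # H = -sum(p_i * ln p_i) over the class proportions in the window.
    _, counts = np.unique(window_values, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
land_use = rng.integers(1, 7, size=(50, 50))   # toy map with six classes

# 5x5 moving window; edge cells use nearest-neighbor padding.
diversity = generic_filter(land_use.astype(float), shannon_diversity,
                           size=5, mode="nearest")
print(diversity.min(), diversity.max())
```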

Contributors: Galletti, Christopher (Author) / Myint, Soe (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2014-07-01
Description

A globally integrated carbon observation and analysis system is needed to improve the fundamental understanding of the global carbon cycle, to improve our ability to project future changes, and to verify the effectiveness of policies aiming to reduce greenhouse gas emissions and increase carbon sequestration. Building an integrated carbon observation system requires transformational advances from the existing sparse, exploratory framework towards a dense, robust, and sustained system in all components: anthropogenic emissions, the atmosphere, the ocean, and the terrestrial biosphere. The paper is addressed to scientists, policymakers, and funding agencies who need to have a global picture of the current state of the (diverse) carbon observations.

We identify the current state of carbon observations, and the needs and notional requirements for a global integrated carbon observation system that can be built in the next decade. A key conclusion is the substantial expansion of the ground-based observation networks required to reach the high spatial resolution for CO2 and CH4 fluxes, and for carbon stocks for addressing policy-relevant objectives, and attributing flux changes to underlying processes in each region. In order to establish flux and stock diagnostics over areas such as the southern oceans, tropical forests, and the Arctic, in situ observations will have to be complemented with remote-sensing measurements. Remote sensing offers the advantage of dense spatial coverage and frequent revisit. A key challenge is to bring remote-sensing measurements to a level of long-term consistency and accuracy so that they can be efficiently combined in models to reduce uncertainties, in synergy with ground-based data.

Bringing tight observational constraints on fossil fuel and land use change emissions will be the biggest challenge for deployment of a policy-relevant integrated carbon observation system. This will require in situ and remotely sensed data at much higher resolution and density than currently achieved for natural fluxes, although over a small land area (cities, industrial sites, power plants), as well as the inclusion of fossil fuel CO2 proxy measurements such as radiocarbon in CO2 and carbon-fuel combustion tracers. Additionally, a policy-relevant carbon monitoring system should also provide mechanisms for reconciling regional top-down (atmosphere-based) and bottom-up (surface-based) flux estimates across the range of spatial and temporal scales relevant to mitigation policies. In addition, uncertainties for each observation data-stream should be assessed. The success of the system will rely on long-term commitments to monitoring, on improved international collaboration to fill gaps in the current observations, on sustained efforts to improve access to the different data streams and make databases interoperable, and on the calibration of each component of the system to agreed-upon international scales.

Contributors: Ciais, P. (Author) / Dolman, A. J. (Author) / Bombelli, A. (Author) / Duren, R. (Author) / Peregon, A. (Author) / Rayner, P. J. (Author) / Miller, C. (Author) / Gobron, N. (Author) / Kinderman, G. (Author) / Marland, G. (Author) / Gruber, N. (Author) / Chevallier, F. (Author) / Andres, R. J. (Author) / Balsamo, G. (Author) / Bopp, L. (Author) / Breon, F. -M. (Author) / Broquet, G. (Author) / Dargaville, R. (Author) / Battin, T. J. (Author) / Borges, A. (Author) / Bovensmann, H. (Author) / Buchwitz, M. (Author) / Butler, J. (Author) / Canadell, J. G. (Author) / Cook, R. B. (Author) / DeFries, R. (Author) / Engelen, R. (Author) / Gurney, Kevin (Author) / Heinze, C. (Author) / Heimann, M. (Author) / Held, A. (Author) / Henry, M. (Author) / Law, B. (Author) / Luyssaert, S. (Author) / Miller, J. (Author) / Moriyama, T. (Author) / Moulin, C. (Author) / Myneni, R. (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2013-11-30
Description

Errors in the specification or utilization of fossil fuel CO2 emissions within carbon budget or atmospheric CO2 inverse studies can alias the estimation of biospheric and oceanic carbon exchange. A key component in the simulation of CO2 concentrations arising from fossil fuel emissions is the spatial distribution of the emission near coastlines. Regridding of fossil fuel CO2 emissions (FFCO2) from fine to coarse grids to enable atmospheric transport simulations can give rise to mismatches between the emissions and simulated atmospheric dynamics which differ over land or water. For example, emissions originally emanating from the land are emitted from a grid cell for which the vertical mixing reflects the roughness and/or surface energy exchange of an ocean surface. We test this potential "dynamical inconsistency" by examining simulated global atmospheric CO2 concentration driven by two different approaches to regridding fossil fuel CO2 emissions. The two approaches are as follows: (1) a commonly used method that allocates emissions to grid cells with no attempt to ensure dynamical consistency with atmospheric transport and (2) an improved method that reallocates emissions to grid cells to ensure dynamically consistent results. Results show large spatial and temporal differences in the simulated CO2 concentration when comparing these two approaches. The emissions difference ranges from −30.3 TgC grid cell⁻¹ yr⁻¹ (−3.39 kgC m⁻² yr⁻¹) to +30.0 TgC grid cell⁻¹ yr⁻¹ (+2.6 kgC m⁻² yr⁻¹) along coastal margins. Maximum simulated annual mean CO2 concentration differences at the surface exceed ±6 ppm at various locations and times. Examination of the current CO2 monitoring locations during the local afternoon, consistent with inversion modeling system sampling and measurement protocols, finds maximum hourly differences at 38 stations exceed ±0.10 ppm with individual station differences exceeding −32 ppm. The differences implied by not accounting for this dynamical consistency problem are largest at monitoring sites proximal to large coastal urban areas and point sources. These results suggest that studies comparing simulated to observed atmospheric CO2 concentration, such as atmospheric CO2 inversions, must take measures to correct for this potential problem and ensure flux and dynamical consistency.
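A simplified sketch of the core idea behind the second (dynamically consistent) approach, reallocating emissions that land in water-classified coarse cells to the nearest land cell, is shown below; the toy grid, land mask, and nearest-neighbor rule are assumptions for illustration and not the study's actual regridding procedure:

```python
# Hypothetical sketch: move FFCO2 emissions that fall into water-classified
# coarse grid cells to the nearest land cell, so the simulated vertical mixing
# matches the surface type the emissions actually came from.
# The grid, land mask, and distance rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
emissions = rng.random((10, 10))           # coarse-grid emissions (arbitrary units)
land_mask = rng.random((10, 10)) > 0.3     # True where the coarse cell is land

land_cells = np.argwhere(land_mask)
reallocated = np.where(land_mask, emissions, 0.0)

for i, j in np.argwhere(~land_mask):
    # Find the nearest land cell (Manhattan distance) and move the emissions there.
    distances = np.abs(land_cells - [i, j]).sum(axis=1)
    ni, nj = land_cells[distances.argmin()]
    reallocated[ni, nj] += emissions[i, j]

# Total emissions are conserved; only their placement on the grid changes.
assert np.isclose(reallocated.sum(), emissions.sum())
```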

Contributors: Zhang, X. (Author) / Gurney, Kevin (Author) / Rayner, P. (Author) / Liu, Y. (Author) / Asefi-Najafabady, Salvi (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2013-11-30
Description

Perchloroethylene (PCE) is a highly utilized solvent in the dry cleaning industry because of its cleaning effectiveness and relatively low cost to consumers. According to the 2006 U.S. Census, approximately 28,000 dry cleaning operations used PCE as their principal cleaning agent. Widespread use of PCE is problematic because of its adverse impacts on human health and environmental quality. As PCE use is curtailed, effective alternatives must be analyzed for their toxicity and impacts to human health and the environment. Potential alternatives to PCE in dry cleaning include dipropylene glycol n-butyl ether (DPnB) and dipropylene glycol tert-butyl ether (DPtB), both of which promise to pose a comparatively lower risk. To evaluate these two alternatives to PCE, we established and scored performance criteria, including chemical toxicity, employee and customer exposure levels, impacts on the general population, costs of each system, and cleaning efficacy. The scores received for PCE were 5, 5, 3, 5, 3, and 3, respectively, and DPnB and DPtB scored 3, 1, 2, 2, 4, and 4, respectively. An aggregate sum of the performance criteria yielded a favorably low score of “16” for both DPnB and DPtB compared to “24” for PCE. We conclude that DPnB and DPtB are preferable dry cleaning agents, exhibiting reduced human toxicity and a lesser adverse impact on human health and the environment compared to PCE, with comparable capital investments and moderately higher annual operating costs.
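The aggregation reported above is a straightforward sum of the six criterion scores, which a few lines of Python can verify (the dictionary below simply restates the scores quoted in the abstract; lower totals indicate a more favorable profile):

```python
# Reproducing the aggregate scores quoted in the abstract (lower is better).
# Scores follow the order of the criteria listed above.
scores = {
    "PCE":  [5, 5, 3, 5, 3, 3],
    "DPnB": [3, 1, 2, 2, 4, 4],
    "DPtB": [3, 1, 2, 2, 4, 4],
}
for agent, values in scores.items():
    print(agent, sum(values))   # PCE -> 24, DPnB and DPtB -> 16
```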

Contributors: Hesari, Nikou (Author) / Francis, Chelsea (Author) / Halden, Rolf (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-04-03
Description

A meta-analysis was conducted to inform the epistemology, or theory of knowledge, of contaminants of emerging concern (CECs). The CEC terminology acknowledges the existence of harmful environmental agents whose identities, occurrences, hazards, and effects are not sufficiently understood. Here, data on publishing activity were analyzed for 12 CECs, revealing a common pattern of emergence, suitable for identifying past years of peak concern and forecasting future ones: dichlorodiphenyltrichloroethane (DDT; 1972, 2008), trichloroacetic acid (TCAA; 1972, 2009), nitrosodimethylamine (1984), methyl tert-butyl ether (2001), trichloroethylene (2005), perchlorate (2006), 1,4-dioxane (2009), prions (2009), triclocarban (2010), triclosan (2012), nanomaterials (by 2016), and microplastics (2022 ± 4). CECs were found to emerge from obscurity to the height of concern in 14.1 ± 3.6 years, and subside to a new baseline level of concern in 14.5 ± 4.5 years. CECs can emerge more than once (e.g., TCAA, DDT) and the multifactorial process of emergence may be driven by inception of novel scientific methods (e.g., ion chromatography, mass spectrometry and nanometrology), scientific paradigm shifts (discovery of infectious proteins), and the development, marketing and mass consumption of novel products (antimicrobial personal care products, microplastics and nanomaterials). Publishing activity and U.S. regulatory actions were correlated for several CECs investigated.
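One plausible way to estimate a year of peak concern from annual publication counts is to smooth the series and take its maximum; the sketch below uses synthetic counts purely to show the mechanics and is not the authors' fitting procedure:

```python
# Hypothetical sketch: locate the peak year of publishing activity for a CEC
# by smoothing annual publication counts. The counts are synthetic stand-ins,
# not real bibliometric data, and the smoothing choice is an assumption.
import numpy as np

years = np.arange(1990, 2016)
# Synthetic bell-shaped publication activity peaking near 2005.
counts = np.round(200 * np.exp(-((years - 2005) ** 2) / (2 * 4.0 ** 2)))

# Three-year centered moving average to damp year-to-year noise.
smoothed = np.convolve(counts, np.ones(3) / 3, mode="same")

peak_year = years[int(np.argmax(smoothed))]
print("Estimated year of peak concern:", peak_year)   # 2005 for this toy series
```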

Contributors: Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created: 2015-01-23
Description

Nanoscale zero-valent iron (nZVI) is a strong nonspecific reducing agent that is used for in situ degradation of chlorinated solvents and other oxidized pollutants. However, there are significant concerns regarding the risks posed by the deliberate release of engineered nanomaterials into the environment, which have triggered moratoria, for example, in the United Kingdom. This critical review focuses on the effect of nZVI injection on subsurface microbial communities, which are of interest due to their important role in contaminant attenuation processes. Corrosion of ZVI stimulates dehalorespiring bacteria, due to the production of H2 that can serve as an electron donor for reduction of chlorinated contaminants. Conversely, laboratory studies show that nZVI can be inhibitory to pure bacterial cultures, although toxicity is reduced when nZVI is coated with polyelectrolytes or natural organic matter. The emerging toolkit of molecular biological analyses should enable a more sophisticated assessment of combined nZVI/biostimulation or bioaugmentation approaches. While further research on the consequences of its application for subsurface microbial communities is needed, nZVI continues to hold promise as an innovative technology for in situ remediation of pollutants. It is particularly attractive for the remediation of subsurface environments containing chlorinated ethenes because of its ability to potentially elicit and sustain both physical–chemical and biological removal, despite its documented antimicrobial properties.

Contributors: Bruton, Thomas (Author) / Pycke, Benny (Author) / Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created: 2015-06-03
Description

What's a profession without a code of ethics? Being a legitimate profession almost requires drafting a code and, at least nominally, making members follow it. Codes of ethics (henceforth “codes”) exist for a number of reasons, many of which can vary widely from profession to profession - but above all they are a form of codified self-regulation. While codes can be beneficial, this article argues that when we scratch below the surface, there are many problems at their root. In terms of efficacy, codes can serve as a form of ethical window dressing rather than effective rules for behavior. But even more than that, codes can degrade the meaning behind being a good person who acts ethically for the right reasons.

Created: 2013-11-30
Description

Widespread contamination of groundwater by chlorinated ethenes and their biological dechlorination products necessitates the reliable monitoring of liquid matrices; current methods approved by the U.S. Environmental Protection Agency (EPA) require a minimum of 5 mL of sample volume and cannot simultaneously detect all transformative products. This paper reports on the simultaneous detection of six chlorinated ethenes and ethene itself, using a liquid sample volume of 1 mL by concentrating the compounds onto an 85-µm carboxen-polydimethylsiloxane solid-phase microextraction fiber in 5 min and subsequent chromatographic analysis in 9.15 min. Linear increases in signal response were obtained over three orders of magnitude (∼0.05 to ∼50 µM) for simultaneous analysis with coefficient of determination (R²) values of ≥ 0.99. The detection limits of the method (1.3–6 µg/L) were at or below the maximum contaminant levels specified by the EPA. Matrix spike studies with groundwater and mineral medium showed recovery rates between 79–108%. The utility of the method was demonstrated in lab-scale sediment flow-through columns assessing the bioremediation potential of chlorinated ethene-contaminated groundwater. Owing to its low sample volume requirements, good sensitivity, and broad target analyte range, the method is suitable for routine compliance monitoring and is particularly attractive for interpreting the bench-scale feasibility studies that are commonly performed during the remedial design stage of groundwater cleanup projects.
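As an illustration of the calibration check behind a linearity claim like the one above (R² ≥ 0.99 across roughly three orders of magnitude), a short Python sketch with synthetic signal values follows; the concentrations echo the abstract's range, but the responses are made-up stand-ins, not measured data:

```python
# Hypothetical calibration-curve check: fit signal response against
# concentration and compute R^2. The signal values are synthetic stand-ins.
import numpy as np

conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0])             # uM
signal = 1000.0 * conc + np.array([2, -3, 10, -15, 60, -80, 300])    # synthetic response

slope, intercept = np.polyfit(conc, signal, 1)
predicted = slope * conc + intercept
ss_res = np.sum((signal - predicted) ** 2)
ss_tot = np.sum((signal - signal.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"slope={slope:.1f}, intercept={intercept:.1f}, R^2={r_squared:.4f}")
```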

Contributors: Ziv-El, Michal (Author) / Kalinowski, Tomasz (Author) / Krajmalnik-Brown, Rosa (Author) / Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created: 2014-02-01
Description

Aquaculture production has nearly tripled in the last two decades, bringing with it a significant increase in the use of antibiotics. Using liquid chromatography/tandem mass spectrometry (LC–MS/MS), the presence of 47 antibiotics was investigated in U.S. purchased shrimp, salmon, catfish, trout, tilapia, and swai originating from 11 different countries. All samples (n = 27) complied with U.S. FDA regulations and five antibiotics were detected above the limits of detection: oxytetracycline (in wild shrimp, 7.7 ng/g of fresh weight; farmed tilapia, 2.7; farmed salmon, 8.6; farmed trout with spinal deformities, 3.9), 4-epioxytetracycline (farmed salmon, 4.1), sulfadimethoxine (farmed shrimp, 0.3), ormetoprim (farmed salmon, 0.5), and virginiamycin (farmed salmon marketed as antibiotic-free, 5.2). A literature review showed that sub-regulatory levels of antibiotics, as found here, can promote resistance development; publications linking aquaculture to this have increased more than 8-fold from 1991 to 2013. Although this study was limited in size and employed sample pooling, it represents the largest reconnaissance of antibiotics in U.S. seafood to date, providing data on previously unmonitored antibiotics and on farmed trout with spinal deformities. Results indicate low levels of antibiotic residues and general compliance with U.S. regulations. The potential for development of microbial drug resistance was identified as a key concern and research priority.

Contributors: Done, Hansa (Author) / Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created: 2015-01-23