This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

In this synthesis, we hope to accomplish two things: 1) reflect on how the analysis of the new archaeological cases presented in this special feature adds to previous case studies by revisiting a set of propositions reported in a 2006 special feature, and 2) reflect on four main ideas that are more specific to the archaeological cases: i) societal choices are influenced by robustness–vulnerability trade-offs, ii) there is interplay between robustness–vulnerability trade-offs and robustness–performance trade-offs, iii) societies often get locked in to particular strategies, and iv) multiple positive feedbacks escalate the perceived cost of societal change. We then discuss whether these lock-in traps can be prevented or whether the risks associated with them can be mitigated. We conclude by highlighting how these long-term historical studies can help us to understand current society, societal practices, and the nexus between ecology and society.

Contributors: Schoon, Michael (Author) / Fabricius, Christo (Author) / Anderies, John (Author) / Nelson, Margaret (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2011
Description

As part of an international collaboration to compare large-scale commons, we used the Social-Ecological Systems Meta-Analysis Database (SESMAD) to systematically map out attributes of and changes in the Great Barrier Reef Marine Park (GBRMP) in Australia. We focus on eight design principles from common-pool resource (CPR) theory and other key social-ecological systems governance variables, and explore to what extent they help explain the social and ecological outcomes of park management through time. Our analysis showed that commercial fisheries management and the re-zoning of the GBRMP in 2004 led to improvements in ecological condition of the reef, particularly fisheries. These boundary and rights changes were supported by effective monitoring, sanctioning and conflict resolution. Moderate biophysical connectivity was also important for improved outcomes. However, our analysis also highlighted that continued challenges to improved ecological health in terms of coral cover and biodiversity can be explained by fuzzy boundaries between land and sea, and the significance of external drivers to even large-scale social-ecological systems (SES). While ecological and institutional fit in the marine SES was high, this was not the case when considering the coastal SES. Nested governance arrangements become even more important at this larger scale. To our knowledge, our paper provides the first analysis linking the re-zoning of the GBRMP to CPR and SES theory. We discuss important challenges to coding large-scale systems for meta-analysis.

Created: 2013-11-30
Description

The Montreal Protocol is generally credited as a successful example of international cooperation in response to a global environmental problem. As a result, the production and consumption of ozone-depleting substances have declined rapidly, and it is expected that atmospheric ozone concentrations will return to their normal ranges toward the end of this century. This paper applies the social-ecological system (SES) framework and common-pool resource theory to explore the congruence between successful resolution of small-scale appropriation problems and ozone regulation, a large-scale pollution problem. The results of our analysis correspond closely to past studies of the Protocol that highlight the importance of attributes such as a limited number of major industrial producers, advances in scientific knowledge, and the availability of technological substitutes. However, in contrast to previous theoretical accounts that focus on one or a few variables, our analysis suggests that its success may have been the result of interactions between a wider range of SES attributes, many of which are associated with successful small-scale environmental governance. Although carefully noting the limitations of drawing conclusions from the analysis of a single case, our analysis reveals the potential for fruitful interplay between common-pool resource theory and large-scale pollution problems.

Contributors: Epstein, Graham (Author) / Perez Ibarra, Irene (Author) / Schoon, Michael (Author) / Meek, Chanda L. (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2013-11-30
Description

Background: Meiotic recombination has traditionally been explained based on the structural requirement to stabilize homologous chromosome pairs to ensure their proper meiotic segregation. Competing hypotheses seek to explain the emerging findings of significant heterogeneity in recombination rates within and between genomes, but intraspecific comparisons of genome-wide recombination patterns are rare. The honey bee (Apis mellifera) exhibits the highest rate of genomic recombination among multicellular animals with about five cross-over events per chromatid.

Results: Here, we present a comparative analysis of recombination rates across eight genetic linkage maps of the honey bee genome to investigate which genomic sequence features are correlated with recombination rate and with its variation across the eight data sets, whose average marker spacing ranges from 1 Mbp to 120 kbp. Overall, we found that GC content best explained the variation in local recombination rate along chromosomes at the analyzed 100 kbp scale. In contrast, variation among the different maps was correlated with the abundance of microsatellites and several specific tri- and tetra-nucleotides.
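
To illustrate the kind of windowed analysis described above (correlating per-window GC content with local recombination rate at the 100 kbp scale), here is a minimal sketch; it is not the authors' pipeline, and all values are hypothetical:

```python
# Illustrative sketch: rank correlation between per-window GC content and
# local recombination rate, one value per 100-kbp window along a chromosome.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-window inputs.
gc_content = np.array([0.32, 0.35, 0.31, 0.40, 0.38, 0.36, 0.33, 0.41])
recomb_rate = np.array([19.0, 22.5, 18.2, 27.1, 24.9, 23.0, 20.4, 28.3])  # cM/Mbp

rho, p_value = spearmanr(gc_content, recomb_rate)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```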

Conclusions: The combined evidence from eight medium-scale recombination maps of the honey bee genome suggests that recombination rate variation in this highly recombining genome might be due to the DNA configuration instead of distinct sequence motifs. However, more fine-scale analyses are needed. The empirical basis of eight differing genetic maps allowed for robust conclusions about the correlates of the local recombination rates and enabled the study of the relation between DNA features and variability in local recombination rates, which is particularly relevant in the honey bee genome with its exceptionally high recombination rate.

Contributors: Ross, Caitlin R. (Author) / DeFelice, Dominick S. (Author) / Hunt, Greg J. (Author) / Ihle, Kate (Author) / Amdam, Gro (Author) / Rueppell, Olav (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2015-02-21
Description

The purpose of the United Nations-guided process to establish Sustainable Development Goals is to galvanize governments and civil society to rise to the interlinked environmental, societal, and economic challenges we face in the Anthropocene. We argue that the process of setting Sustainable Development Goals should take three key aspects into consideration. First, it should embrace an integrated social-ecological system perspective and acknowledge the key dynamics that such systems entail, including the role of ecosystems in sustaining human wellbeing, multiple cross-scale interactions, and uncertain thresholds. Second, the process needs to address trade-offs between the ambition of goals and the feasibility in reaching them, recognizing biophysical, social, and political constraints. Third, the goal-setting exercise and the management of goal implementation need to be guided by existing knowledge about the principles, dynamics, and constraints of social change processes at all scales, from the individual to the global. Combining these three aspects will increase the chances of establishing and achieving effective Sustainable Development Goals.

Contributors: Norstrom, Albert V. (Author) / Dannenberg, Astrid (Author) / McCarney, Geoff (Author) / Milkoreit, Manjana (Author) / Diekert, Florian (Author) / Engstrom, Gustav (Author) / Fishman, Ram (Author) / Gars, Johan (Author) / Kyriakopoolou, Efthymia (Author) / Manoussi, Vassiliki (Author) / Meng, Kyle (Author) / Metian, Marc (Author) / Sanctuary, Mark (Author) / Schluter, Maja (Author) / Schoon, Michael (Author) / Schultz, Lisen (Author) / Sjostedt, Martin (Author) / Julie Ann Wrigley Global Institute of Sustainability (Contributor)
Created: 2013-11-30
Description

Perchloroethylene (PCE) is a highly utilized solvent in the dry cleaning industry because of its cleaning effectiveness and relatively low cost to consumers. According to the 2006 U.S. Census, approximately 28,000 dry cleaning operations used PCE as their principal cleaning agent. Widespread use of PCE is problematic because of its adverse impacts on human health and environmental quality. As PCE use is curtailed, effective alternatives must be analyzed for their toxicity and impacts on human health and the environment. Potential alternatives to PCE in dry cleaning include dipropylene glycol n-butyl ether (DPnB) and dipropylene glycol tert-butyl ether (DPtB), both of which appear to pose a smaller risk. To evaluate these two alternatives to PCE, we established and scored performance criteria, including chemical toxicity, employee and customer exposure levels, impacts on the general population, costs of each system, and cleaning efficacy. The scores received for PCE were 5, 5, 3, 5, 3, and 3, respectively, and DPnB and DPtB scored 3, 1, 2, 2, 4, and 4, respectively. Summing the criterion scores yielded a favorably low aggregate of “16” for both DPnB and DPtB compared with “24” for PCE. We conclude that DPnB and DPtB are preferable dry cleaning agents, exhibiting lower toxicity and lesser adverse impacts on human health and the environment than PCE, with comparable capital investment and moderately higher annual operating costs.
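
The aggregate scoring reported above reduces to simple addition; the sketch below reproduces it. The criterion labels are inferred from the abstract (order assumed), with "employee and customer exposure levels" split into two criteria to match the six scores:

```python
# Minimal sketch of the aggregate scoring: six criterion scores per agent,
# where a lower aggregate is favorable. Labels and ordering are assumptions.
criteria = ["chemical toxicity", "employee exposure", "customer exposure",
            "general population impact", "system cost", "cleaning efficacy"]

scores = {
    "PCE":  [5, 5, 3, 5, 3, 3],
    "DPnB": [3, 1, 2, 2, 4, 4],
    "DPtB": [3, 1, 2, 2, 4, 4],
}

# Reproduces the reported aggregates: 24 for PCE vs. 16 for each alternative.
for agent, vals in scores.items():
    assert len(vals) == len(criteria)
    print(f"{agent}: aggregate score = {sum(vals)}")
```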

Contributors: Hesari, Nikou (Author) / Francis, Chelsea (Author) / Halden, Rolf (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-04-03
Description

A meta-analysis was conducted to inform the epistemology, or theory of knowledge, of contaminants of emerging concern (CECs). The CEC terminology acknowledges the existence of harmful environmental agents whose identities, occurrences, hazards, and effects are not sufficiently understood. Here, data on publishing activity were analyzed for 12 CECs, revealing a common pattern of emergence, suitable for identifying past years of peak concern and forecasting future ones: dichlorodiphenyltrichloroethane (DDT; 1972, 2008), trichloroacetic acid (TCAA; 1972, 2009), nitrosodimethylamine (1984), methyl tert-butyl ether (2001), trichloroethylene (2005), perchlorate (2006), 1,4-dioxane (2009), prions (2009), triclocarban (2010), triclosan (2012), nanomaterials (by 2016), and microplastics (2022 ± 4). CECs were found to emerge from obscurity to the height of concern in 14.1 ± 3.6 years, and subside to a new baseline level of concern in 14.5 ± 4.5 years. CECs can emerge more than once (e.g., TCAA, DDT) and the multifactorial process of emergence may be driven by inception of novel scientific methods (e.g., ion chromatography, mass spectrometry and nanometrology), scientific paradigm shifts (discovery of infectious proteins), and the development, marketing and mass consumption of novel products (antimicrobial personal care products, microplastics and nanomaterials). Publishing activity and U.S. regulatory actions were correlated for several CECs investigated.
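
A minimal sketch of one way to locate a year of peak concern from annual publication counts, in the spirit of the pattern analysis described above; the paper's actual procedure may differ, and the counts below are hypothetical:

```python
# Illustrative sketch: smooth an annual publication-count series and take
# its maximum as the estimated year of peak concern for one contaminant.
import numpy as np

years = np.arange(1990, 2015)
# Hypothetical annual publication counts for a single CEC.
counts = np.array([1, 2, 2, 4, 5, 8, 12, 18, 25, 33, 41, 50, 57, 61, 60,
                   55, 48, 40, 33, 27, 22, 18, 15, 13, 12])

# 3-year moving average to damp year-to-year noise before locating the peak.
smoothed = np.convolve(counts, np.ones(3) / 3, mode="same")
peak_year = years[np.argmax(smoothed)]
print(f"Estimated year of peak concern: {peak_year}")
```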

Contributors: Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created: 2015-01-23
Description

Over the past two decades, quality has been an area of increased focus in the construction industry, and multiple models and approaches have been proposed to measure it. This paper focuses on determining the quality of one type of roofing system, sprayed polyurethane foam (SPF) roofs. Thirty-seven urethane-coated SPF roofs installed in 2005/2006 were visually inspected three times, at four, six, and seven years after installation, to measure the percentage of blisters and repairs. A repair criterion was established at the six-year mark, and roofs meeting it were reported to contractors as vulnerable. Furthermore, the relation between roof quality and four possible time-of-installation factors was examined: contractor, demographics, season, and difficulty (number of penetrations and roof size in square feet). Demographics and difficulty did not affect roof quality, whereas the contractor and the installation season did.
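
As a sketch of the kind of factor comparison described above, the snippet below contrasts blister percentages for roofs grouped by installation season using a two-sample t-test; this is not the paper's analysis, and the data are hypothetical:

```python
# Illustrative sketch: compare blister percentages between two groups of
# roofs defined by a time-of-installation factor (here, season).
from scipy.stats import ttest_ind

blister_pct_summer = [0.5, 1.2, 0.8, 2.1, 0.9, 1.5]   # % of roof area
blister_pct_winter = [3.4, 2.8, 4.1, 3.0, 5.2, 2.6]

t_stat, p_value = ttest_ind(blister_pct_summer, blister_pct_winter)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```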

Contributors: Gajjar, Dhaval (Author) / Kashiwagi, Dean (Author) / Sullivan, Kenneth (Author) / Kashiwagi, Jacob (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-04-01
Description

Nanoscale zero-valent iron (nZVI) is a strong nonspecific reducing agent that is used for in situ degradation of chlorinated solvents and other oxidized pollutants. However, there are significant concerns regarding the risks posed by the deliberate release of engineered nanomaterials into the environment, which have triggered moratoria, for example, in the United Kingdom. This critical review focuses on the effect of nZVI injection on subsurface microbial communities, which are of interest due to their important role in contaminant attenuation processes. Corrosion of ZVI stimulates dehalorespiring bacteria, due to the production of H2 that can serve as an electron donor for reduction of chlorinated contaminants. Conversely, laboratory studies show that nZVI can be inhibitory to pure bacterial cultures, although toxicity is reduced when nZVI is coated with polyelectrolytes or natural organic matter. The emerging toolkit of molecular biological analyses should enable a more sophisticated assessment of combined nZVI/biostimulation or bioaugmentation approaches. While further research on the consequences of its application for subsurface microbial communities is needed, nZVI continues to hold promise as an innovative technology for in situ remediation of pollutants. It is particularly attractive for the remediation of subsurface environments containing chlorinated ethenes because of its potential to elicit and sustain both physical–chemical and biological removal despite its documented antimicrobial properties.
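
The H2-driven biostimulation noted above rests on anaerobic iron corrosion; a standard way to write that reaction, taken from the general ZVI literature rather than from this review, is:

```latex
% Anaerobic corrosion of zero-valent iron, the H2 source that
% dehalorespiring bacteria can use as an electron donor.
\[
  \mathrm{Fe^{0} + 2\,H_{2}O \longrightarrow Fe^{2+} + H_{2} + 2\,OH^{-}}
\]
```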

Contributors: Bruton, Thomas (Author) / Pycke, Benny (Author) / Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created: 2015-06-03
Description

Widespread contamination of groundwater by chlorinated ethenes and their biological dechlorination products necessitates the reliable monitoring of liquid matrices; current methods approved by the U.S. Environmental Protection Agency (EPA) require a minimum of 5 mL of sample volume and cannot simultaneously detect all transformation products. This paper reports on the simultaneous detection of six chlorinated ethenes and ethene itself, using a liquid sample volume of 1 mL, by concentrating the compounds onto an 85-µm Carboxen-polydimethylsiloxane solid-phase microextraction fiber in 5 min with subsequent chromatographic analysis in 9.15 min. Linear increases in signal response were obtained over three orders of magnitude (∼0.05 to ∼50 µM) for simultaneous analysis, with coefficient of determination (R²) values of ≥ 0.99. The detection limits of the method (1.3–6 µg/L) were at or below the maximum contaminant levels specified by the EPA. Matrix spike studies with groundwater and mineral medium showed recovery rates of 79–108%. The utility of the method was demonstrated in lab-scale sediment flow-through columns assessing the bioremediation potential of chlorinated ethene-contaminated groundwater. Owing to its low sample volume requirements, good sensitivity, and broad target analyte range, the method is suitable for routine compliance monitoring and is particularly attractive for interpreting the bench-scale feasibility studies commonly performed during the remedial design stage of groundwater cleanup projects.
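
A minimal sketch of the two quality-control calculations mentioned above, calibration linearity (R²) over a standard series and matrix-spike recovery; the concentrations and instrument responses are hypothetical, not the paper's data:

```python
# Illustrative sketch: linear calibration fit with R^2, then spike recovery.
import numpy as np

conc = np.array([0.05, 0.5, 5.0, 50.0])                  # µM standards
response = np.array([210.0, 2050.0, 20400.0, 205000.0])  # peak areas

# Least-squares line and coefficient of determination.
slope, intercept = np.polyfit(conc, response, 1)
predicted = slope * conc + intercept
ss_res = np.sum((response - predicted) ** 2)
ss_tot = np.sum((response - response.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.4f}")

# Matrix-spike recovery: measured vs. spiked concentration.
spiked, measured = 10.0, 9.2                              # µg/L
print(f"Recovery = {100 * measured / spiked:.0f}%")
```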

Contributors: Ziv-El, Michal (Author) / Kalinowski, Tomasz (Author) / Krajmalnik-Brown, Rosa (Author) / Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created: 2014-02-01