This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Displaying 1 - 10 of 43

Description

The evolution of cooperation is a fundamental problem in biology, especially for non-relatives, where indirect fitness benefits cannot counter within-group inequalities. Multilevel selection models show how cooperation can evolve if it generates a group-level advantage, even when cooperators are disadvantaged within their group. This allows the possibility of group selection, but few examples have been described in nature. Here we show that group selection can explain the evolution of cooperative nest founding in the harvester ant Pogonomyrmex californicus. Through most of this species’ range, colonies are founded by single queens, but in some populations nests are instead founded by cooperative groups of unrelated queens. In mixed groups of cooperative and single-founding queens, we found that aggressive individuals had a survival advantage within their nest, but foundress groups with such non-cooperators died out more often than those with only cooperative members. An agent-based model shows that the between-group advantage of the cooperative phenotype drives it to fixation, despite its within-group disadvantage, but only when population density is high enough to make between-group competition intense. Field data show higher nest density in a population where cooperative founding is common, consistent with greater density driving the evolution of cooperative foundation through group selection.
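The within-group disadvantage and between-group advantage described in this abstract follow the logic of a public-goods game. The minimal sketch below (the benefit and cost parameters are illustrative assumptions, not values from the paper) shows how defectors out-earn cooperators inside a mixed group while an all-cooperator group out-produces the mixed one:

```python
def group_payoffs(group, b=3.0, c=1.0):
    """Public-goods payoffs: each cooperator (1) pays cost c; the summed
    contributions are multiplied by b and shared equally by all members."""
    pot = b * sum(group)
    share = pot / len(group)
    return [share - c * g for g in group]

# Within a mixed group, defectors (0) out-earn cooperators (1)...
mixed_pay = group_payoffs([1, 1, 0, 0])          # [0.5, 0.5, 1.5, 1.5]
# ...yet an all-cooperator group out-produces the mixed group in total:
# the group-level advantage that multilevel selection can act on.
coop_total = sum(group_payoffs([1, 1, 1, 1]))    # 8.0
mixed_total = sum(mixed_pay)                     # 4.0
```

This is the Simpson's-paradox structure of multilevel selection: selection within groups favors defection, while selection between groups favors cooperation.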

ContributorsShaffer, Zachary (Author) / Sasaki, Takao (Author) / Haney, Brian (Author) / Janssen, Marco (Author) / Pratt, Stephen (Author) / Fewell, Jennifer (Author) / College of Liberal Arts and Sciences (Contributor)
Created2016-07-28
Description

Human societies are unique in the level of cooperation among non-kin. Evolutionary models explaining this behavior typically assume pure strategies of cooperation and defection. Behavioral experiments, however, demonstrate that humans are typically conditional co-operators who have other-regarding preferences. Building on existing models of the evolution of cooperation and costly punishment, we use a utilitarian formulation of agent decision making to explore conditions that support the emergence of cooperative behavior. Our results indicate that cooperation levels are significantly lower for larger groups than in the original pure-strategy model. Here, defection behavior not only diminishes the public good but also lowers the expectations of group members, leading conditional co-operators to change their strategies. Hence defection has a more damaging effect when decisions are based on expectations and not only on pure strategies.
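The expectation-driven strategy switching described above can be sketched as follows; the threshold and learning-rate values are illustrative assumptions, not the paper's utilitarian formulation:

```python
def conditional_decision(expected_coop_rate, threshold=0.5):
    """A conditional co-operator contributes (1) only when it expects
    enough of the group to cooperate; otherwise it defects (0)."""
    return 1 if expected_coop_rate >= threshold else 0

def update_expectation(prev, observed_rate, learning=0.5):
    """Expectations adjust toward the cooperation rate actually observed,
    so a single defector drags down everyone's future contributions."""
    return prev + learning * (observed_rate - prev)

expectation = 0.8
decision = conditional_decision(expectation)         # still contributes
expectation = update_expectation(expectation, 0.75)  # revised downward
```

Iterating these two steps in a larger group shows the unraveling the abstract describes: each observed defection lowers expectations, which converts further conditional co-operators into defectors.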

Created2014-07-01
Description

On-going efforts to understand the dynamics of coupled social-ecological (or, more broadly, coupled infrastructure) systems and common pool resources have led to the generation of numerous datasets based on a large number of case studies. These data have facilitated the identification of important factors and fundamental principles that increase our understanding of such complex systems. However, the data at our disposal are often not easily comparable, have limited scope and scale, and are based on disparate underlying frameworks, inhibiting synthesis, meta-analysis, and the validation of findings. Research efforts are further hampered when case inclusion criteria, variable definitions, coding schema, and inter-coder reliability testing are not made explicit in the presentation of research and shared among the research community. This paper first outlines challenges experienced by researchers engaged in a large-scale coding project; then highlights valuable lessons learned; and finally discusses opportunities for further research on comparative case study analysis focusing on social-ecological systems and common pool resources. Includes supplemental materials and appendices published in the International Journal of the Commons 2016 Special Issue. Volume 10 - Issue 2 - 2016.
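Inter-coder reliability testing, which the abstract flags as often left implicit, is commonly quantified with Cohen's kappa. This is a generic stdlib sketch of that statistic, not code from the study:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement between two coders,
    corrected for the agreement expected by chance."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two coders agree on 3 of 4 cases; chance agreement here is 0.5,
# so kappa credits only the agreement beyond chance.
kappa = cohens_kappa([1, 1, 0, 0], [1, 1, 0, 1])   # 0.5
```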

ContributorsRatajczyk, Elicia (Author) / Brady, Ute (Author) / Baggio, Jacopo (Author) / Barnett, Allain J. (Author) / Perez Ibarra, Irene (Author) / Rollins, Nathan (Author) / Rubinos, Cathy (Author) / Shin, Hoon Cheol (Author) / Yu, David (Author) / Aggarwal, Rimjhim (Author) / Anderies, John (Author) / Janssen, Marco (Author) / ASU-SFI Center for Biosocial Complex Systems (Contributor)
Created2016-09-09
Description

Governing common pool resources (CPR) in the face of disturbances such as globalization and climate change is challenging. The outcome of any CPR governance regime is influenced by local combinations of social, institutional, and biophysical factors, as well as cross-scale interdependencies. In this study, we take a step towards understanding the multiple causation of CPR outcomes by analyzing 1) the co-occurrence of Design Principles (DP) by activity (irrigation, fishery, and forestry), and 2) the combination(s) of DPs leading to social and ecological success. We analyzed 69 cases pertaining to three different activities: irrigation, fishery, and forestry. We find that the importance of the design principles depends upon the natural and hard human-made infrastructure (e.g., canals, equipment, vessels). For example, clearly defined social boundaries are important when the natural infrastructure is highly mobile (e.g., tuna), while monitoring is more important when the natural infrastructure is more static (e.g., forests or water contained within an irrigation system). However, we also find that congruence between local conditions and rules, and proportionality between investment and extraction, are key for CPR success independent of the natural and hard human-made infrastructure. We further provide new visualization techniques for co-occurrence patterns and add to qualitative comparative analysis by introducing a reliability metric to deal with a large meta-analysis dataset based on secondary data where information is missing or uncertain.
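Co-occurrence of design principles across coded cases can be tallied as below; the DP labels are hypothetical placeholders, not the study's actual coding scheme:

```python
from itertools import combinations

def dp_cooccurrence(cases):
    """Count how often pairs of design principles appear together
    across coded cases (each case is the set of DPs observed in it)."""
    counts = {}
    for dps in cases:
        for pair in combinations(sorted(dps), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return counts

# Three toy cases coded with hypothetical DP labels.
cases = [{"1A", "2A", "4A"}, {"1A", "2A"}, {"2A", "4A"}]
pairs = dp_cooccurrence(cases)   # ("1A","2A") and ("2A","4A") co-occur twice
```

Such a pair-count table is the raw material for the co-occurrence visualizations the abstract mentions, and can be computed separately per activity (irrigation, fishery, forestry) to compare patterns.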

Includes supplemental materials and appendices published in the International Journal of the Commons 2016 Special Issue. Volume 10 - Issue 2 - 2016.

ContributorsBaggio, Jacopo (Author) / Barnett, Alain J. (Author) / Perez, Irene (Author) / Brady, Ute (Author) / Ratajczyk, Elicia (Author) / Rollins, Nathan (Author) / Rubinos, Cathy (Author) / Shin, Hoon Cheol (Author) / Yu, David (Author) / Aggarwal, Rimjhim (Author) / Anderies, John (Author) / Janssen, Marco (Author) / Julie Ann Wrigley Global Institute of Sustainability (Contributor)
Created2016-09-09
Description

A relatively unexplored issue in cybersecurity science and engineering is whether there exist intrinsic patterns of cyberattacks. Conventional wisdom favors the absence of such patterns due to the overwhelming complexity of the modern cyberspace. Surprisingly, through a detailed analysis of an extensive data set that records the time-dependent frequencies of attacks over a relatively wide range of consecutive IP addresses, we successfully uncover intrinsic spatiotemporal patterns underlying cyberattacks, where the term “spatio” refers to the IP address space. In particular, we focus on analyzing macroscopic properties of the attack traffic flows and identify two main patterns with distinct spatiotemporal characteristics: deterministic and stochastic. Strikingly, a very small number of major attackers commit almost all the attacks; their attack “fingerprints” and target-selection schemes can be unequivocally identified from the limited number of unique spatiotemporal characteristics, each of which exists only on a consecutive IP region and differs significantly from the others. We utilize a number of quantitative measures, including the flux-fluctuation law, the Markov state transition probability matrix, and predictability measures, to characterize the attack patterns in a comprehensive manner. A general finding is that the attack patterns possess high degrees of predictability, potentially paving the way to anticipating and, consequently, mitigating or even preventing large-scale cyberattacks using macroscopic approaches.
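A Markov state transition probability matrix of the kind the abstract uses can be estimated from a discretised attack-rate series. This is a generic sketch of the estimator, not the authors' analysis pipeline:

```python
def transition_matrix(states):
    """Estimate a Markov state transition probability matrix from a
    sequence of discretised states (e.g., binned attack rates)."""
    labels = sorted(set(states))
    idx = {s: i for i, s in enumerate(labels)}
    counts = [[0] * len(labels) for _ in labels]
    for a, b in zip(states, states[1:]):
        counts[idx[a]][idx[b]] += 1
    probs = []
    for row in counts:
        total = sum(row) or 1      # guard against states with no exits
        probs.append([c / total for c in row])
    return probs, labels

probs, labels = transition_matrix(["low", "low", "high", "low", "high", "high"])
# From "high" the chain moves to either state with probability 0.5;
# from "low" it moves to "high" two times out of three.
```

Strongly peaked rows in such a matrix are one signature of the high predictability the abstract reports; near-uniform rows correspond to the stochastic pattern.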

ContributorsChen, Yu-Zhong (Author) / Huang, Zi-Gang (Author) / Xu, Shouhuai (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2015-05-20
Description

Supply-demand processes take place on a large variety of real-world networked systems, ranging from power grids and the internet to social networking and urban systems. In a modern infrastructure, supply-demand systems are constantly expanding, leading to a constant increase in the load requirement for resources and, consequently, to problems such as low efficiency, resource scarcity, and partial system failures. Under certain conditions, a global catastrophe on the scale of the whole system can occur through the dynamical process of cascading failures. We investigate the optimization and resilience of time-varying supply-demand systems by constructing network models of such systems, in which resources are transported from supplier sites to users through various links. Here, by optimization we mean minimization of the maximum load on links, and system resilience can be characterized by the size of the cascading failure, i.e., the number of users who fail to connect with suppliers.

We consider two representative classes of supply schemes: load-driven supply and fixed-fraction supply. Our findings are: (1) optimized systems are more robust, since relatively smaller cascading failures occur when triggered by external perturbation to the links; (2) a large fraction of links can be free of load if resources are directed to transport through the shortest paths; (3) redundant links can help to reroute traffic but may undesirably transmit failures and enlarge the failure size of the system; (4) the patterns of cascading failures depend strongly upon the capacity of links; (5) the specific location of the trigger determines the specific route of a cascading failure but has little effect on the final cascade size; (6) system expansion typically reduces efficiency; and (7) when the locations of the suppliers are optimized over a long expansion period, fewer suppliers are required. These results hold for heterogeneous networks in general, providing insights into designing optimal and resilient complex supply-demand systems that expand constantly in time.
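The load-redistribution mechanism behind cascading failures can be sketched on a set of parallel links; the uniform-redistribution rule and the capacity values below are simplifying assumptions, not the paper's network model:

```python
def cascade(loads, capacity, trigger):
    """Fail one link, spread its load equally over the surviving links,
    and let any link pushed past capacity fail in turn.
    Returns the set of failed link indices."""
    loads = list(loads)
    failed = {trigger}
    frontier = [trigger]
    while frontier:
        shed = sum(loads[i] for i in frontier)
        for i in frontier:
            loads[i] = 0.0
        alive = [i for i in range(len(loads)) if i not in failed]
        if not alive:
            break
        for i in alive:
            loads[i] += shed / len(alive)
        frontier = [i for i in alive if loads[i] > capacity]
        failed.update(frontier)
    return failed

# Ample capacity contains the failure; tight capacity lets it sweep
# the whole system: the "global catastrophe" regime (finding 4).
contained = cascade([0.6, 0.6, 0.6, 0.6], capacity=1.0, trigger=0)  # only link 0
collapsed = cascade([0.6, 0.6, 0.6, 0.6], capacity=0.7, trigger=0)  # all four
```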

ContributorsZhang, Si-Ping (Author) / Huang, Zi-Gang (Author) / Dong, Jia-Qi (Author) / Eisenberg, Daniel (Author) / Seager, Thomas (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2015-06-23
Description

The estimation of energy demand (by power plants) has traditionally relied on historical energy use data for the region(s) that a plant serves. Regression analysis, artificial neural networks, and Bayesian theory are the most common approaches for analyzing these data. Such data and techniques do not generate reliable results. Consequently, excess energy has to be generated to prevent blackouts, the causes of energy surges are not easily determined, and the potential energy use reduction from energy efficiency solutions is usually not translated into actual energy use reduction. The paper highlights the weaknesses of traditional techniques and lays out a framework to improve the prediction of energy demand by combining energy use models of equipment, physical systems, and buildings with the proposed data mining algorithms for reverse engineering. The research team first analyzes data samples from large, complex energy datasets and then presents a set of computationally efficient data mining algorithms for reverse engineering. To develop a structural system model for reverse engineering, two focus groups are formed that have a direct relation to the cause and effect variables. The research findings of this paper include testing different sets of reverse engineering algorithms, understanding their output patterns, and modifying the algorithms to improve the accuracy of the outputs.
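A regression baseline of the kind the abstract critiques can be as simple as an ordinary least-squares fit to historical data. This stdlib sketch illustrates the traditional technique, not the proposed reverse-engineering algorithms:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x to historical data:
    the kind of regression baseline discussed above."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Demand growing linearly with some driver (say, degree-days) is
# recovered exactly; real consumption data rarely behave this well,
# which is the unreliability the abstract points to.
a, b = fit_line([1, 2, 3], [2, 4, 6])   # a == 0.0, b == 2.0
```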

ContributorsNaganathan, Hariharan (Author) / Chong, Oswald (Author) / Ye, Long (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2015-12-09
Description

Small and medium office buildings account for a significant share of the U.S. building stock's energy consumption. Still, owners lack the resources and experience to conduct detailed energy audits and retrofit analyses. We present an eight-step framework for an energy retrofit assessment in small and medium office buildings. Through a bottom-up approach and a web-based retrofit toolkit tested on a case study in Arizona, this methodology was able to save about 50% of the total energy consumed by the case study building, depending on the adopted measures and invested capital. While the case study presented is a deep energy retrofit, the proposed framework is effective in guiding the decision-making process that precedes any energy retrofit, deep or light.
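A common first-pass screen in a retrofit assessment is a simple-payback ranking of candidate measures. The measure names and figures below are hypothetical, and this sketch is not the web-based toolkit itself:

```python
def rank_by_payback(measures):
    """Rank candidate retrofit measures by simple payback:
    capital cost divided by annual savings (shorter is better)."""
    return sorted(measures, key=lambda m: m["cost"] / m["annual_savings"])

# Hypothetical measures for illustration only.
measures = [
    {"name": "HVAC upgrade", "cost": 20000, "annual_savings": 4000},  # 5 yr
    {"name": "LED lighting", "cost": 1000, "annual_savings": 500},    # 2 yr
]
ranked = rank_by_payback(measures)   # LED lighting screens in first
```

Simple payback ignores discounting and measure interactions, which is why a full framework like the one above layers audits and bottom-up modeling on top of such a screen.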

ContributorsRios, Fernanda (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2016-05-20
Description

Commercial buildings’ consumption is driven by multiple factors that include occupancy, system and equipment efficiency, thermal heat transfer, equipment plug loads, maintenance and operational procedures, and outdoor and indoor temperatures. A modern building energy system can be viewed as a complex dynamical system that is interconnected and influenced by external and internal factors. Modern large-scale sensor networks measure physical signals to monitor real-time system behavior. Such data have the potential to detect anomalies, identify consumption patterns, and analyze peak loads. The paper proposes a novel method to detect hidden anomalies in commercial building energy consumption systems. The framework is based on the Hilbert-Huang transform and instantaneous frequency analysis. The objective is to develop an automated data pre-processing system that can detect anomalies and provide solutions on a real-time consumption database using the Ensemble Empirical Mode Decomposition (EEMD) method. The findings of this paper also include comparisons of Empirical Mode Decomposition (EMD) and Ensemble Empirical Mode Decomposition (EEMD) for three important types of institutional buildings.
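As a lightweight stand-in for the EEMD-based screening described above (not the Hilbert-Huang method itself), anomalies in a consumption series can be flagged with a rolling z-score; the window and threshold values are assumptions:

```python
import statistics

def rolling_anomalies(series, window=4, z_thresh=3.0):
    """Flag points that deviate sharply from the mean of the preceding
    window, measured in local standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.mean(hist)
        sd = statistics.pstdev(hist) or 1e-9   # avoid division by zero
        if abs(series[i] - mu) / sd > z_thresh:
            flagged.append(i)
    return flagged

# A sudden consumption spike at index 5 stands out against a flat baseline.
spikes = rolling_anomalies([10, 10, 10, 10, 10, 50, 10, 10])   # [5]
```

Unlike this fixed-window detector, EEMD decomposes the signal into intrinsic mode functions first, which is what lets the proposed framework separate hidden anomalies from normal multi-scale variation.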

ContributorsNaganathan, Hariharan (Author) / Chong, Oswald (Author) / Huang, Zigang (Author) / Cheng, Ying (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2016-05-20
Description

There are many data mining and machine learning techniques to manage large sets of complex energy supply and demand data for buildings, organizations, and cities. As the amount of data continues to grow, new data analysis methods are needed to address the increasing complexity. Using data on the energy loss between supply (energy production sources) and demand (building and city consumption), this paper proposes a Semi-Supervised Energy Model (SSEM) to analyze the different loss factors for a building cluster. This is done with deep machine learning techniques that semi-supervise the learning about, understanding of, and management of energy losses. The Semi-Supervised Energy Model (SSEM) aims at understanding the demand-supply characteristics of a building cluster and utilizes the high-confidence unlabelled data (loss factors) using deep machine learning techniques. The research findings involve sample data from one of the university campuses and present an estimate of the losses that can be reduced. The paper also provides a list of loss factors that contribute to the total losses and suggests a threshold value for each loss factor, determined through real-time experiments. The conclusion proposes an energy model that can provide accurate numbers on energy demand, which in turn helps suppliers adopt such a model to optimize their supply strategies.
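The semi-supervised idea of absorbing confidently classified unlabelled points into the training pool can be sketched with nearest-centroid self-training. This toy 1-D version (the margin value is an assumption) is illustrative only, not the SSEM implementation:

```python
def self_train(labeled, unlabeled, margin=2.0):
    """Nearest-centroid self-training on 1-D readings: unlabelled points
    classified with a clear margin are absorbed into the labelled pool;
    ambiguous points stay unlabelled."""
    labeled, pending = list(labeled), list(unlabeled)
    changed = True
    while changed and pending:
        changed = False
        groups = {}
        for value, label in labeled:
            groups.setdefault(label, []).append(value)
        centroids = {lab: sum(vs) / len(vs) for lab, vs in groups.items()}
        rest = []
        for v in pending:
            dists = sorted((abs(v - c), lab) for lab, c in centroids.items())
            if len(dists) > 1 and dists[1][0] - dists[0][0] >= margin:
                labeled.append((v, dists[0][1]))   # confident: adopt label
                changed = True
            else:
                rest.append(v)                     # ambiguous: keep waiting
        pending = rest
    return labeled, pending

# Hypothetical seed readings labelled "normal" vs "loss"; the point at 5.0
# sits exactly between the classes and is never confidently absorbed.
seeds = [(0.0, "normal"), (10.0, "loss")]
grown, ambiguous = self_train(seeds, [1.0, 9.0, 5.0])   # ambiguous == [5.0]
```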

ContributorsNaganathan, Hariharan (Author) / Chong, Oswald (Author) / Chen, Xue-wen (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2015-09-14