This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

Ongoing efforts to understand the dynamics of coupled social-ecological (or more broadly, coupled infrastructure) systems and common pool resources have led to the generation of numerous datasets based on a large number of case studies. These data have facilitated the identification of important factors and fundamental principles that increase our understanding of such complex systems. However, the data at our disposal are often not easily comparable, have limited scope and scale, and are based on disparate underlying frameworks, inhibiting synthesis, meta-analysis, and the validation of findings. Research efforts are further hampered when case inclusion criteria, variable definitions, coding schema, and inter-coder reliability testing are not made explicit in the presentation of research and shared among the research community. This paper first outlines challenges experienced by researchers engaged in a large-scale coding project; then highlights valuable lessons learned; and finally discusses opportunities for further research on comparative case study analysis focusing on social-ecological systems and common pool resources. Includes supplemental materials and appendices published in the International Journal of the Commons 2016 Special Issue. Volume 10 - Issue 2 - 2016.
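
The inter-coder reliability testing mentioned above is commonly summarized with a simple agreement statistic; a minimal sketch in Python, using hypothetical coder labels rather than the project's actual coding schema:

```python
# Minimal sketch: percent agreement and Cohen's kappa for two coders
# rating the same cases. Labels below are illustrative, not drawn from
# the paper's coding schema.
from sklearn.metrics import cohen_kappa_score

coder_a = ["present", "absent", "present", "present", "absent"]
coder_b = ["present", "absent", "absent", "present", "absent"]

agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)  # chance-corrected agreement
print(f"percent agreement: {agreement:.2f}, Cohen's kappa: {kappa:.2f}")
```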

Contributors: Ratajczyk, Elicia (Author) / Brady, Ute (Author) / Baggio, Jacopo (Author) / Barnett, Allain J. (Author) / Perez Ibarra, Irene (Author) / Rollins, Nathan (Author) / Rubinos, Cathy (Author) / Shin, Hoon Cheol (Author) / Yu, David (Author) / Aggarwal, Rimjhim (Author) / Anderies, John (Author) / Janssen, Marco (Author) / ASU-SFI Center for Biosocial Complex Systems (Contributor)
Created: 2016-09-09
Description

Governing common pool resources (CPR) in the face of disturbances such as globalization and climate change is challenging. The outcome of any CPR governance regime is influenced by local combinations of social, institutional, and biophysical factors, as well as cross-scale interdependencies. In this study, we take a step towards understanding the multiple causation of CPR outcomes by analyzing 1) the co-occurrence of Design Principles (DP) by activity (irrigation, fishery, and forestry), and 2) the combination(s) of DPs leading to social and ecological success. We analyzed 69 cases pertaining to three different activities: irrigation, fishery, and forestry. We find that the importance of the design principles depends on the natural and hard human-made infrastructure (e.g., canals, equipment, vessels). For example, clearly defined social boundaries are important when the natural infrastructure is highly mobile (e.g., tuna), while monitoring is more important when the natural infrastructure is more static (e.g., forests or water contained within an irrigation system). However, we also find that congruence between local conditions and rules, and proportionality between investment and extraction, are key for CPR success independent of the natural and hard human-made infrastructure. We further provide new visualization techniques for co-occurrence patterns and add to qualitative comparative analysis by introducing a reliability metric to deal with a large meta-analysis dataset of secondary data where information is missing or uncertain.

Includes supplemental materials and appendices published in the International Journal of the Commons 2016 Special Issue. Volume 10 - Issue 2 - 2016.
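
To illustrate the co-occurrence analysis described above, here is a minimal sketch of counting pairwise co-occurrence of design principles across coded cases; the names and values are hypothetical stand-ins, not the paper's 69-case dataset:

```python
# Illustrative sketch: pairwise co-occurrence counts of design
# principles (DPs) across coded cases. All names and values below are
# hypothetical.
import pandas as pd

cases = pd.DataFrame(
    {"DP1_boundaries": [1, 1, 0, 1],
     "DP2_congruence": [1, 1, 1, 1],
     "DP4_monitoring": [1, 0, 0, 1]},
    index=["irrigation_1", "fishery_1", "fishery_2", "forestry_1"],
)

# Entry (i, j) counts cases in which DPs i and j are both present.
cooccurrence = cases.T.dot(cases)
print(cooccurrence)
```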

Contributors: Baggio, Jacopo (Author) / Barnett, Alain J. (Author) / Perez, Irene (Author) / Brady, Ute (Author) / Ratajczyk, Elicia (Author) / Rollins, Nathan (Author) / Rubinos, Cathy (Author) / Shin, Hoon Cheol (Author) / Yu, David (Author) / Aggarwal, Rimjhim (Author) / Anderies, John (Author) / Janssen, Marco (Author) / Julie Ann Wrigley Global Institute of Sustainability (Contributor)
Created: 2016-09-09
Description

The estimation of energy demand (by power plants) has traditionally relied on historical energy use data for the region(s) that a plant serves. Regression analysis, artificial neural networks, and Bayesian theory are the most common approaches for analyzing these data. Such data and techniques do not generate reliable results. Consequently, excess energy has to be generated to prevent blackouts; causes of energy surges are not easily determined; and the potential energy use reduction from energy efficiency solutions is usually not translated into actual energy use reduction. The paper highlights the weaknesses of traditional techniques and lays out a framework to improve the prediction of energy demand by combining energy use models of equipment, physical systems, and buildings with the proposed data mining algorithms for reverse engineering. The research team first analyzes data samples from large, complex energy datasets, and then presents a set of computationally efficient data mining algorithms for reverse engineering. To develop a structural system model for reverse engineering, two focus groups are developed that have a direct relation with cause and effect variables. The research findings of this paper include testing different sets of reverse-engineering algorithms, understanding their output patterns, and modifying the algorithms to improve the accuracy of the outputs.
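
For contrast with the proposed data mining approach, a minimal sketch of the traditional regression baseline the paper critiques, run on synthetic data with hypothetical feature names:

```python
# Sketch of the traditional baseline: regressing energy demand on
# historical usage and weather features. Data and feature names are
# synthetic, not from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(365, 2))   # e.g., [outdoor_temp, prior_day_load]
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.5, size=365)

model = LinearRegression().fit(X, y)
print("R^2 on training data:", model.score(X, y))
```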

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Ye, Long (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-12-09
Description

Small and medium office buildings account for a significant share of U.S. building stock energy consumption. Still, owners lack the resources and experience to conduct detailed energy audits and retrofit analysis. We present an eight-step framework for energy retrofit assessment in small and medium office buildings. Through a bottom-up approach and a web-based retrofit toolkit tested on a case study in Arizona, this methodology was able to save about 50% of the total energy consumed by the case study building, depending on the adopted measures and invested capital. While the case study presented is a deep energy retrofit, the proposed framework is effective in guiding the decision-making process that precedes any energy retrofit, deep or light.
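
The trade-off between adopted measures and invested capital can be illustrated with a simple payback ranking; the measures and figures below are hypothetical, not drawn from the Arizona case study:

```python
# Illustrative sketch: ranking retrofit measures by simple payback
# (capital cost / annual savings). All numbers are hypothetical.
measures = {
    "LED lighting": {"cost": 20_000, "annual_savings": 8_000},
    "HVAC upgrade": {"cost": 120_000, "annual_savings": 25_000},
    "Window film":  {"cost": 10_000, "annual_savings": 1_500},
}

for name, m in sorted(measures.items(),
                      key=lambda kv: kv[1]["cost"] / kv[1]["annual_savings"]):
    print(f"{name}: simple payback {m['cost'] / m['annual_savings']:.1f} years")
```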

Contributors: Rios, Fernanda (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

Commercial buildings' energy consumption is driven by multiple factors that include occupancy, system and equipment efficiency, thermal heat transfer, equipment plug loads, maintenance and operational procedures, and outdoor and indoor temperatures. A modern building energy system can be viewed as a complex dynamical system that is interconnected and influenced by external and internal factors. Modern large-scale sensor networks measure physical signals to monitor real-time system behavior. Such data have the potential to detect anomalies, identify consumption patterns, and analyze peak loads. This paper proposes a novel method to detect hidden anomalies in commercial building energy consumption systems. The framework is based on the Hilbert-Huang transform and instantaneous frequency analysis. The objective is to develop an automated data pre-processing system that can detect anomalies and provide solutions, using a real-time consumption database and the Ensemble Empirical Mode Decomposition (EEMD) method. The findings also include comparisons of Empirical Mode Decomposition and Ensemble Empirical Mode Decomposition for three important types of institutional buildings.
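
A minimal sketch of the EEMD-plus-instantaneous-frequency idea, assuming the third-party PyEMD package and a synthetic signal in place of real consumption data:

```python
# Sketch of EEMD decomposition followed by instantaneous frequency
# analysis, in the spirit of the Hilbert-Huang approach above. The
# signal is synthetic; PyEMD (the EMD-signal package) is assumed.
import numpy as np
from PyEMD import EEMD
from scipy.signal import hilbert

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

imfs = EEMD().eemd(signal, t)      # intrinsic mode functions
analytic = hilbert(imfs[0])        # analytic signal of the first IMF
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi * (t[1] - t[0]))
print("mean instantaneous frequency of IMF 0:", inst_freq.mean())
```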

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Huang, Zigang (Author) / Cheng, Ying (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

There are many data mining and machine learning techniques for managing large sets of complex energy supply and demand data for buildings, organizations, and cities. As the amount of data continues to grow, new data analysis methods are needed to address the increasing complexity. Using data on the energy losses between supply (energy production sources) and demand (building and city consumption), this paper proposes a Semi-Supervised Energy Model (SSEM) to analyze different loss factors for a building cluster. This is done by training machines, via deep machine learning, to semi-supervise the learning, understanding, and management of energy losses. The SSEM aims at understanding the demand-supply characteristics of a building cluster and exploits confident unlabelled data (loss factors) using deep machine learning techniques. The research findings involve sample data from one of the university campuses and present an estimate of the losses that can be reduced. The paper also provides a list of the loss factors that contribute to the total losses and suggests a threshold value for each loss factor, determined through real-time experiments. The paper concludes with a proposed energy model that can provide accurate numbers on energy demand, which in turn helps suppliers optimize their supply strategies.
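
The general semi-supervised idea (not the authors' SSEM itself) can be sketched with scikit-learn's self-training wrapper; the features and labels below are synthetic:

```python
# Hedged sketch of semi-supervised learning on partially labelled
# loss-factor records; NOT the authors' SSEM. Unlabelled samples carry
# the label -1, per scikit-learn convention.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))      # hypothetical loss-factor features
y = (X[:, 0] > 0).astype(int)      # 1 = reducible loss, 0 = not
y[50:] = -1                        # most records are unlabelled

clf = SelfTrainingClassifier(LogisticRegression()).fit(X, y)
print(clf.predict(X[:5]))
```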

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Chen, Xue-wen (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-09-14
Description

To address the need to study frozen clinical specimens using next-generation RNA, DNA, chromatin immunoprecipitation (ChIP) sequencing and protein analyses, we developed a biobank work flow to prospectively collect biospecimens from patients with renal cell carcinoma (RCC). We describe our standard operating procedures and work flow to annotate pathologic results and clinical outcomes. We report quality control outcomes and nucleic acid yields of our RCC submissions (N=16) to The Cancer Genome Atlas (TCGA) project, as well as newer discovery platforms, by describing mass spectrometry analysis of albumin oxidation in plasma and 6 ChIP sequencing libraries generated from nephrectomy specimens after histone H3 lysine 36 trimethylation (H3K36me3) immunoprecipitation. From June 1, 2010, through January 1, 2013, we enrolled 328 patients with RCC. Our mean (SD) TCGA RNA integrity numbers (RINs) were 8.1 (0.8) for papillary RCC, with a 12.5% overall rate of sample disqualification for RIN <7. Banked plasma had significantly less albumin oxidation (by mass spectrometry analysis) than plasma kept at 25°C (P<.001). For ChIP sequencing, the FastQC score for average read quality was at least 30 for 91% to 95% of paired-end reads. In parallel, we analyzed frozen tissue by RNA sequencing; after genome alignment, only 0.2% to 0.4% of total reads failed the default quality check steps of Bowtie2, which was comparable to the disqualification ratio (0.1%) of the 786-O RCC cell line that was prepared under optimal RNA isolation conditions. The overall correlation coefficients for gene expression between Mayo Clinic vs TCGA tissues ranged from 0.75 to 0.82. These data support the generation of high-quality nucleic acids for genomic analyses from banked RCC. Importantly, the protocol does not interfere with routine clinical care. Collections over defined time points during disease treatment further enhance collaborative efforts to integrate genomic information with outcomes.
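
The RIN-based quality gate described above amounts to a simple threshold filter; a sketch with illustrative values, not the study's measurements:

```python
# Sketch of the RIN quality gate: samples with an RNA integrity number
# (RIN) below 7 are disqualified. Values are illustrative only.
rins = [8.4, 7.9, 6.5, 8.8, 6.9, 9.1, 7.2, 8.0]

qualified = [r for r in rins if r >= 7]
rate = 1 - len(qualified) / len(rins)
print(f"disqualification rate: {rate:.1%}")
```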

Contributors: Ho, Thai H. (Author) / Nunez Nateras, Rafael (Author) / Yan, Huihuang (Author) / Park, Jin (Author) / Jensen, Sally (Author) / Borges, Chad (Author) / Lee, Jeong Heon (Author) / Champion, Mia D. (Author) / Tibes, Raoul (Author) / Bryce, Alan H. (Author) / Carballido, Estrella M. (Author) / Todd, Mark A. (Author) / Joseph, Richard W. (Author) / Wong, William W. (Author) / Parker, Alexander S. (Author) / Stanton, Melissa L. (Author) / Castle, Erik P. (Author) / Biodesign Institute (Contributor)
Created: 2015-07-16
Description

Insulin-like growth factor 1 (IGF1) is an important biomarker for the management of growth hormone disorders. Recently there has been rising interest in deploying mass spectrometric (MS) methods of detection for measuring IGF1. However, widespread clinical adoption of any MS-based IGF1 assay will require increased throughput and speed to justify the costs of analyses, and robust industrial platforms that are reproducible across laboratories. Presented here is an MS-based quantitative IGF1 assay with a performance rating of >1,000 samples/day and the capability of quantifying IGF1 point mutations and posttranslational modifications. The throughput of the IGF1 mass spectrometric immunoassay (MSIA) benefited from a simplified sample preparation step, IGF1 immunocapture in a tip format, and high-throughput MALDI-TOF MS analysis. The Limit of Detection and Limit of Quantification of the resulting assay were 1.5 μg/L and 5 μg/L, respectively, with intra- and inter-assay precision CVs of less than 10%, and good linearity and recovery characteristics. The IGF1 MSIA was benchmarked against a commercially available IGF1 ELISA via a Bland-Altman method comparison test, revealing a slight positive bias of 16%. The IGF1 MSIA was employed in an optimized parallel workflow utilizing two pipetting robots and MALDI-TOF MS instruments synced into one-hour phases of sample preparation, extraction and MSIA pipette tip elution, MS data collection, and data processing. Using this workflow, high-throughput IGF1 quantification of 1,054 human samples was achieved in approximately 9 hours. This rate of assaying is a significant improvement over existing MS-based IGF1 assays and is on par with that of enzyme-based immunoassays. Furthermore, a mutation was detected in ∼1% of the samples (SNP: rs17884626, creating an A→T substitution at position 67 of IGF1), demonstrating the capability of the IGF1 MSIA to detect point mutations and posttranslational modifications.
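
The Bland-Altman comparison used to benchmark MSIA against ELISA reduces to computing the bias and limits of agreement of paired differences; a sketch on synthetic pairs:

```python
# Minimal Bland-Altman sketch: bias and 95% limits of agreement for
# paired MSIA and ELISA measurements. Values are synthetic.
import numpy as np

msia = np.array([110.0, 95.0, 150.0, 80.0, 130.0])   # hypothetical, ug/L
elisa = np.array([100.0, 90.0, 140.0, 70.0, 115.0])

diffs = msia - elisa
bias = diffs.mean()                  # systematic offset between methods
loa = 1.96 * diffs.std(ddof=1)       # half-width of limits of agreement
print(f"bias: {bias:.1f} ug/L, 95% LoA: +/- {loa:.1f} ug/L")
```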

Contributors: Oran, Paul (Author) / Trenchevska, Olgica (Author) / Nedelkov, Dobrin (Author) / Borges, Chad (Author) / Schaab, Matthew (Author) / Rehder, Douglas (Author) / Jarvis, Jason (Author) / Sherma, Nisha (Author) / Shen, Luhui (Author) / Krastins, Bryan (Author) / Lopez, Mary F. (Author) / Schwenke, Dawn (Author) / Reaven, Peter D. (Author) / Nelson, Randall (Author) / Biodesign Institute (Contributor)
Created: 2014-03-24
Description

Online communities are becoming increasingly important as platforms for large-scale human cooperation. These communities allow users seeking and sharing professional skills to solve problems collaboratively. To investigate how users cooperate to complete a large number of knowledge-producing tasks, we analyze Stack Exchange, one of the largest question-and-answer systems in the world. We construct attention networks to model the growth of 110 communities in the Stack Exchange system and quantify individual answering strategies using the linking dynamics on attention networks. We identify two answering strategies. Strategy A aims at performing maintenance by doing simple tasks, whereas strategy B aims at investing time in doing challenging tasks. Both strategies are important: empirical evidence shows that strategy A decreases the median waiting time for answers and strategy B increases the acceptance rate of answers. In investigating the strategic persistence of users, we find that users tend to stick to the same strategy over time within a community, but switch from one strategy to the other across communities. This finding reveals the different sets of knowledge and skills among users. A balance between the populations of users taking strategies A and B that approximates 2:1 is found to be optimal for the sustainable growth of communities.
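
An attention network of the kind described can be sketched as a directed graph whose edges link questions answered consecutively by the same user; the data below are synthetic:

```python
# Illustrative sketch: a small attention network built with networkx.
# An edge q1 -> q2 records a user answering q2 immediately after q1.
# Sequences are synthetic, not the paper's 110-community dataset.
import networkx as nx

answer_sequences = {
    "user_a": ["q1", "q2", "q3"],
    "user_b": ["q2", "q3"],
    "user_c": ["q1", "q3"],
}

G = nx.DiGraph()
for seq in answer_sequences.values():
    G.add_edges_from(zip(seq, seq[1:]))  # consecutive answers form edges

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
```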

Contributors: Wu, Lingfei (Author) / Baggio, Jacopo (Author) / Janssen, Marco (Author) / ASU-SFI Center for Biosocial Complex Systems (Contributor)
Created: 2016-03-02
Description

Serum Amyloid A (SAA) is an acute phase protein complex consisting of several abundant isoforms. The N-terminus of SAA is critical to its function in amyloid formation. SAA is frequently truncated, either missing an arginine or an arginine-serine dipeptide, resulting in isoforms that may influence the capacity to form amyloid. However, the relative abundance of truncated SAA in diabetes and chronic kidney disease is not known.

Methods: Using mass spectrometric immunoassay, the abundance of SAA truncations relative to the native variants was examined in plasma of 91 participants with type 2 diabetes and chronic kidney disease and 69 participants without diabetes.

Results: The ratio of SAA 1.1 (missing N-terminal arginine) to native SAA 1.1 was lower in diabetics compared to non-diabetics (p = 0.004), and in males compared to females (p<0.001). This ratio was negatively correlated with glycated hemoglobin (r = −0.32, p<0.001) and triglyceride concentrations (r = −0.37, p<0.001), and positively correlated with HDL cholesterol concentrations (r = 0.32, p<0.001).

Conclusion: The relative abundance of the N-terminal arginine truncation of SAA1.1 is significantly decreased in diabetes and negatively correlates with measures of glycemic and lipid control.
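
The correlations reported above are standard Pearson coefficients; a sketch of the computation on synthetic stand-in data:

```python
# Sketch: Pearson correlation of the truncated-to-native SAA 1.1 ratio
# against glycated hemoglobin. Data are synthetic stand-ins, not the
# study's measurements.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
hba1c = rng.normal(7.0, 1.0, size=50)   # hypothetical HbA1c (%)
saa_ratio = 1.2 - 0.05 * hba1c + rng.normal(scale=0.1, size=50)

r, p = pearsonr(saa_ratio, hba1c)
print(f"r = {r:.2f}, p = {p:.3g}")
```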

Contributors: Yassine, Hussein N. (Author) / Trenchevska, Olgica (Author) / He, Huijuan (Author) / Borges, Chad (Author) / Nedelkov, Dobrin (Author) / Mack, Wendy (Author) / Kono, Naoko (Author) / Koska, Juraj (Author) / Reaven, Peter D. (Author) / Nelson, Randall (Author) / Biodesign Institute (Contributor)
Created: 2015-01-21