This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

In the digital humanities, there is a constant need to turn images and PDF files into plain text for analyses such as topic modelling, named entity recognition, and other techniques. However, although various solutions exist for extracting text embedded in PDF files or running OCR on images, they typically require additional training (for example, scholars have to learn how to use the command line) or are difficult to automate without programming skills. The Giles Ecosystem is a distributed system based on Apache Kafka that allows users to upload documents for text and image extraction. The system components are implemented in Java using the Spring Framework and are available under an Open Source license on GitHub (https://github.com/diging/).
Contributors: Lessios-Damerow, Julia (Contributor) / Peirson, Erick (Contributor) / Laubichler, Manfred (Contributor) / ASU-SFI Center for Biosocial Complex Systems (Contributor)
Created: 2017-09-28
Description

On-going efforts to understand the dynamics of coupled social-ecological (or, more broadly, coupled infrastructure) systems and common pool resources have led to the generation of numerous datasets based on a large number of case studies. These data have facilitated the identification of important factors and fundamental principles that increase our understanding of such complex systems. However, the data at our disposal are often not easily comparable, have limited scope and scale, and are based on disparate underlying frameworks, inhibiting synthesis, meta-analysis, and the validation of findings. Research efforts are further hampered when case inclusion criteria, variable definitions, coding schema, and inter-coder reliability testing are not made explicit in the presentation of research and shared among the research community. This paper first outlines challenges experienced by researchers engaged in a large-scale coding project; then highlights valuable lessons learned; and finally discusses opportunities for further research on comparative case study analysis focusing on social-ecological systems and common pool resources. Includes supplemental materials and appendices published in the International Journal of the Commons, Volume 10, Issue 2 (2016 Special Issue).

Contributors: Ratajczyk, Elicia (Author) / Brady, Ute (Author) / Baggio, Jacopo (Author) / Barnett, Allain J. (Author) / Perez Ibarra, Irene (Author) / Rollins, Nathan (Author) / Rubinos, Cathy (Author) / Shin, Hoon Cheol (Author) / Yu, David (Author) / Aggarwal, Rimjhim (Author) / Anderies, John (Author) / Janssen, Marco (Author) / ASU-SFI Center for Biosocial Complex Systems (Contributor)
Created: 2016-09-09
Description

At the end of the Dark Ages, anatomy was taught as though everything that could be known was known. Scholars learned about what had been discovered rather than how to make discoveries. This was true even though the body (and the rest of biology) was very poorly understood. The Renaissance eventually brought a revolution in how scholars (and graduate students) were trained and worked. This revolution never occurred in K-12 or university education, so we now teach young students much as scholars were taught in the Dark Ages: we teach them what is already known rather than the process of knowing. Citizen science offers a way to change K-12 and university education and, in doing so, complete the Renaissance. Here we offer an example of such an approach and call for change in the way students are taught science, change that is more possible than it has ever been and is, nonetheless, five hundred years delayed.

Created: 2016-03-01
Description

The estimation of energy demand (by power plants) has traditionally relied on historical energy use data for the region(s) that a plant serves. Regression analysis, artificial neural networks, and Bayesian methods are the most common approaches for analysing these data. Such data and techniques do not generate reliable results. Consequently, excess energy has to be generated to prevent blackouts; causes of energy surges are not easily determined; and potential energy use reduction from energy efficiency solutions is usually not translated into actual energy use reduction. The paper highlights the weaknesses of traditional techniques and lays out a framework to improve the prediction of energy demand by combining energy use models of equipment, physical systems, and buildings with the proposed data mining algorithms for reverse engineering. The research team first analyses data samples from large, complex energy datasets and then presents a set of computationally efficient data mining algorithms for reverse engineering. To develop a structural system model for reverse engineering, two focus groups are formed around directly related cause and effect variables. The research findings include tests of different sets of reverse-engineering algorithms, analyses of their output patterns, and modifications to the algorithms that improve the accuracy of their outputs.
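The regression baseline that the paper critiques can be sketched with scikit-learn on synthetic data; the features, coefficients, and units below are hypothetical illustrations, not the paper's dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical hourly features: outdoor temp (deg C), hour of day, occupancy fraction
X = rng.uniform([10, 0, 0], [40, 23, 1], size=(200, 3))

# Synthetic demand (kW): cooling load grows with temperature and occupancy
y = 50 + 3.2 * X[:, 0] + 40 * X[:, 2] + rng.normal(0, 5, 200)

model = LinearRegression().fit(X, y)
print(f"R^2 on training data = {model.score(X, y):.3f}")
print(f"recovered temperature coefficient = {model.coef_[0]:.2f} kW per deg C")
```

A fit like this reproduces historical demand well, which is precisely the weakness the paper points at: it extrapolates poorly to surges, equipment changes, and efficiency retrofits that are absent from the historical record.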

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Ye, Long (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2015-12-09
Description

Small and medium office buildings account for a significant share of U.S. building stock energy consumption. Still, owners lack the resources and experience to conduct detailed energy audits and retrofit analyses. We present an eight-step framework for energy retrofit assessment in small and medium office buildings. Through a bottom-up approach and a web-based retrofit toolkit tested on a case study in Arizona, this methodology was able to save about 50% of the total energy consumed by the case study building, depending on the adopted measures and invested capital. While the case study presented is a deep energy retrofit, the proposed framework is effective in guiding the decision-making process that precedes any energy retrofit, deep or light.

Contributors: Rios, Fernanda (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20
Description

Background: Modern advances in sequencing technology have enabled the census of microbial members of many natural ecosystems. Recently, increasing attention has been paid to the microbial residents of human-made, built ecosystems, both private (homes) and public (subways, office buildings, and hospitals). Here, we report results of the characterization of the microbial ecology of a singular built environment, the International Space Station (ISS). This ISS sampling involved the collection and microbial analysis (via 16S rRNA gene PCR) of 15 surfaces sampled by swabs onboard the ISS. This sampling was a component of Project MERCCURI (Microbial Ecology Research Combining Citizen and University Researchers on ISS). Learning more about the microbial inhabitants of the “buildings” in which we travel through space will take on increasing importance as plans for human exploration continue, with the possibility of colonization of other planets and moons.

Results: Sterile swabs were used to sample 15 surfaces onboard the ISS. The sites were chosen to be analogous to those sampled for (1) the Wildlife of Our Homes project and (2) a study of cell phones and shoes being conducted concurrently for another component of Project MERCCURI. Sequencing of the 16S rRNA genes amplified from DNA extracted from each swab was used to produce a census of the microbes present on each surface sampled. We compared the microbes found on the ISS swabs to those from both homes on Earth and data from the Human Microbiome Project.

Conclusions: While significantly different from both homes on Earth and the Human Microbiome Project samples analyzed here, the microbial community composition on the ISS was more similar to home surfaces than to the human microbiome samples. The ISS surfaces are OTU-rich, with 1,036–4,294 operational taxonomic units (OTUs) per sample. There was no discernible biogeography of microbes on the 15 ISS surfaces, although this may be a reflection of the small sample size we were able to obtain.

Contributors: Lang, Jenna M. (Author) / Coil, David A. (Author) / Neches, Russell Y. (Author) / Brown, Wendy E. (Author) / Cavalier, Darlene (Author) / Severance, Mark (Author) / Hampton-Marcell, Jarrad T. (Author) / Gilbert, Jack A. (Author) / Eisen, Jonathan A. (Author) / ASU-SFI Center for Biosocial Complex Systems (Contributor)
Created: 2017-12-05
Description

Commercial buildings’ energy consumption is driven by multiple factors that include occupancy, system and equipment efficiency, thermal heat transfer, equipment plug loads, maintenance and operational procedures, and outdoor and indoor temperatures. A modern building energy system can be viewed as a complex dynamical system that is interconnected and influenced by external and internal factors. Modern large-scale sensor networks measure physical signals to monitor real-time system behavior. Such data have the potential to support anomaly detection, identification of consumption patterns, and analysis of peak loads. The paper proposes a novel method to detect hidden anomalies in commercial building energy consumption systems. The framework is based on the Hilbert-Huang transform and instantaneous frequency analysis. The objective is to develop an automated data pre-processing system that can detect anomalies and provide solutions from a real-time consumption database using the Ensemble Empirical Mode Decomposition (EEMD) method. The findings also include comparisons of Empirical Mode Decomposition and Ensemble Empirical Mode Decomposition for three important types of institutional buildings.
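The instantaneous-frequency idea behind the Hilbert-Huang approach can be sketched with SciPy's analytic signal. This is a minimal illustration on a synthetic load series, not the authors' implementation: it skips the EEMD decomposition step and applies the frequency analysis directly to the raw signal.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (cycles per unit time) via the analytic signal."""
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.diff(phase) * fs / (2 * np.pi)

def flag_anomalies(x, fs, z=3.0):
    """Flag samples whose instantaneous frequency deviates more than z sigma."""
    f = instantaneous_frequency(x, fs)
    return np.abs(f - f.mean()) > z * f.std()

# Synthetic hourly load: a daily cycle with a short high-frequency disturbance
t = np.arange(500.0)
load = np.sin(2 * np.pi * t / 24)
load[200:210] += np.sin(2 * np.pi * t[200:210] / 3)   # injected anomaly

mask = flag_anomalies(load, fs=1.0)                   # True near the disturbance
```

In the paper's framework, an EEMD front end would first split the series into intrinsic mode functions and the frequency analysis would run per mode, which is more robust than applying it to the raw signal as done here.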

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Huang, Zigang (Author) / Cheng, Ying (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20
Description

There are many data mining and machine learning techniques to manage large sets of complex energy supply and demand data for buildings, organizations, and cities. As the amount of data continues to grow, new data analysis methods are needed to address the increasing complexity. Using data on the energy loss between supply (energy production sources) and demand (building and city consumption), this paper proposes a Semi-Supervised Energy Model (SSEM) to analyse different loss factors for a building cluster. This is done with deep machine learning techniques that semi-supervise the learning, understanding, and management of energy losses. The SSEM aims to understand the demand-supply characteristics of a building cluster and utilizes confidently labelled, previously unlabelled data (loss factors) via deep machine learning techniques. The research findings involve sample data from one of the university campuses and present an estimate of the losses that can be reduced. The paper also provides a list of loss factors that contribute to the total losses and suggests a threshold value for each loss factor, determined through real-time experiments. The paper concludes with a proposed energy model that can provide accurate numbers on energy demand, which in turn helps suppliers optimize their supply strategies.
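The semi-supervised step, propagating a few known labels to many unlabelled observations, can be illustrated with scikit-learn's LabelPropagation. The loss-factor features below are hypothetical stand-ins, not the paper's SSEM or its campus data:

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(0)

# Hypothetical per-interval features: [line-loss fraction, HVAC load ratio]
labeled = np.array([[0.02, 0.30], [0.03, 0.35], [0.16, 0.85], [0.18, 0.90]])
labels = np.array([0, 0, 1, 1])               # 0 = normal loss, 1 = excess loss

# Unlabelled intervals clustered near the two labeled groups
unlabeled = np.vstack([
    labeled[:2].mean(0) + rng.normal(0, 0.02, size=(25, 2)),
    labeled[2:].mean(0) + rng.normal(0, 0.02, size=(25, 2)),
])

X = np.vstack([labeled, unlabeled])
y = np.concatenate([labels, np.full(50, -1)])  # -1 marks unlabelled points

model = LabelPropagation().fit(X, y)
inferred = model.transduction_                 # labels inferred for every point
```

The confidently propagated labels (here, `inferred` on the unlabelled rows) are what a self-training scheme would feed back into a larger supervised model.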

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Chen, Xue-wen (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-09-14
Description

To address the need to study frozen clinical specimens using next-generation RNA, DNA, chromatin immunoprecipitation (ChIP) sequencing and protein analyses, we developed a biobank work flow to prospectively collect biospecimens from patients with renal cell carcinoma (RCC). We describe our standard operating procedures and work flow to annotate pathologic results and clinical outcomes. We report quality control outcomes and nucleic acid yields of our RCC submissions (N=16) to The Cancer Genome Atlas (TCGA) project, as well as newer discovery platforms, by describing mass spectrometry analysis of albumin oxidation in plasma and 6 ChIP sequencing libraries generated from nephrectomy specimens after histone H3 lysine 36 trimethylation (H3K36me3) immunoprecipitation. From June 1, 2010, through January 1, 2013, we enrolled 328 patients with RCC. Our mean (SD) TCGA RNA integrity numbers (RINs) were 8.1 (0.8) for papillary RCC, with a 12.5% overall rate of sample disqualification for RIN <7. Banked plasma had significantly less albumin oxidation (by mass spectrometry analysis) than plasma kept at 25°C (P<.001). For ChIP sequencing, the FastQC score for average read quality was at least 30 for 91% to 95% of paired-end reads. In parallel, we analyzed frozen tissue by RNA sequencing; after genome alignment, only 0.2% to 0.4% of total reads failed the default quality check steps of Bowtie2, which was comparable to the disqualification ratio (0.1%) of the 786-O RCC cell line that was prepared under optimal RNA isolation conditions. The overall correlation coefficients for gene expression between Mayo Clinic vs TCGA tissues ranged from 0.75 to 0.82. These data support the generation of high-quality nucleic acids for genomic analyses from banked RCC. Importantly, the protocol does not interfere with routine clinical care. Collections over defined time points during disease treatment further enhance collaborative efforts to integrate genomic information with outcomes.

Contributors: Ho, Thai H. (Author) / Nunez Nateras, Rafael (Author) / Yan, Huihuang (Author) / Park, Jin (Author) / Jensen, Sally (Author) / Borges, Chad (Author) / Lee, Jeong Heon (Author) / Champion, Mia D. (Author) / Tibes, Raoul (Author) / Bryce, Alan H. (Author) / Carballido, Estrella M. (Author) / Todd, Mark A. (Author) / Joseph, Richard W. (Author) / Wong, William W. (Author) / Parker, Alexander S. (Author) / Stanton, Melissa L. (Author) / Castle, Erik P. (Author) / Biodesign Institute (Contributor)
Created: 2015-07-16
Description

Insulin-like growth factor 1 (IGF1) is an important biomarker for the management of growth hormone disorders. Recently there has been rising interest in deploying mass spectrometric (MS) methods of detection for measuring IGF1. However, widespread clinical adoption of any MS-based IGF1 assay will require increased throughput and speed to justify the costs of analyses, and robust industrial platforms that are reproducible across laboratories. Presented here is an MS-based quantitative IGF1 assay with a performance rating of >1,000 samples/day and the capability of quantifying IGF1 point mutations and posttranslational modifications. The throughput of the IGF1 mass spectrometric immunoassay (MSIA) benefited from a simplified sample preparation step, IGF1 immunocapture in a tip format, and high-throughput MALDI-TOF MS analysis. The Limit of Detection and Limit of Quantification of the resulting assay were 1.5 μg/L and 5 μg/L, respectively, with intra- and inter-assay precision CVs of less than 10%, and good linearity and recovery characteristics. The IGF1 MSIA was benchmarked against a commercially available IGF1 ELISA via the Bland-Altman method comparison test, resulting in a slight positive bias of 16%. The IGF1 MSIA was employed in an optimized parallel workflow utilizing two pipetting robots and MALDI-TOF MS instruments synced into one-hour phases of sample preparation, extraction and MSIA pipette tip elution, MS data collection, and data processing. Using this workflow, high-throughput IGF1 quantification of 1,054 human samples was achieved in approximately 9 hours. This rate of assaying is a significant improvement over existing MS-based IGF1 assays and is on par with that of enzyme-based immunoassays. Furthermore, a mutation was detected in ∼1% of the samples (SNP: rs17884626, creating an A→T substitution at position 67 of IGF1), demonstrating the capability of the IGF1 MSIA to detect point mutations and posttranslational modifications.
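The throughput figures quoted in the abstract are internally consistent, as a quick back-of-envelope check shows (simple arithmetic on the reported numbers, not part of the assay itself):

```python
samples = 1054               # human samples quantified in the reported run
hours = 9.0                  # approximate total workflow time

per_hour = samples / hours   # ~117 samples per hour
per_day = per_hour * 24      # ~2,811 samples per 24-hour day

# Comfortably above the stated >1,000 samples/day performance rating
print(f"{per_hour:.0f} samples/hour -> {per_day:.0f} samples/day")
```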

Contributors: Oran, Paul (Author) / Trenchevska, Olgica (Author) / Nedelkov, Dobrin (Author) / Borges, Chad (Author) / Schaab, Matthew (Author) / Rehder, Douglas (Author) / Jarvis, Jason (Author) / Sherma, Nisha (Author) / Shen, Luhui (Author) / Krastins, Bryan (Author) / Lopez, Mary F. (Author) / Schwenke, Dawn (Author) / Reaven, Peter D. (Author) / Nelson, Randall (Author) / Biodesign Institute (Contributor)
Created: 2014-03-24