This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

Background: An accurate method that can diagnose and predict lupus and its neuropsychiatric manifestations is essential, since currently there are no reliable methods. Autoantibodies to a varied panel of antigens in the body are characteristic of lupus. In this study we investigated whether serum autoantibody binding patterns on random-sequence peptide microarrays (immunosignaturing) can be used for diagnosing and predicting the onset of lupus and its central nervous system (CNS) manifestations. We also tested the techniques for identifying potentially pathogenic autoantibodies in CNS-Lupus. We used the well-characterized MRL/lpr lupus animal model in two studies as a first step toward developing and evaluating future studies in humans.

Results: In study one, we identified possible diagnostic peptides for both lupus and altered behavior in the forced swim test. When comparing the results of study one to those of study two (carried out in a similar manner), we further identified potential peptides that may be diagnostic and predictive of both lupus and altered behavior in the forced swim test. We also characterized five potentially pathogenic brain-reactive autoantibodies and suggested possible brain targets.

Conclusions: These results indicate that immunosignaturing could predict and diagnose lupus and its CNS manifestations. It can also be used to characterize pathogenic autoantibodies, which may help to better understand the underlying mechanisms of CNS-Lupus.

Contributors: Williams, Stephanie (Author) / Stafford, Phillip (Author) / Hoffman, Steven (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2014-06-07
Description

The estimation of energy demand (by power plants) has traditionally relied on historical energy use data for the region(s) that a plant produces for. Regression analysis, artificial neural networks, and Bayesian theory are the most common approaches for analysing these data. Such data and techniques do not generate reliable results. Consequently, excess energy has to be generated to prevent blackouts; causes of energy surges are not easily determined; and potential energy use reduction from energy efficiency solutions is usually not translated into actual energy use reduction. This paper highlights the weaknesses of traditional techniques and lays out a framework to improve the prediction of energy demand by combining energy use models of equipment, physical systems, and buildings with the proposed data mining algorithms for reverse engineering. The research team first analyses data samples from large, complex energy datasets, and then presents a set of computationally efficient data mining algorithms for reverse engineering. To develop a structural system model for reverse engineering, two focus groups are formed that have a direct relation to cause and effect variables. The research findings of this paper include testing different sets of reverse engineering algorithms, understanding their output patterns, and modifying the algorithms to improve the accuracy of the outputs.
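
For context, here is a minimal sketch (in Python, on synthetic data) of the kind of regression baseline the paper argues is unreliable: predicting the next hour's demand from lagged historical use. The signal, lag window, and train/test split are illustrative assumptions, not the authors' setup.

```python
# A minimal sketch of the traditional regression baseline the paper
# critiques: next-hour demand predicted from lagged historical use.
# All data here is synthetic; numbers are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
hourly_use = 100 + 10 * np.sin(np.arange(1000) * 2 * np.pi / 24) + rng.normal(0, 2, 1000)

# Build lagged features: the previous 24 hours predict the next hour.
X = np.array([hourly_use[i : i + 24] for i in range(len(hourly_use) - 24)])
y = hourly_use[24:]

model = LinearRegression().fit(X[:800], y[:800])
print("held-out R^2:", model.score(X[800:], y[800:]))
```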

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Ye, Long (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2015-12-09
Description

Small and medium office buildings account for a significant share of U.S. building stock energy consumption, yet owners lack the resources and experience to conduct detailed energy audits and retrofit analyses. We present an eight-step framework for energy retrofit assessment in small and medium office buildings. Through a bottom-up approach and a web-based retrofit toolkit tested on a case study in Arizona, this methodology was able to save about 50% of the total energy consumed by the case study building, depending on the adopted measures and invested capital. While the case study presented is a deep energy retrofit, the proposed framework is effective in guiding the decision-making process that precedes any energy retrofit, deep or light.
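
To make the savings arithmetic concrete, here is a hedged sketch of how measure-level savings might compound into a whole-building estimate with a simple payback; the measures, savings fractions, and costs below are hypothetical examples, not figures from the paper.

```python
# A hedged sketch of the arithmetic behind a retrofit assessment:
# combining measures multiplicatively and estimating simple payback.
# Measures, savings fractions, and costs are hypothetical examples.
baseline_kwh = 500_000          # annual use of a hypothetical office building
price_per_kwh = 0.12            # assumed utility rate, $/kWh

measures = {                    # measure: (fraction of remaining use saved, cost in $)
    "LED lighting": (0.15, 40_000),
    "HVAC upgrade": (0.25, 120_000),
    "envelope sealing": (0.10, 25_000),
}

remaining = baseline_kwh
total_cost = 0.0
for name, (saving, cost) in measures.items():
    remaining *= 1 - saving     # savings compound on the remaining load
    total_cost += cost

annual_savings = (baseline_kwh - remaining) * price_per_kwh
print(f"estimated savings: {1 - remaining / baseline_kwh:.0%}")
print(f"simple payback: {total_cost / annual_savings:.1f} years")
```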

Contributors: Rios, Fernanda (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20
Description

Commercial buildings’ energy consumption is driven by multiple factors that include occupancy, system and equipment efficiency, thermal heat transfer, equipment plug loads, maintenance and operational procedures, and outdoor and indoor temperatures. A modern building energy system can be viewed as a complex dynamical system that is interconnected and influenced by external and internal factors. Modern large-scale sensor networks measure physical signals to monitor real-time system behavior. Such data have the potential to detect anomalies, identify consumption patterns, and analyze peak loads. The paper proposes a novel method to detect hidden anomalies in commercial building energy consumption systems. The framework is based on the Hilbert-Huang transform and instantaneous frequency analysis. The objective is to develop an automated data pre-processing system that can detect anomalies and provide solutions against a real-time consumption database using the Ensemble Empirical Mode Decomposition (EEMD) method. The findings of this paper also include comparisons of Empirical Mode Decomposition (EMD) and Ensemble Empirical Mode Decomposition (EEMD) for three important types of institutional buildings.
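
The paper's exact pipeline is not reproduced here, but a minimal sketch of the instantaneous-frequency idea is shown below; in the full method each EEMD mode would be analyzed, whereas this toy example treats one synthetic consumption signal as a single mode. The signal and threshold are illustrative assumptions.

```python
# A minimal sketch of instantaneous-frequency anomaly flagging via the
# Hilbert transform. In the full Hilbert-Huang approach each EEMD mode
# would be analyzed; here one synthetic signal stands in for one mode.
import numpy as np
from scipy.signal import hilbert

fs = 96                                   # hypothetical samples/day (15-min metering)
t = np.arange(14 * fs) / fs               # two weeks of data
signal = np.sin(2 * np.pi * t)            # daily consumption cycle
signal[800:820] += np.sin(2 * np.pi * 8 * t[800:820])  # injected anomaly

analytic = hilbert(signal)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # cycles per day

# Flag samples whose instantaneous frequency deviates strongly.
z = (inst_freq - inst_freq.mean()) / inst_freq.std()
print("anomalous sample indices:", np.where(np.abs(z) > 3)[0])
```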

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Huang, Zigang (Author) / Cheng, Ying (Author) / Ira A. Fulton School of Engineering (Contributor)
Created: 2016-05-20
Description

There are many data mining and machine learning techniques to manage large sets of complex energy supply and demand data for buildings, organizations, and cities. As the amount of data continues to grow, new data analysis methods are needed to address the increasing complexity. Using data on the energy loss between supply (energy production sources) and demand (building and city consumption), this paper proposes a Semi-Supervised Energy Model (SSEM) to analyse different loss factors for a building cluster. This is done with deep machine learning, by training machines to semi-supervise the learning, understanding, and management of energy losses. The SSEM aims at understanding the demand-supply characteristics of a building cluster and exploits confident unlabelled data (loss factors) using deep machine learning techniques. The research findings involve sample data from one of the university campuses and present the output, which provides an estimate of the losses that can be reduced. The paper also provides a list of loss factors that contribute to the total losses and suggests a threshold value for each loss factor, determined through real-time experiments. The paper concludes with a proposed energy model that can provide accurate numbers on energy demand, which in turn helps suppliers adopt such a model to optimize their supply strategies.
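
The SSEM itself is not specified in this abstract; as a generic illustration of semi-supervised learning on partly labelled data, here is a sketch using scikit-learn's label propagation, with -1 marking unlabelled loss-factor observations. The features and labels are synthetic assumptions, not the authors' campus data.

```python
# A generic semi-supervised sketch (not the authors' SSEM): label
# propagation over partly labelled samples, with -1 marking unlabelled
# loss-factor observations. All data here is synthetic.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))            # e.g., transmission, HVAC, plug, metering features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical "high-loss" label

y_partial = y.copy()
y_partial[rng.random(200) < 0.8] = -1    # hide 80% of labels as unlabelled data

model = LabelPropagation().fit(X, y_partial)
mask = y_partial == -1
print("accuracy on unlabelled points:", (model.predict(X[mask]) == y[mask]).mean())
```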

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Chen, Xue-wen (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-09-14
Description

Recent infectious outbreaks highlight the need for platform technologies that can be quickly deployed to develop the therapeutics needed to contain an outbreak. We present a simple concept for rapid development of new antimicrobials, with the goal of producing, in as little as one week, thousands of doses of an intervention for a new pathogen. We tested the feasibility of a system based on antimicrobial synbodies. The system involves creating an array of 100 peptides selected for their broad capability to bind and/or kill viruses and bacteria. The peptides are pre-screened for low cell toxicity prior to large-scale synthesis. Any pathogen can then be assayed on the chip to find peptides that bind or kill it. Peptides are combined in pairs as synbodies and further screened for activity and toxicity. The lead synbody can be quickly produced at large scale, with the entire process completed in one week.
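
A hedged sketch of the pairing step described above: enumerating peptide pairs, filtering on a toxicity cap, and ranking candidate synbodies. The scores, toxicity limit, and the additive scoring rule are hypothetical stand-ins for the actual assay measurements.

```python
# A hedged sketch of synbody candidate ranking: enumerate peptide pairs,
# drop pairs containing a toxic member, rank by combined binding score.
# Scores and the scoring rule are hypothetical, not assay data.
from itertools import combinations

# peptide: (binding score vs. target pathogen, cell toxicity), synthetic values
peptides = {"P01": (0.9, 0.1), "P02": (0.7, 0.05), "P03": (0.8, 0.4), "P04": (0.6, 0.02)}

TOX_LIMIT = 0.2
candidates = []
for (a, (bind_a, tox_a)), (b, (bind_b, tox_b)) in combinations(peptides.items(), 2):
    if max(tox_a, tox_b) > TOX_LIMIT:           # drop pairs with a toxic member
        continue
    candidates.append((bind_a + bind_b, a, b))  # naive additive binding score

for score, a, b in sorted(candidates, reverse=True):
    print(f"synbody {a}+{b}: combined score {score:.2f}")
```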

Contributors: Johnston, Stephen (Author) / Domenyuk, Valeriy (Author) / Gupta, Nidhi (Author) / Tavares Batista, Milene (Author) / Lainson, John (Author) / Zhao, Zhan-Gong (Author) / Lusk, Joel (Author) / Loskutov, Andrey (Author) / Cichacz, Zbigniew (Author) / Stafford, Phillip (Author) / Legutki, Joseph Barten (Author) / Diehnelt, Chris (Author) / Biodesign Institute (Contributor)
Created: 2017-12-14
Description

One of the gravest dangers facing cancer patients is an extended symptom-free lull between tumor initiation and the first diagnosis. Detection of tumors is critical for effective intervention. Using the body’s immune system to detect and amplify tumor-specific signals may enable detection of cancer using an inexpensive immunoassay. Immunosignatures are one such assay: they provide a map of antibody interactions with random-sequence peptides. They enable detection of disease-specific patterns using classic train/test methods. However, to date, very little effort has gone into extracting information from the sequence of peptides that interact with disease-specific antibodies. Because it is difficult to represent all possible antigen peptides in a microarray format, we chose to synthesize only 330,000 peptides on a single immunosignature microarray. The 330,000 random-sequence peptides on the microarray represent 83% of all tetramers and 27% of all pentamers, creating an unbiased but substantial gap in the coverage of total sequence space. We therefore chose to examine many relatively short motifs from these random-sequence peptides. Time-variant analysis of recurrent subsequences provided a means to dissect amino acid sequences from the peptides while simultaneously retaining the antibody–peptide binding intensities. We first used a simple experiment in which monoclonal antibodies with known linear epitopes were exposed to these random-sequence peptides, and their binding intensities were used to create our algorithm. We then demonstrated the performance of the proposed algorithm by examining immunosignatures from patients with glioblastoma multiforme (GBM), an aggressive form of brain cancer. Eight different frameshift targets were identified from the random-sequence peptides using this technique. If immune-reactive antigens can be identified using a relatively simple immune assay, it might enable a diagnostic test with sufficient sensitivity to detect tumors in a clinically useful way.
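
As a rough illustration of the motif idea (not the authors' time-variant algorithm), one can tally short subsequences across peptides, weighting each occurrence by that peptide's binding intensity, so that motifs shared by strongly bound peptides rise to the top. The peptides and intensities below are synthetic.

```python
# A minimal sketch of intensity-weighted motif counting: tally k-mers
# from random-sequence peptides, weighting each occurrence by that
# peptide's antibody binding intensity. Data here is synthetic, and the
# paper's actual algorithm is more involved.
from collections import Counter

peptides = {"GSKWAQFNS": 55000.0, "KWAQPLMDR": 48000.0, "TTYVGHKWA": 3000.0}
K = 4  # motif length

weighted = Counter()
for seq, intensity in peptides.items():
    for i in range(len(seq) - K + 1):
        weighted[seq[i : i + K]] += intensity

for motif, score in weighted.most_common(3):
    print(motif, score)
```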

Created: 2015-06-18
Description

Background: High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and other conditions. Typically one trains a classification system by gathering large amounts of probe-level data, selecting informative features, and classifying test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms, and many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic assumptions is independence, both at the probe level and at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways, where co-regulation of transcriptional units may make many genes appear completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable for other technologies with different binding characteristics. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random-sequence peptides. It relies on many-to-many binding of antibodies to the random-sequence peptides: each peptide can bind multiple antibodies, and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear which classification algorithm is optimal for analyzing this new type of data.

Results: We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy.

Conclusions: The ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties.
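
As a generic illustration of the winning approach, rather than the authors' exact pipeline, datasets, or assessment criteria, here is a minimal Gaussian Naïve Bayes train/test sketch on synthetic peptide-intensity features:

```python
# A generic Gaussian Naive Bayes train/test sketch on synthetic
# peptide-intensity features; not the authors' pipeline or data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
n_samples, n_peptides = 120, 1000
X = rng.lognormal(mean=8, sigma=1, size=(n_samples, n_peptides))  # array intensities
y = rng.integers(0, 2, n_samples)                                 # disease vs. control
X[y == 1, :50] *= 1.5  # disease class binds a subset of peptides more strongly

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GaussianNB().fit(np.log(X_train), y_train)  # log-transform, common for array data
print("held-out accuracy:", clf.score(np.log(X_test), y_test))
```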

Contributors: Kukreja, Muskan (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created: 2012-06-21
Description

Introduction: The abundance of immune cells has been shown to have prognostic and predictive significance in many tumor types. Beyond abundance, the spatial organization of immune cells in relation to cancer cells may also have significant functional and clinical implications. However, there is a lack of systematic methods to quantify spatial associations between immune and cancer cells.

Methods: We applied ecological measures of species interactions to digital pathology images for investigating the spatial associations of immune and cancer cells in breast cancer. We used the Morisita-Horn similarity index, an ecological measure of community structure and predator–prey interactions, to quantify the extent to which cancer cells and immune cells colocalize in whole-tumor histology sections. We related this index to disease-specific survival of 486 women with breast cancer and validated our findings in a set of 516 patients from different hospitals.
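
For reference, here is a sketch of the Morisita-Horn similarity between two count vectors, e.g., cancer-cell and immune-cell counts over a grid of regions in a section; the gridding is an assumed setup, not necessarily the paper's exact procedure.

```python
# Morisita-Horn similarity between two spatial count vectors, e.g.
# cancer-cell and immune-cell counts over regions of a histology
# section. The regional gridding itself is an assumed setup.
import numpy as np

def morisita_horn(x: np.ndarray, y: np.ndarray) -> float:
    """C_MH = 2*sum(x_i*y_i) / ((d_x + d_y) * X * Y), where
    d_x = sum(x_i^2)/X^2 and d_y = sum(y_i^2)/Y^2; 1 = full colocalization."""
    X, Y = x.sum(), y.sum()
    d_x = (x ** 2).sum() / X ** 2
    d_y = (y ** 2).sum() / Y ** 2
    return 2 * (x * y).sum() / ((d_x + d_y) * X * Y)

cancer = np.array([30, 12, 0, 45, 8])   # counts per region (synthetic)
immune = np.array([25, 10, 2, 40, 5])
print(f"colocalization: {morisita_horn(cancer, immune):.3f}")
```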

Results: Colocalization of immune cells with cancer cells was significantly associated with a disease-specific survival benefit for all breast cancers combined. In HER2-positive subtypes, the prognostic value of immune-cancer cell colocalization was highly significant and exceeded those of known clinical variables. Furthermore, colocalization was a significant predictive factor for long-term outcome following chemotherapy and radiotherapy in HER2 and Luminal A subtypes, independent of and stronger than all known clinical variables.

Conclusions: Our study demonstrates how ecological methods applied to the tumor microenvironment using routine histology can provide reproducible, quantitative biomarkers for identifying high-risk breast cancer patients. We found that the clinical value of immune-cancer interaction patterns is highly subtype-specific but substantial and independent of known clinicopathologic variables, which mostly focus on the cancer itself. Our approach can be developed into computer-assisted prediction based on histology samples that are already routinely collected.

Contributors: Maley, Carlo (Author) / Koelble, Konrad (Author) / Natrajan, Rachael (Author) / Aktipis, C. Athena (Author) / Yuan, Yinyin (Author) / Biodesign Institute (Contributor)
Created: 2015-09-22
Description

In a meta-analysis published by my co-authors and me, we report differences in the life history risk factors for estrogen receptor negative (ER−) and estrogen receptor positive (ER+) breast cancers. Our meta-analysis did not find the association of ER− breast cancer risk with fast life history characteristics that Hidaka and Boddy suggest in their response to our article. There are a number of possible explanations for the differences between their conclusions and the conclusions we drew from our meta-analysis, including limitations of our meta-analysis and methodological challenges in measuring and categorizing estrogen receptor status. These challenges, along with the association of ER+ breast cancer with slow life history characteristics, may make it difficult to find a clear signal linking ER− breast cancer with fast life history characteristics, even if that relationship does exist. The contradictory results regarding breast cancer risk and life history characteristics illustrate a more general challenge in evolutionary medicine: different sub-theories in evolutionary biology often make contradictory predictions about disease risk. In this case, life history models predict that breast cancer risk should increase with faster life history characteristics, while the evolutionary mismatch hypothesis predicts that breast cancer risk should increase with delayed reproduction. Whether life history tradeoffs contribute to ER− breast cancer is still an open question, but current models and several lines of evidence suggest that it is a possibility.

Contributors: Aktipis, C. Athena (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2016-05-21