This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

Background: Immunosignaturing is a new peptide microarray-based technology for profiling humoral immune responses. Despite posing new analytical challenges, immunosignaturing gives us the opportunity to explore new and fundamentally different research questions. In addition to classifying samples based on disease status, the complex patterns and latent factors underlying immunosignatures, which we attempt to model, may have a diverse range of applications.

Methods: We investigate the utility of a number of statistical methods to determine model performance and address challenges inherent in analyzing immunosignatures. Some of these methods include exploratory and confirmatory factor analyses, classical significance testing, structural equation and mixture modeling.
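
As an illustration of the latent-factor methods named above, here is a minimal sketch (not the authors' code) of exploratory factor analysis on an immunosignature-style intensity matrix; the array shape, factor count, and all data are hypothetical stand-ins.

```python
# Exploratory factor analysis on a hypothetical immunosignature matrix.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))          # stand-in for peptide intensities

fa = FactorAnalysis(n_components=5, random_state=0)
scores = fa.fit_transform(X)             # latent factor scores per sample
loadings = fa.components_                # shape (5, 500): peptide loadings

# Peptides loading heavily on a factor are candidate biomarkers.
top_peptides = np.argsort(-np.abs(loadings[0]))[:10]
print(top_peptides)
```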

Results: We demonstrate an ability to classify samples based on disease status and show that immunosignaturing is a very promising technology for diagnostic and presymptomatic screening of disease. In addition, we are able to model complex patterns and latent factors underlying immunosignatures. These latent factors may serve as biomarkers for disease and may play a key role in a bioinformatic method for antibody discovery.

Conclusion: Based on this research, we lay out an analytic framework illustrating how immunosignatures may be useful as a general method for diagnostic and presymptomatic screening of disease, as well as for antibody discovery.

Contributors: Brown, Justin (Author) / Stafford, Phillip (Author) / Johnston, Stephen (Author) / Dinu, Valentin (Author) / College of Health Solutions (Contributor)
Created: 2011-08-19
Description

Background: High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and others. Typically one trains a classification system by gathering large amounts of probe-level data, selecting informative features, and then classifying test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic assumptions is independence, both at the probe level and again at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways where co-regulation of transcriptional units may make many genes appear completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable when other technologies with different binding characteristics exist. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random-sequence peptides. It relies on many-to-many binding of antibodies to the random-sequence peptides: each peptide can bind multiple antibodies, and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear which classification algorithm is optimal for analyzing this new type of data.

Results: We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy.
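
A minimal sketch of the winning approach, under stated assumptions: Gaussian Naive Bayes on a hypothetical immunosignature matrix, with feature selection before classification as the Background paragraph outlines. The data dimensions, labels, and parameter choices are illustrative, not the authors'.

```python
# Naive Bayes classification of a hypothetical immunosignature dataset.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10000))   # stand-in peptide intensities
y = rng.integers(0, 2, size=100)    # stand-in disease-status labels

# Select informative peptides first, then classify, mirroring the usual
# probe-level workflow described in the Background.
clf = make_pipeline(SelectKBest(f_classif, k=100), GaussianNB())
print(cross_val_score(clf, X, y, cv=5).mean())
```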

Conclusions: The ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties.

Contributors: Kukreja, Muskan (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created: 2012-06-21
Description

The rise in antibiotic resistance has led to an increased research focus on the discovery of new antibacterial candidates. While broad-spectrum antibiotics are widely pursued, there is evidence that resistance arises in part from the widespread use of these antibiotics. Our group has developed a system to produce protein affinity agents, called synbodies, which have high affinity and specificity for their target. In this report, we describe the adaptation of this system to produce new antibacterial candidates against a target bacterium. The system screens target bacteria against an array of 10,000 random-sequence peptides; using a combination of membrane labeling and intracellular dyes, we identified peptides with target-specific binding or killing functions. Binding and lytic peptides were identified in this manner, and in vitro tests confirmed the activity of the lead peptides. A peptide with antibacterial activity was linked to a peptide that specifically binds Staphylococcus aureus to create a synbody with increased antibacterial activity. Subsequent tests showed that this peptide could block S. aureus-induced killing of HEK293 cells in a co-culture experiment. These results demonstrate the feasibility of using the synbody system to discover new antibacterial candidate agents.

Contributors: Domenyuk, Valeriy (Author) / Loskutov, Andrey (Author) / Johnston, Stephen (Author) / Diehnelt, Chris (Author) / Biodesign Institute (Contributor)
Created: 2013-01-23
Description

The United States generates the most waste among OECD countries, and this waste generation has adverse effects. One of the most serious is greenhouse gas, especially CH4, which contributes to global warming. However, the amount of waste generated is not decreasing, and the United States' recycling rate, which could reduce waste generation, is only 26%, lower than that of other OECD countries. Thus, waste generation and greenhouse gas emissions should decrease, and for that to happen, identifying the causes should be made a priority. The research objective is to verify whether the Environmental Kuznets Curve relationship holds for waste generation and GDP across the U.S. Moreover, the study also examines whether total waste generation and recycled waste influence carbon dioxide emissions from the waste sector. Annual U.S. data from 1990 to 2012 were used. The data were collected from various sources, and the Granger causality test was applied to identify the causal relationships. The results showed that there is no causality between GDP and waste generation, but that total waste generation and recycling significantly cause positive and negative greenhouse gas emissions from the waste sector, respectively. This implies that waste generation will not decrease even if GDP increases, and that if waste generation decreases or the recycling rate increases, greenhouse gas emissions will decrease. Based on these results, it is expected that waste generation and carbon dioxide emissions from the waste sector can be reduced more efficiently.
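
A minimal sketch of the Granger causality test described above, using statsmodels on synthetic stand-ins for the 1990-2012 annual series; the variable names and numbers are hypothetical, and in practice the series would first be differenced to stationarity.

```python
# Granger causality between waste generation and waste-sector emissions.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
years = range(1990, 2013)                 # 23 annual observations
waste = pd.Series(rng.normal(250, 10, 23).cumsum(), index=years)
ghg = pd.Series(0.5 * waste.to_numpy() + rng.normal(0, 5, 23), index=years)

# Column order matters: the test asks whether the second column helps
# predict the first, i.e., whether waste "Granger-causes" emissions.
data = pd.concat([ghg, waste], axis=1)
results = grangercausalitytests(data, maxlag=2)
```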

Contributors: Lee, Seungtaek (Author) / Kim, Jonghoon (Author) / Chong, Oswald (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

Construction waste management has become extremely important due to stricter disposal and landfill regulations and a shrinking number of available landfills. Extensive work has been done on waste treatment and management in the construction industry. Concepts like deconstruction, recyclability, and Design for Disassembly (DfD) are examples of better construction waste management methods. Although some authors and organizations have published rich guides addressing DfD's principles, only a few buildings have been developed in this area. This study aims to identify the challenges in the current practice of deconstruction activities and the gaps between its theory and implementation. Furthermore, it aims to provide insights into how DfD can create opportunities to turn these concepts into strategies that can be widely adopted by construction industry stakeholders in the near future.

Contributors: Rios, Fernanda (Author) / Chong, Oswald (Author) / Grau, David (Author) / Julie Ann Wrigley Global Institute of Sustainability (Contributor)
Created: 2015-09-14
Description

Previous studies in building energy assessment clearly state that, to meet sustainable energy goals, existing buildings as well as new buildings will need to improve their energy efficiency. Thus, meeting energy goals relies on retrofitting existing buildings. Most building energy models are bottom-up engineering models, meaning these models calculate the energy demand of individual buildings from their physical properties and energy use for specific end uses (e.g., lighting, appliances, and water heating). Researchers then scale up these model results to represent the building stock of the region studied.
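
A minimal sketch of the bottom-up logic just described: per-building end-use demands are summed, then scaled by the number of buildings each archetype represents. All names and figures are illustrative, not from the paper.

```python
# One archetype's annual demand from its end uses, scaled to the stock level.
end_use_kwh = {"lighting": 1200, "appliances": 2500, "water_heating": 3000}
building_kwh = sum(end_use_kwh.values())           # one building's demand

archetype_counts = {"pre_1960_detached": 40_000}   # buildings represented
stock_kwh = building_kwh * archetype_counts["pre_1960_detached"]
print(f"{stock_kwh:,} kWh/yr for this archetype")
```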

Studies reveal that there is a lack of information about the building stock and associated modeling tools, and this lack of knowledge affects the assessment of building energy efficiency strategies. The literature suggests that the complexity of energy models needs to be limited. The accuracy of these energy models can be improved by reducing the number of input parameters, alleviating the need for users to make many assumptions about building construction and occupancy, among other factors. To mitigate the need for assumptions and the resulting model inaccuracies, the authors argue that buildings should be described in a regional stock model with a restricted number of input parameters. One commonly accepted method of identifying critical input parameters is sensitivity analysis, which requires a large number of model runs that are time consuming and may demand high processing capacity.

This paper utilizes the Energy, Carbon and Cost Assessment for Building Stocks (ECCABS) model, which calculates the net energy demand of buildings and presents aggregated and individual-building-level demand for specific end uses, e.g., heating, cooling, lighting, hot water, and appliances. The model has already been validated using Swedish, Spanish, and UK building stock data. This paper discusses potential improvements to the model by assessing the feasibility of using stepwise regression to identify the most important input parameters, using data from the UK residential sector. The paper presents the results of the stepwise regression, compares them to sensitivity analysis, and documents the advantages and challenges associated with each method.
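
A minimal sketch of forward stepwise regression as a way to rank input parameters, under stated assumptions: the parameter names, data, and stopping rule (adjusted R-squared) are hypothetical choices, not necessarily those of the ECCABS study.

```python
# Forward stepwise regression over hypothetical building-stock inputs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 6)),
                 columns=["u_wall", "u_roof", "ach", "set_temp",
                          "floor_area", "occupancy"])
y = 3 * X["u_wall"] + 2 * X["set_temp"] + rng.normal(size=200)

selected, remaining = [], list(X.columns)
while remaining:
    # Score each candidate by the adjusted R^2 of the enlarged model.
    scores = {c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().rsquared_adj
              for c in remaining}
    best = max(scores, key=scores.get)
    current = (sm.OLS(y, sm.add_constant(X[selected])).fit().rsquared_adj
               if selected else -np.inf)
    if scores[best] <= current:   # stop when no candidate improves the fit
        break
    selected.append(best)
    remaining.remove(best)

print(selected)   # input parameters ranked by order of entry
```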

Contributors: Arababadi, Reza (Author) / Naganathan, Hariharan (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-09-14
Description

We have previously shown that the diversity of antibodies in an individual can be displayed on chips on which 130,000 peptides chosen from random sequence space have been synthesized. This immunosignature technology is unbiased in displaying antibody diversity relative to natural sequence space, and has been shown to have diagnostic and prognostic potential for a wide variety of diseases and vaccines. Here we show that a global measure such as Shannon's entropy can be calculated for each immunosignature. The immune entropy was measured across a diverse set of 800 people and in 5 individuals over 3 months. The immune entropy is affected by some population characteristics and varies widely across individuals. We find that people with infections or breast cancer generally have higher entropy values than non-diseased individuals. We propose that immune entropy, as measured from immunosignatures, may be a simple method to monitor health in individuals and populations.
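
A minimal sketch of the entropy measure described above: Shannon's entropy computed over one immunosignature's normalized intensity profile. The normalization and the simulated readout are assumptions; the abstract does not specify the authors' exact formulation.

```python
# Shannon entropy of a single immunosignature.
import numpy as np

def immune_entropy(intensities: np.ndarray) -> float:
    """Shannon entropy (bits) of a normalized peptide-intensity profile."""
    p = intensities / intensities.sum()   # treat intensities as a distribution
    p = p[p > 0]                          # drop zeros before taking logs
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
signature = rng.gamma(2.0, 1.0, size=130_000)   # hypothetical chip readout
print(immune_entropy(signature))
```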

Contributors: Wang, Lu (Author) / Whittemore, K. (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created: 2017-12-22
Description

The heat-labile toxins (LT) produced by enterotoxigenic Escherichia coli display adjuvant effects to coadministered antigens, leading to enhanced production of serum antibodies. Despite extensive knowledge of the adjuvant properties of LT derivatives, including in vitro-generated non-toxic mutant forms, little is known about the capacity of these adjuvants to modulate the epitope specificity of antibodies directed against antigens. This study characterizes the role of LT and its non-toxic B subunit (LTB) in the modulation of antibody responses to a coadministered antigen, the dengue virus (DENV) envelope glycoprotein domain III (EDIII), which binds to surface receptors and mediates virus entry into host cells. In contrast to non-adjuvanted or alum-adjuvanted formulations, antibodies induced in mice immunized with LT or LTB showed enhanced virus-neutralization effects that were not ascribed to a subclass shift or antigen affinity. Nonetheless, immunosignature analyses revealed that purified LT-adjuvanted EDIII-specific antibodies display distinct epitope-binding patterns relative to antibodies raised in mice immunized with EDIII alone or with the alum-adjuvanted vaccine. Notably, the analyses led to the identification of a specific EDIII epitope located in the EF to FG loop, which is involved in the entry of DENV into eukaryotic cells. The present results demonstrate that LT and LTB modulate the epitope specificity of antibodies generated after immunization with coadministered antigens, which, in the case of EDIII, was associated with the induction of neutralizing antibody responses. These results open perspectives for the more rational development of vaccines with enhanced protective effects against DENV infections.

Created: 2017-09-25
Description

As construction continues to be a leading industry in the number of injuries and fatalities annually, several organizations and agencies are working avidly to ensure the number of injuries and fatalities is minimized. The Occupational Safety and Health Administration (OSHA) is one such effort to assure safe and healthful working conditions for working men and women by setting and enforcing standards and by providing training, outreach, education, and assistance. Given the large databases of OSHA historical events and reports, manual analysis of the fatality and catastrophe investigation content is a time-consuming and expensive process. This paper aims to evaluate the strength of unsupervised machine learning and Natural Language Processing (NLP) in supporting safety inspections and reorganizing an accident database at the state level. After collecting construction accident reports from the OSHA Arizona office, the methodology consists of preprocessing the accident reports and weighting terms in order to apply a data-driven, unsupervised, K-Means-based clustering approach. The proposed method classifies the collected reports into four clusters, each representing a type of accident. The results show the construction accidents in the state of Arizona to be caused by falls (42.9%), struck-by objects (34.3%), electrocutions (12.5%), and trench collapses (10.3%). The findings of this research empower state and local agencies with a customized presentation of the accidents fitting their regulations and weather conditions. What is applicable to one climate might not be suitable for another; therefore, such rearrangement of the accident database at the state level is a necessary prerequisite to enhancing local safety applications and standards.
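
A minimal sketch of the pipeline the abstract outlines: TF-IDF term weighting followed by K-Means with four clusters. The toy reports below stand in for the OSHA Arizona narratives, which are not reproduced here.

```python
# TF-IDF weighting plus K-Means clustering of accident report text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "worker fell from scaffold and sustained multiple fractures",
    "employee struck by falling beam while walking the site",
    "electrocution occurred while servicing an energized panel",
    "trench collapsed, burying two laborers at excavation site",
]  # toy stand-ins for OSHA Arizona accident narratives

vectors = TfidfVectorizer(stop_words="english").fit_transform(reports)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(vectors)
print(labels)   # one cluster id (accident type) per report
```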

Contributors: Chokor, Abbas (Author) / Naganathan, Hariharan (Author) / Chong, Oswald (Author) / El Asmar, Mounir (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

There is an increasing awareness that health care must move from post-symptomatic treatment to presymptomatic intervention. An ideal system would allow regular, inexpensive monitoring of health status using circulating antibodies to report on health fluctuations. Recently, we demonstrated that peptide microarrays can do this through antibody signatures (immunosignatures). Unfortunately, printed microarrays are not scalable. Here we demonstrate a platform based on fabricating microarrays (~10 M peptides per slide, 330,000 peptides per assay) on silicon wafers using equipment common to semiconductor manufacturing. The potential of these microarrays for comprehensive health monitoring is verified through the simultaneous detection and classification of six different infectious diseases and six different cancers. Besides diagnostics, these high-density peptide chips have numerous other applications both in health care and elsewhere.

Contributors: Legutki, Joseph Barten (Author) / Zhao, Zhan-Gong (Author) / Greving, Matt (Author) / Woodbury, Neal (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created: 2014-09-03