This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

Specification of PM2.5 transmission characteristics is important for pollution control and policymaking. We apply higher-order organization of complex networks to identify major potential PM2.5 contributors and PM2.5 transport pathways of a network of 189 cities in China. The network we create in this paper consists of major cities in China and contains information on meteorological conditions of wind speed and wind direction, data on geographic distance, mountains, and PM2.5 concentrations. We aim to reveal PM2.5 mobility between cities in China. Two major conclusions are revealed through motif analysis of complex networks. First, major potential PM2.5 pollution contributors are identified for each cluster by one motif, which reflects movements from source to target. Second, transport pathways of PM2.5 are revealed by another motif, which reflects transmission routes. To our knowledge, this is the first work to apply higher-order network analysis to study PM2.5 transport.

Contributors: Wang, Yufang (Author) / Wang, Haiyan (Author) / Chang, Shuhua (Author) / Liu, Maoxing (Author) / New College of Interdisciplinary Arts and Sciences (Contributor)
Created: 2017-10-16
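The higher-order analysis described in this abstract rests on counting small directed subgraphs (motifs). As a minimal illustration of the idea — not the authors' actual pipeline — the sketch below enumerates one three-node "convergence" motif (two upwind sources feeding a common target, with no edge between the sources) in a toy directed city network; the city names and edges are invented, not the paper's data:

```python
from itertools import permutations

def find_motif_instances(edges):
    """Enumerate instances of a 3-node 'convergence' motif:
    a -> c and b -> c, with no edge between a and b.
    Directed edges are given as (source, target) pairs."""
    edge_set = set(edges)
    nodes = {n for e in edges for n in e}
    instances = []
    for a, b, c in permutations(nodes, 3):
        if a < b:  # avoid listing the same source pair twice
            if ((a, c) in edge_set and (b, c) in edge_set
                    and (a, b) not in edge_set and (b, a) not in edge_set):
                instances.append((a, b, c))
    return instances

# Toy wind-driven transport network: edges point downwind (invented).
edges = [("Baoding", "Beijing"), ("Tianjin", "Beijing"), ("Beijing", "Chengde")]
print(find_motif_instances(edges))  # → [('Baoding', 'Tianjin', 'Beijing')]
```

In a transport interpretation, each instance flags a target city receiving pollution from two otherwise-unconnected sources; a real analysis would weight edges by wind and distance data.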
Description

To investigate dual-process persuasion theories in the context of group decision making, we studied low and high need-for-cognition (NFC) participants within a mock trial study. Participants considered plaintiff and defense expert scientific testimony that varied in argument strength. All participants heard a cross-examination of the experts focusing on peripheral information (e.g., credentials) about the expert, but half were randomly assigned to also hear central information highlighting flaws in the expert’s message (e.g., quality of the research presented by the expert). Participants rendered pre- and post-group-deliberation verdicts, which were considered “scientifically accurate” if the verdicts reflected the strong (versus weak) expert message, and “scientifically inaccurate” if they reflected the weak (versus strong) expert message. For individual participants, we replicated studies testing classic persuasion theories: Factors promoting reliance on central information (i.e., central cross-examination, high NFC) improved verdict accuracy because they sensitized individual participants to the quality discrepancy between the experts’ messages. Interestingly, however, at the group level, the more that scientifically accurate mock jurors discussed peripheral (versus central) information about the experts, the more likely their group was to reach the scientifically accurate verdict. When participants were arguing for the scientifically accurate verdict consistent with the strong expert message, peripheral comments increased their persuasiveness, which made the group more likely to reach the more scientifically accurate verdict.

Created: 2017-09-20
Description

Recent infectious outbreaks highlight the need for platform technologies that can be quickly deployed to develop therapeutics needed to contain the outbreak. We present a simple concept for rapid development of new antimicrobials. The goal was to produce thousands of doses of an intervention for a new pathogen in as little as one week. We tested the feasibility of a system based on antimicrobial synbodies. The system involves creating an array of 100 peptides that have been selected for broad capability to bind and/or kill viruses and bacteria. The peptides are pre-screened for low cell toxicity prior to large-scale synthesis. Any pathogen is then assayed on the chip to find peptides that bind or kill it. Peptides are combined in pairs as synbodies and further screened for activity and toxicity. The lead synbody can be quickly produced at large scale, with completion of the entire process in one week.

Contributors: Johnston, Stephen (Author) / Domenyuk, Valeriy (Author) / Gupta, Nidhi (Author) / Tavares Batista, Milene (Author) / Lainson, John (Author) / Zhao, Zhan-Gong (Author) / Lusk, Joel (Author) / Loskutov, Andrey (Author) / Cichacz, Zbigniew (Author) / Stafford, Phillip (Author) / Legutki, Joseph Barten (Author) / Diehnelt, Chris (Author) / Biodesign Institute (Contributor)
Created: 2017-12-14
Description

One of the gravest dangers facing cancer patients is an extended symptom-free lull between tumor initiation and the first diagnosis. Detection of tumors is critical for effective intervention. Using the body’s immune system to detect and amplify tumor-specific signals may enable detection of cancer using an inexpensive immunoassay. Immunosignatures are one such assay: they provide a map of antibody interactions with random-sequence peptides. They enable detection of disease-specific patterns using classic train/test methods. However, to date, very little effort has gone into extracting information from the sequence of peptides that interact with disease-specific antibodies. Because it is difficult to represent all possible antigen peptides in a microarray format, we chose to synthesize only 330,000 peptides on a single immunosignature microarray. The 330,000 random-sequence peptides on the microarray represent 83% of all tetramers and 27% of all pentamers, creating an unbiased but substantial gap in the coverage of total sequence space. We therefore chose to examine many relatively short motifs from these random-sequence peptides. Time-variant analysis of recurrent subsequences provided a means to dissect amino acid sequences from the peptides while simultaneously retaining the antibody–peptide binding intensities. We first used a simple experiment in which monoclonal antibodies with known linear epitopes were exposed to these random-sequence peptides, and their binding intensities were used to create our algorithm. We then demonstrated the performance of the proposed algorithm by examining immunosignatures from patients with glioblastoma multiforme (GBM), an aggressive form of brain cancer. Eight different frameshift targets were identified from the random-sequence peptides using this technique. If immune-reactive antigens can be identified using a relatively simple immune assay, it might enable a diagnostic test with sufficient sensitivity to detect tumors in a clinically useful way.

Created: 2015-06-18
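The core idea in this abstract — dissecting short motifs from random-sequence peptides while retaining the antibody-binding intensities — can be illustrated (with invented peptides and intensities, not the authors' algorithm or data) by accumulating signal per tetramer, so that a shared, high-signal subsequence stands out:

```python
from collections import defaultdict

def kmer_intensity(peptides, intensities, k=4):
    """Sum antibody-binding intensity over every length-k subsequence
    across a set of random-sequence peptides (toy illustration).
    A k-mer shared by several high-signal peptides accumulates the
    largest total, hinting at a common linear epitope."""
    totals = defaultdict(float)
    for seq, signal in zip(peptides, intensities):
        for i in range(len(seq) - k + 1):
            totals[seq[i:i + k]] += signal
    return dict(totals)

# Two high-binding peptides share the (hypothetical) epitope 'SAWK'.
peptides = ["GHSAWKV", "RLSAWKD", "QPNNTRE"]
intensities = [900.0, 850.0, 40.0]
scores = kmer_intensity(peptides, intensities)
print(max(scores, key=scores.get))  # → SAWK
```

A real analysis would also normalize for k-mer frequency and array coverage; this sketch shows only how sequence content and binding intensity are kept together.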
Description

In order to determine the feasibility of utilizing novel rexinoids for chemotherapeutics and as potential treatments for neurological conditions, we undertook an assessment of the side effect profile of select rexinoid X receptor (RXR) analogs that we reported previously. We assessed pharmacokinetic profiles, lipid and thyroid-stimulating hormone (TSH) levels in rats, and cell culture activity of rexinoids in sterol regulatory element-binding protein (SREBP) induction and thyroid hormone inhibition assays. We also performed RNA sequencing of the brain tissues of rats that had been dosed with the compounds. We show here for the first time that potent rexinoid activity can be uncoupled from drastic lipid changes and thyroid axis variations, and we propose that rexinoids can be developed with side effect profiles improved over those of the parent compound, bexarotene (1).

Contributors: Marshall, Pamela (Author) / Jurutka, Peter (Author) / Wagner, Carl (Author) / van der Vaart, Arjan (Author) / Kaneko, Ichiro (Author) / Chavez, Pedro I. (Author) / Ma, Ning (Author) / Bhogal, Jaskaran (Author) / Shahani, Pritika (Author) / Swierski, Johnathon (Author) / MacNeill, Mairi (Author) / New College of Interdisciplinary Arts and Sciences (Contributor)
Created: 2015-03-16
Description

Given species inventories of all sites in a planning area, integer programming or heuristic algorithms can prioritize sites in terms of the site's complementary value, that is, the ability of the site to complement (add unrepresented species to) other sites prioritized for conservation. The utility of these procedures is limited because distributions of species are typically available only as coarse atlases or range maps, whereas conservation planners need to prioritize relatively small sites. If such coarse-resolution information can be used to identify small sites that efficiently represent species (i.e., downscaled), then such data can be useful for conservation planning. We develop and test a new type of surrogate for biodiversity, which we call downscaled complementarity. In this approach, complementarity values from large cells are downscaled to small cells, using statistical methods or simple map overlays. We illustrate our approach for birds in Spain by building models at coarse scale (50 × 50 km atlas of European birds, and global range maps of birds interpreted at the same 50 × 50 km grid size), using this model to predict complementary value for 10 × 10 km cells in Spain, and testing how well prioritized cells represented bird distributions in an independent bird atlas of those 10 × 10 km cells. Downscaled complementarity was about 63–77% as effective as having full knowledge of the 10-km atlas data in its ability to improve on random selection of sites. Downscaled complementarity has relatively low data acquisition cost and meets representation goals well compared with other surrogates currently in use. Our study justifies additional tests to determine whether downscaled complementarity is an effective surrogate for other regions and taxa, and at spatial resolution finer than 10 × 10 km cells. Until such tests have been completed, we caution against assuming that any surrogate can reliably prioritize sites for species representation.

Created: 2016-05-18
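The complementarity concept at the heart of this abstract — score each candidate site by the still-unrepresented species it would add — is commonly implemented as a greedy heuristic. A minimal sketch with made-up site inventories (not the article's Spanish bird data):

```python
def greedy_complementarity(site_species, n_sites):
    """Greedily pick up to n_sites sites, at each step choosing the site
    that adds the most species not yet represented (a standard
    complementarity heuristic for reserve selection)."""
    chosen, covered = [], set()
    remaining = dict(site_species)
    for _ in range(n_sites):
        if not remaining:
            break
        # complementary value = count of still-unrepresented species added
        best = max(remaining, key=lambda s: len(remaining[s] - covered))
        if not remaining[best] - covered:
            break  # no candidate adds anything new
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered

sites = {
    "A": {"lark", "wren", "kite"},
    "B": {"lark", "wren"},
    "C": {"stork", "crane"},
    "D": {"kite", "crane"},
}
picked, species = greedy_complementarity(sites, 2)
print(picked, sorted(species))  # → ['A', 'C'] all five species covered
```

Note that site B, although species-rich, is never worth picking after A: its complementary value drops to zero once A's species are covered, which is exactly the behavior the abstract describes.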
Description

Lack of biodiversity data is a major impediment to prioritizing sites for species representation. Because comprehensive species data are not available in any planning area, planners often use surrogates (such as vegetation communities, or mapped occurrences of a well-inventoried taxon) to prioritize sites. We propose and demonstrate the effectiveness of predicted rarity-weighted richness (PRWR) as a surrogate in situations where species inventories may be available for a portion of the planning area. Use of PRWR as a surrogate involves several steps. First, rarity-weighted richness (RWR) is calculated from species inventories for a q% subset of sites. Then random forest models are used to model RWR as a function of freely available environmental variables for that q% subset. This function is then used to calculate PRWR for all sites (including those for which no species inventories are available), and PRWR is used to prioritize all sites. We tested PRWR on plant and bird datasets, using the species accumulation index to measure efficiency of PRWR. Sites with the highest PRWR represented species with median efficiency of 56% (range 32%–77% across six datasets) when q = 20%, and with median efficiency of 39% (range 20%–63%) when q = 10%. An efficiency of 56% means that selecting sites in order of PRWR rank was 56% as effective as having full knowledge of species distributions in PRWR's ability to improve on the number of species represented in the same number of randomly selected sites. Our results suggest that PRWR may be able to help prioritize sites to represent species if a planner has species inventories for 10%–20% of the sites in the planning area.

Created: 2016-10-27
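Rarity-weighted richness, the quantity being predicted in this abstract, is conventionally computed by scoring each species as the reciprocal of the number of sites it occupies and summing over each site's inventory. A sketch of that step with toy inventories (the random-forest modelling of PRWR from environmental variables is omitted):

```python
from collections import Counter

def rarity_weighted_richness(site_species):
    """RWR for each site: sum over its species of
    1 / (number of sites occupied by that species),
    so range-restricted species contribute more weight."""
    occupancy = Counter(sp for species in site_species.values()
                        for sp in species)
    return {site: sum(1 / occupancy[sp] for sp in species)
            for site, species in site_species.items()}

# Invented inventories: 'endemic_vole' and 'crane' occur in one site only.
sites = {
    "cell1": {"lark", "wren"},
    "cell2": {"lark", "endemic_vole", "crane"},
    "cell3": {"lark", "wren"},
}
rwr = rarity_weighted_richness(sites)
print(max(rwr, key=rwr.get))  # → cell2, home of the two single-site species
```

Ranking sites by RWR (or, in the article, by modelled PRWR) then gives the prioritization order whose efficiency the species accumulation index measures.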
Description

MALDI-TOF MS profiling has been shown to be a rapid and reliable method to characterize pure cultures of bacteria. Currently, there is keen interest in using this technique to identify bacteria in mixtures. Promising results have been reported with two- or three-isolate model systems using biomarker-based approaches. In this work, we applied MALDI-TOF MS-based methods to a more complex model mixture containing six bacteria. We employed: 1) a biomarker-based approach that has previously been shown to be useful in identification of individual bacteria in pure cultures and simple mixtures and 2) a similarity coefficient-based approach that is routinely and nearly exclusively applied to identification of individual bacteria in pure cultures. Both strategies were developed and evaluated using blind-coded mixtures. With regard to the biomarker-based approach, results showed that most peaks in mixture spectra could be assigned to those found in spectra of each component bacterium; however, peaks shared by two isolates as well as peaks that could not be assigned to any individual component isolate were observed. For two-isolate blind-coded samples, bacteria were correctly identified using both similarity coefficient- and biomarker-based strategies, while for blind-coded samples containing more than two isolates, bacteria were more effectively identified using a biomarker-based strategy.

Contributors: Zhang, Lin (Author) / Smart, Sonja (Author) / Sandrin, Todd (Author) / New College of Interdisciplinary Arts and Sciences (Contributor)
Created: 2015-11-05
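The biomarker-based step described above — assigning peaks in a mixture spectrum to reference biomarker peaks of each component isolate — reduces to matching m/z values within a tolerance. A simplified sketch; the isolate names are real species but every mass value here is invented for illustration:

```python
def assign_peaks(mixture_peaks, reference_library, tol=2.0):
    """Assign each mixture-spectrum peak (m/z) to every reference isolate
    holding a biomarker peak within ±tol; unmatched peaks are flagged.
    Peaks matching two isolates mimic the 'shared peaks' the study saw."""
    assignments = {}
    for mz in mixture_peaks:
        matches = [isolate for isolate, peaks in reference_library.items()
                   if any(abs(mz - ref) <= tol for ref in peaks)]
        assignments[mz] = matches or ["unassigned"]
    return assignments

# Hypothetical biomarker masses (Da) per isolate -- not measured values.
library = {
    "E. coli":     [4365.0, 6255.0, 9742.0],
    "B. subtilis": [3400.0, 6254.0, 7331.0],
}
mixture = [4364.2, 6255.5, 7330.1, 5120.0]
for mz, who in assign_peaks(mixture, library).items():
    print(mz, who)
```

The toy output reproduces the three cases the abstract reports: peaks assignable to one isolate, a peak shared by two isolates (6255.5), and a peak assignable to none (5120.0).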
Description

Background: High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and other conditions. Typically one trains a classification system by gathering large amounts of probe-level data and selecting informative features, then classifies test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic is the assumption of independence, both at the probe level and again at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways where co-regulation of transcriptional units may make many genes appear completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable when other technologies with different binding characteristics exist. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random-sequence peptides. It relies on many-to-many binding of antibodies to the random-sequence peptides. Each peptide can bind multiple antibodies and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear which classification algorithm is optimal for analyzing this new type of data.

Results: We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy.

Conclusions: ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties.

Contributors: Kukreja, Muskan (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created: 2012-06-21
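Naïve Bayes owes the simplicity and robustness praised above to one modelling choice: it treats each feature as conditionally independent given the class, so training reduces to per-class, per-feature statistics. A compact Gaussian Naïve Bayes sketch on synthetic two-class data (this is an illustration of the algorithm, not the study's 17-classifier comparison):

```python
import math
from collections import defaultdict

def train_gnb(X, y):
    """Fit per-class log prior plus per-feature mean and variance."""
    by_class = defaultdict(list)
    for row, label in zip(X, y):
        by_class[label].append(row)
    model = {}
    for label, rows in by_class.items():
        n, d = len(rows), len(rows[0])
        means = [sum(r[j] for r in rows) / n for j in range(d)]
        variances = [sum((r[j] - means[j]) ** 2 for r in rows) / n + 1e-6
                     for j in range(d)]  # small floor avoids zero variance
        model[label] = (math.log(n / len(X)), means, variances)
    return model

def predict_gnb(model, x):
    """Pick the class maximizing log prior + summed feature log-likelihoods,
    i.e. the naive conditional-independence assumption in action."""
    def log_post(label):
        log_prior, means, variances = model[label]
        return log_prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, variances))
    return max(model, key=log_post)

# Toy peptide-intensity vectors: class 'disease' runs high on feature 0.
X = [[9.1, 1.0], [8.7, 1.2], [1.1, 0.9], [0.8, 1.1]]
y = ["disease", "disease", "control", "control"]
model = train_gnb(X, y)
print(predict_gnb(model, [8.9, 1.0]))  # → disease
```

Because the per-feature statistics are computed independently, the method scales linearly in the number of peptide features, one reason it stays fast and stable on high-dimensional immunosignature arrays.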
Description

Students often self-identify as visual learners and prefer to engage with a topic in an active, hands-on way. Indeed, much research has shown that students who actively engage with the material and are engrossed in the topics retain concepts better than students who are passive receivers of information. However, much of the learning of life science concepts is still driven by books and static pictures. One concept students have a hard time grasping is how a linear chain of amino acids folds to become a 3D protein structure. Adding three-dimensional activities to the topic of protein structure and function should allow for a deeper understanding of the primary, secondary, tertiary, and quaternary structure of proteins and how proteins function in a cell. Here, I review protein folding activities and describe using apps and 3D visualization to enhance student understanding of protein structure.

Created: 2014-12