This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.


Description

Adaptation requires genetic variation, but founder populations are generally genetically depleted. Here we sequence two populations of an inbred ant that diverge in phenotype to determine how variability is generated. Cardiocondyla obscurior has the smallest of the sequenced ant genomes and its structure suggests a fundamental role of transposable elements (TEs) in adaptive evolution. Accumulations of TEs (TE islands) comprising 7.18% of the genome evolve faster than other regions with regard to single-nucleotide variants, gene/exon duplications and deletions, and gene homology. A non-random distribution of gene families, larval/adult-specific gene expression, and signs of differential methylation in TE islands indicate intragenomic differences in regulation, evolutionary rates, and coalescent effective population size. Our study reveals a tripartite interplay between TEs, life history and adaptation in an invasive species.
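The 7.18% figure is a coverage fraction: total TE-island length divided by genome length. A minimal sketch of that computation, using hypothetical interval coordinates rather than the actual C. obscurior annotation:

```python
def te_island_fraction(islands, genome_length):
    """Fraction of the genome covered by TE islands.

    islands: list of (start, end) half-open intervals; may overlap.
    """
    covered = 0
    last_end = 0
    for start, end in sorted(islands):
        start = max(start, last_end)  # skip any already-counted overlap
        if end > start:
            covered += end - start
            last_end = end
    return covered / genome_length

# Hypothetical annotation: three islands on a 1 Mb genome
islands = [(0, 30_000), (25_000, 50_000), (900_000, 950_000)]
print(te_island_fraction(islands, 1_000_000))  # 0.1
```

Merging overlaps before summing avoids double-counting nested or abutting island annotations.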

ContributorsSchrader, Lukas (Author) / Kim, Jay W. (Author) / Ence, Daniel (Author) / Zimin, Aleksey (Author) / Klein, Antonia (Author) / Wyschetzki, Katharina (Author) / Weichselgartner, Tobias (Author) / Kemena, Carsten (Author) / Stoekl, Johannes (Author) / Schultner, Eva (Author) / Wurm, Yannick (Author) / Smith, Christopher D. (Author) / Yandell, Mark (Author) / Heinze, Juergen (Author) / Gadau, Juergen (Author) / Oettler, Jan (Author) / College of Liberal Arts and Sciences (Contributor)
Created2014-12-01
Description

The phenomenon of Fano resonance is ubiquitous in a large variety of wave scattering systems, where the resonance profile is typically asymmetric. Whether the parameter characterizing the asymmetry should be complex or real is an issue of great experimental interest. Using coherent quantum transport as a paradigm and taking into account the collective contribution from all available scattering channels, we derive a universal formula for the Fano-resonance profile. We show that our formula naturally bridges the traditional Fano formulas with complex and real asymmetry parameters, indicating that the two types of formulas are fundamentally equivalent (except for an offset). The connection also reveals a clear footprint for the conductance resonance during a dephasing process. Therefore, the emergence of a complex asymmetry parameter when fitting experimental data needs to be properly interpreted. Furthermore, we provide a theory for the width of the resonance, which relates the width explicitly to the degree of localization of the nearby eigenstates and the corresponding coupling matrices or the self-energies caused by the leads. Our work not only resolves the issue of the nature of the asymmetry parameter, but also provides deeper physical insight into the origin of Fano resonance. Since the only assumption in our treatment is that the transport can be described by the Green’s function formalism, our results are also valid for broad disciplines, including scattering problems of electromagnetic waves, acoustics, and seismology.
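For readers unfamiliar with the "traditional Fano formulas" the abstract contrasts: the classic line shape is f(ε) = (ε + q)² / (ε² + 1), and the complex-q variant replaces the numerator with |ε + q|². A small illustrative sketch (of the traditional profiles only, not the authors' universal formula) showing how an imaginary part in q lifts the zero of the dip:

```python
import numpy as np

def fano_profile(eps, q):
    """Traditional Fano line shape; accepts a real or complex q.

    eps: reduced detuning (E - E_res) / (Gamma / 2)
    q:   asymmetry parameter
    """
    return np.abs(eps + q) ** 2 / (eps ** 2 + 1)

eps = np.linspace(-10, 10, 2001)
real_q = fano_profile(eps, 1.0)            # asymmetric dip-peak profile
complex_q = fano_profile(eps, 1.0 + 0.5j)  # the dip no longer reaches zero

print(real_q.min())     # 0.0 exactly, at eps = -1
print(complex_q.min())  # strictly positive: Im(q) lifts the zero
```

The lifted zero is precisely why fitted asymmetry parameters acquire an imaginary part when dephasing fills in the antiresonance.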

ContributorsHuang, Liang (Author) / Lai, Ying-Cheng (Author) / Luo, Hong-Gang (Author) / Grebogi, Celso (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2015-01-01
Description

Persistent currents (PCs), one of the most intriguing manifestations of the Aharonov-Bohm (AB) effect, are known to vanish for Schrödinger particles in the presence of random scatterings, e.g., due to classical chaos. But would this still be the case for Dirac fermions? Addressing this question is of significant value due to the tremendous recent interest in two-dimensional Dirac materials. We investigate relativistic quantum AB rings threaded by a magnetic flux and find that PCs are extremely robust. Even for highly asymmetric rings that host fully developed classical chaos, the amplitudes of PCs are of the same order of magnitude as those for integrable rings, hence the term superpersistent currents (SPCs). A striking finding is that the SPCs can be attributed to a robust type of relativistic quantum states, i.e., Dirac whispering gallery modes (WGMs) that carry large angular momenta and travel along the boundaries. We propose an experimental scheme using topological insulators to observe and characterize Dirac WGMs and SPCs, and speculate that these features can potentially be the basis for a new class of relativistic qubit systems. Our discovery of WGMs in relativistic quantum systems is remarkable because, although WGMs are common in photonic systems, they are relatively rare in electronic systems.

ContributorsXu, Hongya (Author) / Huang, Liang (Author) / Lai, Ying-Cheng (Author) / Grebogi, Celso (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created2015-03-11
Description

Faced with numerous seemingly intractable social and environmental challenges, many scholars and practitioners are increasingly interested in understanding how to actively engage and transform the existing systems holding such problems in place. Although a variety of analytical models have emerged in recent years, most emphasize either the social or the ecological elements of such transformations rather than their coupled nature. To address this, we first present a definition of the core elements of a social-ecological system (SES) that could potentially be altered in a transformation. Second, we draw on insights about transformation from three branches of literature focused on radical change, i.e., social movements, socio-technical transitions, and social innovation, and consider their similarities and differences with current studies by resilience scholars. Drawing on these findings, we propose a framework that outlines the process and phases of transformative change in an SES. Future research will be able to use the framework as a tool for analyzing the alteration of social-ecological feedbacks, identifying critical barriers and leverage points, and assessing the outcomes of social-ecological transformations.

ContributorsMoore, Michele-Lee (Author) / Tjornbo, Ola (Author) / Enfors, Elin (Author) / Knapp, Corrie (Author) / Hodbod, Jennifer (Author) / Baggio, Jacopo (Author) / Norstrom, Albert (Author) / Olsson, Per (Author) / Biggs, Duan (Author) / Julie Ann Wrigley Global Institute of Sustainability (Contributor)
Created2013-11-30
Description

Online communities are becoming increasingly important as platforms for large-scale human cooperation. These communities allow users seeking and sharing professional skills to solve problems collaboratively. To investigate how users cooperate to complete a large number of knowledge-producing tasks, we analyze Stack Exchange, one of the largest question and answer systems in the world. We construct attention networks to model the growth of 110 communities in the Stack Exchange system and quantify individual answering strategies using the linking dynamics on attention networks. We identify two answering strategies. Strategy A aims at performing maintenance by doing simple tasks, whereas strategy B aims at investing time in doing challenging tasks. Both strategies are important: empirical evidence shows that strategy A decreases the median waiting time for answers and strategy B increases the acceptance rate of answers. In investigating the strategic persistence of users, we find that users tend to stick to the same strategy over time within a community but switch from one strategy to the other across communities. This finding reveals the different sets of knowledge and skills among users. A balance between the populations of users adopting strategies A and B that approximates 2:1 is found to be optimal for the sustainable growth of communities.

ContributorsWu, Lingfei (Author) / Baggio, Jacopo (Author) / Janssen, Marco (Author) / ASU-SFI Center for Biosocial Complex Systems (Contributor)
Created2016-03-02
Description

Background: Immunosignaturing is a new peptide microarray-based technology for profiling humoral immune responses. Despite new challenges, immunosignaturing gives us the opportunity to explore new and fundamentally different research questions. In addition to classifying samples based on disease status, the complex patterns and latent factors underlying immunosignatures, which we attempt to model, may have a diverse range of applications.

Methods: We investigate the utility of a number of statistical methods to determine model performance and address challenges inherent in analyzing immunosignatures. Some of these methods include exploratory and confirmatory factor analyses, classical significance testing, structural equation and mixture modeling.

Results: We demonstrate an ability to classify samples based on disease status and show that immunosignaturing is a very promising technology for screening and presymptomatic screening of disease. In addition, we are able to model complex patterns and latent factors underlying immunosignatures. These latent factors may serve as biomarkers for disease and may play a key role in a bioinformatic method for antibody discovery.

Conclusion: Based on this research, we lay out an analytic framework illustrating how immunosignatures may be useful as a general method for screening and presymptomatic screening of disease as well as antibody discovery.

ContributorsBrown, Justin (Author) / Stafford, Phillip (Author) / Johnston, Stephen (Author) / Dinu, Valentin (Author) / College of Health Solutions (Contributor)
Created2011-08-19
Description

Background: Microarray image analysis processes scanned digital images of hybridized arrays to produce the input spot-level data for downstream analysis, so it can have a potentially large impact on these and subsequent analyses. Signal saturation is an optical effect that occurs when some pixel values for highly expressed genes or peptides exceed the upper detection threshold of the scanner software (2^16 − 1 = 65,535 for 16-bit images). In practice, spots with a sizable number of saturated pixels are often flagged and discarded. Alternatively, the saturated values are used without adjustment for estimating spot intensities. The resulting expression data tend to be biased downwards and can distort high-level analyses that rely on these data. Hence, it is crucial to correct effectively for signal saturation.
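The censoring idea can be made concrete: a 16-bit pixel reading at the ceiling only tells us the true intensity is at least 2^16 − 1, not what it actually is. A minimal sketch (a hypothetical helper, not the authors' estimation procedure) for measuring how censored a spot is:

```python
import numpy as np

SATURATION = 2 ** 16 - 1  # 65535, the 16-bit scanner ceiling

def saturation_fraction(spot_pixels, threshold=SATURATION):
    """Fraction of pixels in a spot that sit at the detection ceiling.

    Pixels at the ceiling are censored observations: their true
    intensity is >= threshold, not equal to it.
    """
    pixels = np.asarray(spot_pixels)
    return (pixels >= threshold).mean()

spot = np.array([41_200, 65_535, 65_535, 39_870, 65_535, 52_003])
print(saturation_fraction(spot))  # 0.5
```

A pipeline that simply discards spots above some cutoff fraction loses exactly the highly expressed genes the censored-mixture approach is designed to recover.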

Results: We developed a flexible mixture model-based segmentation and spot intensity estimation procedure that accounts for saturated pixels by incorporating a censored component in the mixture model. As demonstrated with biological data and simulation, our method extends the dynamic range of expression data beyond the saturation threshold and is effective in correcting saturation-induced bias when the lost information is not tremendous. We further illustrate the impact of image processing on downstream classification, showing that the proposed method can increase diagnostic accuracy using data from a lymphoma cancer diagnosis study.

Conclusions: The presented method adjusts for signal saturation at the segmentation stage that identifies a pixel as part of the foreground, background or other. The cluster membership of a pixel can be altered versus treating saturated values as truly observed. Thus, the resulting spot intensity estimates may be more accurate than those obtained from existing methods that correct for saturation based on already segmented data. As a model-based segmentation method, our procedure is able to identify inner holes, fuzzy edges and blank spots that are common in microarray images. The approach is independent of microarray platform and applicable to both single- and dual-channel microarrays.

ContributorsYang, Yan (Author) / Stafford, Phillip (Author) / Kim, YoonJoo (Author) / College of Liberal Arts and Sciences (Contributor)
Created2011-11-30
Description

Background: High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and others. Typically one trains a classification system by gathering large amounts of probe-level data and selecting informative features, then classifies test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic is the assumption of independence, both at the probe level and again at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways where co-regulation of transcriptional units may make many genes appear completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable when other technologies with different binding characteristics exist. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random sequence peptides. It relies on many-to-many binding of antibodies to the random sequence peptides. Each peptide can bind multiple antibodies and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear which classification algorithm is optimal for analyzing this new type of data.

Results: We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy.

Conclusions: The ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties.
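To make the simplicity claim concrete, here is a minimal Gaussian naive Bayes classifier written from scratch and run on synthetic data; this is a generic sketch of the technique, not the specific implementation benchmarked in the paper:

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian naive Bayes: each feature is modeled as an
    independent normal within each class, which is what keeps the
    model cheap and robust even when that assumption is only rough."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.logprior_ = np.log([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # log N(x | mu, var), summed over the (assumed independent) features
        ll = -0.5 * (np.log(2 * np.pi * self.var_[:, None, :])
                     + (X[None, :, :] - self.mu_[:, None, :]) ** 2
                     / self.var_[:, None, :]).sum(axis=2)
        return self.classes_[np.argmax(ll + self.logprior_[:, None], axis=0)]

# Two well-separated synthetic classes standing in for easy-to-classify samples
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(4, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
model = GaussianNaiveBayes().fit(X, y)
print((model.predict(X) == y).mean())  # 1.0 on this easily separable data
```

Fitting reduces to per-class means and variances, which is why the method scales well to the many correlated features of an immunosignature array.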

ContributorsKukreja, Muskan (Author) / Johnston, Stephen (Author) / Stafford, Phillip (Author) / Biodesign Institute (Contributor)
Created2012-06-21
Description

On-going efforts to understand the dynamics of coupled social-ecological (or more broadly, coupled infrastructure) systems and common pool resources have led to the generation of numerous datasets based on a large number of case studies. These data have facilitated the identification of important factors and fundamental principles that increase our understanding of such complex systems. However, the data at our disposal are often not easily comparable, have limited scope and scale, and are based on disparate underlying frameworks inhibiting synthesis, meta-analysis, and the validation of findings. Research efforts are further hampered when case inclusion criteria, variable definitions, coding schema, and inter-coder reliability testing are not made explicit in the presentation of research and shared among the research community. This paper first outlines challenges experienced by researchers engaged in a large-scale coding project; then highlights valuable lessons learned; and finally discusses opportunities for further research on comparative case study analysis focusing on social-ecological systems and common pool resources. Includes supplemental materials and appendices published in the International Journal of the Commons 2016 Special Issue, Volume 10, Issue 2, 2016.
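Inter-coder reliability testing, mentioned above, is commonly quantified with Cohen's kappa, which corrects raw agreement between two coders for agreement expected by chance. A self-contained sketch with hypothetical coding data (the paper does not specify which metric it used):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    # Chance agreement: product of each coder's marginal label frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical presence/absence codes from two independent coders
a = ["present", "present", "absent", "present", "absent", "absent"]
b = ["present", "present", "absent", "absent", "absent", "absent"]
print(round(cohens_kappa(a, b), 3))  # 0.667
```

Kappa of 1 means perfect agreement and 0 means no agreement beyond chance, which is why reporting it alongside raw percent agreement makes coding schemes comparable across studies.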

ContributorsRatajczyk, Elicia (Author) / Brady, Ute (Author) / Baggio, Jacopo (Author) / Barnett, Allain J. (Author) / Perez Ibarra, Irene (Author) / Rollins, Nathan (Author) / Rubinos, Cathy (Author) / Shin, Hoon Cheol (Author) / Yu, David (Author) / Aggarwal, Rimjhim (Author) / Anderies, John (Author) / Janssen, Marco (Author) / ASU-SFI Center for Biosocial Complex Systems (Contributor)
Created2016-09-09
Description

Governing common pool resources (CPR) in the face of disturbances such as globalization and climate change is challenging. The outcome of any CPR governance regime is influenced by local combinations of social, institutional, and biophysical factors, as well as cross-scale interdependencies. In this study, we take a step towards understanding the multiple causation of CPR outcomes by analyzing 1) the co-occurrence of Design Principles (DP) by activity (irrigation, fishery, and forestry), and 2) the combination(s) of DPs leading to social and ecological success. We analyzed 69 cases pertaining to three different activities: irrigation, fishery, and forestry. We find that the importance of the design principles depends on the natural and hard human-made infrastructure (e.g., canals, equipment, vessels). For example, clearly defined social boundaries are important when the natural infrastructure is highly mobile (e.g., tuna), while monitoring is more important when the natural infrastructure is more static (e.g., forests or water contained within an irrigation system). However, we also find that congruence between local conditions and rules, and proportionality between investment and extraction, are key for CPR success independent of the natural and hard human-made infrastructure. We further provide new visualization techniques for co-occurrence patterns and add to qualitative comparative analysis by introducing a reliability metric to deal with a large meta-analysis dataset of secondary data where information is missing or uncertain.
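Counting co-occurrences of design principles across coded cases reduces to a binary matrix product. An illustrative sketch with hypothetical codings (the actual 69-case dataset is not reproduced here, and the principle names are placeholders):

```python
import numpy as np

# Hypothetical binary coding: rows = cases, columns = design principles,
# 1 = the principle was documented as present in that case.
dp_names = ["boundaries", "congruence", "monitoring", "sanctions"]
cases = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
])

# cooccur[i, j] = number of cases where principles i and j are both present;
# the diagonal gives each principle's overall presence count.
cooccur = cases.T @ cases
print(cooccur[dp_names.index("boundaries"), dp_names.index("congruence")])  # 3
```

The resulting symmetric matrix is the standard input for co-occurrence heatmaps or network visualizations of the kind the abstract describes.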

Includes supplemental materials and appendices published in the International Journal of the Commons 2016 Special Issue, Volume 10, Issue 2, 2016.

ContributorsBaggio, Jacopo (Author) / Barnett, Alain J. (Author) / Perez, Irene (Author) / Brady, Ute (Author) / Ratajczyk, Elicia (Author) / Rollins, Nathan (Author) / Rubinos, Cathy (Author) / Shin, Hoon Cheol (Author) / Yu, David (Author) / Aggarwal, Rimjhim (Author) / Anderies, John (Author) / Janssen, Marco (Author) / Julie Ann Wrigley Global Institute of Sustainability (Contributor)
Created2016-09-09