This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

Recent works revealed that the energy required to control a complex network depends on the number of driving signals and the energy distribution follows an algebraic scaling law. If one implements control using a small number of drivers, e.g. as determined by the structural controllability theory, there is a high probability that the energy will diverge. We develop a physical theory to explain the scaling behaviour through identification of the fundamental structural elements, the longest control chains (LCCs), that dominate the control energy. Based on the LCCs, we articulate a strategy to drastically reduce the control energy (e.g. in a large number of real-world networks). Owing to their structural nature, the LCCs may shed light on energy issues associated with control of nonlinear dynamical networks.
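To connect the scaling discussion to a concrete computation, the sketch below (an illustration of standard linear control theory, not the authors' code) estimates the minimum control energy for a small directed chain driven by a single input via the finite-time controllability Gramian; the network, target state, and horizon are assumptions chosen only to show how a long control chain inflates the energy.

```python
# Minimum-energy control sketch for dx/dt = A x + B u: the energy to steer
# the origin to x_f in time T is x_f^T W(T)^{-1} x_f, where W(T) is the
# finite-time controllability Gramian.
import numpy as np
from scipy.linalg import expm

def control_energy(A, B, x_f, T=1.0, steps=2000):
    """Approximate W(T) = int_0^T e^{At} B B^T e^{A^T t} dt by the
    trapezoidal rule, then return the minimum control energy."""
    n = A.shape[0]
    dt = T / steps
    W = np.zeros((n, n))
    for k in range(steps + 1):
        t = k * dt
        M = expm(A * t) @ B
        w = 0.5 if k in (0, steps) else 1.0   # trapezoid end-point weights
        W += w * (M @ M.T) * dt
    return float(x_f @ np.linalg.solve(W, x_f))

# Example: a 3-node chain driven only at node 0 (a long control chain),
# which makes the Gramian ill-conditioned and the required energy large.
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
B = np.array([[1.], [0.], [0.]])
print(control_energy(A, B, x_f=np.ones(3)))
```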

Contributors: Chen, Yu-Zhong (Author) / Wang, Le-Zhi (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-04-20
Description

We develop a framework to uncover and analyse dynamical anomalies from massive, nonlinear and non-stationary time series data. The framework consists of three steps: preprocessing of massive datasets to eliminate erroneous data segments, application of the empirical mode decomposition and Hilbert transform paradigm to obtain the fundamental components embedded in the time series at distinct time scales, and statistical/scaling analysis of the components. As a case study, we apply our framework to detecting and characterizing high-frequency oscillations (HFOs) from a big database of rat electroencephalogram recordings. We find a striking phenomenon: HFOs exhibit on–off intermittency that can be quantified by algebraic scaling laws. Our framework can be generalized to big data-related problems in other fields such as large-scale sensor data and seismic data analysis.
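As an illustration of the second step, the following sketch (synthetic signal and an arbitrary threshold, not the paper's pipeline) applies the Hilbert transform to one intrinsic mode function to obtain instantaneous amplitude and frequency, from which candidate high-frequency-oscillation epochs can be flagged.

```python
# Hilbert-transform step on a single (synthetic) intrinsic mode function:
# the analytic signal yields instantaneous amplitude and frequency.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)
imf = np.sin(2 * np.pi * 80 * t) * (1 + 0.5 * np.sin(2 * np.pi * 1 * t))

analytic = hilbert(imf)                        # analytic signal of the IMF
amplitude = np.abs(analytic)                   # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)

# Flag candidate HFO epochs where the instantaneous amplitude exceeds a
# threshold (here an arbitrary multiple of the median, for illustration).
threshold = 3 * np.median(amplitude)
hfo_mask = amplitude > threshold
print(f"fraction of samples above threshold: {hfo_mask.mean():.3f}")
```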

Contributors: Huang, Liang (Author) / Ni, Xuan (Author) / Ditto, William L. (Author) / Spano, Mark (Author) / Carney, Paul R. (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-01-18
Description

Given a complex geospatial network with nodes distributed in a two-dimensional region of physical space, can the locations of the nodes be determined and their connection patterns be uncovered based solely on data? We consider the realistic situation where time series/signals can be collected from a single location. A key challenge is that the signals collected are necessarily time delayed, due to the varying physical distances from the nodes to the data collection centre. To meet this challenge, we develop a compressive-sensing-based approach enabling reconstruction of the full topology of the underlying geospatial network and more importantly, accurate estimate of the time delays. A standard triangularization algorithm can then be employed to find the physical locations of the nodes in the network. We further demonstrate successful detection of a hidden node (or a hidden source or threat), from which no signal can be obtained, through accurate detection of all its neighbouring nodes. As a geospatial network has the feature that a node tends to connect with geophysically nearby nodes, the localized region that contains the hidden node can be identified.
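The core reconstruction step can be illustrated with a generic compressive-sensing sketch (synthetic data and Lasso as the L1 solver; not the paper's measurement matrix): a sparse vector of connection weights is recovered from far fewer linear measurements than unknowns.

```python
# Compressive sensing in one line of algebra: recover sparse x from y = G x
# with n_measurements << n_links, via L1-regularized least squares.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_links, n_measurements, n_nonzero = 200, 60, 8

x_true = np.zeros(n_links)                      # sparse connection-weight vector
x_true[rng.choice(n_links, n_nonzero, replace=False)] = rng.normal(size=n_nonzero)

G = rng.normal(size=(n_measurements, n_links))  # measurement matrix from time series
y = G @ x_true + 0.01 * rng.normal(size=n_measurements)

x_hat = Lasso(alpha=0.01, max_iter=10000).fit(G, y).coef_
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.05))
print("true support:     ", np.flatnonzero(x_true))
```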

Contributors: Su, Riqi (Author) / Wang, Wen-Xu (Author) / Wang, Xiao (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-01-06
Description

Locating sources of diffusion and spreading from minimum data is a significant problem in network science with great practical value to society. However, a general theoretical framework dealing with optimal source localization is lacking. Combining the controllability theory for complex networks and compressive sensing, we develop a framework with high efficiency and robustness for optimal source localization in arbitrary weighted networks with arbitrary distribution of sources. We offer a minimum output analysis to quantify the source locatability through a minimal number of messenger nodes that produce sufficient measurement for fully locating the sources. When the minimum messenger nodes are discerned, the problem of optimal source localization becomes one of sparse signal reconstruction, which can be solved using compressive sensing. Application of our framework to model and empirical networks demonstrates that sources in homogeneous and denser networks are more readily located. A surprising finding is that, for a connected undirected network with random link weights and weak noise, a single messenger node is sufficient for locating any number of sources. The framework deepens our understanding of the network source localization problem and offers efficient tools with broad applications.
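A toy version of the sparse-signal-reconstruction step might look like the following; the random network, linear diffusion model, number of messenger nodes, and Lasso solver are illustrative assumptions, not the paper's framework.

```python
# Source localization as sparse recovery: a diffusion process x(t) = expm(-L t) x0
# runs on a network with Laplacian L; only a few "messenger" nodes are observed
# at several times, and the sparse initial source vector x0 is recovered.
import numpy as np
from scipy.linalg import expm
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n = 60                                            # network size (assumed)
A = (rng.random((n, n)) < 0.08).astype(float)
A = np.triu(A, 1); A = A + A.T                    # random undirected network
L = np.diag(A.sum(1)) - A                         # graph Laplacian

x0 = np.zeros(n)                                  # two diffusion sources
x0[rng.choice(n, 2, replace=False)] = 1.0

messengers = rng.choice(n, 6, replace=False)      # observed messenger nodes
rows, obs = [], []
for t in (0.5, 1.0, 1.5, 2.0):
    P = expm(-L * t)
    rows.append(P[messengers, :])                 # propagation restricted to observed nodes
    obs.append(P[messengers, :] @ x0)
M, y = np.vstack(rows), np.concatenate(obs)

x_hat = Lasso(alpha=1e-3, positive=True, max_iter=50000).fit(M, y).coef_
print("estimated sources:", np.flatnonzero(x_hat > 0.1))
print("true sources:     ", np.flatnonzero(x0))
```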

Contributors: Hu, Zhao-Long (Author) / Han, Xiao (Author) / Lai, Ying-Cheng (Author) / Wang, Wen-Xu (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-04-12
Description

Of particular interest to the neuroscience and robotics communities is the understanding of how two humans could physically collaborate to perform motor tasks such as holding a tool or moving it across locations. When two humans physically interact with each other, sensory consequences and motor outcomes are not entirely predictable as they also depend on the other agent’s actions. The sensory mechanisms involved in physical interactions are not well understood. The present study was designed (1) to quantify human–human physical interactions where one agent (“follower”) has to infer the intended or imagined—but not executed—direction of motion of another agent (“leader”) and (2) to reveal the underlying strategies used by the dyad. This study also aimed at verifying the extent to which visual feedback (VF) is necessary for communicating intended movement direction. We found that the leader’s control of the relationship between force and motion was a critical factor in conveying his/her intended movement direction to the follower, regardless of VF of the grasped handle or the arms. Interestingly, the dyad’s ability to communicate and infer movement direction with significant accuracy improved (>83%) after a relatively short amount of practice. These results indicate that the relationship between force and motion (interpreted as arm impedance modulation) may represent an important means for communicating intended movement direction between biological agents, as indicated by the modulation of this relationship with intended direction. Ongoing work is investigating the application of the present findings to optimize communication of high-level movement goals during physical interactions between biological and non-biological agents.

Contributors: Mojtahedi, Keivan (Author) / Whitsell, Bryan (Author) / Artemiadis, Panagiotis (Author) / Santello, Marco (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-04-13
Description

How effective are governmental incentives to achieve widespread vaccination coverage so as to prevent epidemic outbreak? The answer largely depends on the complex interplay among the type of incentive, individual behavioral responses, and the intrinsic epidemic dynamics. By incorporating evolutionary games into epidemic dynamics, we investigate the effects of two types of incentive strategies: a partial-subsidy policy, in which a certain fraction of the cost of vaccination is offset, and a free-subsidy policy, in which donees are randomly selected and vaccinated at no cost. Through mean-field analysis and computations, we find that, under the partial-subsidy policy, the vaccination coverage depends monotonically on the sensitivity of individuals to payoff difference, but the dependence is non-monotonic for the free-subsidy policy. Because the donees act as role models for relatively irrational individuals while their strategies remain unchanged among rational individuals, the free-subsidy policy can in general lead to higher vaccination coverage. Our findings indicate that any disease-control policy should be exercised with extreme care: its success depends on the complex interplay among the intrinsic mathematical rules of epidemic spreading, governmental policies, and behavioral responses of individuals.
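A minimal sketch of the kind of strategy-update rule such vaccination games typically use is given below; the Fermi imitation form, cost values, and subsidy fractions are assumptions for illustration, not the paper's specification.

```python
# Fermi imitation rule with a vaccination subsidy: individual i adopts the
# strategy of a random neighbor j with probability
# 1 / (1 + exp(-beta * (payoff_j - payoff_i))), where beta is the
# sensitivity to payoff difference.
import numpy as np

def fermi_prob(payoff_i, payoff_j, beta):
    """Probability that i imitates j's strategy."""
    return 1.0 / (1.0 + np.exp(-beta * (payoff_j - payoff_i)))

def payoff(vaccinated, infected, cost_vaccine=0.6, cost_infection=1.0, subsidy=0.0):
    """Negative costs: vaccination cost (reduced by the subsidy fraction),
    or infection cost if an unvaccinated individual got infected."""
    if vaccinated:
        return -(1.0 - subsidy) * cost_vaccine
    return -cost_infection if infected else 0.0

# Example: an infected non-vaccinator compares payoffs with a subsidised
# vaccinee under the partial-subsidy policy (50% of the cost offset).
beta = 5.0                                                   # sensitivity (assumed)
p_i = payoff(vaccinated=False, infected=True)
p_j = payoff(vaccinated=True, infected=False, subsidy=0.5)
print(f"imitation probability: {fermi_prob(p_i, p_j, beta):.3f}")
```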

Contributors: Zhang, Haifeng (Author) / Wu, Zhi-Xi (Author) / Tang, Ming (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-07-11
Description

A fundamental result in the evolutionary-game paradigm of cyclic competition in spatially extended ecological systems, as represented by the classic Reichenbach-Mobilia-Frey (RMF) model, is that high mobility tends to hamper or even exclude species coexistence. This result was obtained under the hypothesis that individuals move randomly without taking into account the suitability of their local environment. We incorporate local habitat suitability into the RMF model and investigate its effect on coexistence. In particular, we hypothesize the use of “basic instinct” of an individual to determine its movement at any time step. That is, an individual is more likely to move when the local habitat becomes hostile and is no longer favorable for survival and growth. We show that, when such local habitat suitability is taken into account, robust coexistence can emerge even in the high-mobility regime where extinction is certain in the RMF model. A surprising finding is that coexistence is accompanied by the occurrence of substantial empty space in the system. Reexamination of the RMF model confirms the necessity and the important role of empty space in coexistence. Our study implies that adaptation/movements according to local habitat suitability are a fundamental factor to promote species coexistence and, consequently, biodiversity.
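A toy Monte Carlo sketch of cyclic (rock-paper-scissors) competition on a lattice in the spirit of the RMF model is shown below, with a hostility-gated movement rule standing in for the "basic instinct" hypothesis; the rates, lattice size, and gating form are assumptions for illustration only, not the authors' implementation.

```python
# Cyclic competition on a periodic lattice: predation, reproduction, and
# movement, where movement is attempted only in proportion to the number
# of hostile (predator) neighbors of the current site.
import numpy as np

rng = np.random.default_rng(2)
Lsize, EMPTY = 50, 0
lattice = rng.integers(0, 4, size=(Lsize, Lsize))   # 0 = empty, 1..3 = species

def neighbors(i, j):
    return [((i + 1) % Lsize, j), ((i - 1) % Lsize, j),
            (i, (j + 1) % Lsize), (i, (j - 1) % Lsize)]

def step(sigma=1.0, mu=1.0, eps=1.0):
    """One elementary update: predation, reproduction, or instinct-driven movement."""
    i, j = rng.integers(Lsize, size=2)
    s = lattice[i, j]
    if s == EMPTY:
        return
    ni, nj = neighbors(i, j)[rng.integers(4)]
    t = lattice[ni, nj]
    r = rng.random() * (sigma + mu + eps)
    if r < sigma and t == s % 3 + 1:                 # species s preys on (s mod 3) + 1
        lattice[ni, nj] = EMPTY
    elif r < sigma + mu and t == EMPTY:              # reproduction into an empty site
        lattice[ni, nj] = s
    else:                                            # movement, gated by local hostility
        predator = (s - 2) % 3 + 1                   # the species that preys on s
        hostile = sum(lattice[x] == predator for x in neighbors(i, j))
        if rng.random() < hostile / 4.0:             # "basic instinct": flee hostile patches
            lattice[i, j], lattice[ni, nj] = lattice[ni, nj], lattice[i, j]

for _ in range(200000):
    step()
print("empty/species counts:", [int((lattice == k).sum()) for k in range(4)])
```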

Contributors: Park, Junpyo (Author) / Do, Younghae (Author) / Huang, Zi-Gang (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014
Description

Nonhyperbolicity, as characterized by the coexistence of Kolmogorov-Arnold-Moser (KAM) tori and chaos in the phase space, is generic in classical Hamiltonian systems. An open but fundamental question in physics concerns the relativistic quantum manifestations of nonhyperbolic dynamics. We choose the mushroom billiard that has been mathematically proven to be nonhyperbolic, and study the resonant tunneling dynamics of a massless Dirac fermion. We find that the tunneling rate as a function of the energy exhibits a striking "clustering" phenomenon, where the majority of the values of the rate concentrate on a narrow region, as a result of the chaos component in the classical phase space. Relatively few values of the tunneling rate, however, spread outside the clustering region due to the integrable component. Resonant tunneling of electrons in nonhyperbolic chaotic graphene systems exhibits a similar behavior. To understand these numerical results, we develop a theoretical framework by combining analytic solutions of the Dirac equation in certain integrable domains and physical intuitions gained from current understanding of the quantum manifestations of chaos. In particular, we employ a theoretical formalism based on the concept of self-energies to calculate the tunneling rate and analytically solve the Dirac equation in one dimension as well as in two dimensions for a circular-ring-type of tunneling systems exhibiting integrable dynamics in the classical limit. Because relatively few and distinct classical periodic orbits are present in the integrable component, the corresponding relativistic quantum states can have drastically different behaviors, leading to a wide spread in the values of the tunneling rate in the energy-rate plane. In contrast, the chaotic component has embedded within itself an infinite number of unstable periodic orbits, which provide far more quantum states for tunneling. Due to the nature of chaos, these states are characteristically similar, leading to clustering of the values of the tunneling rate in a narrow band. The appealing characteristic of our work is a demonstration and physical understanding of the "mixed" role played by chaos and regular dynamics in shaping relativistic quantum tunneling dynamics.
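For reference, the standard textbook form of the two-dimensional massless Dirac equation underlying such tunneling calculations (generic low-energy form, not specific to the billiard geometry studied here) is:

```latex
% H acts on a two-component spinor \psi; v_F is the Fermi velocity.
H\,\psi = E\,\psi, \qquad
H = v_F\,\boldsymbol{\sigma}\cdot\mathbf{p}
  = -i\hbar v_F \begin{pmatrix} 0 & \partial_x - i\partial_y \\ \partial_x + i\partial_y & 0 \end{pmatrix},
\qquad E_{\pm}(\mathbf{k}) = \pm\,\hbar v_F\,|\mathbf{k}| .
```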

Contributors: Ni, Xuan (Author) / Huang, Liang (Author) / Ying, Lei (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2013-09-18
Description

Previous studies in building energy assessment clearly state that to meet sustainable energy goals, existing buildings, as well as new buildings, will need to improve their energy efficiency. Thus, meeting energy goals relies on retrofitting existing buildings. Most building energy models are bottom-up engineering models, meaning these models calculate energy demand of individual buildings through their physical properties and energy use for specific end uses (e.g., lighting, appliances, and water heating). Researchers then scale up these model results to represent the building stock of the region studied.

Studies reveal that there is a lack of information about the building stock and associated modeling tools, and this lack of knowledge affects the assessment of building energy efficiency strategies. The literature suggests that the level of complexity of energy models needs to be limited. Accuracy of these energy models can be improved by reducing the number of input parameters, alleviating the need for users to make many assumptions about building construction and occupancy, among other factors. To mitigate the need for assumptions and the resulting model inaccuracies, the authors argue that buildings should be described in a regional stock model with a restricted number of input parameters. One commonly accepted method of identifying critical input parameters is sensitivity analysis, which requires a large number of runs that are time consuming and may require high processing capacity.

This paper utilizes the Energy, Carbon and Cost Assessment for Building Stocks (ECCABS) model, which calculates the net energy demand of buildings and presents aggregated and individual-building-level demand for specific end uses, e.g., heating, cooling, lighting, hot water, and appliances. The model has already been validated using the Swedish, Spanish, and UK building stock data. This paper discusses potential improvements to this model by assessing the feasibility of using stepwise regression to identify the most important input parameters, using data from the UK residential sector. The paper presents the results of stepwise regression and compares them to sensitivity analysis; finally, the paper documents the advantages and challenges associated with each method.
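A minimal sketch of forward stepwise selection is shown below; the synthetic features and target are illustrative stand-ins for ECCABS building-stock inputs and annual energy demand, and the parameter names are hypothetical.

```python
# Forward stepwise selection: keep the handful of candidate input parameters
# that most improve a linear fit of energy demand, instead of running a full
# sensitivity analysis over all inputs.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_buildings, n_params = 500, 12
feature_names = [f"param_{k}" for k in range(n_params)]      # hypothetical inputs

X = rng.normal(size=(n_buildings, n_params))
# Assume only a few parameters (e.g., envelope U-value, heated area, set-point)
# actually drive demand; the rest are noise.
y = 3.0 * X[:, 0] + 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(scale=0.5, size=n_buildings)

selector = SequentialFeatureSelector(LinearRegression(),
                                     n_features_to_select=3,
                                     direction="forward").fit(X, y)
selected = [feature_names[i] for i in np.flatnonzero(selector.get_support())]
print("selected parameters:", selected)
```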

Contributors: Arababadi, Reza (Author) / Naganathan, Hariharan (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-09-14
Description

Construction waste management has become extremely important due to stricter disposal and landfill regulations and a shrinking number of available landfills. Extensive work has been done on waste treatment and management in the construction industry. Concepts like deconstruction, recyclability, and Design for Disassembly (DfD) are examples of better construction waste management methods. Although some authors and organizations have published rich guides addressing DfD principles, only a few buildings have so far been developed in this area. This study aims to identify the challenges in the current practice of deconstruction activities and the gaps between its theory and implementation. Furthermore, it aims to provide insights into how DfD can create opportunities to turn these concepts into strategies that can be widely adopted by construction industry stakeholders in the near future.

Contributors: Rios, Fernanda (Author) / Chong, Oswald (Author) / Grau, David (Author) / Julie Ann Wrigley Global Institute of Sustainability (Contributor)
Created: 2015-09-14