This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Displaying 1 - 10 of 49

Description

It is known that in classical fluids turbulence typically occurs at high Reynolds numbers. But can turbulence occur at low Reynolds numbers? Here we investigate the transition to turbulence in the classic Taylor-Couette system in which the rotating fluids are manufactured ferrofluids with magnetized nanoparticles embedded in liquid carriers. We find that, in the presence of a magnetic field transverse to the symmetry axis of the system, turbulence can occur at Reynolds numbers at least one order of magnitude smaller than those in conventional fluids. We establish this through extensive computational ferrohydrodynamics: a detailed investigation of transitions in the flow structure and a characterization of physical quantities such as the energy, the wave number, and the angular momentum through the bifurcations. A key finding is that, as the magnetic field is increased, the onset of turbulence can be determined accurately and reliably. Our results imply that experimental investigation of turbulence at low Reynolds numbers may be feasible using ferrofluids. Our study of the transition to and evolution of turbulence in the Taylor-Couette ferrofluidic flow system provides insights into the challenging problem of turbulence control.
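The transition threshold discussed above is set by the Reynolds number of the Taylor-Couette cell. As a minimal illustration (not the authors' code; all numerical values are hypothetical), the standard inner-cylinder Reynolds number can be computed as follows:

```python
# Illustrative sketch only: the inner-cylinder Reynolds number for a
# Taylor-Couette cell is Re = Omega * r_i * d / nu, where d = r_o - r_i
# is the gap width. All numbers here are hypothetical.

def taylor_couette_reynolds(omega, r_inner, r_outer, nu):
    """Reynolds number based on the inner cylinder's surface speed.

    omega   : angular velocity of the inner cylinder (rad/s)
    r_inner : inner-cylinder radius (m)
    r_outer : outer-cylinder radius (m)
    nu      : kinematic viscosity of the working fluid (m^2/s)
    """
    gap = r_outer - r_inner
    return omega * r_inner * gap / nu

# A carrier liquid loaded with nanoparticles is more viscous than water,
# so the same rotation rate yields a much smaller Reynolds number.
re_water = taylor_couette_reynolds(10.0, 0.05, 0.06, 1.0e-6)
re_ferro = taylor_couette_reynolds(10.0, 0.05, 0.06, 1.0e-5)
print(re_water, re_ferro)  # roughly 5000 vs 500
```

This is why a more viscous ferrofluid at a fixed rotation rate naturally probes the low-Reynolds-number regime the abstract is concerned with.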

Contributors: Altmeyer, Sebastian (Author) / Do, Younghae (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-06-12
Description

A relatively unexplored issue in cybersecurity science and engineering is whether there exist intrinsic patterns of cyberattacks. Conventional wisdom favors absence of such patterns due to the overwhelming complexity of the modern cyberspace. Surprisingly, through a detailed analysis of an extensive data set that records the time-dependent frequencies of attacks over a relatively wide range of consecutive IP addresses, we successfully uncover intrinsic spatiotemporal patterns underlying cyberattacks, where the term “spatio” refers to the IP address space. In particular, we focus on analyzing macroscopic properties of the attack traffic flows and identify two main patterns with distinct spatiotemporal characteristics: deterministic and stochastic. Strikingly, a very small number of major attacker sets commit almost all the attacks: their attack “fingerprints” and target-selection schemes can be unequivocally identified from the very limited number of unique spatiotemporal characteristics, each of which exists only on a consecutive IP region and differs significantly from the others. We utilize a number of quantitative measures, including the flux-fluctuation law, the Markov state transition probability matrix, and predictability measures, to characterize the attack patterns in a comprehensive manner. A general finding is that the attack patterns possess high degrees of predictability, potentially paving the way to anticipating and, consequently, mitigating or even preventing large-scale cyberattacks using macroscopic approaches.
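One of the quantitative measures named above, the Markov state transition probability matrix, can be estimated directly from a symbolic sequence of states. The sketch below uses hypothetical attack-rate labels and an invented sequence, not the study's data:

```python
from collections import defaultdict

# Illustrative sketch: estimate a Markov state-transition probability
# matrix P(next_state | state) from an observed state sequence.
# The state labels and the sequence are hypothetical.

def transition_matrix(sequence):
    """Count observed transitions and normalize each row to probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    matrix = {}
    for state, nxt in counts.items():
        total = sum(nxt.values())
        matrix[state] = {b: c / total for b, c in nxt.items()}
    return matrix

# 'L' = low attack rate, 'H' = high attack rate (hypothetical labels).
seq = ['L', 'L', 'H', 'L', 'L', 'H', 'H', 'L']
P = transition_matrix(seq)
print(P['L'])  # {'L': 0.5, 'H': 0.5}
```

A strongly peaked row in such a matrix is one simple signature of the high predictability the abstract reports.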

Contributors: Chen, Yu-Zhong (Author) / Huang, Zi-Gang (Author) / Xu, Shouhuai (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-05-20
Description

Supply-demand processes take place on a large variety of real-world networked systems, ranging from power grids and the internet to social networking and urban systems. In a modern infrastructure, supply-demand systems are constantly expanding, leading to a constant increase in the load requirement for resources and, consequently, to problems such as low efficiency, resource scarcity, and partial system failures. Under certain conditions, a global catastrophe on the scale of the whole system can occur through the dynamical process of cascading failures. We investigate the optimization and resilience of time-varying supply-demand systems by constructing network models of such systems, in which resources are transported from supplier sites to users through various links. Here, by optimization we mean minimization of the maximum load on links; system resilience is characterized by the cascading-failure size, i.e., the number of users who fail to connect with suppliers.

We consider two representative classes of supply schemes: load-driven supply and fixed-fraction supply. Our findings are: (1) optimized systems are more robust, since relatively smaller cascading failures occur when triggered by external perturbations to the links; (2) a large fraction of links can be free of load if resources are directed to transport through the shortest paths; (3) redundant links can help reroute traffic but may undesirably propagate failures and enlarge the failure size of the system; (4) the patterns of cascading failures depend strongly upon the capacity of links; (5) the specific location of the trigger determines the specific route of the cascading failure but has little effect on the final cascade size; (6) system expansion typically reduces efficiency; and (7) when the locations of the suppliers are optimized over a long expansion period, fewer suppliers are required. These results hold for heterogeneous networks in general, providing insights into designing optimal and resilient complex supply-demand systems that expand constantly in time.
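To make the cascading-failure mechanism concrete, here is a deliberately simplified sketch (not the paper's model): links carry loads, capacities are proportional to initial loads, and a failed link's load is redistributed equally among survivors. The loads, the tolerance parameter alpha, and the equal-redistribution rule are all illustrative assumptions.

```python
# Minimal cascading-failure sketch: capacity = (1 + alpha) * initial load;
# a failed link's load is shed equally onto surviving links, and any link
# pushed over capacity fails in turn. All parameters are illustrative.

def cascade(initial_loads, alpha, trigger):
    capacity = {k: (1 + alpha) * v for k, v in initial_loads.items()}
    load = dict(initial_loads)
    failed = {trigger}
    shed = load.pop(trigger)
    while shed > 0 and load:
        share = shed / len(load)
        for k in load:
            load[k] += share
        shed = 0.0
        for k in [k for k in load if load[k] > capacity[k]]:
            failed.add(k)
            shed += load.pop(k)
    return failed

loads = {'a': 10.0, 'b': 10.0, 'c': 1.0, 'd': 1.0}
# Ample headroom: the failure stays confined to the triggered link.
print(cascade(loads, alpha=5.0, trigger='a'))            # {'a'}
# Little headroom: the overload sweeps through the whole system.
print(sorted(cascade(loads, alpha=0.5, trigger='a')))    # ['a', 'b', 'c', 'd']
```

The contrast between the two runs mirrors finding (4) above: the cascade pattern depends strongly on link capacity.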

Contributors: Zhang, Si-Ping (Author) / Huang, Zi-Gang (Author) / Dong, Jia-Qi (Author) / Eisenberg, Daniel (Author) / Seager, Thomas (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-06-23
Description

Background: While there is ample evidence for health risks associated with heat and other extreme weather events today, little is known about the impact of weather patterns on population health in preindustrial societies.

Objective: To investigate the impact of weather patterns on population health in Sweden before and during industrialization.

Methods: We obtained records of monthly mortality and of monthly mean temperatures and precipitation for Skellefteå parish, northern Sweden, for the period 1800-1950. The associations between monthly total mortality, as well as monthly mortality due to infectious and cardiovascular diseases, and monthly mean temperature and cumulative precipitation were modelled using a time series approach for three separate periods: 1800-1859, 1860-1909, and 1910-1950.
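The study fits proper time-series regression models; as a much simpler illustration of the lag structure described in these methods, the sketch below computes Pearson correlations between monthly mortality and temperature at several lags, using synthetic series rather than the Skellefteå data:

```python
import math

# Simplified stand-in for a lagged time-series analysis: correlate
# deaths in month t with temperature in month t - lag. The monthly
# series below are synthetic, not the historical parish records.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lagged_correlations(temp, deaths, max_lag):
    out = {}
    for lag in range(max_lag + 1):
        out[lag] = pearson(temp[:len(temp) - lag], deaths[lag:])
    return out

# Synthetic example: mortality tracks the inverse of temperature two
# months earlier, mimicking a delayed protective effect of warmth.
temp = [(-1) ** (t // 6) * (t % 6) for t in range(48)]
deaths = [50.0 - temp[max(t - 2, 0)] for t in range(48)]
corrs = lagged_correlations(temp, deaths, max_lag=3)
print(min(corrs, key=lambda k: corrs[k]))  # 2
```

The most negative correlation appears at the lag built into the synthetic data, which is the kind of delayed protective association the results below describe.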

Results: We found higher temperatures and higher amounts of precipitation to be associated with lower mortality both in the medium term (same month and a two-month lag) and in the long run (lags of six months up to a year). Similar patterns were found for mortality due to infectious and cardiovascular diseases. Furthermore, the effect of temperature and precipitation decreased over time.

Conclusions: Higher temperature and precipitation amounts were associated with reduced death counts with a lag of up to 12 months. The decreased effect over time may be due to improvements in nutritional status, decreased infant deaths, and other changes in society that occurred in the course of the demographic and epidemiological transition.

Contribution: The study contributes to a better understanding of the complex relationship between weather and mortality and, in particular, historical weather-related mortality.

Contributors: Oudin Astrom, Daniel (Author) / Edvinsson, Soren (Author) / Hondula, David M. (Author) / Rocklov, Joacim (Author) / Schumann, Barbara (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2016-10-05
Description

Background: Extreme heat is a public health challenge. The scarcity of directly comparable studies on the association of heat with morbidity and mortality and the inconsistent identification of threshold temperatures for severe impacts hampers the development of comprehensive strategies aimed at reducing adverse heat-health events.

Objectives: This quantitative study was designed to link temperature with mortality and morbidity events in Maricopa County, Arizona, USA, with a focus on the summer season.

Methods: Using Poisson regression models that controlled for temporal confounders, we assessed daily temperature–health associations for a suite of mortality and morbidity events, diagnoses, and temperature metrics. Minimum risk temperatures, increasing risk temperatures, and excess risk temperatures were statistically identified to represent different “trigger points” at which heat-health intervention measures might be activated.
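As a toy stand-in for the statistical identification just described (the actual study uses Poisson regression), the sketch below bins days by temperature and locates a minimum-risk temperature and a first "increasing-risk" bin. The data, the bin width, and the 20% excess threshold are invented for illustration:

```python
from collections import defaultdict

# Illustrative sketch of trigger-point identification: bin days by
# temperature, find the bin with the lowest event rate ("minimum risk
# temperature"), then the first warmer bin whose rate clearly exceeds
# it. The data and the 20% excess threshold are hypothetical.

def trigger_points(temps, events, bin_width=2.0, excess=1.2):
    days = defaultdict(int)
    counts = defaultdict(int)
    for t, e in zip(temps, events):
        b = int(t // bin_width) * bin_width
        days[b] += 1
        counts[b] += e
    rates = {b: counts[b] / days[b] for b in days}
    t_min = min(rates, key=rates.get)
    t_inc = next((b for b in sorted(rates) if b > t_min
                  and rates[b] >= excess * rates[t_min]), None)
    return t_min, t_inc

temps = [20, 21, 24, 25, 30, 31, 34, 35, 38, 39]   # daily temps (deg C)
events = [2, 2, 1, 1, 2, 2, 3, 3, 5, 5]             # daily health events
print(trigger_points(temps, events))  # (24.0, 30.0)
```

Different end-users could then be handed different trigger points from the same curve, which is the flexibility the conclusion below argues for.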

Results: We found significant and consistent associations of high environmental temperature with all-cause mortality, cardiovascular mortality, heat-related mortality, and mortality resulting from conditions that are consequences of heat and dehydration. Hospitalizations and emergency department visits due to heat-related conditions and conditions associated with consequences of heat and dehydration were also strongly associated with high temperatures, and there were several times more of those events than there were deaths. For each temperature metric, we observed large contrasts in trigger points (up to 22°C) across multiple health events and diagnoses.

Conclusion: Consideration of multiple health events and diagnoses together with a comprehensive approach to identifying threshold temperatures revealed large differences in trigger points for possible interventions related to heat. Providing an array of heat trigger points applicable for different end-users may improve the public health response to a problem that is projected to worsen in the coming decades.

Created2015-07-28
Description

Background: Extreme heat is a leading weather-related cause of mortality in the United States, but little guidance is available regarding how temperature variable selection impacts heat–mortality relationships.

Objectives: We examined how the strength of the relationship between daily heat-related mortality and temperature varies as a function of temperature observation time, lag, and calculation method.

Methods: Long time series of daily mortality counts and hourly temperature for seven U.S. cities with different climates were examined using a generalized additive model. The temperature effect was modeled separately for each hour of the day (with up to 3-day lags) along with different methods of calculating daily maximum, minimum, and mean temperature. We estimated the temperature effect on mortality for each variable by comparing the 99th versus 85th temperature percentiles, as determined from the annual time series.
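Two mechanical steps in these methods can be sketched directly: collapsing hourly readings into daily maximum/minimum/mean metrics, and taking a percentile contrast of a metric's series. This is an illustrative simplification (nearest-rank percentiles, synthetic hourly values), not the study's generalized-additive-model code:

```python
# Illustrative sketch: build daily temperature metrics from hourly data
# and compute simple nearest-rank percentiles for a 99th-vs-85th
# contrast. Hourly values are synthetic placeholders.

def daily_metrics(hourly):
    """hourly: list of 24*n readings -> per-day (max, min, mean) tuples."""
    out = []
    for d in range(len(hourly) // 24):
        day = hourly[d * 24:(d + 1) * 24]
        out.append((max(day), min(day), sum(day) / 24))
    return out

def percentile(values, p):
    """Nearest-rank percentile (one common simple convention)."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

# Two synthetic days: a mild day and a hot day.
hourly = [20 + (h % 12) for h in range(24)] + [30 + (h % 12) for h in range(24)]
metrics = daily_metrics(hourly)
print(metrics[0])  # (31, 20, 25.5)
print(percentile(list(range(1, 101)), 99))  # 99
```

In the actual analysis, the 99th-vs-85th contrast of each candidate metric is what feeds the temperature-effect estimate for that variable.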

Results: In three northern cities (Boston, MA; Philadelphia, PA; and Seattle, WA) that appeared to have the greatest sensitivity to heat, hourly estimates were consistent with a diurnal pattern in the heat-mortality response, with strongest associations for afternoon or maximum temperature at lag 0 (day of death) or afternoon and evening of lag 1 (day before death). In warmer, southern cities, stronger associations were found with morning temperatures, but overall the relationships were weaker. The strongest temperature–mortality relationships were associated with maximum temperature, although mean temperature results were comparable.

Conclusions: There were systematic and substantial differences in the association between temperature and mortality based on the time and type of temperature observation. Because the strongest hourly temperature–mortality relationships were not always found at times typically associated with daily maximum temperatures, temperature variables should be selected independently for each study location. In general, heat-related mortality was more closely coupled to afternoon and maximum temperatures in most of the cities we examined, particularly those typically prone to heat-related mortality.

Created2015-12-04
Description

Given a complex geospatial network with nodes distributed in a two-dimensional region of physical space, can the locations of the nodes be determined and their connection patterns be uncovered based solely on data? We consider the realistic situation where time series/signals can be collected from a single location. A key challenge is that the signals collected are necessarily time delayed, due to the varying physical distances from the nodes to the data collection centre. To meet this challenge, we develop a compressive-sensing-based approach enabling reconstruction of the full topology of the underlying geospatial network and, more importantly, accurate estimation of the time delays. A standard triangulation algorithm can then be employed to find the physical locations of the nodes in the network. We further demonstrate successful detection of a hidden node (or a hidden source or threat), from which no signal can be obtained, through accurate detection of all its neighbouring nodes. As a geospatial network has the feature that a node tends to connect with geophysically nearby nodes, the localized region that contains the hidden node can be identified.
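Once the time delays are estimated, recovering a node's position is classical trilateration: delays imply distances to reference nodes, and subtracting the resulting circle equations yields a linear system. The sketch below (invented positions and a unit signal speed, not the paper's algorithm) solves the 2D case with three reference nodes:

```python
import math

# Illustrative trilateration: with delays tau_i to three reference
# nodes, distances d_i = c * tau_i define circles whose intersection is
# the node position. Subtracting the first circle equation from the
# other two gives a 2x2 linear system. All values are hypothetical.

def trilaterate(anchors, delays, c=1.0):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = (c * t for t in delays)
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # Cramer's rule for the 2x2 system
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true = (1.0, 2.0)
delays = [math.dist(a, true) for a in anchors]  # c = 1, so delay = distance
print(trilaterate(anchors, delays))  # approximately (1.0, 2.0)
```

With noisy delay estimates one would use more than three references and a least-squares fit, but the linearization step is the same.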

Contributors: Su, Riqi (Author) / Wang, Wen-Xu (Author) / Wang, Xiao (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-01-06
Description

We develop a framework to uncover and analyse dynamical anomalies from massive, nonlinear and non-stationary time series data. The framework consists of three steps: preprocessing of massive datasets to eliminate erroneous data segments, application of the empirical mode decomposition and Hilbert transform paradigm to obtain the fundamental components embedded in the time series at distinct time scales, and statistical/scaling analysis of the components. As a case study, we apply our framework to detecting and characterizing high-frequency oscillations (HFOs) from a big database of rat electroencephalogram recordings. We find a striking phenomenon: HFOs exhibit on–off intermittency that can be quantified by algebraic scaling laws. Our framework can be generalized to big data-related problems in other fields such as large-scale sensor data and seismic data analysis.
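A small piece of the final analysis step can be sketched: after the components are extracted, on-off intermittency is quantified by the lengths of "off" (laminar) phases in a thresholded amplitude series, whose distribution is then tested for algebraic scaling. The signal and threshold below are synthetic placeholders, not EEG data:

```python
# Illustrative sketch: extract the lengths of maximal "off" (laminar)
# phases, i.e. runs where the signal amplitude stays below a threshold.
# The distribution of these lengths is what an algebraic scaling law
# would describe. Signal and threshold here are synthetic.

def off_phase_lengths(signal, threshold):
    """Lengths of maximal runs where |signal| stays below threshold."""
    lengths, run = [], 0
    for x in signal:
        if abs(x) < threshold:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return lengths

sig = [0.1, 0.2, 3.0, 0.1, 0.1, 0.1, 2.5, 0.2, 4.0, 0.1, 0.1]
print(off_phase_lengths(sig, threshold=1.0))  # [2, 3, 1, 2]
```

On real recordings one would histogram these lengths on log-log axes and fit the slope to estimate the scaling exponent.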

Contributors: Huang, Liang (Author) / Ni, Xuan (Author) / Ditto, William L. (Author) / Spano, Mark (Author) / Carney, Paul R. (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-01-18
Description

Recent work has revealed that the energy required to control a complex network depends on the number of driving signals and that the energy distribution follows an algebraic scaling law. If one implements control using a small number of drivers, e.g. as determined by the structural controllability theory, there is a high probability that the energy will diverge. We develop a physical theory to explain the scaling behaviour through identification of the fundamental structural elements, the longest control chains (LCCs), that dominate the control energy. Based on the LCCs, we articulate a strategy to drastically reduce the control energy, which we demonstrate on a large number of real-world networks. Owing to their structural nature, the LCCs may shed light on energy issues associated with control of nonlinear dynamical networks.
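For a directed acyclic toy network, finding the longest control chain reduces to a longest-path computation from the driver nodes. The sketch below illustrates the concept on an invented edge list; it is not the paper's method for general networks:

```python
from functools import lru_cache

# Illustrative sketch: the longest control chain (LCC) from a set of
# driver nodes, computed as a longest directed path in a DAG via
# memoized depth-first search. The edge list and drivers are invented.

def longest_chain(edges, drivers):
    succ = {}
    for a, b in edges:
        succ.setdefault(a, []).append(b)

    @lru_cache(maxsize=None)
    def depth(node):
        # Number of nodes on the longest path starting at `node`.
        return 1 + max((depth(n) for n in succ.get(node, [])), default=0)

    return max(depth(d) for d in drivers)

# Driver 0 must push its signal down the chain 0 -> 1 -> 2 -> 3.
edges = [(0, 1), (1, 2), (2, 3), (0, 4)]
print(longest_chain(edges, drivers=[0]))  # 4
```

The intuition carried by the abstract is that the control energy is dominated by this deepest chain, so shortening it (e.g. by adding a driver partway down) can drastically reduce the energy.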

Contributors: Chen, Yu-Zhong (Author) / Wang, Le-Zhi (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-04-20
Description

A challenging problem in network science is to control complex networks. In existing frameworks of structural or exact controllability, the ability to steer a complex network toward any desired state is measured by the minimum number of required driver nodes. However, if we implement actual control by imposing input signals on the minimum set of driver nodes, an unexpected phenomenon arises: due to computational or experimental error there is a great probability that convergence to the final state cannot be achieved. In fact, the associated control cost can become unbearably large, effectively preventing actual control from being realized physically. The difficulty is particularly severe when the network is deemed controllable with a small number of drivers. Here we develop a physical controllability framework based on the probability of achieving actual control. Using a recently identified fundamental chain structure underlying the control energy, we offer strategies to turn physically uncontrollable networks into physically controllable ones by imposing a slightly augmented set of input signals on properly chosen nodes. Our findings indicate that, although full control can be theoretically guaranteed by the prevailing structural controllability theory, it is necessary to balance the number of driver nodes and control cost to achieve physical control.
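The gap between theoretical and physical controllability can be illustrated with the discrete-time controllability Gramian: for x(t+1) = A x(t) + B u(t), the worst-case control energy grows like 1/lambda_min(W) with W = sum_k A^k B B^T (A^T)^k, so a network that is controllable in theory but has a tiny Gramian eigenvalue is physically hard to control. The 2x2 system below is an invented example, not taken from the paper:

```python
import math

# Illustrative sketch: build the finite-horizon controllability Gramian
# W = sum_k A^k B B^T (A^T)^k for a small linear system and inspect its
# smallest eigenvalue, whose inverse bounds the worst-case control
# energy. The matrices A and B below are hypothetical.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

def gramian(A, B, horizon):
    n = len(A)
    W = [[0.0] * n for _ in range(n)]
    Ak = [[float(i == j) for j in range(n)] for i in range(n)]  # A^0 = I
    for _ in range(horizon):
        AkB = matmul(Ak, B)
        term = matmul(AkB, transpose(AkB))
        W = [[W[i][j] + term[i][j] for j in range(n)] for i in range(n)]
        Ak = matmul(Ak, A)
    return W

def eig_min_2x2(W):
    a, b, c, d = W[0][0], W[0][1], W[1][0], W[1][1]
    tr, det = a + d, a * d - b * c
    return tr / 2 - math.sqrt(tr * tr / 4 - det)

A = [[0.5, 0.4], [0.0, 0.5]]
B = [[1.0], [0.1]]   # a single driver, weakly coupled to the second node
W = gramian(A, B, horizon=50)
print(eig_min_2x2(W))  # tiny value -> very large control energy
```

Augmenting B with a second, well-placed input column would raise lambda_min(W) sharply, which is the spirit of the strategy described in the abstract.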

Contributors: Wang, Le-Zhi (Author) / Chen, Yu-Zhong (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-01-11