This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, many of which are open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

Background: While there is ample evidence for health risks associated with heat and other extreme weather events today, little is known about the impact of weather patterns on population health in preindustrial societies.

Objective: To investigate the impact of weather patterns on population health in Sweden before and during industrialization.

Methods: We obtained records of monthly mortality and of monthly mean temperatures and precipitation for Skellefteå parish, northern Sweden, for the period 1800-1950. The associations between monthly total mortality, as well as monthly mortality due to infectious and cardiovascular diseases, and monthly mean temperature and cumulative precipitation were modelled using a time series approach for three separate periods, 1800-1859, 1860-1909, and 1910-1950.

Results: We found higher temperatures and higher amounts of precipitation to be associated with lower mortality both in the medium term (same month and a two-month lag) and in the long run (lags of six months up to a year). Similar patterns were found for mortality due to infectious and cardiovascular diseases. Furthermore, the effect of temperature and precipitation decreased over time.

Conclusions: Higher temperature and precipitation amounts were associated with reduced death counts with a lag of up to 12 months. The decreased effect over time may be due to improvements in nutritional status, decreased infant deaths, and other changes in society that occurred in the course of the demographic and epidemiological transition.

Contribution: The study contributes to a better understanding of the complex relationship between weather and mortality and, in particular, historical weather-related mortality.
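
As a rough illustration of the time-series approach described in the Methods above, the sketch below fits a Poisson regression of monthly death counts on current and lagged temperature and precipitation for one period. It assumes a pandas DataFrame `df` with a monthly DatetimeIndex and hypothetical columns `deaths`, `temp`, and `precip`; the lag structure, seasonal control, and robust errors are illustrative choices, not the authors' exact specification.

```python
# Sketch: monthly mortality regressed on current and lagged weather.
# `df` is a hypothetical DataFrame with a monthly DatetimeIndex and
# columns 'deaths', 'temp', 'precip'; lags and controls are illustrative.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_period(df: pd.DataFrame, start: str, end: str):
    d = df.loc[start:end].copy()
    for lag in (2, 6):                          # medium- and longer-term lags
        d[f"temp_lag{lag}"] = d["temp"].shift(lag)
        d[f"precip_lag{lag}"] = d["precip"].shift(lag)
    d["month"] = d.index.month                  # seasonal control
    model = smf.glm(
        "deaths ~ temp + temp_lag2 + temp_lag6 + "
        "precip + precip_lag2 + precip_lag6 + C(month)",
        data=d.dropna(),
        family=sm.families.Poisson(),
    )
    return model.fit(cov_type="HC0")            # robust errors as a crude overdispersion guard

# Fit the three periods analysed in the study:
# fits = [fit_period(df, a, b) for a, b in [("1800", "1859"), ("1860", "1909"), ("1910", "1950")]]
```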

Contributors: Oudin Astrom, Daniel (Author) / Edvinsson, Soren (Author) / Hondula, David M. (Author) / Rocklov, Joacim (Author) / Schumann, Barbara (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2016-10-05
Description

Background: Extreme heat is a public health challenge. The scarcity of directly comparable studies on the association of heat with morbidity and mortality and the inconsistent identification of threshold temperatures for severe impacts hamper the development of comprehensive strategies aimed at reducing adverse heat-health events.

Objectives: This quantitative study was designed to link temperature with mortality and morbidity events in Maricopa County, Arizona, USA, with a focus on the summer season.

Methods: Using Poisson regression models that controlled for temporal confounders, we assessed daily temperature–health associations for a suite of mortality and morbidity events, diagnoses, and temperature metrics. Minimum risk temperatures, increasing risk temperatures, and excess risk temperatures were statistically identified to represent different “trigger points” at which heat-health intervention measures might be activated.

Results: We found significant and consistent associations of high environmental temperature with all-cause mortality, cardiovascular mortality, heat-related mortality, and mortality resulting from conditions that are consequences of heat and dehydration. Hospitalizations and emergency department visits due to heat-related conditions and conditions associated with consequences of heat and dehydration were also strongly associated with high temperatures, and there were several times more of those events than there were deaths. For each temperature metric, we observed large contrasts in trigger points (up to 22°C) across multiple health events and diagnoses.

Conclusion: Consideration of multiple health events and diagnoses together with a comprehensive approach to identifying threshold temperatures revealed large differences in trigger points for possible interventions related to heat. Providing an array of heat trigger points applicable for different end-users may improve the public health response to a problem that is projected to worsen in the coming decades.
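
The idea of statistically identified "trigger points" can be sketched with a simple threshold scan: fit a hockey-stick Poisson model at each candidate temperature and keep the threshold that minimizes the deviance. The arrays `deaths` and `tmax` below are hypothetical daily series, and the seasonal control is deliberately crude; the study's definitions of minimum, increasing, and excess risk temperatures are more involved than this.

```python
# Sketch: scan candidate thresholds for a "hockey-stick" heat effect on mortality.
# `deaths` and `tmax` are hypothetical daily arrays of equal length.
import numpy as np
import statsmodels.api as sm

def threshold_scan(deaths, tmax, candidates):
    doy = np.arange(len(deaths)) % 365                        # crude seasonal control
    season = np.column_stack([np.sin(2 * np.pi * doy / 365),
                              np.cos(2 * np.pi * doy / 365)])
    best = None
    for thr in candidates:
        excess = np.clip(tmax - thr, 0, None)                 # heat above the candidate threshold
        X = sm.add_constant(np.column_stack([excess, season]))
        fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
        if best is None or fit.deviance < best[1]:
            best = (thr, fit.deviance, fit)
    return best                                               # (threshold, deviance, fitted model)

# thr, dev, fit = threshold_scan(deaths, tmax, candidates=np.arange(30.0, 46.0, 0.5))
```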

Created: 2015-07-28
Description

Background: Extreme heat is a leading weather-related cause of mortality in the United States, but little guidance is available regarding how temperature variable selection impacts heat–mortality relationships.

Objectives: We examined how the strength of the relationship between daily heat-related mortality and temperature varies as a function of temperature observation time, lag, and calculation method.

Methods: Long time series of daily mortality counts and hourly temperature for seven U.S. cities with different climates were examined using a generalized additive model. The temperature effect was modeled separately for each hour of the day (with up to 3-day lags) along with different methods of calculating daily maximum, minimum, and mean temperature. We estimated the temperature effect on mortality for each variable by comparing the 99th versus 85th temperature percentiles, as determined from the annual time series.

Results: In three northern cities (Boston, MA; Philadelphia, PA; and Seattle, WA) that appeared to have the greatest sensitivity to heat, hourly estimates were consistent with a diurnal pattern in the heat–mortality response, with the strongest associations for afternoon or maximum temperature at lag 0 (day of death) or afternoon and evening of lag 1 (day before death). In warmer, southern cities, stronger associations were found with morning temperatures, but overall the relationships were weaker. The strongest temperature–mortality relationships were associated with maximum temperature, although mean temperature results were comparable.

Conclusions: There were systematic and substantial differences in the association between temperature and mortality based on the time and type of temperature observation. Because the strongest hourly temperature–mortality relationships were not always found at times typically associated with daily maximum temperatures, temperature variables should be selected independently for each study location. In general, heat-related mortality was more closely coupled to afternoon and maximum temperatures in most of the cities we examined, particularly those typically prone to heat-related mortality.
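
The percentile contrast used in the study (99th versus 85th) can be sketched as follows: for each candidate temperature variable, fit a Poisson model with a spline in temperature and compare predicted mortality at the two percentiles. The DataFrame `df`, its column names, and the spline and seasonal terms are assumptions for illustration, not the authors' generalized additive model specification.

```python
# Sketch: rate ratio of predicted deaths at the 99th vs. 85th percentile of a
# temperature variable. `df` is a hypothetical daily DataFrame with a 'deaths'
# column and one column per candidate temperature variable.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def percentile_contrast(df: pd.DataFrame, temp_col: str) -> float:
    d = df[["deaths", temp_col]].dropna().copy()
    d["doy"] = np.arange(len(d)) % 365                        # crude seasonal control
    fit = smf.glm(f"deaths ~ bs({temp_col}, df=4) + bs(doy, df=6)",
                  data=d, family=sm.families.Poisson()).fit()
    p85, p99 = d[temp_col].quantile([0.85, 0.99])

    def at(t):
        return pd.DataFrame({temp_col: [t], "doy": [182]})    # mid-year reference day

    return float(np.asarray(fit.predict(at(p99)))[0] /
                 np.asarray(fit.predict(at(p85)))[0])

# ratios = {c: percentile_contrast(df, c) for c in ["tmax", "tmin", "tmean", "temp_15h"]}
```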

Created: 2015-12-04
Description

Given a complex geospatial network with nodes distributed in a two-dimensional region of physical space, can the locations of the nodes be determined and their connection patterns be uncovered based solely on data? We consider the realistic situation where time series/signals can be collected from a single location. A key challenge is that the signals collected are necessarily time delayed, due to the varying physical distances from the nodes to the data collection centre. To meet this challenge, we develop a compressive-sensing-based approach enabling reconstruction of the full topology of the underlying geospatial network and, more importantly, accurate estimation of the time delays. A standard triangularization algorithm can then be employed to find the physical locations of the nodes in the network. We further demonstrate successful detection of a hidden node (or a hidden source or threat), from which no signal can be obtained, through accurate detection of all its neighbouring nodes. As a geospatial network has the feature that a node tends to connect with geophysically nearby nodes, the localized region that contains the hidden node can be identified.
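
On the last step mentioned above, once the time delays (and hence distances) from a node to a few already-localized reference nodes have been estimated, its physical position can be recovered by least-squares trilateration. The sketch below is a generic illustration of that step under assumed anchor coordinates and delay-derived distances, not the authors' specific algorithm.

```python
# Sketch: least-squares trilateration from distances to known anchor nodes.
# Distances would come from estimated signal time delays times propagation speed.
import numpy as np
from scipy.optimize import least_squares

def locate(anchors: np.ndarray, dists: np.ndarray) -> np.ndarray:
    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - dists
    guess = anchors.mean(axis=0)                 # start at the anchor centroid
    return least_squares(residuals, guess).x     # estimated (x, y)

# Toy example: three anchors and a node actually located at (2.0, 1.0).
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pos = np.array([2.0, 1.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)
print(locate(anchors, dists))                    # approximately [2.0, 1.0]
```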

Contributors: Su, Riqi (Author) / Wang, Wen-Xu (Author) / Wang, Xiao (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-01-06
Description

We develop a framework to uncover and analyse dynamical anomalies from massive, nonlinear and non-stationary time series data. The framework consists of three steps: preprocessing of massive datasets to eliminate erroneous data segments, application of the empirical mode decomposition and Hilbert transform paradigm to obtain the fundamental components embedded in the time series at distinct time scales, and statistical/scaling analysis of the components. As a case study, we apply our framework to detecting and characterizing high-frequency oscillations (HFOs) from a big database of rat electroencephalogram recordings. We find a striking phenomenon: HFOs exhibit on–off intermittency that can be quantified by algebraic scaling laws. Our framework can be generalized to big data-related problems in other fields such as large-scale sensor data and seismic data analysis.
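
The decomposition step of the framework can be sketched with the third-party PyEMD package together with SciPy's Hilbert transform: extract intrinsic mode functions from a recording and compute their instantaneous amplitude and frequency. The variables `signal` and `fs` are hypothetical inputs, and this is only the middle step of the pipeline, not the preprocessing or the scaling analysis.

```python
# Sketch: empirical mode decomposition followed by the Hilbert transform.
# `signal` is a hypothetical 1-D recording sampled at `fs` Hz.
import numpy as np
from PyEMD import EMD                 # pip install EMD-signal
from scipy.signal import hilbert

def emd_hilbert(signal: np.ndarray, fs: float):
    imfs = EMD()(signal)              # one row per intrinsic mode function
    analytic = hilbert(imfs, axis=1)  # analytic signal of each IMF
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic), axis=1)
    inst_freq = np.diff(phase, axis=1) * fs / (2 * np.pi)   # instantaneous frequency, Hz
    return imfs, amplitude, inst_freq

# High-frequency oscillations would then be sought in the IMFs whose
# instantaneous frequency falls in the band of interest.
```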

Contributors: Huang, Liang (Author) / Ni, Xuan (Author) / Ditto, William L. (Author) / Spano, Mark (Author) / Carney, Paul R. (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-01-18
Description

Recent works revealed that the energy required to control a complex network depends on the number of driving signals and the energy distribution follows an algebraic scaling law. If one implements control using a small number of drivers, e.g. as determined by the structural controllability theory, there is a high probability that the energy will diverge. We develop a physical theory to explain the scaling behaviour through identification of the fundamental structural elements, the longest control chains (LCCs), that dominate the control energy. Based on the LCCs, we articulate a strategy to drastically reduce the control energy (e.g. in a large number of real-world networks). Owing to their structural nature, the LCCs may shed light on energy issues associated with control of nonlinear dynamical networks.
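
The role of long control chains can be seen in a toy computation of the minimum control energy E = x_f^T W^{-1} x_f, where W is the finite-horizon controllability Gramian. The directed chain, the single head driver, the horizon, and the target state below are all illustrative assumptions; the point is only that the energy grows rapidly with the length of the chain a single input must traverse.

```python
# Sketch: minimum control energy for an n-node directed chain driven at its head.
# E = x_f^T W^{-1} x_f with W = int_0^T exp(A t) B B^T exp(A^T t) dt (trapezoidal sum).
import numpy as np
from scipy.linalg import expm

def chain_energy(n: int, T: float = 1.0, steps: int = 400) -> float:
    A = -np.eye(n) + np.eye(n, k=-1)        # node i drives node i+1, with self-decay
    B = np.zeros((n, 1)); B[0, 0] = 1.0     # single driver at the head of the chain
    ts = np.linspace(0.0, T, steps)
    W = np.zeros((n, n))
    for i, t in enumerate(ts):
        eAt = expm(A * t)
        w = 0.5 if i in (0, steps - 1) else 1.0
        W += w * eAt @ B @ B.T @ eAt.T
    W *= ts[1] - ts[0]
    xf = np.ones(n)                         # steer from the origin to this target state
    return float(xf @ np.linalg.solve(W, xf))

for n in (2, 4, 6, 8):
    print(n, f"{chain_energy(n):.3e}")      # energy climbs steeply with chain length
```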

Contributors: Chen, Yu-Zhong (Author) / Wang, Le-Zhi (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-04-20
Description

A challenging problem in network science is to control complex networks. In existing frameworks of structural or exact controllability, the ability to steer a complex network toward any desired state is measured by the minimum number of required driver nodes. However, if we implement actual control by imposing input signals on the minimum set of driver nodes, an unexpected phenomenon arises: due to computational or experimental error there is a great probability that convergence to the final state cannot be achieved. In fact, the associated control cost can become unbearably large, effectively preventing actual control from being realized physically. The difficulty is particularly severe when the network is deemed controllable with a small number of drivers. Here we develop a physical controllability framework based on the probability of achieving actual control. Using a recently identified fundamental chain structure underlying the control energy, we offer strategies to turn physically uncontrollable networks into physically controllable ones by imposing a slightly augmented set of input signals on properly chosen nodes. Our findings indicate that, although full control can be theoretically guaranteed by the prevailing structural controllability theory, it is necessary to balance the number of driver nodes and control cost to achieve physical control.
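
A small numerical illustration of the trade-off between the number of drivers and the control cost, using the same assumed chain dynamics and Gramian-based energy as in the previous sketch: adding one extra input midway along an eight-node chain lowers the required energy by many orders of magnitude.

```python
# Sketch: control energy for an 8-node chain with one driver (head only)
# versus two drivers (head plus midpoint). Same toy model as above.
import numpy as np
from scipy.linalg import expm

def energy(A: np.ndarray, B: np.ndarray, xf: np.ndarray,
           T: float = 1.0, steps: int = 400) -> float:
    ts = np.linspace(0.0, T, steps)
    W = np.zeros((A.shape[0], A.shape[0]))
    for i, t in enumerate(ts):
        eAt = expm(A * t)
        w = 0.5 if i in (0, steps - 1) else 1.0
        W += w * eAt @ B @ B.T @ eAt.T
    W *= ts[1] - ts[0]
    return float(xf @ np.linalg.solve(W, xf))

n = 8
A = -np.eye(n) + np.eye(n, k=-1)
xf = np.ones(n)
B1 = np.zeros((n, 1)); B1[0, 0] = 1.0                        # single driver at the head
B2 = np.zeros((n, 2)); B2[0, 0] = 1.0; B2[n // 2, 1] = 1.0   # add a midpoint driver
print(f"one driver:  {energy(A, B1, xf):.3e}")
print(f"two drivers: {energy(A, B2, xf):.3e}")               # dramatically smaller
```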

Contributors: Wang, Le-Zhi (Author) / Chen, Yu-Zhong (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-01-11
Description

Network reconstruction is a fundamental problem for understanding many complex systems with unknown interaction structures. In many complex systems, there are indirect interactions between two individuals without immediate connection but with common neighbors. Despite recent advances in network reconstruction, we continue to lack an approach for reconstructing complex networks with indirect interactions. Here we introduce a two-step strategy to resolve the reconstruction problem, where in the first step, we recover both direct and indirect interactions by employing the Lasso to solve a sparse signal reconstruction problem, and in the second step, we use matrix transformation and optimization to distinguish between direct and indirect interactions. The network structure corresponding to direct interactions can be fully uncovered. We exploit the public goods game occurring on complex networks as a paradigm for characterizing indirect interactions and test our reconstruction approach. We find that high reconstruction accuracy can be achieved for both homogeneous and heterogeneous networks, as well as for a number of empirical networks, in spite of insufficient measurement data contaminated by noise. Although a general framework for reconstructing complex networks with arbitrary types of indirect interactions is still lacking, our approach opens new routes to separate direct and indirect interactions in a representative complex system.
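
The first (sparse recovery) step can be sketched with scikit-learn's Lasso: regress each node's observed response on the observables of all other nodes and read the nonzero coefficients as candidate interactions, direct or indirect. The data model below is generic rather than the paper's public-goods-game formulation, and the second step, which separates direct from indirect links, is not shown.

```python
# Sketch: recover each node's combined (direct + indirect) interaction weights
# by sparse regression. `states[t, j]` is a hypothetical observable of node j
# at time t; `responses[t, i]` is the corresponding response of node i.
import numpy as np
from sklearn.linear_model import Lasso

def reconstruct_network(states: np.ndarray, responses: np.ndarray,
                        alpha: float = 0.01) -> np.ndarray:
    T, N = states.shape
    adj = np.zeros((N, N))
    for i in range(N):
        others = np.delete(np.arange(N), i)              # exclude self-interaction
        model = Lasso(alpha=alpha, max_iter=10_000)
        model.fit(states[:, others], responses[:, i])
        adj[i, others] = model.coef_                     # nonzeros = candidate links
    return adj                                           # row i: incoming interactions of node i

# The paper's second step (matrix transformation and optimization to separate
# direct from indirect interactions) would then operate on this matrix.
```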

Contributors: Han, Xiao (Author) / Shen, Zhesi (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Grebogi, Celso (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-07-22
Description

Recently, the phenomenon of quantum-classical correspondence breakdown was uncovered in optomechanics, where in the classical regime the system exhibits chaos but in the corresponding quantum regime the motion is regular - there appears to be no signature of classical chaos whatsoever in the corresponding quantum system, generating a paradox. We find that transient chaos, besides being a physically meaningful phenomenon by itself, provides a resolution. Using the method of quantum state diffusion to simulate the system dynamics subject to continuous homodyne detection, we uncover transient chaos associated with quantum trajectories. The transient behavior is consistent with chaos in the classical limit, while the long term evolution of the quantum system is regular. Transient chaos thus serves as a bridge for the quantum-classical transition (QCT). Strikingly, as the system transitions from the quantum to the classical regime, the average chaotic transient lifetime increases dramatically (faster than the Ehrenfest time characterizing the QCT for isolated quantum systems). We develop a physical theory to explain the scaling law.

Contributors: Wang, Guanglei (Author) / Lai, Ying-Cheng (Author) / Grebogi, Celso (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-10-17
Description

The process of cell fate determination has been depicted intuitively as cells travelling and resting on a rugged landscape, which has been probed by various theoretical studies. However, few studies have experimentally demonstrated how underlying gene regulatory networks shape the landscape and hence orchestrate cellular decision-making in the presence of both signal and noise. Here we tested different topologies and verified a synthetic gene circuit with mutual inhibition and auto-activations to be quadrastable, which enables direct study of quadruple cell fate determination on an engineered landscape. We show that cells indeed gravitate towards local minima and signal inductions dictate cell fates through modulating the shape of the multistable landscape. Experiments, guided by model predictions, reveal that sequential inductions generate distinct cell fates by changing landscape in sequence and hence navigating cells to different final states. This work provides a synthetic biology framework to approach cell fate determination and suggests a landscape-based explanation of fixed induction sequences for targeted differentiation.
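
As a loose illustration of how mutual inhibition combined with auto-activation can carve a multistable landscape, the sketch below integrates a generic two-gene ODE model from a grid of initial conditions and collects the distinct end states. The equations and parameters are textbook-style placeholders rather than the authors' characterized circuit, and the number of stable states found depends on the parameter choice.

```python
# Sketch: generic two-gene circuit with auto-activation and mutual inhibition,
#   dx/dt = a*x^n/(K^n + x^n) + b*K^n/(K^n + y^n) - x   (and symmetrically for y).
# Attractors are located by integrating from a grid of initial conditions.
import numpy as np
from scipy.integrate import solve_ivp

a, b, K, n = 2.0, 0.1, 1.0, 4       # illustrative parameters

def circuit(_, z):
    x, y = z
    dx = a * x**n / (K**n + x**n) + b * K**n / (K**n + y**n) - x
    dy = a * y**n / (K**n + y**n) + b * K**n / (K**n + x**n) - y
    return [dx, dy]

finals = set()
for x0 in np.linspace(0.0, 3.0, 7):
    for y0 in np.linspace(0.0, 3.0, 7):
        sol = solve_ivp(circuit, (0.0, 200.0), [x0, y0], rtol=1e-8)
        finals.add(tuple(round(float(v), 2) for v in sol.y[:, -1]))

print(sorted(finals))               # distinct steady states reached from the grid
```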

Contributors: Wu, Fuqing (Author) / Su, Riqi (Author) / Lai, Ying-Cheng (Author) / Wang, Xiao (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-04-11