Description

Background:
Theory suggests that individual behavioral responses impact the spread of flu-like illnesses, but this has been difficult to empirically characterize. Social distancing is an important component of behavioral response, though analyses have been limited by a lack of behavioral data. Our objective is to use media data to characterize social distancing behavior in order to empirically inform explanatory and predictive epidemiological models.

Methods:
We use data on variation in home television viewing as a proxy for variation in time spent in the home and, by extension, contact. This behavioral proxy is imperfect but appealing since information on a rich and representative sample is collected using consistent techniques across time and most major cities. We study the April-May 2009 outbreak of A/H1N1 in Central Mexico, examining the dynamic behavioral response in aggregate and contrasting the observed patterns across demographic subgroups. We develop and calibrate a dynamic behavioral model of disease transmission informed by the proxy data on daily variation in contact rates and compare it to a standard (non-adaptive) model and a fixed effects model that crudely captures behavior.
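
As a toy illustration of the kind of adaptive-transmission model described above (not the authors' calibrated model), the sketch below runs a discrete-time SIR model whose transmission rate is rescaled each day by a contact index standing in for the television-viewing proxy; all parameter values and the contact schedule are invented:

```python
# Hypothetical parameters throughout: beta0 (baseline transmission), gamma
# (recovery rate), and the contact-index schedule are invented for the sketch.

def simulate_sir(beta0, gamma, contact_index, s0=0.999, i0=0.001, days=60):
    """Euler-integrate a discrete-time SIR model in which the transmission
    rate is scaled each day by a behavioral contact index c(t)."""
    s, i = s0, i0
    new_cases = []
    for t in range(days):
        beta = beta0 * contact_index(t)   # behavior rescales transmission
        incidence = beta * s * i
        recoveries = gamma * i
        s -= incidence
        i += incidence - recoveries
        new_cases.append(incidence)
    return new_cases

def adaptive_contact(t):
    """Contacts drop 40% shortly after onset, then attenuate (60% -> 80%)."""
    if 5 <= t < 20:
        return 0.6
    if 20 <= t < 30:
        return 0.8
    return 1.0

adaptive = simulate_sir(0.5, 0.25, adaptive_contact)
static = simulate_sir(0.5, 0.25, lambda t: 1.0)

# Distancing suppresses early cumulative incidence relative to the
# non-adaptive model with the same intrinsic transmission potential.
assert sum(adaptive[:30]) < sum(static[:30])
```

The two runs share the same baseline beta0 yet produce very different early case counts, which is the sense in which a behavioral response can mask a pathogen's innate transmission potential.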

Results:
We find that after a demonstrable initial behavioral response (consistent with social distancing) at the onset of the outbreak, there was attenuation in the response before the conclusion of the public health intervention. We find substantial differences in the behavioral response across age subgroups and socioeconomic levels. We also find that the dynamic behavioral and fixed effects transmission models better account for variation in new confirmed cases, generate more stable estimates of the baseline rate of transmission over time and predict the number of new cases over a short horizon with substantially less error.

Conclusions:
Results suggest that A/H1N1 had an innate transmission potential greater than previously thought but this was masked by behavioral responses. Observed differences in behavioral response across demographic groups indicate a potential benefit from targeting social distancing outreach efforts.

Contributors: Springborn, Michael (Author) / Chowell-Puente, Gerardo (Author) / MacLachlan, Matthew (Author) / Fenichel, Eli P. (Author)
Created: 2015-01-23
Description

It is known that in classical fluids turbulence typically occurs at high Reynolds numbers. But can turbulence occur at low Reynolds numbers? Here we investigate the transition to turbulence in the classic Taylor-Couette system in which the rotating fluids are manufactured ferrofluids with magnetized nanoparticles embedded in liquid carriers. We find that, in the presence of a magnetic field transverse to the symmetry axis of the system, turbulence can occur at Reynolds numbers that are at least one order of magnitude smaller than those in conventional fluids. This is established by extensive computational ferrohydrodynamics through a detailed investigation of transitions in the flow structure, and characterization of behaviors of physical quantities such as the energy, the wave number, and the angular momentum through the bifurcations. A key finding is that, as the magnetic field is increased, the onset of turbulence can be determined accurately and reliably. Our results imply that experimental investigation of turbulence may be feasible by using ferrofluids. Our study of the transition to and evolution of turbulence in the Taylor-Couette ferrofluidic flow system provides insights into the challenging problem of turbulence control.
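
For reference, the dimensionless control parameter in Taylor-Couette flow is conventionally the inner-cylinder Reynolds number Re = Ω r_i d / ν, with gap width d = r_o − r_i and kinematic viscosity ν; the geometry and rotation values below are purely illustrative, not from the study:

```python
def taylor_couette_reynolds(omega, r_inner, r_outer, nu):
    """Inner-cylinder Reynolds number Re = omega * r_i * (r_o - r_i) / nu."""
    return omega * r_inner * (r_outer - r_inner) / nu

# A water-like carrier fluid (nu ~ 1e-6 m^2/s) in a narrow-gap apparatus;
# all numbers here are hypothetical.
re = taylor_couette_reynolds(omega=10.0, r_inner=0.05, r_outer=0.055, nu=1e-6)
assert abs(re - 2500.0) < 1.0
```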

Contributors: Altmeyer, Sebastian (Author) / Do, Younghae (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-06-12
Description

A relatively unexplored issue in cybersecurity science and engineering is whether there exist intrinsic patterns of cyberattacks. Conventional wisdom favors absence of such patterns due to the overwhelming complexity of the modern cyberspace. Surprisingly, through a detailed analysis of an extensive data set that records the time-dependent frequencies of attacks over a relatively wide range of consecutive IP addresses, we successfully uncover intrinsic spatiotemporal patterns underlying cyberattacks, where the term “spatio” refers to the IP address space. In particular, we focus on analyzing macroscopic properties of the attack traffic flows and identify two main patterns with distinct spatiotemporal characteristics: deterministic and stochastic. Strikingly, there are very few sets of major attackers committing almost all the attacks, since their attack “fingerprints” and target selection scheme can be unequivocally identified according to the very limited number of unique spatiotemporal characteristics, each of which only exists on a consecutive IP region and differs significantly from the others. We utilize a number of quantitative measures, including the flux-fluctuation law, the Markov state transition probability matrix, and predictability measures, to characterize the attack patterns in a comprehensive manner. A general finding is that the attack patterns possess high degrees of predictability, potentially paving the way to anticipating and, consequently, mitigating or even preventing large-scale cyberattacks using macroscopic approaches.
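
One of the quantitative measures named above, the Markov state transition probability matrix, can be estimated from a symbolized attack-rate sequence; the sketch below uses a synthetic two-state sequence rather than the paper's data:

```python
from collections import Counter, defaultdict

def transition_matrix(states):
    """Maximum-likelihood estimate of P[a][b] = Pr(next state is b | now a)."""
    counts = defaultdict(Counter)
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

# Synthetic sequence of 'L'ow / 'H'igh attack-rate states.
P = transition_matrix("LLHLLHHLLL")
assert abs(P["L"]["L"] - 4 / 6) < 1e-12        # L followed by L in 4 of 6 cases
assert abs(sum(P["L"].values()) - 1.0) < 1e-12  # each row is a distribution
```

A highly structured matrix (rows far from uniform) is one signature of the predictability the abstract refers to.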

Contributors: Chen, Yu-Zhong (Author) / Huang, Zi-Gang (Author) / Xu, Shouhuai (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-05-20
Description

Supply-demand processes take place on a large variety of real-world networked systems ranging from power grids and the internet to social networking and urban systems. In a modern infrastructure, supply-demand systems are constantly expanding, leading to a constant increase in load requirements for resources and, consequently, to problems such as low efficiency, resource scarcity, and partial system failures. Under certain conditions a global catastrophe on the scale of the whole system can occur through the dynamical process of cascading failures. We investigate optimization and resilience of time-varying supply-demand systems by constructing network models of such systems, where resources are transported from supplier sites to users through various links. Here, by optimization we mean minimization of the maximum load on links; system resilience is characterized by the size of cascading failures, i.e., the number of users who fail to connect with suppliers.

We consider two representative classes of supply schemes: load-driven supply and fixed-fraction supply. Our findings are: (1) optimized systems are more robust, since relatively smaller cascading failures occur when triggered by external perturbation to the links; (2) a large fraction of links can be free of load if resources are directed to transport through the shortest paths; (3) redundant links can help to reroute traffic but may undesirably transmit and enlarge the failure size of the system; (4) the patterns of cascading failures depend strongly upon the capacity of links; (5) the specific location of the trigger determines the specific route of a cascading failure but has little effect on the final cascading size; (6) system expansion typically reduces efficiency; and (7) when the locations of the suppliers are optimized over a long expansion period, fewer suppliers are required. These results hold for heterogeneous networks in general, providing insights into designing optimal and resilient complex supply-demand systems that expand constantly in time.
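
Finding (2) above can be demonstrated on a toy network (graph, supplier, and users invented for this sketch): routing each user's demand along a BFS shortest path leaves some links carrying zero load:

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path from src to dst; returns the list of nodes."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

# Toy undirected network: supplier 0 serves users 3 and 4.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
load = {}
for user in (3, 4):
    p = shortest_path(adj, 0, user)
    for a, b in zip(p, p[1:]):
        e = tuple(sorted((a, b)))
        load[e] = load.get(e, 0) + 1    # one unit of demand per user

edges = {tuple(sorted((u, v))) for u in adj for v in adj[u]}
unused = edges - set(load)
assert len(unused) > 0   # some links carry no load under shortest-path routing
```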

Contributors: Zhang, Si-Ping (Author) / Huang, Zi-Gang (Author) / Dong, Jia-Qi (Author) / Eisenberg, Daniel (Author) / Seager, Thomas (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-06-23
Description

Background: Increasing our understanding of the factors affecting the severity of the 2009 A/H1N1 influenza pandemic in different regions of the world could lead to improved clinical practice and mitigation strategies for future influenza pandemics. Even though a number of studies have shed light into the risk factors associated with severe outcomes of 2009 A/H1N1 influenza infections in different populations (e.g., [1-5]), analyses of the determinants of mortality risk spanning multiple pandemic waves and geographic regions are scarce. Between-country differences in the mortality burden of the 2009 pandemic could be linked to differences in influenza case management, underlying population health, or intrinsic differences in disease transmission [6]. Additional studies elucidating the determinants of disease severity globally are warranted to guide prevention efforts in future influenza pandemics.

In Mexico, the 2009 A/H1N1 influenza pandemic was characterized by a three-wave pattern occurring in the spring, summer, and fall of 2009 with substantial geographical heterogeneity [7]. A recent study suggests that Mexico experienced a high excess mortality burden during the 2009 A/H1N1 influenza pandemic relative to other countries [6]. However, an assessment of the potential factors that contributed to the relatively high pandemic death toll in Mexico is lacking. Here, we fill this gap by analyzing a large series of laboratory-confirmed A/H1N1 influenza cases, hospitalizations, and deaths monitored by the Mexican Social Security medical system from April 1 through December 31, 2009 in Mexico. In particular, we quantify the association between disease severity, hospital admission delays, and neuraminidase inhibitor use by demographic characteristics, pandemic wave, and geographic region of Mexico.

Methods: We analyzed a large series of laboratory-confirmed pandemic A/H1N1 influenza cases from a prospective surveillance system maintained by the Mexican Social Security system, April-December 2009. We considered a spectrum of disease severity encompassing outpatient visits, hospitalizations, and deaths, and recorded demographic and geographic information on individual patients. We assessed the impact of neuraminidase inhibitor treatment and hospital admission delay (≤2 vs. >2 days after disease onset) on the risk of death by multivariate logistic regression.

Results: Approximately 50% of all A/H1N1-positive patients received antiviral medication during the spring and summer 2009 pandemic waves in Mexico, while only 9% of A/H1N1 cases received antiviral medications during the fall wave (P < 0.0001). After adjustment for age, gender, and geography, antiviral treatment significantly reduced the risk of death (OR = 0.52; 95% CI: 0.30, 0.90), while longer hospital admission delays increased the risk of death 2.8-fold (95% CI: 2.25, 3.41).
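
The effect sizes above come from a multivariate logistic regression; as a much simpler hedged illustration of how such an odds ratio is expressed, the sketch below computes an unadjusted OR and its approximate 95% CI from a fabricated 2x2 table (it does not reproduce the study's adjusted OR = 0.52):

```python
import math

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a = treated deaths, b = treated survivors,
    c = untreated deaths, d = untreated survivors."""
    return (a / b) / (c / d)

def log_or_ci(a, b, c, d, z=1.96):
    """Approximate 95% CI via the standard error of log(OR)."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

# Fabricated counts: 10/200 treated patients died vs. 40/420 untreated.
or_ = odds_ratio(10, 190, 40, 380)
lo, hi = log_or_ci(10, 190, 40, 380)
assert or_ < 1.0          # treatment associated with lower odds of death
assert lo < or_ < hi      # point estimate lies inside its CI
```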

Conclusions: Our findings underscore the potential impact of decreasing admission delays and increasing antiviral use to mitigate the mortality burden of future influenza pandemics.

Created: 2012-04-20
Description

Background: Extreme heat is a public health challenge. The scarcity of directly comparable studies on the association of heat with morbidity and mortality and the inconsistent identification of threshold temperatures for severe impacts hampers the development of comprehensive strategies aimed at reducing adverse heat-health events.

Objectives: This quantitative study was designed to link temperature with mortality and morbidity events in Maricopa County, Arizona, USA, with a focus on the summer season.

Methods: Using Poisson regression models that controlled for temporal confounders, we assessed daily temperature–health associations for a suite of mortality and morbidity events, diagnoses, and temperature metrics. Minimum risk temperatures, increasing risk temperatures, and excess risk temperatures were statistically identified to represent different “trigger points” at which heat-health intervention measures might be activated.
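
The idea of a statistically identified "trigger point" can be caricatured as follows (a toy binning rule, not the study's Poisson regression): find the lowest temperature bin whose mean daily event count exceeds the overall baseline rate. All data below are synthetic:

```python
def trigger_point(temps, counts, bin_width=2.0):
    """Return the lower edge of the first (coolest) temperature bin whose
    mean daily event count exceeds the overall mean; None if no bin does."""
    baseline = sum(counts) / len(counts)
    bins = {}
    for t, c in zip(temps, counts):
        edge = bin_width * int(t // bin_width)
        bins.setdefault(edge, []).append(c)
    for edge in sorted(bins):
        vals = bins[edge]
        if sum(vals) / len(vals) > baseline:
            return edge
    return None

# Synthetic daily max temperatures (deg C) and heat-related event counts.
temps = [30, 31, 33, 35, 37, 39, 41, 43, 45]
counts = [1, 1, 1, 2, 2, 4, 6, 9, 12]
assert trigger_point(temps, counts) == 40.0
```

Different event types and diagnoses would yield different trigger points under the same rule, which is the contrast (up to 22°C) the Results paragraph reports.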

Results: We found significant and consistent associations of high environmental temperature with all-cause mortality, cardiovascular mortality, heat-related mortality, and mortality resulting from conditions that are consequences of heat and dehydration. Hospitalizations and emergency department visits due to heat-related conditions and conditions associated with consequences of heat and dehydration were also strongly associated with high temperatures, and there were several times more of those events than there were deaths. For each temperature metric, we observed large contrasts in trigger points (up to 22°C) across multiple health events and diagnoses.

Conclusion: Consideration of multiple health events and diagnoses together with a comprehensive approach to identifying threshold temperatures revealed large differences in trigger points for possible interventions related to heat. Providing an array of heat trigger points applicable for different end-users may improve the public health response to a problem that is projected to worsen in the coming decades.

Created: 2015-07-28
Description

Given a complex geospatial network with nodes distributed in a two-dimensional region of physical space, can the locations of the nodes be determined and their connection patterns be uncovered based solely on data? We consider the realistic situation where time series/signals can be collected from a single location. A key challenge is that the signals collected are necessarily time delayed, due to the varying physical distances from the nodes to the data collection centre. To meet this challenge, we develop a compressive-sensing-based approach enabling reconstruction of the full topology of the underlying geospatial network and more importantly, accurate estimate of the time delays. A standard triangularization algorithm can then be employed to find the physical locations of the nodes in the network. We further demonstrate successful detection of a hidden node (or a hidden source or threat), from which no signal can be obtained, through accurate detection of all its neighbouring nodes. As a geospatial network has the feature that a node tends to connect with geophysically nearby nodes, the localized region that contains the hidden node can be identified.
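
The final localization step mentioned above, recovering a node's position once time delays (hence distances) to known reference points are estimated, can be sketched with standard trilateration; anchor coordinates and the target position below are invented:

```python
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) from distances to three non-collinear anchors by
    subtracting pairs of circle equations to get a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical node at (3, 4); distances measured from three known anchors
# (in practice obtained from the estimated propagation delays).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.hypot(3 - x, 4 - y) for x, y in anchors]
x, y = trilaterate(anchors[0], dists[0], anchors[1], dists[1],
                   anchors[2], dists[2])
assert abs(x - 3.0) < 1e-9 and abs(y - 4.0) < 1e-9
```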

Contributors: Su, Riqi (Author) / Wang, Wen-Xu (Author) / Wang, Xiao (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-01-06
Description

We develop a framework to uncover and analyse dynamical anomalies from massive, nonlinear and non-stationary time series data. The framework consists of three steps: preprocessing of massive datasets to eliminate erroneous data segments, application of the empirical mode decomposition and Hilbert transform paradigm to obtain the fundamental components embedded in the time series at distinct time scales, and statistical/scaling analysis of the components. As a case study, we apply our framework to detecting and characterizing high-frequency oscillations (HFOs) from a big database of rat electroencephalogram recordings. We find a striking phenomenon: HFOs exhibit on–off intermittency that can be quantified by algebraic scaling laws. Our framework can be generalized to big data-related problems in other fields such as large-scale sensor data and seismic data analysis.
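
The third step, statistical/scaling analysis, often reduces to fitting a power law. A minimal sketch (using synthetic durations generated to follow t^-2 exactly, not the rat EEG recordings) estimates an algebraic scaling exponent by least squares in log-log coordinates:

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) vs log(x): the scaling exponent."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic laminar-phase durations with an exact t^-2 frequency law.
durations = [1, 2, 4, 8, 16]
freq = [d ** -2.0 for d in durations]
assert abs(loglog_slope(durations, freq) + 2.0) < 1e-9
```

An on-off intermittent signal would show such an algebraic law in the distribution of its quiescent ("off") phase durations.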

Contributors: Huang, Liang (Author) / Ni, Xuan (Author) / Ditto, William L. (Author) / Spano, Mark (Author) / Carney, Paul R. (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-01-18
Description

Recent works revealed that the energy required to control a complex network depends on the number of driving signals and the energy distribution follows an algebraic scaling law. If one implements control using a small number of drivers, e.g. as determined by the structural controllability theory, there is a high probability that the energy will diverge. We develop a physical theory to explain the scaling behaviour through identification of the fundamental structural elements, the longest control chains (LCCs), that dominate the control energy. Based on the LCCs, we articulate a strategy to drastically reduce the control energy (e.g. in a large number of real-world networks). Owing to their structural nature, the LCCs may shed light on energy issues associated with control of nonlinear dynamical networks.
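
A stripped-down caricature of why long control chains dominate the energy (an illustrative weighted-chain special case, not the paper's derivation): for a directed chain x1 -> x2 -> ... -> xn with uniform edge weight a, driven only at its head, the k-step reachable direction is a^k e_{k+1}, so the controllability Gramian is diagonal and the minimum energy to steer the last node scales as a^{-2(n-1)}, diverging with chain length when |a| < 1:

```python
def min_energy_to_last_node(n, a):
    """Minimum control energy to move node n of a weighted directed chain
    (driven at node 1) one unit along e_n, in the diagonal-Gramian
    special case: E = 1 / a^(2(n-1))."""
    gramian_entry = a ** (2 * (n - 1))   # Gramian eigenvalue along e_n
    return 1.0 / gramian_entry

# Hypothetical weight a = 0.5: energy grows exponentially with chain length.
e_short = min_energy_to_last_node(3, 0.5)
e_long = min_energy_to_last_node(10, 0.5)
assert abs(e_short - 16.0) < 1e-9   # 0.5^(-4) = 16
assert e_long > e_short             # longer chains cost far more energy
```

This is the structural intuition behind targeting the longest control chains: shortening them (e.g. by adding drivers along them) removes the smallest Gramian eigenvalues that make the energy diverge.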

Contributors: Chen, Yu-Zhong (Author) / Wang, Le-Zhi (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-04-20
Description

A challenging problem in network science is to control complex networks. In existing frameworks of structural or exact controllability, the ability to steer a complex network toward any desired state is measured by the minimum number of required driver nodes. However, if we implement actual control by imposing input signals on the minimum set of driver nodes, an unexpected phenomenon arises: due to computational or experimental error there is a great probability that convergence to the final state cannot be achieved. In fact, the associated control cost can become unbearably large, effectively preventing actual control from being realized physically. The difficulty is particularly severe when the network is deemed controllable with a small number of drivers. Here we develop a physical controllability framework based on the probability of achieving actual control. Using a recently identified fundamental chain structure underlying the control energy, we offer strategies to turn physically uncontrollable networks into physically controllable ones by imposing a slightly augmented set of input signals on properly chosen nodes. Our findings indicate that, although full control can be theoretically guaranteed by the prevailing structural controllability theory, it is necessary to balance the number of driver nodes and control cost to achieve physical control.
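
For context, the structural-controllability baseline referenced above counts driver nodes via a maximum matching (the minimum input theorem: N_D = max(N − |M|, 1), with |M| the size of a maximum matching in the network's bipartite out-to-in representation). The sketch below applies a simple augmenting-path matching to a toy directed star:

```python
def max_matching(edges, nodes):
    """Augmenting-path maximum bipartite matching on (out-copy -> in-copy)
    of a directed graph; returns the matching size |M|."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    match = {}                            # in-node -> matched out-node

    def augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(augment(u, set()) for u in nodes)

# Toy star: node 0 points to 1, 2, 3. Only one edge can be matched,
# so N_D = max(4 - 1, 1) = 3 driver nodes.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (0, 3)]
m = max_matching(edges, nodes)
drivers = max(len(nodes) - m, 1)
assert drivers == 3
```

The physical-controllability point of the abstract is precisely that this combinatorial minimum can be misleading: a small N_D may come with a prohibitively large control cost, so slightly more inputs than the matching bound are often needed in practice.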

Contributors: Wang, Le-Zhi (Author) / Chen, Yu-Zhong (Author) / Wang, Wen-Xu (Author) / Lai, Ying-Cheng (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2017-01-11