This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

Background: Increasing our understanding of the factors affecting the severity of the 2009 A/H1N1 influenza pandemic in different regions of the world could lead to improved clinical practice and mitigation strategies for future influenza pandemics. Even though a number of studies have shed light on the risk factors associated with severe outcomes of 2009 A/H1N1 influenza infections in different populations (e.g., [1-5]), analyses of the determinants of mortality risk spanning multiple pandemic waves and geographic regions are scarce. Between-country differences in the mortality burden of the 2009 pandemic could be linked to differences in influenza case management, underlying population health, or intrinsic differences in disease transmission [6]. Additional studies elucidating the determinants of disease severity globally are warranted to guide prevention efforts in future influenza pandemics.

In Mexico, the 2009 A/H1N1 influenza pandemic was characterized by a three-wave pattern occurring in the spring, summer, and fall of 2009, with substantial geographical heterogeneity [7]. A recent study suggests that Mexico experienced a high excess mortality burden during the 2009 A/H1N1 influenza pandemic relative to other countries [6]. However, an assessment of the potential factors that contributed to the relatively high pandemic death toll in Mexico is lacking. Here, we fill this gap by analyzing a large series of laboratory-confirmed A/H1N1 influenza cases, hospitalizations, and deaths monitored by the Mexican Social Security medical system from April 1 through December 31, 2009. In particular, we quantify the association between disease severity, hospital admission delays, and neuraminidase inhibitor use by demographic characteristics, pandemic wave, and geographic region of Mexico.

Methods: We analyzed a large series of laboratory-confirmed pandemic A/H1N1 influenza cases from a prospective surveillance system maintained by the Mexican Social Security system, April-December 2009. We considered a spectrum of disease severity encompassing outpatient visits, hospitalizations, and deaths, and recorded demographic and geographic information on individual patients. We assessed the impact of neuraminidase inhibitor treatment and hospital admission delay (≤2 days vs. >2 days after disease onset) on the risk of death by multivariate logistic regression.

Results: Approximately 50% of all A/H1N1-positive patients received antiviral medication during the spring and summer 2009 pandemic waves in Mexico, while only 9% of A/H1N1 cases received antiviral medications during the fall wave (P < 0.0001). After adjustment for age, gender, and geography, antiviral treatment significantly reduced the risk of death (OR = 0.52; 95% CI: 0.30, 0.90), while longer hospital admission delays increased the risk of death 2.8-fold (95% CI: 2.25, 3.41).
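
The adjusted odds ratio and its interval are the standard exponentiation of a logistic-regression coefficient and its standard error. A minimal sketch; the coefficient and standard error below are back-solved from the reported interval for illustration, not taken from the study data:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its
    standard error into an odds ratio with a ~95% Wald CI."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative inputs back-solved from the reported OR = 0.52 (0.30, 0.90)
or_, lo, hi = odds_ratio_ci(-0.654, 0.280)
```

Rounding the three returned values to two decimals recovers the reported 0.52 (0.30, 0.90).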

Conclusions: Our findings underscore the potential impact of decreasing admission delays and increasing antiviral use to mitigate the mortality burden of future influenza pandemics.

Created: 2012-04-20
Description

Perchloroethylene (PCE) is a highly utilized solvent in the dry cleaning industry because of its cleaning effectiveness and relatively low cost to consumers. According to the 2006 U.S. Census, approximately 28,000 dry cleaning operations used PCE as their principal cleaning agent. Widespread use of PCE is problematic because of its adverse impacts on human health and environmental quality. As PCE use is curtailed, effective alternatives must be analyzed for their toxicity and impacts on human health and the environment. Potential alternatives to PCE in dry cleaning include dipropylene glycol n-butyl ether (DPnB) and dipropylene glycol tert-butyl ether (DPtB), both of which promise to pose a smaller risk. To evaluate these two alternatives to PCE, we established and scored six performance criteria: chemical toxicity, employee exposure, customer exposure, impacts on the general population, cost of each system, and cleaning efficacy. The scores received for PCE were 5, 5, 3, 5, 3, and 3, respectively, whereas DPnB and DPtB each scored 3, 1, 2, 2, 4, and 4. An aggregate sum of the performance criteria yielded a favorably low score of “16” for both DPnB and DPtB compared to “24” for PCE. We conclude that DPnB and DPtB are preferable dry cleaning agents, exhibiting reduced human toxicity and a lesser adverse impact on human health and the environment compared to PCE, with comparable capital investments and moderately higher annual operating costs.
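
The aggregate scoring can be reproduced in a few lines. The criterion labels below are paraphrased from the abstract, and the mapping of the six scores onto them is our reading of it, so treat both as illustrative:

```python
# Lower score = better performance on that criterion (our reading of the abstract)
scores = {
    "PCE":  [5, 5, 3, 5, 3, 3],
    "DPnB": [3, 1, 2, 2, 4, 4],
    "DPtB": [3, 1, 2, 2, 4, 4],
}

def rank_alternatives(scores):
    """Sum per-criterion scores and sort ascending (best first)."""
    totals = {name: sum(vals) for name, vals in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1])

ranked = rank_alternatives(scores)
```

The totals reproduce the abstract's aggregate sums of 16 for DPnB/DPtB and 24 for PCE.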

Contributors: Hesari, Nikou (Author) / Francis, Chelsea (Author) / Halden, Rolf (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2014-04-03
Description

We designed and evaluated an active sampling device, using as analytical targets a family of pesticides purported to contribute to honeybee colony collapse disorder. Simultaneous sampling of bulk water and pore water was accomplished using a low-flow, multi-channel pump to deliver water to an array of solid-phase extraction cartridges. Analytes were separated using either liquid or gas chromatography, and analysis was performed using tandem mass spectrometry (MS/MS). Achieved recoveries of fipronil and degradates in water spiked to nominal concentrations of 0.1, 1, and 10 ng/L ranged from 77 ± 12 to 110 ± 18%. Method detection limits (MDLs) were as low as 0.040–0.8 ng/L. Extraction and quantitation of total fiproles at a wastewater-receiving wetland yielded concentrations in surface water and pore water ranging from 9.9 ± 4.6 to 18.1 ± 4.6 ng/L and 9.1 ± 3.0 to 12.6 ± 2.1 ng/L, respectively. Detected concentrations were statistically indistinguishable from those determined by conventional, more laborious techniques (p > 0.2 for the three most abundant fiproles). Aside from offering time-averaged sampling capabilities for two phases simultaneously with picogram-per-liter MDLs, the novel methodology eliminates the need for water and sediment transport via in situ solid phase extraction.
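
The abstract does not state how the MDLs were derived; a common convention is the EPA single-laboratory procedure, where the MDL is the one-tailed 99% Student-t value (df = n − 1) times the standard deviation of replicate low-level spiked measurements. A sketch under that assumption, with hypothetical replicate values:

```python
import statistics

# One-tailed 99% Student-t values, keyed by degrees of freedom (n - 1)
T99 = {6: 3.143, 7: 2.998, 8: 2.896, 9: 2.821}

def method_detection_limit(replicates):
    """EPA-style MDL: t(0.99, n-1) times the standard deviation
    of replicate measurements of a low-level spiked sample."""
    return T99[len(replicates) - 1] * statistics.stdev(replicates)

# Seven hypothetical replicate results (ng/L) near the spike level
mdl = method_detection_limit([0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.08])
```

With these illustrative replicates the MDL lands in the sub-ng/L range reported for the fiprole method, though the actual study data may differ.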

Contributors: Supowit, Samuel (Author) / Roll, Isaac (Author) / Dang, Viet D. (Author) / Kroll, Kevin J. (Author) / Denslow, Nancy D. (Author) / Halden, Rolf (Author) / Biodesign Institute (Contributor)
Created: 2016-02-24
Description

The estimation of energy demand by power plants has traditionally relied on historical energy use data for the region(s) that a plant serves. Regression analysis, artificial neural networks, and Bayesian theory are the most common approaches for analysing these data. Such data and techniques do not generate reliable results. Consequently, excess energy has to be generated to prevent blackouts; the causes of energy surges are not easily determined; and the potential energy use reduction from energy efficiency solutions is usually not translated into actual energy use reduction. The paper highlights the weaknesses of traditional techniques and lays out a framework to improve the prediction of energy demand by combining energy use models of equipment, physical systems, and buildings with the proposed data mining algorithms for reverse engineering. The research team first analyses samples from large, complex energy data sets and then presents a set of computationally efficient data mining algorithms for reverse engineering. To develop a structural system model for reverse engineering, two focus groups are formed that have a direct relation to the cause and effect variables. The research findings include testing different sets of reverse-engineering algorithms, understanding their output patterns, and modifying the algorithms to improve the accuracy of the outputs.
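
The regression baseline the paper critiques amounts to fitting a trend to historical consumption and extrapolating it. A minimal ordinary-least-squares sketch, with hypothetical data values:

```python
def fit_linear_trend(xs, ys):
    """Ordinary least squares for y = a + b*x -- the kind of
    historical regression baseline the paper argues is unreliable."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical demand history (period index, consumption in GWh)
a, b = fit_linear_trend([0, 1, 2, 3], [1, 3, 5, 7])
forecast_next = a + b * 4  # extrapolate one period ahead
```

The weakness the paper points to is exactly this extrapolation step: the fit carries no information about equipment, systems, or buildings, so any structural change in the load breaks the forecast.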

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Ye, Long (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-12-09
Description

Small and medium office buildings account for a significant share of U.S. building stock energy consumption. Still, owners lack the resources and experience to conduct detailed energy audits and retrofit analyses. We present an eight-step framework for energy retrofit assessment in small and medium office buildings. Through a bottom-up approach and a web-based retrofit toolkit tested on a case study in Arizona, this methodology saved about 50% of the total energy consumed by the case study building, depending on the adopted measures and invested capital. While the case study presented is a deep energy retrofit, the proposed framework is effective in guiding the decision-making process that precedes any energy retrofit, deep or light.
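
A first screening step in retrofit decision-making is typically a simple-payback calculation relating invested capital to the savings fraction. The sketch below is a generic illustration, not the toolkit's actual method, and all cost figures are hypothetical:

```python
def simple_payback_years(annual_energy_cost, savings_fraction, retrofit_cost):
    """Years to recoup a retrofit investment from energy-cost savings.
    A first-pass screen, not a substitute for the full framework."""
    annual_savings = annual_energy_cost * savings_fraction
    return retrofit_cost / annual_savings

# Hypothetical: $100k/yr energy bill, 50% savings (as in the case study),
# $200k retrofit investment
years = simple_payback_years(100_000, 0.50, 200_000)
```

A deep retrofit trades a longer payback for larger absolute savings; a light retrofit flips that trade-off, which is why the framework weighs measures against invested capital.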

Contributors: Rios, Fernanda (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

Commercial buildings’ energy consumption is driven by multiple factors, including occupancy, system and equipment efficiency, thermal heat transfer, equipment plug loads, maintenance and operational procedures, and outdoor and indoor temperatures. A modern building energy system can be viewed as a complex dynamical system that is interconnected and influenced by external and internal factors. Modern large-scale sensor networks measure physical signals to monitor real-time system behavior. Such data have the potential to detect anomalies, identify consumption patterns, and analyze peak loads. The paper proposes a novel method to detect hidden anomalies in commercial building energy consumption systems. The framework is based on the Hilbert-Huang transform and instantaneous frequency analysis. The objective is to develop an automated data pre-processing system that can detect anomalies and provide solutions from a real-time consumption database using the Ensemble Empirical Mode Decomposition (EEMD) method. The findings also include comparisons of Empirical Mode Decomposition (EMD) and Ensemble Empirical Mode Decomposition for three important types of institutional buildings.
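
Instantaneous frequency, the quantity such a framework tracks, is the derivative of the unwrapped phase of the analytic signal. A minimal FFT-based Hilbert-transform sketch (not the paper's EEMD pipeline, which first decomposes the signal into intrinsic mode functions before this step):

```python
import numpy as np

def instantaneous_frequency(x, fs):
    """Analytic signal via an FFT-based Hilbert transform, then the
    derivative of the unwrapped phase, returned in Hz."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)            # one-sided spectrum weights
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * h)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2.0 * np.pi)

# A pure 5 Hz tone sampled at 1 kHz should read ~5 Hz throughout;
# abrupt deviations in real consumption data would flag anomalies
fs = 1000
t = np.arange(0, 1, 1 / fs)
freq = instantaneous_frequency(np.sin(2 * np.pi * 5 * t), fs)
```

On real consumption data the signal is first split into intrinsic mode functions (the EMD/EEMD step), and this computation is applied per mode.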

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Huang, Zigang (Author) / Cheng, Ying (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

There are many data mining and machine learning techniques to manage large sets of complex energy supply and demand data for buildings, organizations, and cities. As the amount of data continues to grow, new data analysis methods are needed to address the increasing complexity. Using data on the energy loss between supply (energy production sources) and demand (building and city consumption), this paper proposes a Semi-Supervised Energy Model (SSEM) to analyse different loss factors for a building cluster. Deep machine learning is used to train machines to semi-supervise the learning, understanding, and management of energy losses. The SSEM aims at understanding the demand-supply characteristics of a building cluster and utilizes confident unlabelled data (loss factors) using deep machine learning techniques. The research findings involve sample data from one of the university campuses and present an estimate of the losses that can be reduced. The paper also provides a list of loss factors that contribute to the total losses and suggests a threshold value for each loss factor, determined through real-time experiments. The paper concludes with a proposed energy model that can provide accurate numbers on energy demand, which in turn helps suppliers optimize their supply strategies.
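
The per-factor threshold idea can be sketched as follows; the loss-factor names and threshold values below are hypothetical, since the paper determines its thresholds through real-time experiments:

```python
def flag_loss_factors(measured, thresholds):
    """Flag loss factors whose measured contribution exceeds its
    threshold, and report the total loss that could be reduced
    by bringing each flagged factor back down to its threshold."""
    flagged = {k: v for k, v in measured.items()
               if v > thresholds.get(k, float("inf"))}
    reducible = sum(v - thresholds[k] for k, v in flagged.items())
    return flagged, reducible

# Hypothetical loss contributions and thresholds (% of supplied energy)
measured = {"hvac_distribution": 12.0, "lighting_schedule": 3.0}
thresholds = {"hvac_distribution": 8.0, "lighting_schedule": 5.0}
flagged, reducible = flag_loss_factors(measured, thresholds)
```

The `reducible` total is the kind of "losses that can be reduced" estimate the paper reports for its campus sample.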

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Chen, Xue-wen (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-09-14
Description

The Future of Wastewater Sensing workshop is part of a collaboration between the Arizona State University Center for Nanotechnology in Society in the School for the Future of Innovation in Society, the Biodesign Institute’s Center for Environmental Security, LC Nano, and the Nano-enabled Water Treatment (NEWT) Systems NSF Engineering Research Center. The workshop explores how technologies for studying, monitoring, and mining wastewater and sewage sludge might develop in the future, and what consequences may ensue for public health, law enforcement, private industry, regulation, and society at large. It pays particular attention to how wastewater sensing (and the accompanying research, technologies, and applications) can be innovated, regulated, and used to maximize societal benefit and minimize the risk of adverse outcomes when addressing critical social and environmental challenges.

Contributors: Withycombe Keeler, Lauren (Researcher) / Halden, Rolf (Researcher) / Selin, Cynthia (Researcher) / Center for Nanotechnology in Society (Contributor)
Created: 2015-11-01
Description

The 1918 influenza pandemic was a major epidemiological event of the twentieth century, resulting in at least twenty million deaths worldwide; however, despite its historical, epidemiological, and biological relevance, it remains poorly understood. Here we examine the relationship between annual pneumonia and influenza death rates in the pre-pandemic (1910–17) and pandemic (1918–20) periods and the scaling of mortality with latitude, longitude, and population size, using data from 66 large cities of the United States. The mean pre-pandemic pneumonia death rates were highly associated with pneumonia death rates during the pandemic period (Spearman ρ = 0.64–0.72; P < 0.001). By contrast, there was a weak correlation between pre-pandemic and pandemic influenza mortality rates. Pneumonia mortality rates partially explained influenza mortality rates in 1918 (ρ = 0.34, P = 0.005) but not during any other year. Pneumonia death counts followed a linear relationship with population size in all study years, suggesting that pneumonia death rates were homogeneous across the range of population sizes studied. By contrast, influenza death counts followed a power law relationship with a scaling exponent of ∼0.81 (95% CI: 0.71, 0.91) in 1918, suggesting that smaller cities experienced worse outcomes during the pandemic. A linear relationship was observed for all other years. Our study suggests that mortality associated with the 1918–20 influenza pandemic was in part predetermined by pre-pandemic pneumonia death rates in 66 large US cities, perhaps through the impact of the physical and social structure of each city. Smaller cities suffered a disproportionately higher per capita influenza mortality burden than larger ones in 1918, while city size did not affect pneumonia mortality rates in the pre-pandemic and pandemic periods.
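
The scaling exponent reported above is the slope of a log-log regression of death counts on population size: deaths ≈ c · population^α, with α < 1 implying higher per capita burden in smaller cities. A minimal sketch; the input data below are synthetic, constructed to follow an exact power law:

```python
import math

def scaling_exponent(populations, deaths):
    """Fit deaths ~ c * population**alpha by ordinary least squares
    on log10-transformed values, and return the slope alpha."""
    xs = [math.log10(p) for p in populations]
    ys = [math.log10(d) for d in deaths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

# Synthetic city populations and death counts obeying deaths = pop**0.81
pops = [1e4, 1e5, 1e6, 1e7]
alpha = scaling_exponent(pops, [p ** 0.81 for p in pops])
```

An exponent of exactly 1 would mean death counts grow linearly with population, i.e. constant per capita rates, which is what the study observed for pneumonia and for influenza outside 1918.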

Created: 2011-08-19
Description

Background: The historical Japanese influenza vaccination program targeted at schoolchildren provides a unique opportunity to evaluate the indirect benefits of vaccinating high-transmitter groups to mitigate disease burden among seniors. Here we characterize the indirect mortality benefits of vaccinating schoolchildren based on data from Japan and the US.

Methods: We compared age-specific influenza-related excess mortality rates in Japanese seniors aged ≥65 years during the schoolchildren vaccination program (1978–1994) and after the program was discontinued (1995–2006). Indirect vaccine benefits were adjusted for demographic changes, socioeconomics and dominant influenza subtype; US mortality data were used as a control.

Results: We estimate that the schoolchildren vaccination program conferred a 36% adjusted mortality reduction among Japanese seniors (95%CI: 17–51%), corresponding to ∼1,000 senior deaths averted by vaccination annually (95%CI: 400–1,800). In contrast, influenza-related mortality did not change among US seniors, despite increasing vaccine coverage in this population.
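
The deaths-averted figure follows arithmetically from the adjusted reduction: if observed deaths are (1 − r) of a no-program baseline, then baseline minus observed equals observed × r / (1 − r). A sketch with an illustrative observed count, since the study's actual inputs are not given in the abstract:

```python
def deaths_averted(observed_deaths, mortality_reduction):
    """Annual deaths averted implied by an adjusted mortality
    reduction r: observed = baseline * (1 - r), so
    averted = baseline - observed = observed * r / (1 - r)."""
    r = mortality_reduction
    return observed_deaths * r / (1.0 - r)

# Illustrative: ~1,800 observed senior deaths/yr with the reported
# 36% reduction implies roughly 1,000 deaths averted annually
averted = deaths_averted(1800, 0.36)
```

This back-of-envelope form matches the abstract's pairing of a 36% reduction with ∼1,000 averted deaths, though the study derives its estimate from full excess-mortality models.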

Conclusions: The Japanese schoolchildren vaccination program was associated with substantial indirect mortality benefits in seniors.

Contributors: Charu, Vivek (Author) / Viboud, Cecile (Author) / Simonsen, Lone (Author) / Sturm-Ramirez, Katharine (Author) / Shinjoh, Masayoshi (Author) / Chowell-Puente, Gerardo (Author) / Miller, Mark (Author) / Sugaya, Norio (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2011-11-07