This growing collection consists of scholarly works authored by ASU-affiliated faculty, staff, and community members, and it contains many open access articles. ASU-affiliated authors are encouraged to Share Your Work in KEEP.

Description

This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
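A toy illustration of the targeting idea described above (not the paper's LETKF setup): for a small linear model, each assimilation cycle observes the state component with the largest forecast variance rather than a fixed component. The model matrices, noise levels, and the `run` helper are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def kf_step(x, P, A, Q, y, h_idx, r):
    """One Kalman filter forecast/analysis cycle with a single scalar
    observation taken at state index h_idx."""
    # Forecast
    x = A @ x
    P = A @ P @ A.T + Q
    # Scalar-observation analysis
    H = np.zeros((1, len(x))); H[0, h_idx] = 1.0
    S = H @ P @ H.T + r                      # innovation variance
    K = (P @ H.T) / S                        # Kalman gain
    x = x + (K * (y - x[h_idx])).ravel()
    P = P - K @ H @ P
    return x, P

rng = np.random.default_rng(0)
n, r = 4, 0.1
A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
Q = 0.01 * np.eye(n)

def run(targeted, steps=200):
    """Mean state-estimation error with targeted vs. fixed observations."""
    truth = rng.standard_normal(n)
    x, P = np.zeros(n), np.eye(n)
    err = 0.0
    for _ in range(steps):
        truth = A @ truth + rng.multivariate_normal(np.zeros(n), Q)
        # Targeted: observe the component with the largest forecast variance;
        # otherwise always observe component 0.
        idx = int(np.argmax(np.diag(A @ P @ A.T + Q))) if targeted else 0
        y = truth[idx] + np.sqrt(r) * rng.standard_normal()
        x, P = kf_step(x, P, A, Q, y, idx, r)
        err += np.linalg.norm(x - truth)
    return err / steps

print(run(targeted=True), run(targeted=False))
```

In this sketch the targeting criterion (largest forecast variance) plays the role of the paper's largest-ensemble-variance criterion; in a full ensemble filter the variance would come from the ensemble spread rather than the exact covariance.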

Contributors: Bellsky, Thomas (Author) / Kostelich, Eric (Author) / Mahalov, Alex (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2014-06-01
Description

The effects of urbanization on ozone levels have been widely investigated over cities primarily located in temperate and/or humid regions. In this study, nested WRF-Chem simulations with a finest grid resolution of 1 km are conducted to investigate ozone (O3) concentrations due to urbanization within cities in arid/semi-arid environments. First, a method based on shape-preserving Monotonic Cubic Interpolation (MCI) is developed and used to downscale anthropogenic emissions from the 4 km resolution 2005 National Emissions Inventory (NEI05) to the finest model resolution of 1 km. Using the rapidly expanding Phoenix metropolitan region as the area of focus, we demonstrate that the proposed MCI method achieves ozone simulation results with appreciably improved correspondence to observations relative to the default interpolation method of the WRF-Chem system. Next, two additional sets of experiments are conducted with the recommended MCI approach to examine the impacts of urbanization on ozone production: (1) the urban land cover is included (i.e., urbanization experiments) and (2) the urban land cover is replaced with the region's native shrubland. Impacts of the built environment on O3 are highly heterogeneous across the metropolitan area. Increased near-surface O3 due to urbanization of 10–20 ppb is predominantly a nighttime phenomenon, while simulated impacts during daytime are negligible. Urbanization narrows the daily O3 range (by virtue of increasing nighttime minima), an impact largely due to the region's urban heat island. Our results demonstrate the importance of the MCI method for accurate representation of the diurnal profile of ozone, and highlight its utility for high-resolution air quality simulations for urban areas.
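The key property of shape-preserving monotonic cubic interpolation is that, unlike an unconstrained cubic spline, it does not overshoot between data points, so downscaled emissions never go negative or develop spurious peaks. A minimal one-dimensional sketch using SciPy's PCHIP interpolant (the grid spacing matches the 4 km to 1 km downscaling described above; the emission values themselves are hypothetical):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Coarse 4 km grid-cell centers with a hypothetical emissions profile
x4 = np.arange(0.0, 40.0, 4.0)                      # km
e4 = np.array([1.0, 1.2, 2.5, 6.0, 9.5, 9.8, 7.0, 3.0, 1.5, 1.1])

# Shape-preserving monotonic cubic interpolation (PCHIP) to a 1 km grid.
# PCHIP stays within the envelope of adjacent data points, so interpolated
# emissions cannot undershoot below zero or overshoot local maxima.
mci = PchipInterpolator(x4, e4)
x1 = np.arange(0.0, 37.0, 1.0)
e1 = mci(x1)

print(f"downscaled to {e1.size} points; range {e1.min():.2f}-{e1.max():.2f}")
```

A 2-D emissions field would apply the same interpolant along each grid dimension in turn; the no-overshoot guarantee is what distinguishes this from the default interpolation the abstract compares against.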

Contributors: Li, Jialun (Author) / Georgescu, Matei (Author) / Hyde, Peter (Author) / Mahalov, Alex (Author) / Moustaoui, Mohamed (Author) / Julie Ann Wrigley Global Institute of Sustainability (Contributor)
Created: 2014-11-01
Description

The estimation of energy demand (by power plants) has traditionally relied on historical energy use data for the region(s) that a plant serves. Regression analysis, artificial neural networks and Bayesian theory are the most common approaches for analysing these data. Such data and techniques do not generate reliable results. Consequently, excess energy has to be generated to prevent blackouts; the causes of energy surges are not easily determined; and potential energy use reductions from energy efficiency solutions are usually not translated into actual energy use reductions. The paper highlights the weaknesses of traditional techniques and lays out a framework to improve the prediction of energy demand by combining energy use models of equipment, physical systems and buildings with the proposed data mining algorithms for reverse engineering. The research team first analyses data samples from large, complex energy datasets and then presents a set of computationally efficient data mining algorithms for reverse engineering. To develop a structural system model for reverse engineering, two focus groups are developed that have a direct relation to cause-and-effect variables. The research findings of this paper include testing different sets of reverse-engineering algorithms, understanding their output patterns, and modifying the algorithms to improve the accuracy of their outputs.
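A minimal sketch of the traditional historical-data regression that the paper critiques (the demand series, harmonic regressors, and scheduling margin are simulated and illustrative, not the authors' data or method):

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24 * 14)                       # two weeks of hourly data
# Simulated demand: a daily cycle around 50 units plus noise
demand = 50 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Ordinary least squares on time-of-day harmonics, a common way to
# forecast demand purely from historical use
X = np.column_stack([np.ones(hours.size),
                     np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24)])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
pred = X @ coef

# The residual spread is what forces operators to schedule excess
# generation as a safety margin against blackouts
margin = np.percentile(np.abs(demand - pred), 95)
print(f"fitted coefficients: {coef.round(2)}, 95th-percentile error: {margin:.2f}")
```

The critique above is that such a model captures only aggregate historical patterns: it cannot attribute a surge to a specific cause, which is the gap the proposed reverse-engineering algorithms aim to fill.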

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Ye, Long (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-12-09
Description

Small and medium office buildings account for a significant share of the U.S. building stock's energy consumption, yet their owners often lack the resources and experience to conduct detailed energy audits and retrofit analyses. We present an eight-step framework for an energy retrofit assessment in small and medium office buildings. Through a bottom-up approach and a web-based retrofit toolkit tested on a case study in Arizona, this methodology was able to save about 50% of the total energy consumed by the case study building, depending on the adopted measures and invested capital. While the case study presented is a deep energy retrofit, the proposed framework is effective in guiding the decision-making process that precedes any energy retrofit, deep or light.

Contributors: Rios, Fernanda (Author) / Parrish, Kristen (Author) / Chong, Oswald (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

Commercial buildings’ consumption is driven by multiple factors that include occupancy, system and equipment efficiency, thermal heat transfer, equipment plug loads, maintenance and operational procedures, and outdoor and indoor temperatures. A modern building energy system can be viewed as a complex dynamical system that is interconnected and influenced by external and internal factors. Modern large-scale sensor networks measure physical signals to monitor real-time system behavior, and such data have the potential to reveal anomalies, identify consumption patterns, and support peak-load analysis. This paper proposes a novel method to detect hidden anomalies in commercial building energy consumption systems. The framework is based on the Hilbert-Huang transform and instantaneous frequency analysis. The objective is to develop an automated data pre-processing system that can detect anomalies and provide solutions against a real-time consumption database using the Ensemble Empirical Mode Decomposition (EEMD) method. The findings of this paper also include comparisons of Empirical Mode Decomposition and Ensemble Empirical Mode Decomposition for three important types of institutional buildings.
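A minimal sketch of the instantaneous-frequency stage of Hilbert-Huang analysis mentioned above (in the full method the Hilbert transform is applied to EEMD intrinsic mode functions; here, for brevity, it is applied directly to a simulated load signal, and the anomaly threshold is an illustrative assumption):

```python
import numpy as np
from scipy.signal import hilbert

fs = 24.0                                  # samples per day (hourly data)
t = np.arange(0, 30, 1 / fs)               # 30 days of simulated load
load = np.sin(2 * np.pi * 1.0 * t)         # regular daily consumption cycle
# Hidden anomaly: a burst of fast (6 cycles/day) oscillation
load[400:420] += np.sin(2 * np.pi * 6.0 * t[400:420])

# Analytic signal -> unwrapped phase -> instantaneous frequency
analytic = hilbert(load)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # cycles per day

# Flag samples whose instantaneous frequency departs strongly from the
# expected daily cycle (threshold of 2 cycles/day is illustrative)
anomalous = np.where(np.abs(inst_freq - 1.0) > 2.0)[0]
print(f"{anomalous.size} anomalous samples flagged")
```

The point of the instantaneous-frequency view is that the anomaly is invisible in total consumption averages but stands out sharply in the local frequency content, which is what makes it "hidden" to conventional threshold checks.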

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Huang, Zigang (Author) / Cheng, Ying (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2016-05-20
Description

An interesting occurrence of a Rossby wave breaking event observed during the VORCORE experiment is presented and explained. Twenty-seven balloons were launched inside the Antarctic polar vortex. Almost all of these balloons evolved in the stratosphere near the 500 K isentropic surface within the vortex, except the one launched on 28 October 2005. In this case, the balloon was caught within a tongue of high potential vorticity (PV) and was ejected from the polar vortex. The evolution of this event is studied for the period between 19 and 25 November 2005. At the beginning of this period, the polar vortex experienced distortions due to the presence of Rossby waves. These waves then broke, and a tongue of high PV developed. On 25 November, the tongue separated from the vortex and the balloon was ejected into the surf zone. Lagrangian simulations demonstrate that the air masses surrounding the balloon after its ejection originated from the vortex edge. The wave breaking and the development of the tongue are confined within a region where a planetary Quasi-Stationary Wave 1 (QSW1) induces weaker wind speeds. The QSW1 causes asymmetry in the wind speed and the horizontal PV gradient along the edge of the polar vortex, resulting in a localized jet. Smaller-scale Rossby waves propagating on top of this jet amplify as they enter the jet exit region and then break. The role of the QSW1 in establishing the weak-flow conditions that caused the non-linear wave breaking observed near the vortex edge is confirmed by three-dimensional numerical simulations forced with and without the contribution of the QSW1.

Contributors: Moustaoui, Mohamed (Author) / Teitelbaum, H. (Author) / Mahalov, Alex (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2013-04-16
Description

There are many data mining and machine learning techniques for managing large sets of complex energy supply and demand data at the building, organization and city scales. As the amount of data continues to grow, new data analysis methods are needed to address the increasing complexity. Using data on the energy lost between supply (energy production sources) and demand (building and city consumption), this paper proposes a Semi-Supervised Energy Model (SSEM) to analyse the different loss factors for a building cluster. Deep machine learning techniques are trained to semi-supervise the learning, understanding and management of energy losses. SSEM aims at understanding the demand-supply characteristics of a building cluster and utilizes confident unlabelled data (loss factors) via deep machine learning techniques. The research findings involve sample data from one of the university campuses and present an estimate of the losses that can be reduced. The paper also provides a list of the loss factors that contribute to the total losses and suggests a threshold value for each loss factor, determined through real-time experiments. The paper concludes with a proposed energy model that can provide accurate numbers on energy demand, which in turn helps suppliers optimize their supply strategies.
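A minimal self-training sketch of the semi-supervised idea (illustrative only, not the paper's SSEM): unlabelled points on which a simple classifier is confident are pseudo-labelled and folded back into the training set. The cluster values, class names, and confidence threshold are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two hypothetical loss-factor clusters, e.g. "HVAC loss" near 1.0 and
# "lighting loss" near 5.0 (arbitrary units)
X_lab = np.r_[rng.normal(1.0, 0.3, 20), rng.normal(5.0, 0.3, 20)]
y_lab = np.r_[np.zeros(20, int), np.ones(20, int)]
X_unl = np.r_[rng.normal(1.0, 0.3, 100), rng.normal(5.0, 0.3, 100)]

for _ in range(3):                          # a few self-training rounds
    # Nearest-centroid classifier fit on the current labelled set
    centroids = np.array([X_lab[y_lab == k].mean() for k in (0, 1)])
    dist = np.abs(X_unl[:, None] - centroids[None, :])
    pred = dist.argmin(axis=1)
    conf = np.abs(dist[:, 0] - dist[:, 1])  # margin between the two classes
    keep = conf > 2.0                       # confidence threshold (assumed)
    # Fold confident pseudo-labelled points back into the labelled set
    X_lab = np.r_[X_lab, X_unl[keep]]
    y_lab = np.r_[y_lab, pred[keep]]
    X_unl = X_unl[~keep]

print(len(X_lab), "labelled points after self-training")
```

The paper's SSEM replaces the nearest-centroid step with deep models, but the core mechanism is the same: only confidently classified unlabelled loss factors are trusted and reused.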

Contributors: Naganathan, Hariharan (Author) / Chong, Oswald (Author) / Chen, Xue-wen (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-09-14
Description

Insulin-like growth factor 1 (IGF1) is an important biomarker for the management of growth hormone disorders. Recently there has been rising interest in deploying mass spectrometric (MS) methods of detection for measuring IGF1. However, widespread clinical adoption of any MS-based IGF1 assay will require increased throughput and speed to justify the costs of analyses, and robust industrial platforms that are reproducible across laboratories. Presented here is an MS-based quantitative IGF1 assay with a throughput rating of >1,000 samples/day and the capability of quantifying IGF1 point mutations and posttranslational modifications. The throughput of the IGF1 mass spectrometric immunoassay (MSIA) benefited from a simplified sample preparation step, IGF1 immunocapture in a tip format, and high-throughput MALDI-TOF MS analysis. The Limit of Detection and Limit of Quantification of the resulting assay were 1.5 μg/L and 5 μg/L, respectively, with intra- and inter-assay precision CVs of less than 10%, and good linearity and recovery characteristics. The IGF1 MSIA was benchmarked against a commercially available IGF1 ELISA via the Bland-Altman method comparison test, revealing a slight positive bias of 16%. The IGF1 MSIA was employed in an optimized parallel workflow utilizing two pipetting robots and MALDI-TOF MS instruments synced into one-hour phases of sample preparation, extraction and MSIA pipette tip elution, MS data collection, and data processing. Using this workflow, high-throughput IGF1 quantification of 1,054 human samples was achieved in approximately 9 hours. This rate of assaying is a significant improvement over existing MS-based IGF1 assays and is on par with that of enzyme-based immunoassays. Furthermore, a mutation was detected in ∼1% of the samples (SNP: rs17884626, creating an A→T substitution at position 67 of IGF1), demonstrating the capability of the IGF1 MSIA to detect point mutations and posttranslational modifications.
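A minimal sketch of the Bland-Altman comparison used above to benchmark MSIA against ELISA (the concentration values are simulated with a built-in ~16% proportional bias to mirror the reported result; they are not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
elisa = rng.uniform(50, 400, 100)                    # IGF1, ug/L (simulated)
msia = elisa * 1.16 + rng.normal(0, 5, 100)          # ~16% positive bias + scatter

# Bland-Altman statistics: per-sample difference vs. per-sample mean
diff = msia - elisa
mean = (msia + elisa) / 2
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)           # 95% limits of agreement
pct_bias = 100 * (diff / mean).mean()                # bias as percent of mean

print(f"bias = {bias:.1f} ug/L, limits of agreement = "
      f"({loa[0]:.1f}, {loa[1]:.1f}), ~{pct_bias:.0f}% relative")
```

Bland-Altman analysis is preferred over simple correlation for method comparison because two assays can correlate strongly while still disagreeing systematically; the bias and limits of agreement quantify that disagreement directly.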

Contributors: Oran, Paul (Author) / Trenchevska, Olgica (Author) / Nedelkov, Dobrin (Author) / Borges, Chad (Author) / Schaab, Matthew (Author) / Rehder, Douglas (Author) / Jarvis, Jason (Author) / Sherma, Nisha (Author) / Shen, Luhui (Author) / Krastins, Bryan (Author) / Lopez, Mary F. (Author) / Schwenke, Dawn (Author) / Reaven, Peter D. (Author) / Nelson, Randall (Author) / Biodesign Institute (Contributor)
Created: 2014-03-24
Description

Serum Amyloid A (SAA) is an acute phase protein complex consisting of several abundant isoforms. The N-terminus of SAA is critical to its function in amyloid formation. SAA is frequently truncated, either missing an arginine or an arginine-serine dipeptide, resulting in isoforms that may influence the capacity to form amyloid. However, the relative abundance of truncated SAA in diabetes and chronic kidney disease is not known.

Methods: Using mass spectrometric immunoassay, the abundance of SAA truncations relative to the native variants was examined in plasma of 91 participants with type 2 diabetes and chronic kidney disease and 69 participants without diabetes.

Results: The ratio of SAA 1.1 (missing N-terminal arginine) to native SAA 1.1 was lower in diabetics compared to non-diabetics (p = 0.004), and in males compared to females (p<0.001). This ratio was negatively correlated with glycated hemoglobin (r = −0.32, p<0.001) and triglyceride concentrations (r = −0.37, p<0.001), and positively correlated with HDL cholesterol concentrations (r = 0.32, p<0.001).

Conclusion: The relative abundance of the N-terminal arginine truncation of SAA1.1 is significantly decreased in diabetes and negatively correlates with measures of glycemic and lipid control.
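The correlations reported in the Results section are Pearson coefficients; a brief sketch of that computation on simulated values (the glycated hemoglobin levels and truncation ratios below are generated with a built-in negative trend to mirror the reported direction of association; they are not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
# Simulated cohort of 91 participants
hba1c = rng.normal(7.0, 1.2, 91)                     # glycated hemoglobin, %
# Truncated-to-native SAA1.1 ratio with an assumed negative dependence
ratio = 1.5 - 0.1 * hba1c + rng.normal(0, 0.15, 91)

# Pearson correlation between the truncation ratio and HbA1c
r, p = pearsonr(hba1c, ratio)
print(f"r = {r:.2f}, p = {p:.3g}")
```

With real cohort data, the same call yields the r and p values quoted above (e.g. r = -0.32, p < 0.001 for glycated hemoglobin).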

Contributors: Yassine, Hussein N. (Author) / Trenchevska, Olgica (Author) / He, Huijuan (Author) / Borges, Chad (Author) / Nedelkov, Dobrin (Author) / Mack, Wendy (Author) / Kono, Naoko (Author) / Koska, Juraj (Author) / Reaven, Peter D. (Author) / Nelson, Randall (Author) / Biodesign Institute (Contributor)
Created: 2015-01-21
Description

Physical mechanisms of incongruency between observations and Weather Research and Forecasting (WRF) Model predictions are examined. Evaluation is constrained by (i) parameterizations of model physics, (ii) parameterizations of input data, (iii) model resolution, and (iv) flux observation resolution. Observations from a new 22.1-m flux tower situated within a residential neighborhood in Phoenix, Arizona, are utilized to evaluate the ability of the urbanized WRF to resolve finescale surface energy balance (SEB) when using the urban classes derived from the 30-m-resolution National Land Cover Database. Modeled SEB response to a large seasonal variation of net radiation forcing was tested during synoptically quiescent periods of high pressure in winter 2011 and premonsoon summer 2012. Results are presented from simulations employing five nested domains down to 333-m horizontal resolution. A comparative analysis of model cases testing parameterization of physical processes was done using four configurations of urban parameterization for the bulk urban scheme versus three representations with the Urban Canopy Model (UCM) scheme, and also for two types of planetary boundary layer parameterization: the local Mellor–Yamada–Janjić scheme and the nonlocal Yonsei University scheme. Diurnal variation in SEB constituent fluxes is examined in relation to surface-layer stability and modeled diagnostic variables. Improvement is found when adapting UCM for Phoenix with reduced errors in the SEB components. Finer model resolution is seen to have insignificant (<1 standard deviation) influence on mean absolute percent difference of 30-min diurnal mean SEB terms.
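A brief sketch of the evaluation metric mentioned at the end of the abstract, the mean absolute percent difference (MAPD) between modelled and observed 30-min diurnal-mean surface energy balance terms (the sensible-heat-flux values below are simulated with ~10% model scatter; they are not the study's tower data):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0, 24, 0.5)                            # 30-min bins over a day
# Simulated observed sensible heat flux: daytime peak, small nighttime floor
obs_qh = 250 * np.clip(np.sin(2 * np.pi * (t - 6) / 24), 0.05, None)  # W m-2
# Simulated model output with ~10% multiplicative scatter
mod_qh = obs_qh * (1 + rng.normal(0, 0.1, t.size))

# Mean absolute percent difference across the diurnal cycle
mapd = 100 * np.mean(np.abs(mod_qh - obs_qh) / np.abs(obs_qh))
print(f"sensible-heat-flux MAPD: {mapd:.1f}%")
```

In the study, this metric is computed per SEB term for each model configuration; the finding quoted above is that refining horizontal resolution changed the MAPD by less than one standard deviation.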

Contributors: Shaffer, Stephen (Author) / Chow, Winston (Author) / Georgescu, Matei (Author) / Hyde, Peter (Author) / Jenerette, G. D. (Author) / Mahalov, Alex (Author) / Moustaoui, Mohamed (Author) / Ruddell, Benjamin (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2015-06-11