Description

Nanomaterials-enabled technologies have been seamlessly integrated into applications such as aviation and space, the chemical industry, optics, solar hydrogen, fuel cells, batteries, sensors, power generation, the aeronautics industry, building and construction, automotive engineering, consumer electronics, thermoelectric devices, pharmaceuticals, and cosmetics. Clean energy and environmental applications often demand the development of novel nanomaterials that provide the shortest reaction pathways to enhance reaction kinetics. Understanding the physicochemical, structural, microstructural, surface, and interface properties of nanomaterials is vital for achieving the required efficiency, cycle life, and sustainability in various technological applications. Nanomaterials of specific size and shape, such as nanotubes, nanofibers, nanowires, nanocones, nanocomposites, nanorods, nanoislands, nanoparticles, nanospheres, and nanoshells, can be synthesized by tuning the process conditions to provide these unique properties.

Contributors: Srinivasan, Sesha (Author) / Kannan, Arunachala Mada (Author) / Kothurkar, Nikhil (Author) / Khalil, Yehia (Author) / Kuravi, Sarada (Author) / Ira A. Fulton Schools of Engineering (Contributor)
Created: 2015-11-23
Description

In this study, WRF-Chem is utilized at high resolution (1.333 km grid spacing for the innermost domain) to investigate the impacts of southern California anthropogenic emissions (SoCal) on Phoenix ground-level ozone concentrations ([O3]) for a pair of recent exceedance episodes. First, WRF-Chem control simulations, based on the US Environmental Protection Agency (EPA) 2005 National Emissions Inventories (NEI05), are conducted to evaluate model performance. Compared with surface observations of hourly ozone, CO, NOX, and wind fields, the control simulations reproduce the observed variability well, and simulated [O3] values are comparable with those reported in previous studies of this region. Next, the relative contributions of SoCal and Arizona local anthropogenic emissions (AZ) to ozone exceedances within the Phoenix metropolitan area are investigated via a trio of sensitivity simulations: (1) SoCal emissions are excluded, with all other emissions as in Control; (2) AZ emissions are excluded, with all other emissions as in Control; and (3) both SoCal and AZ emissions are excluded (i.e., all anthropogenic emissions are eliminated) to account only for biogenic emissions and lateral boundary inflow (BILB). Based on the EPA NEI05, results for the selected events indicate that the impacts of AZ emissions on daily maximum 8 h average (DMA8) [O3] in Phoenix are dominant. SoCal contributions to DMA8 [O3] for the Phoenix metropolitan area range from a few ppbv to over 30 ppbv (10–30 % relative to the Control experiments). [O3] from SoCal and AZ emissions exhibits the expected diurnal characteristics determined by physical and photochemical processes, while BILB contributions to DMA8 [O3] in Phoenix also play a key role.
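The DMA8 metric used throughout this abstract can be computed from a day of hourly ozone values: take the running 8 h mean over every window starting within the day, then take the daily maximum. A minimal sketch (hypothetical data, not values from the study):

```python
import numpy as np

def dma8(hourly_o3):
    """Daily maximum 8 h average ozone (ppbv) from hourly values.

    Computes the mean over each 8-hour window whose start falls within
    the supplied series, then returns the largest of those means.
    """
    hourly_o3 = np.asarray(hourly_o3, dtype=float)
    windows = [hourly_o3[i:i + 8].mean() for i in range(len(hourly_o3) - 7)]
    return max(windows)

# Hypothetical diurnal profile peaking mid-afternoon:
hours = np.arange(24)
o3 = 40 + 35 * np.exp(-((hours - 15) ** 2) / 18.0)
print(round(dma8(o3), 1))
```

Regulatory DMA8 definitions include additional conventions (e.g., handling windows that cross midnight and missing-data rules) that this sketch omits.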

Contributors: Li, Jialun (Author) / Georgescu, Matei (Author) / Hyde, Peter (Author) / Mahalov, Alex (Author) / Moustaoui, Mohamed (Author) / Julie Ann Wrigley Global Institute of Sustainability (Contributor)
Created: 2015-08-21
Description

The Experimental Data Processing (EDP) software is a C++ GUI-based application that streamlines the process of creating a model for structural systems based on experimental data. EDP is designed to process raw data, filter the data for noise and outliers, create a fitted model to describe that data, complete a probabilistic analysis to describe the variation between replicates of the experimental process, and analyze the reliability of a structural system based on that model. To help design the EDP software to perform the full analysis, the probabilistic and regression modeling aspects of this analysis have been explored. The focus has been on creating and analyzing probabilistic models for the data, adding multivariate and nonparametric fits to raw data, and developing computational techniques that allow these methods to be properly implemented within EDP. For creating a probabilistic model of replicate data, the normal, lognormal, gamma, Weibull, and generalized exponential distributions have been explored. Goodness-of-fit tests, including the chi-squared, Anderson-Darling, and Kolmogorov-Smirnov tests, have been used to analyze how effectively each of these probabilistic models describes the variation of parameters between replicates of an experimental test. An example using Young's modulus data from a Kevlar-49 swath stress-strain test demonstrates how this analysis is performed within EDP. To implement the distributions, numerical solutions for the gamma, beta, and hypergeometric functions were implemented, along with an arbitrary-precision library to store numbers that exceed the range of double-precision floating-point values. To create a multivariate fit, the multilinear solution was created as the simplest solution to the multivariate regression problem. This solution was then extended to solve nonlinear problems that can be linearized into multiple separable terms.
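The fit-then-test workflow described above can be sketched with SciPy's built-in fitters standing in for EDP's own implementation (synthetic data in place of the Kevlar-49 measurements; distribution choices and sample size are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic replicate data standing in for, e.g., Young's modulus values.
data = rng.lognormal(mean=4.3, sigma=0.08, size=40)

# Fit candidate distributions by maximum likelihood, then compare them
# with the Kolmogorov-Smirnov statistic (smaller = closer fit).
for dist in (stats.norm, stats.lognorm, stats.gamma, stats.weibull_min):
    params = dist.fit(data)
    ks_stat, p_value = stats.kstest(data, dist.name, args=params)
    print(f"{dist.name:12s} KS={ks_stat:.3f} p={p_value:.3f}")
```

One caveat: applying the KS test with parameters estimated from the same sample makes the reported p-values only approximate, so the statistic is best read as a relative ranking of candidate distributions.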
These problems were solved analytically with the closed-form solution for the multilinear regression, and then numerically by using a QR decomposition, which avoids the numerical instabilities associated with matrix inversion. For nonparametric regression, or smoothing, the loess method was developed as a robust technique for filtering noise while maintaining the general structure of the data points. The loess solution was created by examining the shortcomings of simpler smoothing methods, including the running mean, running line, and kernel smoothing techniques, and combining the strengths of each. The loess smoothing method weights each point in a partition of the data set and then fits either a line or a polynomial within that partition. Both linear and quadratic variants were applied to a carbon fiber compression test, showing that the quadratic model was more accurate but the linear model had a shape that was more effective for analyzing the experimental data. Finally, the EDP program itself was explored to consider its current functionality for processing data, as illustrated by shear tests on carbon fiber data, and the functionality still to be developed. The probabilistic and raw data processing capabilities were demonstrated within EDP, and the multivariate and loess analyses were demonstrated using R. As the functionality and relevant considerations for these methods have been developed, the immediate goal is to finish implementing and integrating these additional features into a version of EDP that performs a full, streamlined structural analysis on experimental data.
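The contrast between the two multilinear solution routes can be sketched generically (this is an illustration of the technique, not EDP's actual code): the normal equations solve X^T X b = X^T y directly, which squares the condition number of X, whereas the QR route factors X = QR and back-solves the triangular system R b = Q^T y.

```python
import numpy as np

rng = np.random.default_rng(1)
# Design matrix: intercept column plus two predictors (hypothetical data).
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
true_beta = np.array([2.0, -1.5, 0.5])
y = X @ true_beta + 0.01 * rng.normal(size=n)

# Closed-form normal-equations solution (prone to instability when
# X is ill-conditioned, since cond(X^T X) = cond(X)^2):
beta_ne = np.linalg.solve(X.T @ X, X.T @ y)

# QR-based solution: factor X = QR, then solve R b = Q^T y.
Q, R = np.linalg.qr(X)          # reduced QR: Q is n x 3, R is 3 x 3
beta_qr = np.linalg.solve(R, Q.T @ y)

print(np.allclose(beta_ne, beta_qr))
```

For this well-conditioned example the two answers agree; the QR route pays off when the predictors are nearly collinear.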
Contributors: Markov, Elan Richard (Author) / Rajan, Subramaniam (Thesis director) / Khaled, Bilal (Committee member) / Chemical Engineering Program (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Ira A. Fulton School of Engineering (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
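The variance-based targeting criterion can be illustrated with a toy one-dimensional ensemble (a hypothetical sketch, not the paper's LETKF implementation): at each analysis time, observe the grid point where the forecast ensemble spread is largest, then apply a scalar ensemble Kalman update there. For simplicity the sketch updates only the observed point; a full EnKF would spread the increment to other grid points via ensemble cross-covariances.

```python
import numpy as np

rng = np.random.default_rng(2)
n_grid, n_ens = 20, 30
obs_var = 0.1 ** 2

# Forecast ensemble: rows are grid points, columns are ensemble members.
ensemble = rng.normal(size=(n_grid, n_ens))
ensemble[7, :] += rng.normal(scale=3.0, size=n_ens)  # inflated spread at one point
truth = np.zeros(n_grid)

# Target the grid point with the largest forecast ensemble variance.
target = int(np.argmax(ensemble.var(axis=1, ddof=1)))

# Observe the truth there (with noise) and apply a scalar EnKF update
# in perturbed-observations form at the target point.
y = truth[target] + rng.normal(scale=np.sqrt(obs_var))
prior_var = ensemble[target].var(ddof=1)
gain = prior_var / (prior_var + obs_var)
perturbed_obs = y + rng.normal(scale=np.sqrt(obs_var), size=n_ens)
ensemble[target] += gain * (perturbed_obs - ensemble[target])

print(target, ensemble[target].var(ddof=1) < prior_var)
```

The update collapses the ensemble spread at the observed point toward the observation-error level, which is the mechanism by which targeting the highest-variance location yields the largest reduction in state estimation uncertainty.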

Contributors: Bellsky, Thomas (Author) / Kostelich, Eric (Author) / Mahalov, Alex (Author) / College of Liberal Arts and Sciences (Contributor)
Created: 2014-06-01