Description
The high strength-to-weight ratio of woven fabric offers a cost-effective solution for containment systems in aircraft propulsion engines. Currently, Kevlar is the only Federal Aviation Administration (FAA) approved fabric for use in systems intended to mitigate fan blade-out events. This research builds on an earlier constitutive model of Kevlar 49 fabric developed at Arizona State University (ASU) with the addition of new and improved modeling details. The latest stress-strain experiments provided new and valuable data used to modify the post-peak behavior of the material model. These changes yield an overall improvement in the finite element (FE) model's ability to predict experimental results. First, the steel projectile is modeled using the Johnson-Cook material model, which provides more realistic behavior in the FE ballistic models. This is particularly noticeable when comparing FE models with laboratory tests in which large projectile deformations are observed. Second, follow-up analysis of the results obtained from the new picture frame tests conducted at ASU provides new values for the shear moduli and corresponding strains. The new approach for analyzing picture frame test data combines digital image analysis with a two-level factorial optimization formulation. Finally, an additional improvement in the Kevlar material model involves checking convergence as the fabric mesh density is varied. The study performed and described herein shows a converging trend, thereby validating the FE model.
Contributors: Morea, Mihai I (Author) / Rajan, Subramaniam D. (Thesis advisor) / Arizona State University (Publisher)
Created: 2011
Description
This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of fabric-based engine containment systems through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data provide a better perspective on the results than a single deterministic simulation. The next part of the research implements the probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions; however, in a deterministic optimization problem, even though the structure is cost-effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints.
This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by the simulation techniques used to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation and sensitivity analysis, which removes the highly reliable constraints from the RBDO, thereby reducing the computational time and the number of function evaluations. Finally, the reliability analysis concepts and RBDO are applied to finite element 2D truss problems and a planar beam problem, and the results are presented and discussed.
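The core of the Monte Carlo reliability analysis described above can be illustrated with a minimal sketch. The limit state, distributions, and numbers below are hypothetical stand-ins, not values from the thesis; the point is only the mechanics of estimating a probability of failure by sampling:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative limit state g = R - S (capacity minus demand); failure when g < 0.
# R and S are hypothetical Gaussian resistance and load-effect variables.
n = 200_000
R = rng.normal(10.0, 1.0, n)   # resistance samples
S = rng.normal(7.0, 1.5, n)    # load-effect samples

g = R - S
pf = np.mean(g < 0.0)          # Monte Carlo estimate of the probability of failure

# For this linear Gaussian case the exact reliability index is
# beta = (10 - 7) / sqrt(1.0**2 + 1.5**2) ≈ 1.66, so pf ≈ 0.048,
# which the simulation should approximately reproduce.
print(f"estimated pf = {pf:.4f}")
```

In an RBDO formulation, an estimate like `pf` (or the corresponding reliability index) would enter the optimization as a reliability constraint alongside the usual structural performance constraints.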
Contributors: Deivanayagam, Arumugam (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Neithalath, Narayanan (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
This research examines the current challenges of using Lamb wave interrogation methods to localize fatigue crack damage in a complex metallic structural component subjected to unknown temperatures. The goal of this work is to improve damage localization results for a structural component interrogated at an unknown temperature, by developing a probabilistic and reference-free framework for estimating Lamb wave velocities and the damage location. The methodology for damage localization at unknown temperatures includes the following key elements: i) a model that can describe the change in Lamb wave velocities with temperature; ii) the extension of an advanced time-frequency based signal processing technique for enhanced time-of-flight feature extraction from a dispersive signal; iii) the development of a Bayesian damage localization framework incorporating data association and sensor fusion. The technique requires no additional transducers to be installed on a structure, and allows for the estimation of both the temperature and the wave velocity in the component. Additionally, the framework of the algorithm allows it to function completely in an unsupervised manner by probabilistically accounting for all measurement origin uncertainty. The novel algorithm was experimentally validated using an aluminum lug joint with a growing fatigue crack. The lug joint was interrogated using piezoelectric transducers at multiple fatigue crack lengths, and at temperatures between 20°C and 80°C. The results showed that the algorithm could accurately predict the temperature and wave speed of the lug joint. The localization results for the fatigue damage were found to correlate well with the true locations at long crack lengths, but loss of accuracy was observed in localizing small cracks due to time-of-flight measurement errors. To validate the algorithm across a wider range of temperatures the electromechanically coupled LISA/SIM model was used to simulate the effects of temperature.
The numerical results showed that this approach would be capable of experimentally estimating the temperature and velocity in the lug joint for temperatures from -60°C to 150°C. The velocity estimation algorithm was found to significantly increase the accuracy of localization at temperatures above 120°C when error due to incorrect velocity selection begins to outweigh the error due to time-of-flight measurements.
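The first key element of the framework above is a model of Lamb wave velocity as a function of temperature. A common first-order approximation is a linear decrease of group velocity with temperature; the sketch below fits and inverts such a model. All velocities, temperatures, and coefficients are illustrative, not data from this work:

```python
import numpy as np

# Hypothetical calibration data: Lamb wave group velocity (m/s) at known
# temperatures (°C). These numbers are illustrative only.
temps = np.array([20.0, 40.0, 60.0, 80.0])
velocities = np.array([5120.0, 5100.0, 5080.0, 5060.0])

# Fit a linear model v(T) = v0 + k*T, a common first-order approximation
# for the temperature dependence of guided-wave velocity.
k, v0 = np.polyfit(temps, velocities, 1)

def predict_velocity(T):
    """Estimated group velocity (m/s) at temperature T (°C)."""
    return v0 + k * T

def estimate_temperature(v):
    """Invert the model: map an observed velocity back to a temperature
    estimate, in the spirit of the reference-free estimation above."""
    return (v - v0) / k

print(f"v(50 C) ≈ {predict_velocity(50.0):.0f} m/s")
```

In the dissertation's framework this deterministic fit would be replaced by a probabilistic estimate, with time-of-flight features feeding a Bayesian update of both velocity and temperature.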
Contributors: Hensberry, Kevin (Author) / Chattopadhyay, Aditi (Thesis advisor) / Liu, Yongming (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Additively manufactured thin-wall Inconel 718 specimens commonly find application in heat exchangers and Thermal Protection Systems (TPS) for space vehicles. The wall thicknesses in these components typically range between 0.03 and 2.5 mm. Fatigue standards for Laser Powder Bed Fusion (PBF) assume thicknesses over 5 mm and consider Hot Isostatic Pressing (HIP) as the conventional heat treatment. This study investigates the dependence of High Cycle Fatigue (HCF) behavior on wall thickness and HIP for as-built additively manufactured thin-wall Inconel 718 alloys. To address this aim, high cycle fatigue tests were performed on specimens of seven different thicknesses (0.3 mm, 0.35 mm, 0.5 mm, 0.75 mm, 1 mm, 1.5 mm, and 2 mm) using a servohydraulic fatigue testing machine. Only half of the specimens underwent HIP, producing data for both HIP and no-HIP specimens. Analysis of the collected data showed that the specimens that underwent HIP had fatigue behavior similar to that of sheet metal specimens. In addition, the presence of porosity in no-HIP specimens makes them more sensitive to changes in stress. A clear decrease in fatigue strength with decreasing thickness was observed for all specimens.
Contributors: Saxena, Anushree (Author) / Bhate, Dhruv (Thesis advisor) / Liu, Yongming (Committee member) / Kwon, Beomjin (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Uncertainty quantification is critical for engineering design and analysis. Determining appropriate ways of dealing with uncertainties has been a constant challenge in engineering, and statistical methods provide a powerful aid for describing and understanding them. Among these, this work focuses on applying Bayesian methods and machine learning to uncertainty quantification and prognostics, with emphasis on the mechanical properties of materials, both static and fatigue. The work can be summarized as follows. First, maintaining the safety of vintage pipelines requires accurately estimating their strength; the objective is to predict the reliability-based strength using nondestructive multimodality surface information. Bayesian model averaging (BMA) is implemented for fusing multimodality nondestructive testing results for gas pipeline strength estimation, and several incremental improvements are proposed in the algorithm implementation. Second, the objective is to develop a statistical uncertainty quantification method for fatigue stress-life (S-N) curves with sparse data. Hierarchical Bayesian data augmentation (HBDA) is proposed to integrate hierarchical Bayesian modeling (HBM) and Bayesian data augmentation (BDA) to deal with sparse-data problems for fatigue S-N curves. The third objective is to develop a physics-guided machine learning model to overcome the limitations of parametric regression models and classical machine learning models for fatigue data analysis. A Probabilistic Physics-guided Neural Network (PPgNN) is proposed for probabilistic fatigue S-N curve estimation; this model is further developed for missing-data and arbitrary-output-distribution problems. Fourth, multi-fidelity modeling combines the advantages of low- and high-fidelity models to achieve the required accuracy at a reasonable computational cost.
The fourth objective is to develop a neural network approach for multi-fidelity modeling by learning the correlation between low- and high-fidelity models. Finally, conclusions are drawn, and future work is outlined based on the current study.
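The Bayesian model averaging step described in the first objective can be sketched in a few lines. Each modality is treated as a Gaussian model, and posterior model weights are obtained from the likelihood of a reference observation under a uniform model prior; every number below is an illustrative placeholder, not data from the dissertation:

```python
import numpy as np

# Toy Bayesian model averaging (BMA): fuse strength estimates from three
# hypothetical NDT modalities. All values are illustrative only.
means = np.array([52.0, 55.0, 50.0])   # per-modality strength estimates (ksi)
sigmas = np.array([2.0, 4.0, 3.0])     # per-modality uncertainties (ksi)
obs = 53.0                             # a reference measurement

# Gaussian likelihood of the observation under each modality's model
lik = np.exp(-0.5 * ((obs - means) / sigmas) ** 2) / (sigmas * np.sqrt(2.0 * np.pi))
weights = lik / lik.sum()              # posterior model probabilities (uniform prior)

bma_mean = float(np.sum(weights * means))   # fused BMA strength estimate
print(f"weights = {weights.round(3)}, fused strength = {bma_mean:.2f} ksi")
```

Modalities that explain the reference observation well receive higher weight, so the fused estimate leans toward the better-supported models rather than averaging them blindly.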
Contributors: Chen, Jie (Author) / Liu, Yongming (Thesis advisor) / Chattopadhyay, Aditi (Committee member) / Mignolet, Marc (Committee member) / Ren, Yi (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
This research summarizes the validation testing completed for the material model MAT213, currently implemented in the LS-DYNA finite element program. Testing was carried out using a carbon fiber composite material, T800-F3900. Stacked-ply tension and compression tests were performed on open-hole and full coupons. Comparisons of experimental and simulation results showed good agreement between the two for metrics including the stress-strain response and displacements. Strains and displacements in the loading direction were better predicted by the simulations than those in the transverse direction.

Double cantilever beam and end notched flexure tests were performed experimentally and through simulations to determine the delamination properties of the material at the interlaminar layers. Experimental results gave the mode I critical energy release rate in the range of 2.18–3.26 psi-in and the mode II critical energy release rate as 10.50 psi-in, both for the pre-cracked condition. Simulations were performed to calibrate the other cohesive zone parameters required for modeling.
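For context, the mode I critical energy release rate from a double cantilever beam test is commonly reduced using modified beam theory (as in ASTM D5528). The sketch below shows that reduction with entirely hypothetical specimen numbers; it is not the T800/F3900 data reported above:

```python
# Mode I critical energy release rate from a DCB test, modified beam theory:
#   G_IC = 3*P*delta / (2*b*(a + |Delta|))
# All values below are hypothetical placeholders.
P = 25.0        # critical load at delamination onset, lbf
delta = 0.15    # load-point displacement at onset, in
b = 1.0         # specimen width, in
a = 2.0         # delamination length, in
Delta = 0.1     # crack-length correction from compliance calibration, in

G_IC = 3.0 * P * delta / (2.0 * b * (a + Delta))   # psi-in
print(f"G_IC = {G_IC:.3f} psi-in")
```

For these placeholder values the result falls near the middle of the 2.18–3.26 psi-in range quoted above, which is only a coincidence of the chosen numbers.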

Samples of tested T800/F3900 coupons were processed and examined with scanning electron microscopy to determine and understand the underlying structure of the material. Tested coupons revealed damage and failure occurring at the micro scale for the composite material.
Contributors: Holt, Nathan T (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Hoover, Christian (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
An orthotropic elasto-plastic damage material model (OEPDMM) suitable for impact simulations has been developed through a joint research project funded by the Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA). Development of the model includes derivation of the theoretical details, implementation of the theory into LS-DYNA®, a commercially available nonlinear transient dynamic finite element code, as material model MAT 213, and verification and validation of the model. The material model is comprised of three major components: deformation, damage, and failure. The deformation sub-model is used to capture both linear and nonlinear deformations through a classical plasticity formulation. The damage sub-model is used to account for the reduction of elastic stiffness of the material as the degree of plastic strain is increased. Finally, the failure sub-model is used to predict the onset of loss of load carrying capacity in the material. OEPDMM is driven completely by tabulated experimental data obtained through physically meaningful material characterization tests, through high fidelity virtual tests, or both. The tabulated data includes stress-strain curves at different temperatures and strain rates to drive the deformation sub-model, damage parameter-total strain curves to drive the damage sub-model, and the failure sub-model can be driven by the data required for different failure theories implemented in the computer code. 
The work presented herein focuses on the experiments used to obtain the data necessary to drive as well as validate the material model, development and implementation of the damage model, verification of the deformation and damage models through single element (SE) and multi-element (ME) finite element simulations, development and implementation of experimental procedure for modeling delamination, and finally validation of the material model through low speed impact simulations and high speed impact simulations.
Contributors: Khaled, Bilal Marwan (Author) / Rajan, Subramaniam D. (Thesis advisor) / Mobasher, Barzin (Committee member) / Neithalath, Narayanan (Committee member) / Liu, Yongming (Committee member) / Goldberg, Robert K. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
There is a concerted effort in developing robust systems health monitoring/management (SHM) technology as a means to reduce the life cycle costs, improve availability, extend life, and minimize downtime of various platforms, including aerospace and civil infrastructure. The implementation of a robust SHM system requires a collaborative effort in a variety of areas, such as sensor development, damage detection and localization, physics-based models, and prognosis models for residual useful life (RUL) estimation. Damage localization and prediction is further complicated by geometric, material, loading, and environmental variabilities. Therefore, it is essential to develop robust SHM methodologies by taking into account such uncertainties. In this research, damage localization and RUL estimation of two different physical systems are addressed: (i) fatigue crack propagation in metallic materials under complex multiaxial loading and (ii) temporal scour prediction near bridge piers. With minor modifications, the methodologies developed can be applied to other systems.

Current practice in fatigue life prediction is based on either physics based modeling or data-driven methods, and is limited to predicting RUL for simple geometries under uniaxial loading conditions. In this research, crack initiation and propagation behavior under uniaxial and complex biaxial fatigue loading is addressed. The crack propagation behavior is studied by performing extensive material characterization and fatigue testing under in-plane biaxial loading, both in-phase and out-of-phase, with different biaxiality ratios. A hybrid prognosis model, which combines machine learning with physics based modeling, is developed to account for the uncertainties in crack propagation and fatigue life prediction due to variabilities in material microstructural characteristics, crack localization information and environmental changes. The methodology iteratively combines localization information with hybrid prognosis models using sequential Bayesian techniques. The results show significant improvements in the localization and prediction accuracy under varying temperature.

For civil infrastructure, especially bridges, pier scour is a major failure mechanism. Currently available techniques are developed from a design perspective and provide highly conservative scour estimates. In this research, a fully probabilistic scour prediction methodology is developed using machine learning to accurately predict scour in real-time under varying flow conditions.
Contributors: Neerukatti, Rajesh Kumar (Author) / Chattopadhyay, Aditi (Thesis advisor) / Jiang, Hanqing (Committee member) / Liu, Yongming (Committee member) / Rajadas, John (Committee member) / Yekani Fard, Masoud (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
There are many applications for polymer matrix composite materials in a variety of different industries, but designing and modeling with these materials remains a challenge due to the intricate architecture and damage modes. Multiscale modeling techniques of composite structures subjected to complex loadings are needed in order to address the scale-dependent behavior and failure. The rate dependency and nonlinearity of polymer matrix composite materials further complicates the modeling. Additionally, variability in the material constituents plays an important role in the material behavior and damage. The systematic consideration of uncertainties is as important as having the appropriate structural model, especially during model validation where the total error between physical observation and model prediction must be characterized. It is necessary to quantify the effects of uncertainties at every length scale in order to fully understand their impact on the structural response. Material variability may include variations in fiber volume fraction, fiber dimensions, fiber waviness, pure resin pockets, and void distributions. Therefore, a stochastic modeling framework with scale dependent constitutive laws and an appropriate failure theory is required to simulate the behavior and failure of polymer matrix composite structures subjected to complex loadings. Additionally, the variations in environmental conditions for aerospace applications and the effect of these conditions on the polymer matrix composite material need to be considered. The research presented in this dissertation provides the framework for stochastic multiscale modeling of composites and the characterization data needed to determine the effect of different environmental conditions on the material properties. The developed models extend sectional micromechanics techniques by incorporating 3D progressive damage theories and multiscale failure criteria. 
The mechanical testing of composites under various environmental conditions demonstrates the degrading effect these conditions have on the elastic and failure properties of the material. The methodologies presented in this research represent substantial progress toward understanding the failure and effect of variability for complex polymer matrix composites.
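A minimal illustration of the stochastic multiscale idea above is propagating constituent variability through a micromechanics relation. The sketch below samples the fiber volume fraction and pushes it through the rule of mixtures for the longitudinal modulus; the constituent moduli and scatter are generic carbon/epoxy placeholders, not properties from this dissertation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Rule of mixtures for the longitudinal modulus: E1 = Vf*Ef + (1 - Vf)*Em.
# Generic, illustrative constituent moduli (psi):
Ef = 40.0e6    # fiber modulus
Em = 0.5e6     # matrix modulus

n = 50_000
Vf = rng.normal(0.60, 0.02, n)          # fiber volume fraction with scatter
E1 = Vf * Ef + (1.0 - Vf) * Em          # sampled longitudinal moduli

print(f"mean E1 = {E1.mean():.3e} psi, CV = {E1.std() / E1.mean():.3%}")
```

The dissertation's framework extends this idea far beyond a single closed-form relation, incorporating 3D progressive damage theories and multiscale failure criteria, but the propagation of constituent-level randomness to a structural property follows the same pattern.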
Contributors: Johnston, Joel Philip (Author) / Chattopadhyay, Aditi (Thesis advisor) / Liu, Yongming (Committee member) / Jiang, Hanqing (Committee member) / Dai, Lenore (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
All structures suffer wear and tear because of impact, excessive load, fatigue, corrosion, etc., in addition to inherent defects introduced during manufacturing and exposure to various environmental effects. These structural degradations are often imperceptible, but they can severely affect the structural performance of a component, thereby severely decreasing its service life. Although previous studies of Structural Health Monitoring (SHM) have produced extensive knowledge of the parts of the SHM process, such as operational evaluation, data processing, and feature extraction, few studies have approached statistical model development from a systematic perspective.

The first part of this dissertation reviews ultrasonic guided wave-based structural health monitoring problems in terms of the characteristics of inverse scattering problems, such as ill-posedness and nonlinearity. The distinctive features and the selection of the analysis domain are investigated by analytically deriving the conditions for uniqueness of solutions to the ill-posed problem, and are validated experimentally.

Based on these distinctive features, a novel wave packet tracing (WPT) method for damage localization and size quantification is presented. This method involves creating time-space representations of the guided Lamb waves (GLWs), collected at a series of locations with a spatially dense distribution, along paths at pre-selected angles with respect to the direction normal to the direction of wave propagation. The fringe patterns due to wave dispersion, which depend on the phase velocity, are selected as the primary features that carry information regarding wave propagation and scattering.

The following part of this dissertation presents a novel damage-localization framework using a fully automated process. To construct the statistical model for autonomous damage localization, deep-learning techniques, such as the restricted Boltzmann machine and deep belief network, are trained and utilized to interpret nonlinear far-field wave patterns.

Next, a novel bridge scour estimation approach that combines the advantages of both empirical and data-driven models is developed. Two field datasets from the literature are used, and a Support Vector Machine (SVM), a machine learning algorithm, is used to fuse the field data samples and classify the data according to physical phenomena. The Fast Non-dominated Sorting Genetic Algorithm (NSGA-II) is applied to the model performance objective functions to search for Pareto optimal fronts.
Contributors: Kim, Inho (Author) / Chattopadhyay, Aditi (Thesis advisor) / Jiang, Hanqing (Committee member) / Liu, Yongming (Committee member) / Mignolet, Marc (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created: 2016