Matching Items (26)
Description
In convective heat transfer processes, the heat transfer rate generally increases with fluid velocity, which leads to complex flow patterns. However, numerically analyzing the complex transport process and conjugate heat transfer requires extensive time and computing resources. Recently, data-driven approaches have emerged as an alternative that solves physical problems in a computationally efficient manner without iteratively computing the governing physical equations, but research on data-driven approaches for convective heat transfer is still at a nascent stage. This study introduces data-driven approaches for modeling heat and mass convection phenomena. As the first step, this research explores a deep learning approach for modeling internal forced convection heat transfer problems. Conditional generative adversarial networks (cGAN) are trained to predict the solution from a graphical input describing the fluid channel geometry and initial flow conditions. A trained cGAN model rapidly approximates the flow temperature, Nusselt number (Nu), and friction factor (f) of a flow in a heated channel over Reynolds numbers (Re) ranging from 100 to 27750. The optimized cGAN model exhibits an accuracy of up to 97.6% when predicting the local distributions of Nu and f. Next, this research introduces a deep learning based surrogate model for three-dimensional (3D) transient mixed convection in a horizontal channel with a heated bottom surface. Conditional generative adversarial networks are trained to approximate the temperature maps at arbitrary channel locations and time steps. The model is developed for a mixed convection case at Re of 100, Rayleigh number of 3.9E6, and Richardson number of 88.8. The cGAN with a PatchGAN-based classifier without strided convolutions infers the temperature map with the best clarity and accuracy. Finally, this study investigates how machine learning can analyze mass transfer in 3D printed fluidic devices. A random forest algorithm is employed to classify flow images taken from semi-transparent 3D printed tubes. In particular, this work focuses on the laminar-turbulent transition process occurring in a 3D wavy tube and a straight tube visualized by dye injection. The machine learning model automatically classifies experimentally obtained flow images with an accuracy > 0.95.
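For illustration, a minimal PyTorch sketch of a pix2pix-style conditional GAN with a PatchGAN discriminator, the kind of model described above; the layer sizes, loss weights, and tensor shapes are illustrative assumptions, not the configuration used in the thesis.

```python
# Minimal conditional-GAN sketch (pix2pix-style): the generator maps a geometry /
# flow-condition image to a temperature field; the discriminator is a small
# PatchGAN that judges (condition, output) pairs patch by patch.
# Layer sizes, loss weights, and data shapes are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=2):  # condition + prediction stacked on channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),  # per-patch real/fake logits
        )
    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))

G, D = Generator(), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

cond = torch.rand(8, 1, 64, 64)            # channel geometry / flow-condition maps
target = torch.rand(8, 1, 64, 64) * 2 - 1  # normalized temperature fields

# Discriminator step: real (condition, target) pairs -> 1, generated pairs -> 0.
fake = G(cond).detach()
d_real, d_fake = D(cond, target), D(cond, fake)
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator, plus an L1 reconstruction term.
fake = G(cond)
d_out = D(cond, fake)
g_loss = bce(d_out, torch.ones_like(d_out)) + 100.0 * l1(fake, target)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```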
Contributors: Kang, Munku (Author) / Kwon, Beomjin (Thesis advisor) / Phelan, Patrick (Committee member) / Ren, Yi (Committee member) / Rykaczewski, Konrad (Committee member) / Sohn, SungMin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Uncertainty quantification is critical for engineering design and analysis, and determining appropriate ways of dealing with uncertainties has been a constant challenge. Statistical methods provide a powerful aid to describe and understand uncertainties. Among such methods, this work focuses on applying Bayesian methods and machine learning to uncertainty quantification and prognostics, with the mechanical properties of materials, both static and fatigue, as the main engineering application. The work can be summarized as follows. First, maintaining the safety of vintage pipelines requires accurately estimating their strength; the objective is to predict reliability-based strength using nondestructive multimodality surface information. Bayesian model averaging (BMA) is implemented to fuse multimodality non-destructive testing results for gas pipeline strength estimation, and several incremental improvements are proposed in the algorithm implementation. Second, the objective is to develop a statistical uncertainty quantification method for fatigue stress-life (S-N) curves with sparse data. Hierarchical Bayesian data augmentation (HBDA) is proposed to integrate hierarchical Bayesian modeling (HBM) and Bayesian data augmentation (BDA) to deal with sparse-data problems for fatigue S-N curves. The third objective is to develop a physics-guided machine learning model that overcomes limitations of parametric regression models and classical machine learning models for fatigue data analysis. A Probabilistic Physics-guided Neural Network (PPgNN) is proposed for probabilistic fatigue S-N curve estimation and is further developed for missing-data and arbitrary output distribution problems. Fourth, multi-fidelity modeling combines the advantages of low- and high-fidelity models to achieve the required accuracy at a reasonable computational cost; the objective here is to develop a neural network approach for multi-fidelity modeling by learning the correlation between low- and high-fidelity models. Finally, conclusions are drawn and future work is outlined based on the current study.
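For illustration, a minimal sketch of Bayesian model averaging for fusing strength predictions from several candidate models; the Gaussian predictive models, noise levels, and measurement data are illustrative assumptions, not the thesis's pipeline dataset or its algorithmic improvements.

```python
# Minimal Bayesian model averaging (BMA) sketch: several candidate models each give
# a Gaussian prediction of pipeline strength; calibration data weight the models by
# likelihood, and the fused estimate is the posterior-weighted mixture.
# The candidate models, noise levels, and data below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

y_obs = np.array([52.1, 49.8, 51.3])      # hypothetical strength measurements (ksi)

# Each "model" here is just a Gaussian predictive distribution (mean, std),
# standing in for predictions driven by different NDT modalities.
models = {"hardness": (50.0, 2.0), "chemistry": (53.0, 3.0), "ultrasound": (48.5, 1.5)}
prior = {name: 1.0 / len(models) for name in models}

# Posterior model weights: prior times the likelihood of the observations.
log_post = {name: np.log(prior[name]) + norm.logpdf(y_obs, mu, sd).sum()
            for name, (mu, sd) in models.items()}
m = max(log_post.values())
weights = {k: np.exp(v - m) for k, v in log_post.items()}
z = sum(weights.values())
weights = {k: w / z for k, w in weights.items()}

# BMA fused prediction: mixture mean and variance over the candidate models.
mean_bma = sum(weights[k] * models[k][0] for k in models)
var_bma = sum(weights[k] * (models[k][1] ** 2 + (models[k][0] - mean_bma) ** 2)
              for k in models)
print("weights:", {k: round(w, 3) for k, w in weights.items()})
print(f"BMA strength estimate: {mean_bma:.1f} +/- {np.sqrt(var_bma):.1f}")
```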
Contributors: Chen, Jie (Author) / Liu, Yongming (Thesis advisor) / Chattopadhyay, Aditi (Committee member) / Mignolet, Marc (Committee member) / Ren, Yi (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Ultra-fast 2D/3D material microstructure reconstruction and quantitative structure-property mapping are crucial components of integrated computational material engineering (ICME). Modeling random heterogeneous materials such as alloys, composites, polymers, porous media, and granular matter is particularly challenging because their material properties exhibit strong randomness and variation due to the hierarchical uncertainties associated with their complex microstructure at different length scales. Such uncertainties also exist in disordered hyperuniform systems, which, like liquids and glasses, are statistically isotropic and possess no Bragg peaks, yet suppress large-scale density fluctuations in a manner similar to perfect crystals. The unique hyperuniform long-range order in these systems endows them with nearly optimal transport, electronic, and mechanical properties. The concept of hyperuniformity was originally introduced for many-particle systems and has subsequently been generalized to heterogeneous materials such as porous media, composites, polymers, and biological tissues for unconventional property discovery. An explicit mixture random field (MRF) model is proposed to characterize and reconstruct multi-phase stochastic material properties and microstructure simultaneously; compared with other stochastic optimization approaches such as simulated annealing, no additional tuning step or iteration is needed. The proposed method has ultra-high computational efficiency and requires only minimal imaging and property input data. When microscale uncertainties are considered, material reliability analysis faces the challenge of high dimensionality; to deal with this "curse of dimensionality", efficient material reliability analysis methods are developed. The explicit hierarchical uncertainty quantification model and efficient material reliability solvers are then applied to reliability-based topology optimization to pursue lightweight designs under reliability constraints defined on structural mechanical responses. Altogether, efficient and accurate methods for high-resolution and hyperuniform microstructure reconstruction, high-dimensional material reliability analysis, and reliability-based topology optimization are developed. The proposed framework can be readily incorporated into ICME for probabilistic analysis, discovery of novel disordered hyperuniform materials, and material design and optimization.
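For illustration, a minimal sketch of a standard microstructure descriptor used in characterization and reconstruction, the two-point correlation function of a binary two-phase image computed via FFT; the synthetic microstructure is an illustrative assumption, and the thesis's mixture random field model is not reproduced here.

```python
# Minimal sketch: compute the two-point correlation function S2(r) of a binary
# (two-phase) microstructure image via FFT, a standard descriptor used when
# characterizing or reconstructing random heterogeneous media. The synthetic
# microstructure below is an illustrative assumption.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
n = 128
# Synthetic two-phase microstructure: smoothed noise thresholded to ~40% volume fraction.
field = gaussian_filter(rng.standard_normal((n, n)), sigma=6, mode="wrap")
phase = (field > np.quantile(field, 0.6)).astype(float)    # indicator of phase 1

# S2(dx, dy): probability that two points separated by (dx, dy) both fall in phase 1.
# With periodic boundaries this is the normalized autocorrelation, computed via FFT.
F = np.fft.fft2(phase)
s2_map = np.real(np.fft.ifft2(F * np.conj(F))) / phase.size

# Radially average to get S2(r); S2(0) equals the phase-1 volume fraction.
yy, xx = np.indices((n, n))
r = np.hypot(np.minimum(xx, n - xx), np.minimum(yy, n - yy)).astype(int)
s2_radial = np.bincount(r.ravel(), weights=s2_map.ravel()) / np.bincount(r.ravel())

print("volume fraction:", round(s2_map[0, 0], 3))
print("S2 at r = 0, 1, 5, 20:", np.round(s2_radial[[0, 1, 5, 20]], 3))
```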
Contributors: Gao, Yi (Author) / Liu, Yongming (Thesis advisor) / Jiao, Yang (Committee member) / Ren, Yi (Committee member) / Pan, Rong (Committee member) / Mignolet, Marc (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Least squares fitting in 3D is applied to produce higher-level geometric parameters that describe the optimum location of a line-profile through many nodal points. The points are derived from Finite Element Analysis (FEA) simulations of the elastic spring-back of features, both on stamped sheet metal components after they have been plastically deformed in a press and released, and on simple assemblies made from them. Although the traditional Moore-Penrose inverse was used to solve the superabundant linear equations, the formulation of these equations was distinct: it was based on virtual work and statics as applied to parallel-actuated robots, in order to allow for both more complex profiles and a change in profile size. The output, a small displacement torsor (SDT), describes the displacement of the profile from its nominal location. It may be regarded as a generalization of the slope and intercept parameters of a line that result from a Gauss-Markov regression fit of points in a plane. Additionally, minimum-zone magnitudes were computed that just capture the points along the profile. Finally, algorithms were created to compute simple parameters for the cross-sectional shapes of components from sprung-back data points, according to the protocol of simulations and benchmark experiments conducted by the metal forming community 30 years ago, although it was necessary to modify that protocol for some geometries that differed from the benchmark.
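For illustration, a minimal sketch of least-squares fitting of a small displacement torsor to normal deviations of nodal points from a nominal line profile, solved with the Moore-Penrose pseudoinverse; the profile, normals, and simulated deviations are illustrative assumptions, and the thesis's virtual-work formulation (which also accommodates a change in profile size) is not reproduced.

```python
# Minimal sketch: least-squares fit of a small displacement torsor (translation t,
# small rotation w) to normal deviations of nodal points from a nominal line profile,
# solved with the Moore-Penrose pseudoinverse of the superabundant system.
# Linearization: deviation_i ~ n_i . (t + w x p_i) = [n_i, p_i x n_i] . [t; w].
# The profile, normals, and simulated deviations are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Nominal profile: nodal points along a straight edge on the x-axis, with
# deviations measured along the +y normal direction.
x = np.linspace(0.0, 100.0, 50)
p = np.column_stack([x, np.zeros_like(x), np.zeros_like(x)])   # nominal points
n = np.tile([0.0, 1.0, 0.0], (len(x), 1))                      # unit normals

# Simulated sprung-back deviations from a "true" torsor plus measurement noise.
t_true = np.array([0.0, 0.25, 0.0])        # translation (mm)
w_true = np.array([0.0, 0.0, 0.002])       # small rotation about z (rad)
d = np.einsum("ij,ij->i", n, t_true + np.cross(w_true, p)) \
    + 0.02 * rng.standard_normal(len(x))

# Superabundant system J @ [t; w] = d; the pseudoinverse returns the least-squares,
# minimum-norm solution (components unobservable from this profile stay ~0).
J = np.hstack([n, np.cross(p, n)])
t_fit, w_fit = np.split(np.linalg.pinv(J) @ d, 2)
print("fitted translation:", np.round(t_fit, 4))   # recovers ty ~ 0.25
print("fitted rotation   :", np.round(w_fit, 5))   # recovers wz ~ 0.002
```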
Contributors: Sunkara, Sai Chandu (Author) / Davidson, Joseph (Thesis advisor) / Shah, Jami (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Bicycle stabilization has become a popular topic because of the bicycle's complex dynamic behavior and the large body of bicycle modeling research. Riding a bicycle requires accurately performing several tasks, such as balancing and navigation, which may be difficult for disabled people; their difficulties could be partially reduced by providing steering assistance. Many control techniques have been applied to stabilize these highly maneuverable and efficient machines, achieving interesting results but with limitations such as strict environmental requirements. This thesis expands on the work of Randlov and Alstrom, using reinforcement learning for bicycle self-stabilization with robotic steering. It applies the deep deterministic policy gradient algorithm, which can handle continuous action spaces, something the Q-learning technique cannot. The research involved training the algorithm in virtual environments followed by simulations to assess its results. Hardware testing was also conducted on Arizona State University's RISE lab Smart bicycle platform to evaluate its self-balancing performance. A detailed analysis of the bicycle trial runs is presented. Testing was validated by plotting the real-time states and actions, including the bicycle roll angle, collected during outdoor testing. Further improvements regarding model training and hardware testing are also presented.
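For illustration, a minimal PyTorch sketch of one deep deterministic policy gradient (DDPG) update step, the continuous-action method referenced above; the state/action dimensions, network sizes, and the random transition batch are illustrative assumptions, not the bicycle environment or hardware setup used in the thesis.

```python
# Minimal DDPG update-step sketch: critic regression toward a bootstrapped target,
# deterministic policy-gradient actor update, and Polyak averaging of target nets.
# Dimensions, widths, and the random batch are illustrative assumptions.
import copy
import torch
import torch.nn as nn

state_dim, action_dim = 5, 1      # e.g., roll, roll rate, steer angle, ... -> steering command
gamma, tau = 0.99, 0.005

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

# A random batch standing in for samples from a replay buffer.
s = torch.randn(64, state_dim)
a = torch.rand(64, action_dim) * 2 - 1
r = torch.randn(64, 1)
s2 = torch.randn(64, state_dim)
done = torch.zeros(64, 1)

# Critic update: regress Q(s, a) toward the bootstrapped target computed with the
# target actor/critic networks.
with torch.no_grad():
    q_target = r + gamma * (1 - done) * critic_tgt(torch.cat([s2, actor_tgt(s2)], dim=1))
critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), q_target)
opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

# Actor update: ascend the critic's value of the actor's own action.
actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

# Soft (Polyak) update of the target networks.
for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
    for pt, ps in zip(tgt.parameters(), src.parameters()):
        pt.data.mul_(1 - tau).add_(tau * ps.data)
```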
Contributors: Turakhia, Shubham (Author) / Zhang, Wenlong (Thesis advisor) / Yong, Sze Zheng (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Advanced driving assistance systems (ADAS) are among the latest automotive technologies for improving vehicle safety. An efficient way to ensure vehicle safety is to always keep the vehicle states within a predefined stability region. Hence, this thesis designs a model predictive control (MPC) scheme with non-overshooting constraints that always confines the vehicle states within a predefined lateral stability region. Terminal costs and terminal constraints are investigated to guarantee the stability and recursive feasibility of the proposed non-overshooting MPC. The proposed controller is first verified on numerical examples of linear and nonlinear systems. Finally, the non-overshooting MPC is applied to guarantee vehicle lateral stability for a cornering maneuver based on a nonlinear vehicle model. The simulation results are presented and discussed through co-simulation of CarSim® and MATLAB/Simulink.
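For illustration, a minimal sketch of a constrained linear MPC posed with cvxpy, in which box state constraints stand in for the lateral stability region and a terminal cost and terminal set are included; the double-integrator model, weights, horizon, and bounds are illustrative assumptions, and the thesis's non-overshooting constraint construction and nonlinear vehicle model are not reproduced.

```python
# Minimal constrained-MPC sketch: keep the state inside a box standing in for the
# stability region, with a terminal cost and a tight terminal set for recursive
# feasibility. Model, weights, horizon, and bounds are illustrative assumptions.
import numpy as np
import cvxpy as cp

dt, N = 0.1, 20
A = np.array([[1.0, dt], [0.0, 1.0]])     # simple double-integrator stand-in
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])
x_max = np.array([0.5, 2.0])              # "stability region" box on the states
u_max = 1.0
x0 = np.array([0.4, 0.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(x[:, k + 1]) <= x_max,
                    cp.abs(u[:, k]) <= u_max]
cost += cp.quad_form(x[:, N], 10 * Q)               # terminal cost
constraints += [cp.abs(x[:, N]) <= 0.05 * x_max]    # tight terminal set

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("status:", prob.status, " first input:", float(u.value[0, 0]))
```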
Contributors: Sudhakhar, Monish Dev (Author) / Chen, Yan (Thesis advisor) / Ren, Yi (Committee member) / Xu, Zhe (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The need for autonomous cars has never been more vital, and for a vehicle to be completely autonomous, multiple components must work together, one of which is the capacity to park at the end of a mission. This thesis project aims to design and implement an automated parking assist system (APAS). Traditional APAS may not be effective in constrained urban parking environments because of limited parking space dimensions. To overcome this challenge, the thesis proposes automated parallel parking for a four-wheel steering (4WS) vehicle. Benefiting from the maneuverability enabled by the 4WS system, the feasible initial parking area is vastly expanded compared with that of conventional two-wheel steering (2WS) vehicles. The expanded initial area is divided into four regions, for each of which a corresponding path is planned. In the proposed APAS, a suitable parking space is first identified using ultrasonic sensors mounted around the vehicle; then, depending on the vehicle's initial position, compact and smooth parallel parking paths are generated. An optimization function is built to obtain the smoothest parallel parking path, i.e., the one with the smallest steering angle change and the shortest length. By fully utilizing the 4WS system, the proposed path planning algorithm allows a larger initial parking area whose paths can be easily tracked by 4WS vehicles, making automatic parking in restricted spaces efficient. To verify the feasibility and effectiveness of the proposed APAS, a 4WS vehicle prototype is used for validation through both simulation and experimental results.
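For illustration, a minimal sketch of a kinematic 4WS bicycle model and the kind of path-smoothness cost (steering-angle change plus path length) described above; the wheelbase, speed, candidate steering profile, and cost weights are illustrative assumptions, not the thesis's planner.

```python
# Minimal sketch: kinematic 4WS bicycle model plus a path-smoothness cost
# (steering change + path length). All parameters are illustrative assumptions.
import numpy as np

lf, lr = 1.2, 1.4                      # distances from CG to front/rear axles (m)
dt = 0.05

def step(state, v, delta_f, delta_r):
    """One kinematic step of the 4WS bicycle model (CG reference point)."""
    x, y, psi = state
    beta = np.arctan((lf * np.tan(delta_r) + lr * np.tan(delta_f)) / (lf + lr))
    x += v * np.cos(psi + beta) * dt
    y += v * np.sin(psi + beta) * dt
    psi += v * np.cos(beta) * (np.tan(delta_f) - np.tan(delta_r)) / (lf + lr) * dt
    return np.array([x, y, psi])

def rollout(delta_f_seq, delta_r_seq, v=-1.0):
    """Roll out a candidate steering sequence into a path (reversing at 1 m/s)."""
    state, path = np.zeros(3), [np.zeros(3)]
    for df, dr in zip(delta_f_seq, delta_r_seq):
        state = step(state, v, df, dr)
        path.append(state)
    return np.array(path)

def smoothness_cost(delta_f_seq, path, w_steer=5.0, w_len=1.0):
    """Cost used to pick the 'smoothest' path: steering-angle change + path length."""
    steer_change = np.sum(np.abs(np.diff(delta_f_seq)))
    length = np.sum(np.linalg.norm(np.diff(path[:, :2], axis=0), axis=1))
    return w_steer * steer_change + w_len * length

# Candidate maneuver: front/rear wheels steered in opposite phase, which tightens
# the turning radius compared with front-wheel steering alone.
k = 60
delta_f = np.concatenate([np.full(k, 0.4), np.full(k, -0.4)])
delta_r = -delta_f                      # opposite-phase rear steering (4WS)
path_4ws = rollout(delta_f, delta_r)
path_2ws = rollout(delta_f, np.zeros_like(delta_f))

print("lateral offset, 4WS vs 2WS:",
      round(path_4ws[-1, 1], 2), "m vs", round(path_2ws[-1, 1], 2), "m")
print("cost of 4WS candidate:", round(smoothness_cost(delta_f, path_4ws), 2))
```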
Contributors: Gujarathi, Kaushik Kumar (Author) / Chen, Yan (Thesis advisor) / Yong, Sze Zheng (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Autonomous systems must inevitably interact with other surrounding systems; thus, algorithms for intention/behavior estimation are of great interest. This dissertation focuses on developing passive and active model discrimination algorithms (PMD and AMD) with applications to set-valued intention identification and fault detection for uncertain/bounded-error dynamical systems. PMD uses the obtained input-output data to invalidate models, while AMD designs an auxiliary input to assist the discrimination process. First, PMD algorithms are proposed for noisy switched nonlinear systems constrained by metric/signal temporal logic specifications, including systems with lossy data modeled by (m,k)-firm constraints. Specifically, optimization-based algorithms are introduced for analyzing the detectability/distinguishability of models and for ruling out models that are inconsistent with observations at run time. In addition, two AMD approaches are designed for noisy switched nonlinear models and piecewise affine inclusion models, which involve bilevel optimization with integer variables/constraints in the inner/lower level. The first approach solves the inner problem using mixed-integer parametric optimization, whose solution is included when solving the outer/higher-level problem, while the second approach moves the integer variables/constraints to the outer problem in a manner that retains feasibility and recasts the problem as a tractable mixed-integer linear program (MILP). Furthermore, AMD algorithms are proposed for noisy discrete-time affine time-invariant systems constrained by disjunctive and coupled safety constraints. To overcome issues associated with generalized semi-infinite constraints due to state-dependent input constraints and disjunctive safety constraints, several constraint reformulations are proposed to recast the AMD problems as tractable MILPs. Finally, partition-based AMD approaches are proposed for noisy discrete-time affine time-invariant models with model-independent parameters and output measurements that are revealed at run time. Specifically, algorithms with fixed and adaptive partitions are proposed, where the latter improves on the former by allowing the partitions to be optimized. By partitioning the operating region, the problem is solved offline, and partition trees are constructed that can be used as a 'look-up table' to determine the optimal input depending on the information revealed at run time.
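For illustration, a minimal sketch of the passive model-invalidation idea: a candidate bounded-error affine model is ruled out if no admissible noise sequence can reproduce the observed input-output data, checked here as a convex feasibility problem with cvxpy; the candidate models, noise bounds, and data are illustrative assumptions, and the switched/temporal-logic and MILP formulations of the dissertation are not reproduced.

```python
# Minimal passive model-invalidation sketch: for each candidate bounded-error
# affine model, check whether SOME state/noise trajectory within the bounds can
# explain the measured data; if not, the model (intention) is invalidated.
# Candidate models, noise bounds, and data are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
T = 15
B = np.array([[0.5], [1.0]])
w_max, v_max = 0.05, 0.05                     # process / measurement noise bounds

# Two candidate dynamics (e.g., "cautious" vs "aggressive" behavior); the data
# below are generated by model 1, so model 0 should be invalidated.
A_models = {0: np.array([[1.0, 0.1], [0.0, 0.9]]),
            1: np.array([[1.0, 0.1], [0.0, 0.6]])}
u = np.sin(0.5 * np.arange(T)).reshape(T, 1)  # persistently exciting input
x = np.zeros((T + 1, 2))
for k in range(T):
    x[k + 1] = A_models[1] @ x[k] + (B @ u[k]) + rng.uniform(-w_max, w_max, 2)
y = x + rng.uniform(-v_max, v_max, x.shape)   # full-state measurements

def is_consistent(A):
    """Does some state/noise trajectory within the bounds explain y under A?"""
    xs = cp.Variable((T + 1, 2))
    cons = [cp.abs(y - xs) <= v_max]
    for k in range(T):
        cons.append(cp.abs(xs[k + 1] - A @ xs[k] - (B @ u[k])) <= w_max)
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

for name, A in A_models.items():
    print(f"model {name}:", "consistent" if is_consistent(A) else "invalidated")
```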
Contributors: Niu, Ruochen (Author) / Yong, Sze Zheng (Thesis advisor) / Berman, Spring (Committee member) / Ren, Yi (Committee member) / Zhang, Wenlong (Committee member) / Zhuang, Houlong (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Information exists in various forms, and better utilization of the available information can benefit system awareness and response prediction. The focus of this dissertation is on the fusion of different types of information using the Bayesian-Entropy method. The Maximum Entropy method in information theory introduces a unique way of handling information in the form of constraints. The Bayesian-Entropy (BE) principle is proposed to integrate Bayes' theorem and the Maximum Entropy method to encode extra information. The posterior distribution in the Bayesian-Entropy method has a Bayesian part that handles point observation data and an Entropy part that encodes constraints, such as statistical moment information, range information, and general functional relations between variables. The proposed method is then extended to its network form, the Bayesian Entropy Network (BEN), which serves as a generalized information fusion tool for diagnostics, prognostics, and surrogate modeling.

The proposed BEN is demonstrated and validated with extensive engineering applications. It is first demonstrated for damage diagnostics of gas pipelines and metal/composite plates, where both empirical knowledge and physics models are integrated with direct observations to improve diagnostic accuracy and reduce the required training samples. Next, the BEN is demonstrated for prognostics and safety assessment in the air traffic management system. Various information types, such as human concepts, variable correlation functions, physical constraints, and tendency data, are fused in the BEN to enhance safety assessment and risk prediction in the National Airspace System (NAS). Following this, the BE principle is applied to surrogate modeling. Multiple algorithms based on different types of information encoding, such as Bayesian-Entropy Linear Regression (BELR), Bayesian-Entropy Semiparametric Gaussian Process (BESGP), and Bayesian-Entropy Gaussian Process (BEGP), are proposed and demonstrated on numerical toy problems and practical engineering analyses. The results show that the major benefits are superior prediction/extrapolation performance and a significant reduction in training samples through the use of additional physics/knowledge as constraints. The proposed BEN offers a systematic and rigorous way to incorporate various information sources, and several major conclusions are drawn from the study.
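For illustration, a minimal one-dimensional sketch of the Bayesian-Entropy idea: a Bayesian update handles point observations, and a maximum-entropy exponential tilt enforces an additional moment constraint; the prior, likelihood, data, and constraint are illustrative assumptions, not the BELR/BESGP/BEGP algorithms developed in the dissertation.

```python
# Minimal 1D Bayesian-Entropy sketch: Bayes' rule handles point data, then an
# exponential tilt exp(lam * g(x)) enforces an extra constraint E[g(X)] = c,
# with lam chosen numerically (minimum relative-entropy update).
# Prior, likelihood, data, and constraint are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

xg = np.linspace(-6, 6, 2001)                   # discretized support of X
dx = xg[1] - xg[0]
def integrate(f): return np.sum(f) * dx

prior = norm.pdf(xg, loc=0.0, scale=1.0)

# Bayesian part: condition on noisy point observations of X.
data = np.array([0.8, 1.1, 0.6])
likelihood = np.prod([norm.pdf(d, loc=xg, scale=1.0) for d in data], axis=0)
post_bayes = prior * likelihood
post_bayes /= integrate(post_bayes)

# Entropy part: tilt the Bayesian posterior and choose lam so E[g(X)] = c holds.
g, c = xg, 0.5                                  # constraint on the mean of X

def constraint_gap(lam):
    tilted = post_bayes * np.exp(lam * g)
    tilted /= integrate(tilted)
    return integrate(g * tilted) - c

lam = brentq(constraint_gap, -20.0, 20.0)
post_be = post_bayes * np.exp(lam * g)
post_be /= integrate(post_be)

print("Bayes-only posterior mean:", round(integrate(xg * post_bayes), 3))
print("BE posterior mean (target 0.5):", round(integrate(xg * post_be), 3))
```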
Contributors: Wang, Yuhao (Author) / Liu, Yongming (Thesis advisor) / Chattopadhyay, Aditi (Committee member) / Mignolet, Marc (Committee member) / Yan, Hao (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
Almost all mechanical and electro-mechanical products are assemblies of multiple parts, whether because of requirements for relative motion, the use of different materials, or differences in shape and size; thus, assembly design is the very crux of engineering design. In addition to the nominal design of an assembly, there is also tolerance design, which determines the allowable manufacturing variations that ensure proper functioning and assemblability. Most flexible assemblies are made by stamping sheet metal, a process that plastically deforms sheet metal using dies. Sub-assemblies of two or more components are made with either spot-welding or riveting operations, and the various sub-assemblies are finally joined, using spot welds or rivets, to create the desired assembly. When two components are brought together for assembly, they do not align exactly, which causes gaps and irregularities; as multiple parts are stacked, the errors accumulate further. Stamping also leads to variable deformations due to residual stresses and elastic recovery from the plastic strain of the metal, known as the 'spring-back' effect. When multiple components are stacked or assembled using spot welds, variations in input parameters, such as sheet metal thickness and the number and order of spot welds, cause variations in the exact shape of the final assembly in its free state. It is therefore essential to understand the influence of these input parameters on the geometric variations of both the individual components and the assembly created from them. Design of Experiments is used to generate a principal-effect study that evaluates the influence of the input parameters on the output parameters. The scope of this study is to quantify the geometric variations of a flexible assembly and evaluate their dependence on specific input variables. The three input variables considered are the thickness of the sheet material, the number of spot welds used, and the spot-welding order used to create the assembly. To quantify the geometric variations, sprung-back nodal points along lines, circular arcs, combinations of these, and a specific profile are reduced to metrologically simulated features.
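For illustration, a minimal sketch of the principal-effect (main-effect) calculation for a two-level full-factorial design over the three input variables named above; the response values are made-up stand-ins for the FEA spring-back outputs, purely to show the calculation.

```python
# Minimal principal-effect (main-effect) sketch: a 2^3 full-factorial design over
# sheet thickness, number of spot welds, and welding order; each effect is the
# difference of mean responses at the high and low levels of that factor.
# The response values are made-up stand-ins for the FEA spring-back outputs.
import itertools
import numpy as np

factors = ["thickness", "num_welds", "weld_order"]
design = np.array(list(itertools.product([-1, 1], repeat=3)))   # 2^3 full factorial

# Hypothetical response: e.g., a spring-back gap metric (mm) for each of the
# eight runs, listed in the same order as `design`.
response = np.array([0.82, 0.61, 0.74, 0.55, 0.91, 0.70, 0.79, 0.62])

for j, name in enumerate(factors):
    high = response[design[:, j] == 1].mean()
    low = response[design[:, j] == -1].mean()
    print(f"main effect of {name}: {high - low:+.3f} mm")
```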
Contributors: Joshi, Abhishek (Author) / Ren, Yi (Thesis advisor) / Davidson, Joseph (Committee member) / Shah, Jami (Committee member) / Arizona State University (Publisher)
Created: 2020