Matching Items (41)
Description
In convective heat transfer processes, the heat transfer rate generally increases with fluid velocity, which leads to complex flow patterns. However, numerically analyzing the complex transport processes and conjugate heat transfer requires extensive time and computing resources. Recently, data-driven approaches have emerged as an alternative for solving physical problems in a computationally efficient manner without the iterative computation of the governing equations. However, research on data-driven approaches for convective heat transfer is still at a nascent stage. This study aims to introduce data-driven approaches for modeling heat and mass convection phenomena. As the first step, this research explores a deep learning approach for modeling internal forced convection heat transfer problems. Conditional generative adversarial networks (cGAN) are trained to predict the solution from a graphical input describing the fluid channel geometry and initial flow conditions. A trained cGAN model rapidly approximates the flow temperature, Nusselt number (Nu), and friction factor (f) of a flow in a heated channel over Reynolds numbers (Re) ranging from 100 to 27,750. The optimized cGAN model exhibits an accuracy of up to 97.6% when predicting the local distributions of Nu and f. Next, this research introduces a deep learning-based surrogate model for three-dimensional (3D) transient mixed convection in a horizontal channel with a heated bottom surface. Conditional generative adversarial networks are trained to approximate the temperature maps at arbitrary channel locations and time steps. The model is developed for a mixed convection case at Re = 100, a Rayleigh number of 3.9E6, and a Richardson number of 88.8. The cGAN with a PatchGAN-based classifier without strided convolutions infers the temperature map with the best clarity and accuracy. Finally, this study investigates how machine learning can analyze mass transfer in 3D-printed fluidic devices. A random forest algorithm is employed to classify flow images taken from semi-transparent 3D-printed tubes. In particular, this work focuses on the laminar-turbulent transition process occurring in a 3D wavy tube and a straight tube, visualized by dye injection. The machine learning model automatically classifies the experimentally obtained flow images with an accuracy above 0.95.
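
As an illustration of the image-to-image cGAN surrogate described above, the following is a minimal PyTorch sketch of a pix2pix-style training step: a generator maps a channel-geometry/flow-condition image to a temperature map, and a PatchGAN-style discriminator scores (condition, output) pairs. The architectures, channel counts, and the L1 weight are illustrative assumptions, not the dissertation's optimized model.

```python
# Minimal image-to-image cGAN sketch (pix2pix-style). Layer sizes and the
# l1_weight are illustrative placeholders only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=2):  # condition + prediction stacked on channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # per-patch real/fake logits
        )
    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))

def train_step(G, D, opt_g, opt_d, cond, target, l1_weight=100.0):
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    # Discriminator update: real (condition, target) pairs vs. generated pairs.
    fake = G(cond).detach()
    d_real, d_fake = D(cond, target), D(cond, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator update: fool D while staying close to the simulated target field.
    fake = G(cond)
    d_fake = D(cond, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, target)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

In this setting the conditioning image would encode the channel geometry and flow condition, and the target would be the numerically computed temperature field.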
Contributors: Kang, Munku (Author) / Kwon, Beomjin (Thesis advisor) / Phelan, Patrick (Committee member) / Ren, Yi (Committee member) / Rykaczewski, Konrad (Committee member) / Sohn, SungMin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Uncertainty quantification is critical for engineering design and analysis. Determining appropriate ways of dealing with uncertainties has been a constant challenge in engineering. Statistical methods provide a powerful aid to describe and understand uncertainties. Among these methods, this work focuses on applying Bayesian methods and machine learning to uncertainty quantification and prognostics, with the mechanical properties of materials, both static and fatigue, as the main engineering application. The work can be summarized as follows. First, maintaining the safety of vintage pipelines requires accurate strength estimation; the objective is to predict the reliability-based strength using nondestructive multimodality surface information. Bayesian model averaging (BMA) is implemented to fuse multimodality non-destructive testing results for gas pipeline strength estimation, and several incremental improvements are proposed in the algorithm implementation. Second, the objective is to develop a statistical uncertainty quantification method for fatigue stress-life (S-N) curves with sparse data. Hierarchical Bayesian data augmentation (HBDA) is proposed to integrate hierarchical Bayesian modeling (HBM) and Bayesian data augmentation (BDA) to address sparse-data problems for fatigue S-N curves. The third objective is to develop a physics-guided machine learning model that overcomes the limitations of parametric regression models and classical machine learning models for fatigue data analysis. A Probabilistic Physics-guided Neural Network (PPgNN) is proposed for probabilistic fatigue S-N curve estimation and is further developed for missing-data and arbitrary output distribution problems. Fourth, multi-fidelity modeling combines the advantages of low- and high-fidelity models to achieve a required accuracy at a reasonable computational cost; the fourth objective is to develop a neural network approach for multi-fidelity modeling by learning the correlation between low- and high-fidelity models. Finally, conclusions are drawn and future work is outlined based on the current study.
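
A minimal sketch of the Bayesian model averaging step described in the first item is given below: candidate NDT-based strength models are weighted by a BIC approximation to their marginal likelihoods and their predictions fused. The models, likelihood values, and data sizes are hypothetical placeholders; the dissertation's implementation and its incremental improvements are not reproduced here.

```python
# Illustrative BMA fusion of strength estimates from several NDT models.
# Weights are proportional to exp(-BIC/2), a common marginal-likelihood
# approximation; all numbers below are hypothetical.
import numpy as np

def bic(log_likelihood, n_params, n_data):
    return n_params * np.log(n_data) - 2.0 * log_likelihood

def bma_fuse(predictions, log_likelihoods, n_params, n_data):
    """predictions: per-model strength estimates; returns BMA estimate and weights."""
    bics = np.array([bic(ll, k, n_data) for ll, k in zip(log_likelihoods, n_params)])
    w = np.exp(-0.5 * (bics - bics.min()))   # subtract min for numerical stability
    w /= w.sum()
    return float(np.dot(w, predictions)), w

# Hypothetical example: three NDT-based strength models for one pipeline joint.
preds = np.array([52.3, 49.8, 51.1])         # per-model strength estimates (ksi)
logL  = [-120.4, -118.9, -119.7]             # fitted log-likelihoods on NDT data
k     = [3, 5, 4]                            # parameters per model
estimate, weights = bma_fuse(preds, logL, k, n_data=40)
```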
Contributors: Chen, Jie (Author) / Liu, Yongming (Thesis advisor) / Chattopadhyay, Aditi (Committee member) / Mignolet, Marc (Committee member) / Ren, Yi (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Deep neural network-based methods have been proven to achieve outstanding performance on object detection and classification tasks. Deep neural networks follow the "deeper model with deeper confidence" belief to gain higher recognition accuracy. However, reducing these networks' computational cost remains a challenge, which impedes their deployment on embedded devices. For instance, the intersection management of Connected Autonomous Vehicles (CAVs) requires running computationally intensive object recognition algorithms on low-power traffic cameras. This dissertation studies the effect of a dynamic hardware and software approach to address this issue. Characteristics of real-world applications can facilitate this dynamic adjustment and reduce the computation. Specifically, this dissertation starts with a dynamic hardware approach that adjusts itself based on the difficulty of the input and extracts deeper features only when needed. Next, an adaptive learning mechanism is studied that uses features extracted from previous inputs to improve system performance. Finally, a system (ARGOS) is proposed and evaluated that can run on embedded systems while maintaining the desired accuracy. This system adopts shallow features at inference time but can switch to deep features when higher accuracy is required. To improve performance, ARGOS distills temporal knowledge from deep features into the shallow system. Moreover, ARGOS further reduces computation by focusing on regions of interest. Response time and mean average precision are adopted as the metrics to evaluate the proposed ARGOS system.
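
A minimal PyTorch sketch of the shallow-first, deep-on-demand inference policy described above follows. The shallow_model, deep_model, and confidence threshold are placeholders; the actual ARGOS system additionally distills temporal knowledge into the shallow model and restricts computation to regions of interest, which this sketch omits.

```python
# Run a lightweight model first and invoke the deeper model only when the
# shallow prediction is not confident enough. Models/threshold are placeholders.
import torch

@torch.no_grad()
def dynamic_inference(shallow_model, deep_model, frame, conf_threshold=0.8):
    """frame: a single preprocessed input tensor with batch size 1."""
    shallow_model.eval(); deep_model.eval()
    probs = torch.softmax(shallow_model(frame), dim=-1)
    conf, pred = probs.max(dim=-1)
    if conf.item() >= conf_threshold:
        return pred, "shallow"          # cheap path is good enough
    logits = deep_model(frame)          # fall back to the expensive model
    return logits.argmax(dim=-1), "deep"
```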
Contributors: Farhadi, Mohammad (Author) / Yang, Yezhou (Thesis advisor) / Vrudhula, Sarma (Committee member) / Wu, Carole-Jean (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Ultra-fast 2D/3D material microstructure reconstruction and quantitative structure-property mapping are crucial components of integrated computational materials engineering (ICME). They are particularly challenging for modeling random heterogeneous materials such as alloys, composites, polymers, porous media, and granular matter, which exhibit strong randomness and variation of material properties due to the hierarchical uncertainties associated with their complex microstructure at different length scales. Such uncertainties also exist in disordered hyperuniform systems, which are statistically isotropic and possess no Bragg peaks, like liquids and glasses, yet suppress large-scale density fluctuations in a manner similar to perfect crystals. The unique hyperuniform long-range order in these systems endows them with nearly optimal transport, electronic, and mechanical properties. The concept of hyperuniformity was originally introduced for many-particle systems and has subsequently been generalized to heterogeneous materials such as porous media, composites, polymers, and biological tissues for unconventional property discovery. An explicit mixture random field (MRF) model is proposed to characterize and reconstruct multi-phase stochastic material properties and microstructure simultaneously; no additional tuning step or iteration is needed, in contrast to stochastic optimization approaches such as simulated annealing. The proposed method is shown to have ultra-high computational efficiency and to require only minimal imaging and property input data. When microscale uncertainties are considered, material reliability analysis faces the challenge of high dimensionality. To deal with this "curse of dimensionality," efficient material reliability analysis methods are developed. The explicit hierarchical uncertainty quantification model and efficient reliability solvers are then applied to reliability-based topology optimization to pursue lightweight designs under reliability constraints defined on structural mechanical responses. In summary, efficient and accurate methods are developed for high-resolution and hyperuniform microstructure reconstruction, high-dimensional material reliability analysis, and reliability-based topology optimization. The proposed framework can be readily incorporated into ICME for probabilistic analysis, discovery of novel disordered hyperuniform materials, and material design and optimization.
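
For illustration, the sketch below generates a two-phase random microstructure by level-cutting a correlated Gaussian random field (filtered white noise). It conveys the general random-field idea only and is not the explicit mixture random field (MRF) model or the hyperuniformity analysis developed in the dissertation; the grid size, correlation length, and volume fraction are arbitrary assumptions.

```python
# Level-cut Gaussian random field as a toy two-phase microstructure generator.
import numpy as np

def gaussian_random_field(n=256, corr_length=8.0, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    # Gaussian spectral filter imposes a finite correlation length.
    filt = np.exp(-0.5 * (corr_length ** 2) * ((2 * np.pi * kx) ** 2 + (2 * np.pi * ky) ** 2))
    field = np.fft.ifft2(np.fft.fft2(noise) * filt).real
    return (field - field.mean()) / field.std()

def level_cut(field, volume_fraction=0.3):
    # Threshold chosen so the "1" phase occupies the target volume fraction.
    threshold = np.quantile(field, 1.0 - volume_fraction)
    return (field >= threshold).astype(np.uint8)

microstructure = level_cut(gaussian_random_field(), volume_fraction=0.3)
```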
Contributors: Gao, Yi (Author) / Liu, Yongming (Thesis advisor) / Jiao, Yang (Committee member) / Ren, Yi (Committee member) / Pan, Rong (Committee member) / Mignolet, Marc (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Generative models in various domains such as images, speech, and video have been actively developed over the last decades, and recent deep generative models are now capable of synthesizing multimedia content that is difficult to distinguish from authentic content. Such capabilities raise concerns about malicious impersonation, intellectual property (IP) theft, and copyright infringement. One way to address these threats is to embed attributable watermarks in synthesized content so that users can identify the user-end models from which the content was generated. This thesis investigates a solution for model attribution, i.e., the classification of synthetic content by its source model via watermarks embedded in the content. Existing studies showed the feasibility of model attribution in the image domain and the tradeoff between attribution accuracy and generation quality under various adversarial attacks, but not in the speech domain. This work discusses the feasibility of model attribution in the speech domain and algorithmic improvements for generating user-end speech models that empirically achieve high attribution accuracy while maintaining high generation quality. Lastly, several experiments are conducted to show the tradeoff between attributability and generation quality under a variety of attacks on generated speech signals that attempt to remove the watermarks.
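
The following toy sketch illustrates the idea of key-based watermarking for model attribution: each user-end generator adds a small pseudo-random perturbation tied to its key, and a signal is attributed to the candidate key with the highest normalized correlation. This additive scheme, the keys, and the signal length are hypothetical and stand in for, rather than reproduce, the method studied in the thesis.

```python
# Toy additive watermarking and correlation-based attribution for 1-D signals.
import numpy as np

def make_watermark(key, length, strength=0.01):
    rng = np.random.default_rng(key)
    w = rng.standard_normal(length)
    return strength * w / np.linalg.norm(w)

def embed(signal, key, strength=0.01):
    return signal + make_watermark(key, signal.shape[0], strength)

def attribute(signal, candidate_keys, strength=0.01):
    # Attribute to the key whose watermark correlates most with the signal.
    scores = {}
    for key in candidate_keys:
        w = make_watermark(key, signal.shape[0], strength)
        scores[key] = abs(np.dot(signal, w)) / (np.linalg.norm(signal) * np.linalg.norm(w))
    return max(scores, key=scores.get), scores

# Hypothetical usage: a 1-second, 16 kHz signal generated by the model with key 42.
x = np.random.default_rng(7).standard_normal(16000)
predicted_key, _ = attribute(embed(x, key=42), candidate_keys=[7, 21, 42, 99])
```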
Contributors: Cho, Yongbaek (Author) / Yang, Yezhou (Thesis advisor) / Ren, Yi (Committee member) / Trieu, Ni (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
Least squares fitting in 3D is applied to produce higher-level geometric parameters that describe the optimum location of a line-profile through many nodal points. The points are derived from finite element analysis (FEA) simulations of the elastic spring-back of features on stamped sheet metal components, after they have been plastically deformed in a press and released, and on simple assemblies made from them. Although the traditional Moore-Penrose inverse was used to solve the superabundant linear equations, the formulation of these equations was distinct: it is based on virtual work and statics applied to parallel-actuated robots, in order to allow for both more complex profiles and a change in profile size. The output, a small displacement torsor (SDT), is used to describe the displacement of the profile from its nominal location. It may be regarded as a generalization of the slope and intercept parameters of a line that result from a Gauss-Markov regression fit of points in a plane. Additionally, minimum-zone magnitudes were computed that just capture the points along the profile. Finally, algorithms were created to compute simple parameters for the cross-sectional shapes of components from sprung-back data points, according to the protocol of simulations and benchmark experiments conducted by the metal forming community 30 years ago, although it was necessary to modify that protocol for some geometries that differed from the benchmark.
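
A minimal sketch of the underlying least-squares idea is shown below: deviations of nodal points from a nominal profile are fitted to a small displacement torsor using the Moore-Penrose pseudoinverse, with each deviation modeled as d_i ≈ n_i · (t + w × p_i). This is the classical rigid-displacement formulation; the thesis's virtual-work formulation based on parallel-actuated robots, which also accommodates profile size change, is not reproduced. The test points, normals, and displacements are synthetic.

```python
# Least-squares SDT fit: solve A [t; w] = d with the Moore-Penrose pseudoinverse.
import numpy as np

def fit_sdt(points, normals, deviations):
    """points, normals: (N, 3); deviations: (N,). Returns translation t and rotation w."""
    # Row i is [n_i, p_i x n_i], so A @ [t; w] = n_i.t + (p_i x n_i).w = n_i.(t + w x p_i).
    A = np.hstack([normals, np.cross(points, normals)])
    x, *_ = np.linalg.lstsq(A, deviations, rcond=None)   # pseudoinverse solution
    return x[:3], x[3:]

# Synthetic check: a profile displaced by a known small translation and rotation.
rng = np.random.default_rng(0)
p = rng.uniform(-1, 1, (200, 3))
n = rng.standard_normal((200, 3)); n /= np.linalg.norm(n, axis=1, keepdims=True)
t_true, w_true = np.array([1e-3, -2e-3, 0.5e-3]), np.array([2e-4, 1e-4, -3e-4])
d = np.einsum("ij,ij->i", n, t_true + np.cross(w_true, p))
t_fit, w_fit = fit_sdt(p, n, d)   # recovers t_true and w_true
```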
Contributors: Sunkara, Sai Chandu (Author) / Davidson, Joseph (Thesis advisor) / Shah, Jami (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Bicycle stabilization has become a popular topic because of the bicycle's complex dynamic behavior and the large body of bicycle modeling research. Riding a bicycle requires accurately performing several tasks, such as balancing and navigation, which may be difficult for disabled people; their difficulties could be partially reduced by providing steering assistance. For stabilization of these highly maneuverable and efficient machines, many control techniques have been applied, achieving interesting results but with limitations that include strict environmental requirements. This thesis expands on the work of Randlov and Alstrom, using reinforcement learning for bicycle self-stabilization with robotic steering. It applies the deep deterministic policy gradient (DDPG) algorithm, which can handle continuous action spaces, something that is not possible with the Q-learning technique. The research involved training the algorithm in virtual environments, followed by simulations to assess the results. Furthermore, hardware testing was conducted on Arizona State University's RISE lab Smart bicycle platform to evaluate its self-balancing performance. A detailed analysis of the bicycle trial runs is presented. Testing was validated by plotting the real-time states and actions collected during outdoor testing, including the roll angle of the bicycle. Further improvements regarding model training and hardware testing are also discussed.
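
A compact PyTorch sketch of the deep deterministic policy gradient (DDPG) update used for such continuous-action control is given below: a critic is regressed toward a bootstrapped target computed with target networks, the actor ascends the learned Q-function, and target networks are updated by Polyak averaging. The network sizes, hyperparameters, and replay-batch interface are illustrative assumptions, not those of the thesis platform.

```python
# Minimal DDPG update step (actor-critic with target networks).
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, squash=False):
        super().__init__()
        layers = [nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)]
        if squash:
            layers.append(nn.Tanh())        # bound continuous actions to [-1, 1]
        self.net = nn.Sequential(*layers)
    def forward(self, *xs):
        return self.net(torch.cat(xs, dim=-1))

def ddpg_update(actor, critic, actor_tgt, critic_tgt, opt_a, opt_c,
                batch, gamma=0.99, tau=0.005):
    s, a, r, s2, done = batch                        # tensors from a replay buffer
    with torch.no_grad():
        q_target = r + gamma * (1 - done) * critic_tgt(s2, actor_tgt(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(s, actor(s)).mean()         # ascend the learned Q w.r.t. policy
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)   # Polyak averaging of targets
```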
Contributors: Turakhia, Shubham (Author) / Zhang, Wenlong (Thesis advisor) / Yong, Sze Zheng (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2020
Description
National Airspace Systems (NAS) are complex cyber-physical systems that require swift air traffic management (ATM) to ensure flight safety and efficiency. With the surging demand for air travel and the increasing intricacy of aviation systems, the need for advanced technologies to support air traffic management and air traffic control (ATC) services has become more crucial than ever. Data-driven models, or artificial intelligence (AI), have been conceptually investigated by various parties and have shown immense potential, especially when provided with a vast volume of real-world data. These data include traffic information, weather contours, operational reports, terrain information, flight procedures, and aviation regulations. Data-driven models learn from historical experience and observations and provide expeditious recommendations and decision support for various operational tasks, directly contributing to the digital transformation of aviation. This dissertation reports several research studies covering different aspects of air traffic management and ATC services using data-driven modeling, validated with real-world big data (flight tracks, flight events, convective weather, workload probes). These studies encompass a range of topics, including trajectory recommendations, weather studies, landing operations, and aviation human factors. Specifically, the topics explored are (i) trajectory recommendations under weather conditions, which examine the impact of convective weather on last on-file flight plans and provide calibrated trajectories based on convective weather; (ii) multi-aircraft trajectory prediction, which studies the intent of multiple mid-air aircraft in near-terminal airspace and provides trajectory predictions; (iii) flight scheduling operations, which involve probabilistic machine-learning-enhanced optimization algorithms for robust and efficient aircraft landing sequencing; and (iv) aviation human factors, which involve predicting air traffic controller workload levels from flight traffic data with a conformalized graph neural network. The uncertainties associated with these studies are given special attention and are addressed through Bayesian/probabilistic machine learning. Finally, discussions on high-level AI-enabled ATM research directions are provided, with the aim of extending the proposed studies in the future. This dissertation demonstrates that data-driven modeling has great potential for aviation digital twins, revolutionizing the aviation decision-making process and enhancing the safety and efficiency of ATM. Moreover, these research directions are not merely add-ons to existing aviation practices but also contribute to the future of transportation, particularly the development of autonomous systems.
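
As an illustration of the conformalized prediction mentioned in item (iv), the sketch below shows split conformal prediction for a workload-level classifier: a score threshold is calibrated on held-out data so that the resulting prediction sets cover the true label with probability at least 1 - alpha. The classifier interface, miscoverage level, and data are placeholders; the dissertation's graph neural network is not reproduced.

```python
# Split conformal prediction for classification (coverage >= 1 - alpha).
import numpy as np

def calibrate(probs_cal, labels_cal, alpha=0.1):
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - probs_cal[np.arange(len(labels_cal)), labels_cal]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def predict_set(probs_test, q):
    # Keep every class whose nonconformity score falls below the calibrated threshold.
    return [np.where(1.0 - p <= q)[0] for p in probs_test]

# Usage with any fitted classifier exposing class probabilities:
# q = calibrate(model_probs_on_calibration_set, calibration_labels, alpha=0.1)
# sets = predict_set(model_probs_on_test_set, q)
```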
Contributors: Pang, Yutian (Author) / Liu, Yongming (Thesis advisor) / Yan, Hao (Committee member) / Zhuang, Houlong (Committee member) / Marvi, Hamid (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Advanced driving assistance systems (ADAS) are among the latest automotive technologies for improving vehicle safety. An effective way to ensure vehicle safety is to keep the vehicle states within a predefined stability region at all times. Hence, this thesis aims at designing a model predictive control (MPC) scheme with non-overshooting constraints that always confines the vehicle states to a predefined lateral stability region. To address the feasibility and stability of the MPC, a terminal cost and terminal constraints are investigated to guarantee the stability and recursive feasibility of the proposed non-overshooting MPC. The proposed non-overshooting MPC is first verified on numerical examples of linear and nonlinear systems. Finally, it is applied to guarantee vehicle lateral stability, based on a nonlinear vehicle model, for a cornering maneuver. The simulation results are presented and discussed through co-simulation of CarSim® and MATLAB/Simulink.
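
A minimal sketch of a constrained linear MPC with a terminal cost is shown below (using cvxpy), illustrating how states can be confined to a prescribed region over the prediction horizon. The double-integrator model, bounds, and weights are illustrative placeholders; the thesis's nonlinear vehicle model and its specific non-overshooting constraint and terminal-set design are not reproduced.

```python
# Linear MPC with hard state/input constraints and a terminal cost.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # discrete-time double integrator (placeholder model)
B = np.array([[0.005], [0.1]])
Q, R, P = np.diag([10.0, 1.0]), np.array([[0.1]]), np.diag([20.0, 2.0])
N, x_max, u_max = 20, np.array([1.0, 0.5]), 2.0

def mpc_step(x0):
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(x[:, k + 1]) <= x_max,      # keep states inside the region
                 cp.abs(u[:, k]) <= u_max]
    cost += cp.quad_form(x[:, N], P)                # terminal cost toward stability
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[:, 0]                            # apply only the first input

u0 = mpc_step(np.array([0.8, 0.0]))
```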
Contributors: Sudhakhar, Monish Dev (Author) / Chen, Yan (Thesis advisor) / Ren, Yi (Committee member) / Xu, Zhe (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
The increasing availability of data and advances in computation have spurred the development of data-driven approaches for modeling complex dynamical systems. These approaches are based on the idea that the underlying structure of a complex system can be discovered from data using mathematical and computational techniques. They also show promise for addressing the challenges of modeling high-dimensional, nonlinear systems with limited data. In this research exposition, the state of the art in data-driven approaches for modeling complex dynamical systems is surveyed in a systematic way. First, the general formulation of data-driven modeling of dynamical systems is discussed. Then several representative methods in feature engineering and system identification/prediction are reviewed, including recent advances and key challenges.
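
As one representative system identification method of the kind surveyed, the sketch below shows sparse regression (SINDy-style) identification: a library of candidate functions of the state is built and a sequentially thresholded least-squares problem is solved for the governing equations. The candidate library, threshold, and toy oscillator data are illustrative choices.

```python
# Sparse regression identification of governing equations (SINDy-style).
import numpy as np

def library(X):
    # Candidate terms for a 2-state system: constant, x, y, x^2, xy, y^2.
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def stlsq(Theta, dXdt, threshold=0.05, iters=10):
    Xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(iters):
        Xi[np.abs(Xi) < threshold] = 0.0            # prune small coefficients
        for j in range(dXdt.shape[1]):
            big = np.abs(Xi[:, j]) >= threshold
            if big.any():
                Xi[big, j], *_ = np.linalg.lstsq(Theta[:, big], dXdt[:, j], rcond=None)
    return Xi

# Toy usage: recover the linear oscillator dx/dt = y, dy/dt = -x from samples.
t = np.linspace(0, 10, 2000)
X = np.column_stack([np.sin(t), np.cos(t)])
dXdt = np.column_stack([np.cos(t), -np.sin(t)])
coeffs = stlsq(library(X), dXdt)
```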
Contributors: Shi, Wenlong (Author) / Ren, Yi (Thesis advisor) / Hong, Qijun (Committee member) / Jiao, Yang (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2022