Matching Items (50)
Description

A method has been developed that employs both procedural and optimization algorithms to adaptively slice CAD models for large-scale additive manufacturing (AM) applications. AM, the process of joining material layer by layer to create parts based on 3D model data, has been shown to be an effective method for quickly producing parts of a high geometric complexity in small quantities. 3D printing, a popular and successful implementation of this method, is well-suited to creating small-scale parts that require a fine layer resolution. However, it starts to become impractical for large-scale objects due to build volume and print speed limitations. The proposed layered manufacturing technique builds up models from layers of much thicker sheets of material that can be cut on three-axis CNC machines and assembled manually. Adaptive slicing techniques were utilized to vary layer thickness based on surface complexity to minimize both the cost and error of the layered model. This was realized as a multi-objective optimization problem where the number of layers used represented the cost and the geometric difference between the sliced model and the CAD model defined the error. This problem was approached with two different methods, one of which was a procedural process of placing layers from a set of discrete thicknesses based on the Boolean Exclusive OR (XOR) area difference between adjacent layers. The other method implemented an optimization solver to calculate the precise thickness of each layer to minimize the overall volumetric XOR difference between the sliced and original models. Both methods produced results that help validate the efficiency and practicality of the proposed layered manufacturing technique over existing AM technologies for large-scale applications.
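
As a rough illustration of the procedural method described above, the sketch below places layers bottom-up, always trying the thickest available sheet whose XOR area difference against the layer's bottom cross-section stays within a tolerance. The `cross_section(z)` helper, the sheet thicknesses, and the tolerance are illustrative assumptions, not values from the thesis.

```python
# Illustrative sketch of the procedural adaptive-slicing idea. `cross_section(z)`
# is a hypothetical helper returning the model's planar cross-section at height z
# as a shapely Polygon; thicknesses and xor_tol are placeholder values.
from shapely.geometry import Polygon


def xor_area(a: Polygon, b: Polygon) -> float:
    """Area of the Boolean XOR (symmetric difference) between two cross-sections."""
    return a.symmetric_difference(b).area


def adaptive_slice(cross_section, z_min, z_max,
                   thicknesses=(25.4, 12.7, 6.35),  # available sheet stock, thickest first
                   xor_tol=50.0):                   # allowed XOR area between adjacent sections
    """Place layers bottom-up, taking the thickest sheet whose XOR difference
    against the layer's bottom cross-section stays within tolerance."""
    layers, z = [], z_min
    while z < z_max:
        bottom = cross_section(z)
        chosen = thicknesses[-1]                    # fall back to the thinnest sheet
        for t in thicknesses:                       # try thicker (cheaper) sheets first
            top = cross_section(min(z + t, z_max))
            if xor_area(bottom, top) <= xor_tol:
                chosen = t
                break
        layers.append((z, chosen))                  # (layer bottom height, layer thickness)
        z += chosen
    return layers
```
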
Contributors: Stobinske, Paul Anthony (Author) / Ren, Yi (Thesis director) / Bucholz, Leonard (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The research presented in this Honors Thesis develops machine learning models that predict future states of a system with unknown dynamics based on observations of the system. Two case studies are presented: (1) a non-conservative pendulum and (2) a differential game dictating a two-car uncontrolled intersection scenario. The paper investigates how learning architectures can be tailored to problem-specific geometry. The results show that these problem-specific models are valuable for accurately learning and predicting the dynamics of physical systems.

In order to properly model the physics of a real pendulum, modifications were made to a prior architecture that was sufficient for modeling an ideal pendulum. The necessary modifications to the previous network [13] were problem specific and not transferable to other non-conservative physics scenarios. The modified architecture successfully models real pendulum dynamics. This case study provides a basis for future research in augmenting the symplectic gradient of a Hamiltonian energy function to provide a generalized, non-conservative physics model.

A problem-specific architecture was also utilized to create an accurate model for the two-car intersection case. The Costate Network proved to be an improvement over the previously used Value Network [17], although this comparison should be interpreted loosely due to slight implementation differences. The development of the Costate Network provides a basis for using characteristics to decompose functions and create a simplified learning problem.

This work creates new opportunities to develop physics models, and the sample cases should be used as a guide for modeling other real and pseudo physics. Although the focused models in this paper are not generalizable, these cases provide direction for future research.
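
A minimal sketch of the general idea referenced in the first case study, assuming a pendulum state x = (q, p): a learned Hamiltonian whose symplectic gradient is augmented with a simple learned dissipation term. The network width and the damping parameterization are assumptions for illustration, not the thesis architecture.

```python
# Sketch only: a learned Hamiltonian whose symplectic gradient is augmented with
# a learned dissipation term for a non-conservative (damped) pendulum.
# State x = (q, p); hidden width and damping parameterization are assumptions.
import torch
import torch.nn as nn


class DissipativeHNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.log_c = nn.Parameter(torch.zeros(1))    # learned damping coefficient (log-scale)

    def forward(self, x):                            # x: (batch, 2) = (q, p)
        x = x.requires_grad_(True)
        H = self.H(x).sum()
        dH = torch.autograd.grad(H, x, create_graph=True)[0]
        dq = dH[:, 1:2]                              # dq/dt =  dH/dp   (symplectic part)
        dp = -dH[:, 0:1]                             # dp/dt = -dH/dq
        dp = dp - torch.exp(self.log_c) * x[:, 1:2]  # add dissipation acting on momentum
        return torch.cat([dq, dp], dim=1)            # predicted time derivatives
```
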

Contributors: Merry, Tanner (Author) / Ren, Yi (Thesis director) / Zhang, Wenlong (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

High-entropy alloys possessing mechanical, chemical, and electrical properties that far exceed those of conventional alloys have the potential to make a significant impact on many areas of engineering. Identifying element combinations and configurations to form these alloys, however, is a difficult, time-consuming, computationally intensive task. Machine learning has revolutionized many different fields due to its ability to generalize well to different problems and produce computationally efficient, accurate predictions regarding the system of interest. In this thesis, we demonstrate the effectiveness of machine learning models applied to toy cases representative of simplified physics that are relevant to high-entropy alloy simulation. We show these models are effective at learning nonlinear dynamics for single and multi-particle cases and that more work is needed to accurately represent complex cases in which the system dynamics are chaotic. This thesis serves as a demonstration of the potential benefits of machine learning applied to high-entropy alloy simulations to generate fast, accurate predictions of nonlinear dynamics.
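
The following toy sketch illustrates the kind of learned dynamics model the abstract refers to: a small network trained to predict the next state of a particle from observed trajectory samples. The damped-oscillator data and hyperparameters are placeholders, not the high-entropy-alloy cases studied in the thesis.

```python
# Toy example of learning nonlinear dynamics from observations: a small network
# maps the last two states of a damped oscillator to the next state. The system,
# sampling, and hyperparameters are placeholders for illustration.
import numpy as np
import torch
import torch.nn as nn

dt = 0.01
t = np.arange(0.0, 10.0, dt)
x = np.exp(-0.1 * t) * np.cos(2 * np.pi * t)           # observed 1-D trajectory

states = np.stack([x[:-2], x[1:-1]], axis=1)           # inputs  (x_{k-1}, x_k)
targets = x[2:]                                        # target  x_{k+1}

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
X = torch.tensor(states, dtype=torch.float32)
y = torch.tensor(targets, dtype=torch.float32).unsqueeze(1)

for _ in range(2000):                                  # simple full-batch training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()
```
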

Contributors: Daly, John H (Author) / Ren, Yi (Thesis director) / Zhuang, Houlong (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

In convective heat transfer processes, the heat transfer rate generally increases with fluid velocity, which leads to complex flow patterns. However, numerically analyzing the complex transport process and conjugate heat transfer requires extensive time and computing resources. Recently, data-driven approaches have emerged as an alternative way to solve physical problems in a computationally efficient manner, without the iterative computation of the governing physical equations. However, research on data-driven approaches for convective heat transfer is still in a nascent stage. This study aims to introduce data-driven approaches for modeling heat and mass convection phenomena. As the first step, this research explores a deep learning approach for modeling internal forced convection heat transfer problems. Conditional generative adversarial networks (cGAN) are trained to predict the solution based on a graphical input describing fluid channel geometries and initial flow conditions. A trained cGAN model rapidly approximates the flow temperature, Nusselt number (Nu), and friction factor (f) of a flow in a heated channel over Reynolds numbers (Re) ranging from 100 to 27,750. The optimized cGAN model exhibited an accuracy of up to 97.6% when predicting the local distributions of Nu and f. Next, this research introduces a deep learning based surrogate model for three-dimensional (3D) transient mixed convection in a horizontal channel with a heated bottom surface. Conditional generative adversarial networks are trained to approximate the temperature maps at arbitrary channel locations and time steps. The model is developed for a mixed convection case at Re = 100, a Rayleigh number of 3.9E6, and a Richardson number of 88.8. The cGAN with the PatchGAN-based classifier and without strided convolutions infers the temperature map with the best clarity and accuracy. Finally, this study investigates how machine learning can analyze mass transfer in 3D printed fluidic devices. A random forest algorithm is used to classify flow images taken from semi-transparent 3D printed tubes. In particular, this work focuses on the laminar-turbulent transition process occurring in a 3D wavy tube and a straight tube, visualized by dye injection. The machine learning model automatically classifies the experimentally obtained flow images with an accuracy greater than 0.95.
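
As a concrete illustration of the last step, a random forest can be trained on flattened flow images to classify laminar versus turbulent frames. The array shapes, labels, and split below are placeholders for illustration; the dissertation's actual data and preprocessing are not reproduced here.

```python
# Sketch of the flow-regime classification step: a random forest trained on
# flattened dye-visualization frames. The synthetic arrays below stand in for
# the experimental images and labels (0 = laminar, 1 = turbulent).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

images = np.random.rand(200, 64, 64)                  # placeholder grayscale frames
labels = np.random.randint(0, 2, size=200)            # placeholder regime labels

X = images.reshape(len(images), -1)                   # flatten each frame into a feature vector
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```
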
Contributors: Kang, Munku (Author) / Kwon, Beomjin (Thesis advisor) / Phelan, Patrick (Committee member) / Ren, Yi (Committee member) / Rykaczewski, Konrad (Committee member) / Sohn, SungMin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Uncertainty quantification is critical for engineering design and analysis. Determining appropriate ways of dealing with uncertainties has been a constant challenge in engineering, and statistical methods provide a powerful aid for describing and understanding them. Among these methods, this work focuses on applying Bayesian methods and machine learning to uncertainty quantification and prognostics, centered on the mechanical properties of materials, both static and fatigue. The work can be summarized in the following items. First, maintaining the safety of vintage pipelines requires accurately estimating their strength; the objective is to predict the reliability-based strength using nondestructive multimodality surface information. Bayesian model averaging (BMA) is implemented for fusing multimodality nondestructive testing results for gas pipeline strength estimation, and several incremental improvements are proposed in the algorithm implementation. Second, the objective is to develop a statistical uncertainty quantification method for fatigue stress-life (S-N) curves with sparse data. Hierarchical Bayesian data augmentation (HBDA) is proposed to integrate hierarchical Bayesian modeling (HBM) and Bayesian data augmentation (BDA) to deal with sparse data problems for fatigue S-N curves. The third objective is to develop a physics-guided machine learning model to overcome limitations of parametric regression models and classical machine learning models for fatigue data analysis. A Probabilistic Physics-guided Neural Network (PPgNN) is proposed for probabilistic fatigue S-N curve estimation; this model is further developed for missing data and arbitrary output distribution problems. Fourth, multi-fidelity modeling combines the advantages of low- and high-fidelity models to achieve the required accuracy at a reasonable computational cost. The fourth objective is to develop a neural network approach for multi-fidelity modeling by learning the correlation between low- and high-fidelity models. Finally, conclusions are drawn, and future work is outlined based on the current study.
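
A hedged sketch of the Bayesian model averaging step described above: predictions from several candidate models are fused using posterior model probabilities derived from each model's evidence. The function signature and inputs are illustrative assumptions, not the thesis implementation.

```python
# Sketch of Bayesian model averaging (BMA): fuse predictions from several
# candidate models with posterior model probabilities computed from each model's
# (log) marginal likelihood. Inputs are illustrative, not the thesis pipeline.
import numpy as np


def bma_fuse(predictions, log_marginal_likelihoods, prior=None):
    """predictions: (n_models, n_points) strength predictions, one row per model;
    log_marginal_likelihoods: evidence of each model given calibration data."""
    predictions = np.asarray(predictions)
    n_models = len(predictions)
    prior = np.full(n_models, 1.0 / n_models) if prior is None else np.asarray(prior)
    log_w = np.log(prior) + np.asarray(log_marginal_likelihoods)
    w = np.exp(log_w - log_w.max())                   # stabilize before normalizing
    w /= w.sum()                                      # posterior model probabilities
    return w @ predictions, w                         # fused prediction and weights
```
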
Contributors: Chen, Jie (Author) / Liu, Yongming (Thesis advisor) / Chattopadhyay, Aditi (Committee member) / Mignolet, Marc (Committee member) / Ren, Yi (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

Deep neural network-based methods have been proven to achieve outstanding performance on object detection and classification tasks. Deep neural networks follow the "deeper model with deeper confidence" belief to gain higher recognition accuracy. However, reducing these networks' computational costs remains a challenge, which impedes their deployment on embedded devices. For instance, the intersection management of Connected Autonomous Vehicles (CAVs) requires running computationally intensive object recognition algorithms on low-power traffic cameras. This dissertation studies the effect of a dynamic hardware and software approach to address this issue: characteristics of real-world applications can facilitate dynamic adjustment and reduce computation. Specifically, this dissertation starts with a dynamic hardware approach that adjusts itself based on the difficulty of the input and extracts deeper features only when needed. Next, an adaptive learning mechanism is studied that uses features extracted from previous inputs to improve system performance. Finally, a system (ARGOS) is proposed and evaluated that can run on embedded systems while maintaining the desired accuracy. This system adopts shallow features at inference time but can switch to deep features when higher accuracy is required. To improve performance, ARGOS distills the temporal knowledge from deep features into the shallow system and further reduces computation by focusing on regions of interest. Response time and mean average precision are used to evaluate the proposed ARGOS system.
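
A minimal sketch of the shallow-to-deep switching idea (not the actual ARGOS implementation): a cheap shallow detector runs first, and the deep model is invoked only when the shallow confidence falls below a threshold. The detector output format and the threshold are assumptions.

```python
# Sketch of the dynamic shallow/deep idea (not the actual ARGOS code): run the
# cheap shallow model first and escalate to the deep model only when the shallow
# confidence is low. The {"scores": ...} output format is an assumption.
import torch


@torch.no_grad()
def dynamic_inference(frame, shallow_model, deep_model, conf_threshold=0.6):
    shallow_out = shallow_model(frame)                # fast, low-cost prediction
    scores = shallow_out["scores"]
    top_conf = scores.max().item() if scores.numel() else 0.0
    if top_conf >= conf_threshold:
        return shallow_out                            # accept the shallow result
    return deep_model(frame)                          # fall back to deeper features
```
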
Contributors: Farhadi, Mohammad (Author) / Yang, Yezhou (Thesis advisor) / Vrudhula, Sarma (Committee member) / Wu, Carole-Jean (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

We propose a new strategy for blackjack, BB-Player, which leverages Hidden Markov Models (HMMs) in online planning to sample a normalized predicted deck distribution for a partially-informed distance heuristic. Viterbi learning is applied to the most-likely sampled future sequence in each game state to generate transition and emission matrices for this upcoming sequence. These are then iteratively updated with each observed game on a given deck. Ultimately, this process informs a heuristic to estimate the true symbolic distance left, which allows BB-Player to determine the action with the highest likelihood of winning (by opponent bust or blackjack) and not going bust. We benchmark this strategy against six common card counting strategies from three separate levels of difficulty and a randomized action strategy. On average, BB-Player is observed to beat card-counting strategies in win optimality, attaining a 30.00% expected win percentage, though it falls short of beating state-of-the-art methods.
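
A small sketch of the kind of heuristic the predicted deck distribution can inform: given a normalized distribution over remaining card ranks, estimate the probability that one more card busts the current hand. The uniform distribution here is a placeholder; BB-Player's distribution comes from its HMM/Viterbi updates.

```python
# Sketch of a bust-probability heuristic driven by a predicted deck distribution.
# The uniform distribution over the 13 ranks is a placeholder; BB-Player derives
# its distribution from HMM/Viterbi updates on observed cards.
import numpy as np

card_values = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11])  # 2-9, 10/J/Q/K, A
deck_dist = np.full(13, 1 / 13)                       # predicted probability of each rank


def bust_probability(hand_total, deck_dist):
    """Probability that one more card pushes the hand past 21 (aces drop to 1 if needed)."""
    drawn = np.where(hand_total + card_values > 21,
                     np.where(card_values == 11, 1, card_values),  # soft ace counts as 1
                     card_values)
    return float(deck_dist[hand_total + drawn > 21].sum())


print(bust_probability(16, deck_dist))                # ~0.62 with a uniform deck
```
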

Contributors: Lakamsani, Sreeharsha (Author) / Ren, Yi (Thesis director) / Lee, Heewook (Committee member) / Barrett, The Honors College (Contributor) / School of Mathematical and Statistical Sciences (Contributor) / Computer Science and Engineering Program (Contributor)
Created: 2023-05
Description

Ultra-fast 2D/3D material microstructure reconstruction and quantitative structure-property mapping are crucial components of integrated computational materials engineering (ICME). This is particularly challenging for random heterogeneous materials such as alloys, composites, polymers, porous media, and granular matter, which exhibit strong randomness and variation of their material properties due to the hierarchical uncertainties associated with their complex microstructure at different length scales. Such uncertainties also exist in disordered hyperuniform systems, which, like liquids and glasses, are statistically isotropic and possess no Bragg peaks, yet suppress large-scale density fluctuations in a manner similar to perfect crystals. The unique hyperuniform long-range order in these systems endows them with nearly optimal transport, electronic, and mechanical properties. The concept of hyperuniformity was originally introduced for many-particle systems and has subsequently been generalized to heterogeneous materials such as porous media, composites, polymers, and biological tissues for unconventional property discovery. An explicit mixture random field (MRF) model is proposed to characterize and reconstruct multi-phase stochastic material properties and microstructure simultaneously, requiring no additional tuning step or iteration compared with other stochastic optimization approaches such as simulated annealing. The proposed method is shown to have ultra-high computational efficiency and requires only minimal imaging and property input data. When microscale uncertainties are considered, material reliability analysis faces the challenge of high dimensionality. To deal with this "curse of dimensionality", efficient material reliability analysis methods are developed. The explicit hierarchical uncertainty quantification model and efficient material reliability solvers are then applied to reliability-based topology optimization to pursue lightweight designs under reliability constraints defined by structural mechanical responses. In summary, efficient and accurate methods for high-resolution and hyperuniform microstructure reconstruction, high-dimensional material reliability analysis, and reliability-based topology optimization are developed, and the proposed framework can be readily incorporated into ICME for probabilistic analysis, discovery of novel disordered hyperuniform materials, and material design and optimization.
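
As a generic illustration of the random-field route to microstructure generation (not the thesis's mixture random field model), the sketch below smooths Gaussian white noise to impose a correlation length and thresholds it at a quantile to hit a target volume fraction.

```python
# Generic random-field illustration (not the thesis's MRF model): smooth Gaussian
# white noise to impose a correlation length, then threshold at a quantile to
# reach a target volume fraction for a two-phase microstructure.
import numpy as np
from scipy.ndimage import gaussian_filter


def random_field_microstructure(shape=(256, 256), corr_length=8.0,
                                vol_fraction=0.3, seed=0):
    rng = np.random.default_rng(seed)
    field = gaussian_filter(rng.standard_normal(shape), sigma=corr_length)  # correlated field
    threshold = np.quantile(field, 1.0 - vol_fraction)                      # set phase fraction
    return (field > threshold).astype(np.uint8)                             # binary phase map


micro = random_field_microstructure()
print(micro.mean())   # approximately the requested 0.3 volume fraction
```
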
Contributors: Gao, Yi (Author) / Liu, Yongming (Thesis advisor) / Jiao, Yang (Committee member) / Ren, Yi (Committee member) / Pan, Rong (Committee member) / Mignolet, Marc (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Generative models in various domains, such as images, speech, and video, have been actively developed over the last decades, and recent deep generative models are now capable of synthesizing multimedia content that is difficult to distinguish from authentic content. Such capabilities raise concerns such as malicious impersonation, intellectual property (IP) theft, and copyright infringement. One way to address these threats is to embed attributable watermarks in synthesized content so that users can identify the user-end model from which the content was generated. This paper investigates a solution for model attribution, i.e., the classification of synthetic content by its source model via watermarks embedded in the content. Existing studies have shown the feasibility of model attribution in the image domain, along with the tradeoff between attribution accuracy and generation quality under various adversarial attacks, but not in the speech domain. This work discusses the feasibility of model attribution in the speech domain and proposes algorithmic improvements for generating user-end speech models that empirically achieve high attribution accuracy while maintaining high generation quality. Lastly, several experiments are conducted to show the tradeoff between attributability and generation quality under a variety of attacks that attempt to remove the watermarks from generated speech signals.
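
A toy illustration of the watermark-based attribution idea (not the thesis method): each user-end model adds a small key-specific perturbation to its outputs, and attribution correlates a received signal against the registered keys. Signal lengths, keys, and watermark strength are placeholders.

```python
# Toy watermark-attribution example (not the thesis method): each user-end model
# adds a small key-specific perturbation, and attribution correlates a signal
# against all registered keys. Lengths, keys, and strength are placeholders.
import numpy as np

rng = np.random.default_rng(0)
keys = rng.standard_normal((4, 16000))                 # one pseudo-random key per user-end model


def embed(signal, model_id, strength=0.05):
    return signal + strength * keys[model_id]          # keep the watermark small to preserve quality


def attribute(signal):
    scores = keys @ signal                             # correlate against every registered key
    return int(np.argmax(scores))


clean = rng.standard_normal(16000)                     # stand-in for a generated speech clip
print(attribute(embed(clean, model_id=2)))             # recovers model 2
```
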
Contributors: Cho, Yongbaek (Author) / Yang, Yezhou (Thesis advisor) / Ren, Yi (Committee member) / Trieu, Ni (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Least squares fitting in 3D is applied to produce higher-level geometric parameters that describe the optimum location of a line-profile through many nodal points. The points are derived from Finite Element Analysis (FEA) simulations of elastic spring-back of features both on stamped sheet metal components, after they have been plastically deformed in a press and released, and on simple assemblies made from them. Although the traditional Moore-Penrose inverse was used to solve the superabundant linear equations, the formulation of these equations was distinct: it is based on virtual work and statics applied to parallel-actuated robots, in order to allow both more complex profiles and a change in profile size. The output, a small displacement torsor (SDT), is used to describe the displacement of the profile from its nominal location. It may be regarded as a generalization of the slope and intercept parameters of a line that result from a Gauss-Markov regression fit of points in a plane. Additionally, minimum-zone magnitudes were computed that just capture the points along the profile. Finally, algorithms were created to compute simple parameters for the cross-sectional shapes of components from sprung-back data points, according to the protocol of simulations and benchmark experiments conducted by the metal forming community 30 years ago, although the protocol had to be modified for some geometries that differed from the benchmark.
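
A hedged sketch of the least-squares step: each nodal point contributes one linear equation relating its normal-direction deviation to the small displacement torsor (three translations, three rotations), and the superabundant system is solved with the Moore-Penrose pseudoinverse. The input shapes are assumptions; the thesis formulation based on virtual work and parallel-actuated robots is not reproduced here.

```python
# Sketch of the least-squares fit of a small displacement torsor (SDT) to nodal
# deviations: one linear equation per point, solved with the Moore-Penrose
# pseudoinverse. Input shapes are assumptions for illustration.
import numpy as np


def fit_sdt(points, normals, deviations):
    """points, normals: (n, 3) nominal profile points and unit normals;
    deviations: (n,) measured normal-direction deviations of sprung-back nodes."""
    # Small-displacement model: d_i ~= n_i . t + (r_i x n_i) . w
    rows = np.hstack([normals, np.cross(points, normals)])   # (n, 6) coefficient matrix
    torsor = np.linalg.pinv(rows) @ np.asarray(deviations)   # [tx, ty, tz, wx, wy, wz]
    return torsor
```
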
Contributors: Sunkara, Sai Chandu (Author) / Davidson, Joseph (Thesis advisor) / Shah, Jami (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2023