Matching Items (13)

Description

The research presented in this Honors Thesis develops machine learning models that predict future states of a system with unknown dynamics from observations of the system. Two case studies are presented: (1) a non-conservative pendulum and (2) a differential game describing a two-car uncontrolled intersection scenario. The paper investigates how learning architectures can be tailored to problem-specific geometry, and the results show that such problem-specific models are valuable for accurately learning and predicting the dynamics of physical systems.

To properly model the physics of a real pendulum, modifications were made to a prior architecture that was sufficient for modeling an ideal pendulum. The necessary modifications to the previous network [13] were problem-specific and not transferable to all other non-conservative physics scenarios. The modified architecture successfully models real pendulum dynamics. This case study provides a basis for future research in augmenting the symplectic gradient of a Hamiltonian energy function to produce a generalized, non-conservative physics model.

A problem-specific architecture was also used to create an accurate model for the two-car intersection case. The Costate Network proved to be an improvement over the previously used Value Network [17], although this comparison should be applied lightly because of slight implementation differences. The development of the Costate Network provides a basis for using characteristics to decompose functions and create a simplified learning problem.

This paper creates new opportunities for developing physics models, and the sample cases can serve as a guide for modeling other real and pseudo physics. Although the focused models in this paper are not generalizable, these cases provide direction for future research.
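To make the pendulum case study concrete, the following is a minimal sketch of a Hamiltonian-style network augmented with a learned damping term, which is one way an architecture can be modified for non-conservative dynamics. The PyTorch framing, layer sizes, and viscous-damping form are illustrative assumptions, not the architecture developed in the thesis.

```python
# Minimal sketch (assumptions: PyTorch, single pendulum state (q, p),
# simple viscous damping) of a Hamiltonian network with a dissipative term.
import torch
import torch.nn as nn

class DissipativeHNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Scalar energy H(q, p) learned from data.
        self.H = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(),
                               nn.Linear(hidden, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))
        # Non-negative friction coefficient (illustrative parameterization).
        self.log_gamma = nn.Parameter(torch.zeros(1))

    def time_derivative(self, x):
        # x = [q, p]; enable gradients so we can take dH/dx.
        x = x.requires_grad_(True)
        H = self.H(x).sum()
        dH = torch.autograd.grad(H, x, create_graph=True)[0]
        dHdq, dHdp = dH[..., 0:1], dH[..., 1:2]
        # Symplectic gradient plus a dissipative correction on the momentum.
        dqdt = dHdp
        dpdt = -dHdq - torch.exp(self.log_gamma) * dHdp
        return torch.cat([dqdt, dpdt], dim=-1)

model = DissipativeHNN()
x = torch.randn(8, 2)                  # batch of observed (q, p) states
xdot_pred = model.time_derivative(x)   # compare to finite-difference targets
```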

Contributors: Merry, Tanner (Author) / Ren, Yi (Thesis director) / Zhang, Wenlong (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

High-entropy alloys possessing mechanical, chemical, and electrical properties that far exceed those of conventional alloys have the potential to make a significant impact on many areas of engineering. Identifying element combinations and configurations that form these alloys, however, is a difficult, time-consuming, computationally intensive task. Machine learning has revolutionized many fields because of its ability to generalize across problems and to produce computationally efficient, accurate predictions about the system of interest. In this thesis, we demonstrate the effectiveness of machine learning models applied to toy cases representative of simplified physics relevant to high-entropy alloy simulation. We show that these models are effective at learning nonlinear dynamics for single- and multi-particle cases, and that more work is needed to accurately represent complex cases in which the system dynamics are chaotic. This thesis serves as a demonstration of the potential benefits of applying machine learning to high-entropy alloy simulations to generate fast, accurate predictions of nonlinear dynamics.
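As a rough illustration of the kind of toy case described above, the sketch below trains a small network to map the current state of a damped single-particle oscillator to its next state. The oscillator data, network size, and training loop are assumptions for illustration only, not the thesis's cases.

```python
# Minimal sketch of learning single-particle dynamics from observed
# trajectories: an MLP regresses the next state from the current one.
# The toy data (damped harmonic oscillator) and network size are assumptions.
import numpy as np
import torch
import torch.nn as nn

def simulate(n_steps=2000, dt=0.01, k=1.0, c=0.1):
    x, v, states = 1.0, 0.0, []
    for _ in range(n_steps):
        states.append([x, v])
        a = -k * x - c * v          # spring force plus damping
        x, v = x + v * dt, v + a * dt
    return np.array(states, dtype=np.float32)

traj = simulate()
X = torch.from_numpy(traj[:-1])     # current states
Y = torch.from_numpy(traj[1:])      # next states (one-step targets)

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), Y)
    loss.backward()
    opt.step()
```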

Contributors: Daly, John H (Author) / Ren, Yi (Thesis director) / Zhuang, Houlong (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

This thesis concerns the role of geometric imperfections in assemblies in which the location of a target part depends on supports at two features. In some applications, such as a turbo-machine rotor that is supported by a series of parts at each bearing, it is the interference or clearance at a functional target feature, such as at the blades, that must be controlled. The first part of this thesis relates the limits of location for the target part to the geometric imperfections of other parts when they are stacked up in parallel paths; in this part, parts are considered rigid (non-deformable). By understanding how much of the variation from the supporting parts contributes to the variation of the target feature, a designer can better utilize the tolerance budget when assigning values to individual tolerances. In this work, the T-Map®, a spatial math model, is used to model tolerance accumulation in parallel assemblies. In other applications where parts are flexible, deformations are induced when parts in parallel are clamped together during assembly. Presuming that perfectly manufactured parts have been designed to fit perfectly together and produce zero deformations, the clamping-induced deformations result entirely from the imperfect geometry produced during manufacture. The magnitudes and types of these deformations are a function of part dimensions and material stiffnesses, and they are limited by the design tolerances that control manufacturing variations. These manufacturing variations, if uncontrolled, may produce stresses high enough when the parts are assembled that premature failure occurs before the design life is reached. The last part of the thesis relates the limits on the largest von Mises stress in one part to the functional tolerance limits that must be set at the beginning of a tolerance analysis of parts in such an assembly.
Contributors: Jaishankar, Lupin Niranjan (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Mignolet, Marc P (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Tolerances on line profiles are used to control the cross-sectional shapes of parts, such as turbine blades. A full life cycle for many mechanical devices depends (i) on a wise assignment of tolerances during design and (ii) on careful quality control of the manufacturing process to ensure adherence to the specified tolerances. This thesis describes a new method for quality control of a manufacturing process that improves the conversion of measured points on a part into a geometric entity that can be compared directly with tolerance specifications. The focus of this work is the development of a new computational method for obtaining the least-squares fit of a set of points measured with a coordinate measuring machine along a line profile. The pseudo-inverse of a rectangular matrix is used to convert the measured points to the least-squares fit of the profile. Numerical examples are included for convex and concave line profiles formed from line and circular-arc segments.
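The core pseudo-inverse idea can be illustrated in a few lines. The sketch below fits only a straight line to measured points, standing in for the thesis's full line- and arc-segment profile formulation; the sample points are made up.

```python
# Illustrative sketch of the pseudo-inverse approach to a least-squares fit:
# only the simplest case, a straight line y = a*x + b fitted to measured
# points, not the full line/arc profile method of the thesis.
import numpy as np

pts = np.array([[0.0, 0.02], [1.0, 1.01], [2.0, 1.98], [3.0, 3.03]])
A = np.column_stack([pts[:, 0], np.ones(len(pts))])  # design matrix [x, 1]
coeffs = np.linalg.pinv(A) @ pts[:, 1]               # pseudo-inverse solve
a, b = coeffs
residuals = pts[:, 1] - (a * pts[:, 0] + b)          # deviation at each point
```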
Contributors: Savaliya, Samir (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The essence of this research is the reconciliation and standardization of the feature fitting algorithms used in Coordinate Measuring Machine (CMM) software and the development of Inspection Maps (i-Maps) for representing geometric tolerances in the inspection stage based on these standardized algorithms. The i-Map is a hypothetical point space that represents the substitute feature evaluated for an actual part in the inspection stage. The first step in this research is to investigate the algorithms used for evaluating substitute features in current CMM software. For this, a survey of feature fitting algorithms available in the literature was performed, and a case study was then done to reverse engineer the feature fitting algorithms used in commercial CMM software. The experiments showed that algorithms based on the least-squares technique are mostly used for GD&T inspection, and that this inappropriate choice of fitting algorithm results in errors and deficiencies in the inspection process. Based on these results, a standardization of fitting algorithms is proposed in light of the definitions provided in the ASME Y14.5 standard and an interpretation of manual inspection practices. Standardized algorithms for evaluating substitute features from CMM data, consistent with the ASME Y14.5 standard and manual inspection practices, are developed for each tolerance type applicable to planar features. Second, these standardized algorithms for substitute feature fitting are used to develop i-Maps for the size, orientation, and flatness tolerances that apply to their respective feature types. Third, a methodology for Statistical Process Control (SPC) using the i-Maps is proposed by directly fitting i-Maps into the parent T-Maps. Different methods of computing i-Maps, namely finding the mean, computing the convex hull, and principal component analysis, are explored. The control limits for the process are derived from inspection samples, and a framework for statistical control of the process is developed; this also includes computation of basic SPC and process capability metrics.
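For context, the sketch below shows the kind of least-squares baseline the survey found dominant in commercial software, applied to a planar feature via SVD. It is not the Y14.5-consistent standardized algorithm developed in the thesis, and the sample data are synthetic.

```python
# Sketch of the common least-squares baseline for a planar substitute
# feature: fit a plane to CMM points via SVD, then report signed deviations.
# This illustrates the surveyed baseline, not the thesis's standardized fit.
import numpy as np

pts = np.random.default_rng(0).normal(size=(50, 3)) * [10.0, 10.0, 0.01]
centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)
normal = vt[-1]                          # direction of least variance
deviations = (pts - centroid) @ normal   # signed distances to fitted plane
flatness_ls = deviations.max() - deviations.min()
```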
Contributors: Mani, Neelakantan (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

A defense-by-randomization framework is proposed as an effective defense mechanism against different types of adversarial attacks on neural networks. Experiments were conducted by selecting combinations of differently constructed image classification neural networks to observe which combinations, applied within this framework, were most effective in maximizing classification accuracy. Furthermore, the reasons why particular combinations were more effective than others are explored.
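A minimal sketch of the defense-by-randomization idea follows: a classifier is drawn at random from a pool of differently constructed models at each query, so an attacker cannot tune a perturbation against a fixed network. The pool contents and the uniform selection rule here are illustrative assumptions, not the experimental setup of the thesis.

```python
# Sketch (assumptions: PyTorch classifiers, uniform random selection) of
# randomized inference over a pool of differently constructed models.
import random
import torch

def randomized_predict(models, x):
    model = random.choice(models)   # fresh random draw per query
    model.eval()
    with torch.no_grad():
        return model(x).argmax(dim=-1)

# Usage: labels = randomized_predict([net_a, net_b, net_c], image_batch)
```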
Contributors: Mazboudi, Yassine Ahmad (Author) / Yang, Yezhou (Thesis director) / Ren, Yi (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05
Description

Uncertainty quantification is critical for engineering design and analysis, and determining appropriate ways of dealing with uncertainties has been a constant challenge in engineering. Statistical methods provide a powerful aid for describing and understanding uncertainties. Among these methods, this work focuses on applying Bayesian methods and machine learning to uncertainty quantification and prognostics, with the mechanical properties of materials, both static and fatigue, as the main engineering application. The work can be summarized as follows. First, maintaining the safety of vintage pipelines requires accurately estimating their strength; the objective is to predict reliability-based strength using nondestructive multimodality surface information. Bayesian model averaging (BMA) is implemented to fuse multimodality nondestructive testing results for gas pipeline strength estimation, and several incremental improvements are proposed in the algorithm implementation. Second, the objective is to develop a statistical uncertainty quantification method for fatigue stress-life (S-N) curves with sparse data. Hierarchical Bayesian data augmentation (HBDA) is proposed to integrate hierarchical Bayesian modeling (HBM) and Bayesian data augmentation (BDA) to deal with sparse-data problems for fatigue S-N curves. The third objective is to develop a physics-guided machine learning model that overcomes the limitations of parametric regression models and classical machine learning models for fatigue data analysis; a Probabilistic Physics-guided Neural Network (PPgNN) is proposed for probabilistic fatigue S-N curve estimation and is further developed for missing-data and arbitrary-output-distribution problems. Fourth, multi-fidelity modeling combines the advantages of low- and high-fidelity models to achieve a required accuracy at reasonable computational cost; the fourth objective is to develop a neural network approach for multi-fidelity modeling by learning the correlation between low- and high-fidelity models. Finally, conclusions are drawn and future work is outlined based on the current study.
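As one concrete illustration of the fusion step, the sketch below shows generic Bayesian model averaging: candidate models' predictions are weighted by approximate model evidence and combined. The evidence values and strength numbers are made-up assumptions, not the pipeline-strength implementation described in the dissertation.

```python
# Generic Bayesian model averaging sketch: fuse predictions from several
# candidate models using weights derived from (approximate) model evidence.
import numpy as np

def bma_predict(predictions, log_marginal_likelihoods):
    # predictions: (n_models, n_points); weights from model evidence.
    w = np.exp(log_marginal_likelihoods - np.max(log_marginal_likelihoods))
    w /= w.sum()
    return w @ predictions          # posterior-weighted mean prediction

preds = np.array([[52.1, 49.8],     # hypothetical strength estimates (ksi)
                  [50.4, 48.9],     # from three candidate models at two
                  [53.0, 50.2]])    # measurement locations
log_ml = np.array([-10.2, -9.1, -12.5])   # hypothetical log evidences
fused = bma_predict(preds, log_ml)
```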
Contributors: Chen, Jie (Author) / Liu, Yongming (Thesis advisor) / Chattopadhyay, Aditi (Committee member) / Mignolet, Marc (Committee member) / Ren, Yi (Committee member) / Yan, Hao (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

There is very little in the way of prescriptive procedures to guide designers in tolerance specification. This shortcoming motivated the group at the Design Automation Lab to automate the tolerancing of mechanical assemblies. GD&T data generated by the Auto-Tolerancing software is semantically represented using a neutral Constraint Tolerance Feature (CTF) graph file format that is consistent with the ASME Y14.5 standard and the ISO STEP Part 21 file. The primary objective of this research is to communicate GD&T information from the CTF file to a neutral, machine-readable format. The latest STEP AP 242 (ISO 10303-242), “Managed model based 3D engineering,” aims to support smart manufacturing by capturing semantic Product Manufacturing Information (PMI) within the 3D model and by helping with long-term archiving of the product information. In line with the recommended practices published by the CAx Implementor Forum, this research discusses the implementation of a CTF-to-AP 242 translator. The input geometry, available in STEP AP 203 format, is pre-processed using the STEP-NC DLL and 3D InterOp: the former is initially used to attach persistent IDs to the topological entities in STEP, while the latter retains the IDs during translation to ACIS entities for consumption by other modules of the Auto-Tolerancing software. The GD&T in the CTF file is associated with the input geometry through these persistent IDs. The C++ libraries used for the translation to STEP AP 242 are provided by StepTools Inc through the STEP-NC DLL. Finally, the output STEP file is tested using available AP 242 readers and shows full conformance with the STEP standard. Using the output AP 242 file, semantic GD&T data can now be automatically consumed by downstream applications such as Computer Aided Process Planning (CAPP), Computer Aided Inspection (CAI), Computer Aided Tolerance Systems (CATS), and Coordinate Measuring Machines (CMM).
Contributors: Venkiteswaran, Adarsh (Author) / Shah, Jami J. (Thesis advisor) / Hardwick, Martin (Committee member) / Davidson, Joseph K. (Committee member) / Arizona State University (Publisher)
Created: 2016
Description

A process plan is an instruction set for the manufacture of parts, generated from detailed design drawings or CAD models. While these plans are highly detailed about machines, tools, fixtures, and operation parameters, tolerances typically show up in a less formal manner in such plans, if at all. It is not uncommon to see only dimensional plus/minus values on rough sketches accompanying the instructions. Design drawings, on the other hand, use standard GD&T (Geometric Dimensioning and Tolerancing) symbols with datums and DRFs (Datum Reference Frames) clearly specified. This is not to say that process planners do not consider tolerances; tolerances are implied by the choices of fixtures, tools, machines, and operations. When converting design tolerances to the manufacturing datum flow, process planners perform tolerance charting based on the operation sequence, but the resulting plans cannot be audited for conformance to the design specification.

In this thesis, I present a framework for explicating the GD&T schema implied by machining process plans. The first step is to derive the DRFs from the fixturing method in each setup. Then the basic dimensions for the features to be machined in each setup are determined with respect to the extracted DRF. Using shop data for the machines and operations involved, the range of possible geometric variations is estimated for each tolerance type (form, size, orientation, and position). The sequence of manufacturing operations determines the datum flow chain. Once a formal manufacturing GD&T schema is available, it can be analyzed and compared to the design tolerance specifications using the T-Map math model. Since the model is based on the manufacturing process plan, it is called the resulting T-Map, or m-map. The process plan can then be validated by adjusting parameters so that the m-map lies within the T-Map created from the design drawing. How the m-map is created so that it can be compared with the T-Map is the focus of this research.
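The validation step described above amounts to a containment test. The sketch below shows such a test for generic convex maps, treating both the m-map and the T-Map as convex point sets in tolerance space; the vertex/half-space representation is an assumption for illustration, not the thesis's T-Map construction.

```python
# Sketch of a containment test: a process map (m-map) conforms if all of its
# vertices lie inside the design T-Map, with both maps treated as generic
# convex point sets. The actual T-Map construction from GD&T is far richer.
import numpy as np
from scipy.spatial import ConvexHull

def contained(mmap_vertices, tmap_vertices, tol=1e-9):
    hull = ConvexHull(tmap_vertices)
    A, b = hull.equations[:, :-1], hull.equations[:, -1]
    # A point x is inside the hull when A @ x + b <= 0 for every facet.
    return bool(np.all(mmap_vertices @ A.T + b <= tol))
```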
Contributors: Haghighi, Payam (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

When manufacturing large or complex parts, a rough operation such as casting is often used to create the majority of the part geometry. Because of the highly variable nature of the casting process, for mechanical components that require precision surfaces for functionality or for assembly with others, some of the important features are machined to specification. Depending on the relative locations of the as-cast, to-be-machined features and the amount of material at each, the part may be positioned, or ‘set up’, on a fixture in a configuration that ensures the pre-specified machining operations will successfully clean up the rough surfaces and produce a part that conforms to the assigned tolerances. For a particular part whose features incur excessive deviation in the casting process, it may be that no setup would yield an acceptable final part. The proposed Setup-Map (S-Map) describes the positions and orientations of a part that allow it to be machined successfully, and it can determine whether a particular part cannot be made to specification.

The Setup-Map is a point space in six dimensions, where each of the six orthogonal coordinates corresponds to one of the rigid-body displacements in three-dimensional space: three rotations and three translations. Any point within the boundaries of the S-Map corresponds to a small displacement of the part that satisfies the condition that each feature will lie within its associated tolerance zone after machining. The process for creating the S-Map involves representing the constraints imposed by the tolerances in simple coordinate systems for each to-be-machined feature. The constraints are then transformed to a single coordinate system, where their intersection reveals the common allowable ‘setup’ points. Should an intersection of the six-dimensional constraints exist, an optimization scheme is used to choose a single setup that gives the best chance for machining to be completed successfully. Should no intersection exist, the particular part cannot be machined to specification or must be reworked with weld metal added to specific locations.
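If the six-dimensional constraints were linearized as half-spaces, the feasibility check and a "best chance" setup choice could be sketched as a Chebyshev-center linear program, as below. The linearization and the maximum-margin criterion are illustrative assumptions, not the optimization scheme developed in the thesis.

```python
# Sketch: each constraint is assumed linearized as a_i . d <= b_i in the six
# small displacements d = (dx, dy, dz, rx, ry, rz). The Chebyshev center
# gives the setup with the largest margin to every constraint; infeasibility
# corresponds to an empty S-Map.
import numpy as np
from scipy.optimize import linprog

def best_setup(A, b):
    # maximize r subject to A d + ||a_i|| r <= b; variables are [d (6), r].
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    res = linprog(c=np.r_[np.zeros(A.shape[1]), -1.0],
                  A_ub=np.hstack([A, norms]), b_ub=b,
                  bounds=[(None, None)] * A.shape[1] + [(0, None)])
    if not res.success or res.x[-1] <= 0:
        return None                  # empty S-Map: part cannot be set up
    return res.x[:-1]                # setup displacement with maximum margin
```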
Contributors: Kalish, Nathan (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Thesis advisor) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016