Matching Items (91)
Description

The research presented in this Honors Thesis develops machine learning models that predict future states of a system with unknown dynamics from observations of the system. Two case studies are presented: (1) a non-conservative pendulum and (2) a differential game describing a two-car uncontrolled intersection scenario. We investigate how learning architectures can be tailored to problem-specific geometry, and the results show that these problem-specific models are valuable for accurately learning and predicting the dynamics of physical systems.

To properly model the physics of a real pendulum, modifications were made to a prior architecture that was sufficient for modeling an ideal pendulum. The necessary modifications to the previous network [13] were problem-specific and not transferable to other non-conservative physics scenarios. The modified architecture successfully models real pendulum dynamics. This case study provides a basis for future research on augmenting the symplectic gradient of a Hamiltonian energy function to obtain a generalized, non-conservative physics model.

A problem-specific architecture was also used to create an accurate model for the two-car intersection case. The Costate Network proved to be an improvement over the previously used Value Network [17], although this comparison should be read cautiously because of slight implementation differences. The development of the Costate Network provides a basis for using characteristics to decompose functions and create a simplified learning problem.

This work creates new opportunities for developing physics models, and the sample cases can serve as a guide for modeling other real and pseudo physics. Although the models in this paper are not generalizable, these cases provide direction for future research.
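For the pendulum case, one common way to augment the symplectic gradient of a learned Hamiltonian with a dissipative term is sketched below. This is a generic illustration under assumed names (DissipativeHNN, a single learned damping coefficient), not the problem-specific modifications made in the thesis:

```python
import torch
import torch.nn as nn

class DissipativeHNN(nn.Module):
    """Hamiltonian neural network with a learned damping term (illustrative sketch)."""
    def __init__(self, hidden=64):
        super().__init__()
        # Scalar energy function H(q, p) parameterized by an MLP.
        self.H = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(),
                               nn.Linear(hidden, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))
        # Learned, non-negative damping coefficient (assumed simple Rayleigh damping).
        self.log_c = nn.Parameter(torch.zeros(1))

    def time_derivative(self, x):
        # x = [q, p]; autograd supplies dH/dq and dH/dp.
        x = x.requires_grad_(True)
        dH = torch.autograd.grad(self.H(x).sum(), x, create_graph=True)[0]
        dHdq, dHdp = dH[..., 0:1], dH[..., 1:2]
        dqdt = dHdp                                    # symplectic part
        dpdt = -dHdq - torch.exp(self.log_c) * dHdp    # symplectic part plus dissipation
        return torch.cat([dqdt, dpdt], dim=-1)

# Training would regress time_derivative(x) onto (dq/dt, dp/dt) estimated from observations.
```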

Contributors: Merry, Tanner (Author) / Ren, Yi (Thesis director) / Zhang, Wenlong (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

High-entropy alloys possessing mechanical, chemical, and electrical properties that far exceed those of conventional alloys have the potential to make a significant impact on many areas of engineering. Identifying element combinations and configurations to form these alloys, however, is a difficult, time-consuming, computationally intensive task. Machine learning has revolutionized many fields because of its ability to generalize to new problems and produce computationally efficient, accurate predictions about the system of interest. In this thesis, we demonstrate the effectiveness of machine learning models applied to toy cases representative of simplified physics relevant to high-entropy alloy simulation. We show that these models are effective at learning nonlinear dynamics for single- and multi-particle cases, and that more work is needed to accurately represent complex cases in which the system dynamics are chaotic. This thesis demonstrates the potential benefits of machine learning applied to high-entropy alloy simulations for generating fast, accurate predictions of nonlinear dynamics.
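A minimal sketch of the kind of model described above, learning one-step dynamics of a damped nonlinear oscillator purely from a simulated trajectory; the data generator, network size, and training setup are illustrative assumptions rather than the thesis configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate(n_steps=2000, dt=0.01, c=0.1):
    """Damped pendulum trajectory via explicit Euler (toy data generator)."""
    states = np.zeros((n_steps, 2))
    q, p = 1.0, 0.0
    for i in range(n_steps):
        states[i] = [q, p]
        q, p = q + dt * p, p + dt * (-np.sin(q) - c * p)
    return states

traj = simulate()
X, y = traj[:-1], traj[1:]          # one-step-ahead prediction targets
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

# Roll the learned model forward from an initial condition.
state = traj[0]
for _ in range(100):
    state = model.predict(state.reshape(1, -1))[0]
print(state)
```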

Contributors: Daly, John H (Author) / Ren, Yi (Thesis director) / Zhuang, Houlong (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

A full understanding of material behavior is important for predicting the residual useful life of aerospace structures via computational modeling. In particular, the influence of rolling-induced anisotropy on fatigue properties has not been studied extensively, and it is likely to have a meaningful effect. In this work, the fatigue behavior of a wrought Al alloy (2024-T351) is studied using notched uniaxial samples with load axes along either the longitudinal or transverse direction, and center-notched biaxial samples (cruciforms) with a uniaxial stress state of equivalent amplitude about the bore. Local composition and crystallography were quantified before testing using Energy Dispersive Spectroscopy and Electron Backscatter Diffraction. Interrupted fatigue testing at stresses close to yielding was performed to nucleate and propagate short cracks, and nucleation sites were located and characterized using standard optical and Scanning Electron Microscopy. Results show that crack nucleation occurred at fractured particles for longitudinal dogbone and cruciform samples, while transverse samples nucleated cracks at both debonded and fractured particles. The change in crack nucleation mechanism is attributed to the dimensional change of particles with respect to the material axes caused by global anisotropy. Crack nucleation from debonding reduced the life to matrix fracture because debonded particles are sharper and generate matrix cracks sooner than their fractured counterparts. Longitudinal samples experienced multisite crack initiation because of the reduced cross-sectional areas of particles parallel to the loading direction. Conversely, the favorable orientation of particles in transverse samples reduced instances of particle fracture, eliminating multisite cracking and increasing fatigue life. Cyclic tests of cruciform samples showed that crack growth favors the longitudinal and transverse directions, with few instances of crack growth 45 degrees (diagonal) to the rolling direction. The diagonal crack growth is attributed to stronger influences of local anisotropy on crack nucleation. It was observed that, the majority of the time, crack nucleation is governed by the mixed influences of global and local anisotropies. Measurements of crystal directions parallel to the load along the main crack paths revealed directions clustered near the {110} planes and high-index directions. This trend is attributed to environmental effects resulting from cyclic testing in air.
Contributors: Makaš, Admir (Author) / Peralta, Pedro D. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Sieradzki, Karl (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Stream computing has emerged as an important model of computation for embedded system applications, particularly in the multimedia and network processing domains. In the recent past, several programming languages and embedded multi-core processors have been proposed for streaming applications. This thesis examines the execution and dynamic scheduling of stream programs on embedded multi-core processors. It addresses the problem in the context of a multi-tasking environment with a time-varying allocation of processing elements for a particular streaming application. As a solution, the thesis proposes a two-step approach in which the stream program is first compiled to gather key application information and to generate re-targetable code. A lightweight dynamic scheduler forms the second stage of the approach: it uses the static information and the available resources to assign or partition the application across the multi-core architecture. The objective of the dynamic scheduler is to maximize the throughput of the application, and it is sensitive to the resource constraints (processing elements, scratch-pad memory, DMA bandwidth) imposed by the target architecture. We evaluate the proposed approach by compiling and scheduling benchmark stream programs on a representative embedded multi-core processor, and we present experimental results that assess the quality of the generated solutions by comparison with existing techniques.
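The run-time partitioning step can be illustrated with a toy load-balancing heuristic. The sketch below assumes per-actor workload estimates gathered at compile time and is only a stand-in for the thesis scheduler, which also accounts for scratch-pad memory and DMA bandwidth:

```python
def partition(actor_work, n_cores):
    """Greedy longest-processing-time placement of stream actors onto cores."""
    loads = [0.0] * n_cores
    assignment = {}
    for actor, work in sorted(actor_work.items(), key=lambda kv: -kv[1]):
        core = loads.index(min(loads))   # heaviest remaining actor goes to the lightest core
        assignment[actor] = core
        loads[core] += work
    # The most loaded core bounds the steady-state throughput of the pipeline.
    return assignment, max(loads)

# Re-run whenever the OS changes the number of cores granted to the application.
print(partition({"source": 2.0, "fir": 5.0, "fft": 8.0, "sink": 1.0}, 2))
```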
Contributors: Lee, Haeseung (Author) / Chatha, Karamvir (Thesis advisor) / Vrudhula, Sarma (Committee member) / Chakrabarti, Chaitali (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

This thesis concerns the role of geometric imperfections in assemblies in which the location of a target part depends on supports at two features. In some applications, such as a turbo-machine rotor that is supported by a series of parts at each bearing, it is the interference or clearance at a functional target feature, such as at the blades, that must be controlled. The first part of this thesis relates the limits of location for the target part to the geometric imperfections of the other parts when they are stacked up in parallel paths; in this part, parts are considered rigid (non-deformable). By understanding how much of the variation from the supporting parts contributes to variation at the target feature, a designer can better utilize the tolerance budget when assigning values to individual tolerances. In this work, the T-Map®, a spatial math model, is used to model the tolerance accumulation in parallel assemblies. In other applications where parts are flexible, deformations are induced when parts in parallel are clamped together during assembly. Presuming that perfectly manufactured parts have been designed to fit together perfectly and produce zero deformations, the clamping-induced deformations result entirely from the imperfect geometry produced during manufacture. The magnitudes and types of these deformations are a function of part dimensions and material stiffnesses, and they are limited by the design tolerances that control manufacturing variations. These manufacturing variations, if uncontrolled, may produce stresses high enough when the parts are assembled that premature failure occurs before the design life is reached. The last part of the thesis relates the limits on the largest von Mises stress in one part to the functional tolerance limits that must be set at the beginning of a tolerance analysis of parts in such an assembly.
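As a simplified illustration of how variations at two supports bound the location of a target feature, the sketch below works a rigid-body lever-rule calculation; the numbers and function are assumptions for illustration and are far simpler than the T-Map analysis:

```python
def target_location_range(x_a, t_a, x_b, t_b, x_t):
    """t_a, t_b: full widths of the vertical tolerance zones at supports x_a and x_b;
    returns the worst-case vertical range of a rigid target feature at x_t."""
    span = x_b - x_a
    w_a = (x_b - x_t) / span      # lever-rule weight of support A at the target
    w_b = (x_t - x_a) / span      # lever-rule weight of support B at the target
    # Worst-case float of the target is the weighted sum of the two zone widths.
    return abs(w_a) * t_a + abs(w_b) * t_b

# Blades halfway between the bearings see the average of the two zone widths;
# blades overhung beyond a bearing see an amplified range.
print(target_location_range(0.0, 0.10, 500.0, 0.10, 250.0))   # -> 0.10
print(target_location_range(0.0, 0.10, 500.0, 0.10, 600.0))   # -> 0.14
```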
Contributors: Jaishankar, Lupin Niranjan (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Mignolet, Marc P (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Tolerances on line profiles are used to control the cross-sectional shapes of parts, such as turbine blades. A full life cycle for many mechanical devices depends (i) on a wise assignment of tolerances during design and (ii) on careful quality control of the manufacturing process to ensure adherence to the specified tolerances. This thesis describes a new method for quality control of a manufacturing process that improves the way measured points on a part are converted to a geometric entity that can be compared directly with tolerance specifications. The focus of the thesis is the development of a new computational method for obtaining the least-squares fit of a set of points that have been measured with a coordinate measuring machine along a line profile. The pseudo-inverse of a rectangular matrix is used to convert the measured points to the least-squares fit of the profile. Numerical examples are included for convex and concave line profiles formed from line segments and circular-arc segments.
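A minimal sketch of the pseudo-inverse step described above, under the assumption that each measured point is expressed as a deviation along the profile normal and fitted by a small rigid-body adjustment of the nominal profile; the parameterization is illustrative, not the thesis code:

```python
import numpy as np

def fit_profile(points, normals, deviations):
    """points: (n, 2) nominal profile points; normals: (n, 2) unit normals;
    deviations: (n,) deviations measured along the normals by the CMM."""
    x, y = points[:, 0], points[:, 1]
    nx, ny = normals[:, 0], normals[:, 1]
    # Each measured deviation is modeled as  d_i ≈ nx*tx + ny*ty + (x*ny - y*nx)*dtheta,
    # i.e. a small rigid-body translation (tx, ty) and rotation dtheta of the nominal profile.
    A = np.column_stack([nx, ny, x * ny - y * nx])
    params = np.linalg.pinv(A) @ deviations        # least-squares fit via the pseudo-inverse
    residual = deviations - A @ params             # remaining form error at each point
    return params, residual
```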
Contributors: Savaliya, Samir (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Rapid technology scaling, the main driver of the power and performance improvements of computing solutions, has also rendered our computing systems extremely susceptible to transient errors called soft errors. Among the arsenal of techniques to protect computation from soft errors, Control Flow Checking (CFC) based techniques have gained a reputation as an effective yet low-cost protection mechanism. The basic idea is that there is a high probability that a soft fault in program execution will eventually alter the control flow of the program; therefore, simply ensuring that the control flow of the program is correct can provide significant protection. More than a dozen CFC techniques have been developed over the last several decades, spanning hardware, software, and hardware-software hybrid approaches. Our analysis shows that existing CFC techniques are not only ineffective in protecting against soft errors but also incur additional power and performance overheads. For this analysis, we develop and validate a simulation-based experimental setup to accurately and quantitatively estimate the architectural vulnerability of a program execution on a processor micro-architecture. We model the protection achieved by various state-of-the-art CFC techniques in this quantitative vulnerability estimation setup and find that software-only CFC protection schemes (CFCSS, CFCSS+NA, CEDA) increase system vulnerability by 18% to 21% with 17% to 38% performance overhead. Hybrid CFC protection (CFEDC) increases vulnerability by 5%, while vulnerability remains almost unchanged for hardware-only CFC protection (CFCET), notwithstanding the hardware overheads of design cost, area, and power incurred by the hardware modifications required for its implementation.
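The signature-monitoring idea behind software CFC schemes such as CFCSS can be illustrated with a toy example; the signatures, edge differences, and checking policy below are assumptions for illustration, not the exact CFCSS algorithm:

```python
# Each basic block gets a compile-time signature; a run-time signature register is
# updated with precomputed XOR differences so that an illegal transfer of control
# leaves the register mismatched with the destination block's signature.
sig = {"A": 0b0001, "B": 0b0010, "C": 0b0100}
diff = {("A", "B"): sig["A"] ^ sig["B"],
        ("B", "C"): sig["B"] ^ sig["C"]}

def run(path):
    G = sig[path[0]]                              # run-time signature register
    for src, dst in zip(path, path[1:]):
        G ^= diff.get((src, dst), 0b1111)         # unknown edge -> wrong update
        if G != sig[dst]:
            return f"control-flow error detected entering {dst}"
    return "control flow OK"

print(run(["A", "B", "C"]))   # legal path passes the checks
print(run(["A", "C"]))        # illegal branch A -> C is flagged
```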
Contributors: Rhisheekesan, Abhishek (Author) / Shrivastava, Aviral (Thesis advisor) / Colbourn, Charles Joseph (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

With increasing transistor counts and shrinking feature sizes, reducing power consumption has become a major design constraint. This has given rise to aggressive architectural changes for on-chip power management and to rapid development of energy-efficient hardware accelerators. Accordingly, the objective of this research is to help software developers leverage these hardware techniques and improve the energy efficiency of the system. To achieve this, I propose two solutions for the Linux kernel. First, optimal use of these architectural enhancements for greater energy efficiency requires accurate modeling of processor power consumption. Although many models in the literature describe processor power consumption, models that capture power consumption at the task level are lacking. Task-level energy models are a requirement for an operating system (OS) to perform real-time power management, since the OS time-multiplexes tasks to enable sharing of hardware resources. I propose a detailed design methodology for constructing an architecture-agnostic task-level power model and incorporating it into a modern operating system to build an online task-level power profiler. The profiler is implemented inside the latest Linux kernel and validated for the Intel Sandy Bridge processor. It has a negligible overhead of less than 1% hardware resource consumption, and its power predictions were demonstrated for various application benchmarks from SPEC to PARSEC with less than 4% error. I also demonstrate the importance of the proposed profiler for emerging architectural techniques through use-case scenarios that include heterogeneous computing and fine-grained per-core DVFS. Second, along with architectural enhancements in general-purpose processors, hardware accelerators such as Coarse-Grained Reconfigurable Architectures (CGRAs) are gaining popularity. Unlike vector processors, which rely on data parallelism, a CGRA can provide greater flexibility and compiler-level control, making it more suitable for the present SoC environment. To provide a streamlined development environment for CGRAs, I propose a flexible framework in Linux for CGRA design space exploration. With accurate and flexible hardware models, fine-grained integration with an accurate architectural simulator, and Linux memory management and DMA support, a user can carry out extensive experiments on a CGRA in a full-system environment.
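A rough sketch of a counter-based task-level power model of the kind described above; the event set, sample values, and linear form are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

# Hypothetical per-task event rates sampled while the task runs:
# [instructions/s, last-level-cache misses/s, memory accesses/s].
event_rates = np.array([[2.1e9, 1.2e6, 3.4e7],
                        [1.8e9, 0.9e6, 2.9e7],
                        [2.4e9, 1.5e6, 4.1e7],
                        [1.2e9, 2.1e6, 5.0e7],
                        [2.9e9, 0.7e6, 2.2e7]])
measured_power_w = np.array([23.5, 21.0, 26.2, 24.8, 27.9])   # e.g. from a power meter

# Fit  P ≈ P_static + w · rates  by ordinary least squares.
A = np.column_stack([np.ones(len(event_rates)), event_rates])
coeffs, *_ = np.linalg.lstsq(A, measured_power_w, rcond=None)

def task_power(rates):
    """Predict power attributed to a task from its current event rates."""
    return coeffs[0] + rates @ coeffs[1:]

print(task_power(np.array([2.0e9, 1.0e6, 3.0e7])))
```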
Contributors: Desai, Digant Pareshkumar (Author) / Vrudhula, Sarma (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Wu, Carole-Jean (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

A method has been developed that employs both procedural and optimization algorithms to adaptively slice CAD models for large-scale additive manufacturing (AM) applications. AM, the process of joining material layer by layer to create parts from 3D model data, has been shown to be an effective method for quickly producing parts of high geometric complexity in small quantities. 3D printing, a popular and successful implementation of this method, is well suited to creating small-scale parts that require a fine layer resolution, but it becomes impractical for large-scale objects due to build-volume and print-speed limitations. The proposed layered manufacturing technique instead builds up models from much thicker sheets of material that can be cut on three-axis CNC machines and assembled manually. Adaptive slicing techniques were used to vary layer thickness based on surface complexity in order to minimize both the cost and the error of the layered model. This was posed as a multi-objective optimization problem in which the number of layers represents the cost and the geometric difference between the sliced model and the CAD model defines the error. The problem was approached with two different methods: a procedural process that places layers from a set of discrete thicknesses based on the Boolean Exclusive OR (XOR) area difference between adjacent layers, and an optimization solver that calculates the precise thickness of each layer to minimize the overall volumetric XOR difference between the sliced and original models. Both methods produced results that help validate the efficiency and practicality of the proposed layered manufacturing technique over existing AM technologies for large-scale applications.
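A simplified sketch of the procedural slicing pass described above, assuming a hypothetical `cross_section_area(z)` query and using a plain area difference as a stand-in for the XOR error metric:

```python
def adaptive_slice(z_min, z_max, cross_section_area,
                   thicknesses=(25.4, 12.7, 6.35), max_area_change=1000.0):
    """Greedily pick the thickest sheet whose cross-section change stays within tolerance."""
    layers, z = [], z_min
    while z < z_max - 1e-9:
        chosen = min(thicknesses[-1], z_max - z)      # fall back to the thinnest sheet
        for t in sorted(thicknesses, reverse=True):   # try the thickest sheets first
            top = min(z + t, z_max)
            if abs(cross_section_area(top) - cross_section_area(z)) <= max_area_change:
                chosen = top - z
                break
        layers.append((z, chosen))
        z += chosen
    return layers

# Example: a cone uses thin sheets near its wide base and thick sheets near the tip.
print(adaptive_slice(0.0, 100.0, lambda z: 3.1416 * (50.0 - 0.4 * z) ** 2))
```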
Contributors: Stobinske, Paul Anthony (Author) / Ren, Yi (Thesis director) / Bucholz, Leonard (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The essence of this research is the reconciliation and standardization of feature-fitting algorithms used in Coordinate Measuring Machine (CMM) software and the development of Inspection Maps (i-Maps) for representing geometric tolerances in the inspection stage based on these standardized algorithms. The i-Map is a hypothetical point-space that represents the substitute feature evaluated for an actual part in the inspection stage. The first step in this research is to investigate the algorithms used for evaluating substitute features in current CMM software. For this, a survey of feature-fitting algorithms available in the literature was performed, and a case study was then done to reverse engineer the feature-fitting algorithms used in commercial CMM software. The experiments showed that algorithms based on the least-squares technique are mostly used for GD&T inspection, and this incorrect choice of fitting algorithm results in errors and deficiencies in the inspection process. Based on these results, a standardization of fitting algorithms is proposed in light of the definitions provided in the ASME Y14.5 standard and an interpretation of manual inspection practices. Standardized algorithms for evaluating substitute features from CMM data, consistent with the ASME Y14.5 standard and manual inspection practices, are developed for each tolerance type applicable to planar features. Second, these standardized substitute-feature fitting algorithms are used to develop i-Maps for the size, orientation, and flatness tolerances that apply to their respective feature types. Third, a methodology for Statistical Process Control (SPC) using the i-Maps is proposed by directly fitting i-Maps into the parent T-Maps. Different methods of computing i-Maps, namely finding the mean, computing the convex hull, and principal component analysis, are explored. The control limits for the process are derived from inspection samples, and a framework for statistical control of the process is developed, including computation of basic SPC and process capability metrics.
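To illustrate why the choice of fitting algorithm matters for a planar feature, the sketch below compares a least-squares plane with a tangent plane shifted to the high points, which is closer to manual datum practice; this is an assumed illustration, not the standardized algorithms developed in the thesis:

```python
import numpy as np

def least_squares_plane(pts):
    """Fit z = a*x + b*y + c to CMM points by ordinary least squares."""
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    return np.linalg.lstsq(A, pts[:, 2], rcond=None)[0]

def tangent_plane_offset(pts, a, b, c):
    """Shift the LS plane (same orientation) until it just touches the outermost point,
    mimicking a datum simulator that contacts the high points of the surface."""
    residuals = pts[:, 2] - (a * pts[:, 0] + b * pts[:, 1] + c)
    return c + residuals.max()

rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 10, 100), rng.uniform(0, 10, 100),
                       0.02 * rng.standard_normal(100)])
a, b, c = least_squares_plane(pts)
print("LS offset:", c, " tangent-plane offset:", tangent_plane_offset(pts, a, b, c))
```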
Contributors: Mani, Neelakantan (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created: 2011