Matching Items (72)
Description

A novel concept for integrating flame-assisted fuel cells (FFC) with a gas turbine is analyzed in this paper. Six different fuels (CH4, C3H8, JP-4, JP-5, JP-10(L), and H2) are investigated for the analytical model of the FFC-integrated gas turbine hybrid system. As the equivalence ratio increases, the efficiency of the hybrid system increases initially and then decreases, because the decreasing flow rate of air begins to outweigh the increasing hydrogen concentration; for CH4, this turnover occurs at an equivalence ratio of 2. The thermodynamic cycle is analyzed using a temperature-entropy diagram and a pressure-volume diagram. These diagrams show that, as the equivalence ratio increases, the power generated by the turbine in the hybrid setup decreases. A thermodynamic analysis was performed to verify that energy is conserved: the total chemical energy entering the system equals the heat rejected by the system plus the power generated by the system. Of the six fuels, the hybrid system performs best with H2 as the fuel. The electrical efficiency is predicted to be 27% with H2, 24% with CH4, 22% with C3H8, 21% with JP-4, 20% with JP-5, and 20% with JP-10(L). When H2 fuel is used, the overall integrated system is predicted to be 24.5% more efficient than the standard gas turbine system. The integrated system is predicted to be 23.0% more efficient with CH4, 21.9% more efficient with C3H8, 22.7% more efficient with JP-4, 21.3% more efficient with JP-5, and 20.8% more efficient with JP-10(L). The sensitivity of the model is investigated using various fuel utilizations. When CH4 fuel is used, the integrated system is predicted to be 22.7% more efficient with a fuel utilization efficiency of 90% compared to that of 30%.
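The energy-conservation check described above (chemical energy in equals heat rejected plus power out) can be sketched in a few lines. This is an illustrative sketch only; the operating-point numbers are hypothetical placeholders, not values from the thesis.

```python
# Hedged sketch of the energy-balance verification: chemical energy
# entering the system should equal heat rejected plus power generated.
# All numbers below are hypothetical, not from the thesis model.

def energy_balance_residual(chemical_energy_in_kw, heat_rejected_kw, power_out_kw):
    """Return the imbalance; ~0 means energy is conserved."""
    return chemical_energy_in_kw - (heat_rejected_kw + power_out_kw)

def electrical_efficiency(power_out_kw, chemical_energy_in_kw):
    return power_out_kw / chemical_energy_in_kw

# Hypothetical operating point
e_in = 1000.0   # kW of fuel chemical energy (LHV basis)
q_out = 730.0   # kW rejected as heat
w_out = 270.0   # kW of electrical + shaft power

assert abs(energy_balance_residual(e_in, q_out, w_out)) < 1e-9
print(electrical_efficiency(w_out, e_in))  # 0.27, comparable to the H2 prediction
```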

Contributors: Rupiper, Lauren Nicole (Author) / Milcarek, Ryan (Thesis director) / Wang, Liping (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / School for Engineering of Matter, Transport & Energy (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

The research presented in this Honors Thesis develops machine learning models that predict future states of a system with unknown dynamics based on observations of the system. Two case studies are presented: (1) a non-conservative pendulum and (2) a differential game dictating a two-car uncontrolled intersection scenario. In this paper we investigate how learning architectures can be tailored to problem-specific geometry. The research shows that these problem-specific models are valuable for accurately learning and predicting the dynamics of physical systems.

In order to properly model the physics of a real pendulum, modifications were made to a prior architecture that was sufficient for modeling an ideal pendulum. The necessary modifications to the previous network [13] were problem-specific and not transferable to all other non-conservative physics scenarios. The modified architecture successfully models real pendulum dynamics. This case study provides a basis for future research in augmenting the symplectic gradient of a Hamiltonian energy function to provide a generalized, non-conservative physics model.

A problem-specific architecture was also utilized to create an accurate model for the two-car intersection case. The Costate Network proved to be an improvement over the previously used Value Network [17], though this comparison should be applied lightly due to slight implementation differences. The development of the Costate Network provides a basis for using characteristics to decompose functions and create a simplified learning problem.

This paper is successful in creating new opportunities to develop physics models, and the sample cases should be used as a guide for modeling other real and pseudo physics. Although the focused models in this paper are not generalizable, these cases provide direction for future research.
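As a point of reference for case (1), a non-conservative pendulum differs from the ideal one by a dissipative term. The sketch below simulates such a pendulum with semi-implicit Euler integration; the damping coefficient and time step are hypothetical and this is not the thesis model.

```python
import math

# Illustrative sketch of the target dynamics in case (1): a damped
# (non-conservative) pendulum. Parameter values are hypothetical.
G, L_PEND, DAMPING, DT = 9.81, 1.0, 0.3, 0.01

def step(theta, omega):
    # The -DAMPING * omega term removes energy each step,
    # which is what makes the system non-conservative.
    omega += (-(G / L_PEND) * math.sin(theta) - DAMPING * omega) * DT
    theta += omega * DT
    return theta, omega

def energy(theta, omega):
    # Kinetic + potential energy per unit mass-length.
    return 0.5 * omega**2 + (G / L_PEND) * (1 - math.cos(theta))

theta, omega = 1.0, 0.0
e0 = energy(theta, omega)
for _ in range(5000):            # 50 s of simulated time
    theta, omega = step(theta, omega)
print(energy(theta, omega) < e0)  # damping dissipates energy -> True
```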

Contributors: Merry, Tanner (Author) / Ren, Yi (Thesis director) / Zhang, Wenlong (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

High-entropy alloys possessing mechanical, chemical, and electrical properties that far exceed those of conventional alloys have the potential to make a significant impact on many areas of engineering. Identifying element combinations and configurations to form these alloys, however, is a difficult, time-consuming, computationally intensive task. Machine learning has revolutionized many different fields due to its ability to generalize well to different problems and produce computationally efficient, accurate predictions regarding the system of interest. In this thesis, we demonstrate the effectiveness of machine learning models applied to toy cases representative of simplified physics that are relevant to high-entropy alloy simulation. We show these models are effective at learning nonlinear dynamics for single and multi-particle cases and that more work is needed to accurately represent complex cases in which the system dynamics are chaotic. This thesis serves as a demonstration of the potential benefits of machine learning applied to high-entropy alloy simulations to generate fast, accurate predictions of nonlinear dynamics.

Contributors: Daly, John H. (Author) / Ren, Yi (Thesis director) / Zhuang, Houlong (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

A full understanding of material behavior is important for the prediction of residual useful life of aerospace structures via computational modeling. In particular, the influence of rolling-induced anisotropy on fatigue properties has not been studied extensively, and it is likely to have a meaningful effect. In this work, the fatigue behavior of a wrought Al alloy (2024-T351) is studied using notched uniaxial samples with load axes along either the longitudinal or transverse direction, and center-notched biaxial samples (cruciforms) with a uniaxial stress state of equivalent amplitude about the bore. Local composition and crystallography were quantified before testing using energy-dispersive spectroscopy and electron backscatter diffraction. Interrupted fatigue testing at stresses close to yielding was performed on the samples to nucleate and propagate short cracks, and nucleation sites were located and characterized using standard optical and scanning electron microscopy. Results show that crack nucleation occurred due to fractured particles for longitudinal dogbone/cruciform samples, while transverse samples nucleated cracks at both debonded and fractured particles. The change in crack nucleation mechanism is attributed to the dimensional change of particles with respect to the material axes caused by global anisotropy. Crack nucleation from debonding reduced the life to matrix fracture because debonded particles are sharper and generate matrix cracks sooner than their fractured counterparts. Longitudinal samples experienced multisite crack initiation because of the reduced cross-sectional areas of particles parallel to the loading direction. Conversely, the favorable orientation of particles in transverse samples reduced instances of particle fracture, eliminating multisite cracking and leading to increased fatigue life.
Cyclic tests of cruciform samples showed that crack growth favors the longitudinal and transverse directions, with few instances of crack growth at 45 degrees (diagonal) to the rolling direction. The diagonal crack growth is attributed to stronger influences of local anisotropy on crack nucleation. It was observed that, the majority of the time, crack nucleation is governed by the mixed influences of global and local anisotropies. Measurements of crystal directions parallel to the load along the main crack paths revealed directions clustered near the {110} planes and high-index directions. This trend is attributed to environmental effects as a result of cyclic testing in air.
Contributors: Makaš, Admir (Author) / Peralta, Pedro D. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Sieradzki, Karl (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

This thesis concerns the role of geometric imperfections in assemblies in which the location of a target part depends on supports at two features. In some applications, such as a turbo-machine rotor that is supported by a series of parts at each bearing, it is the interference or clearance at a functional target feature, such as at the blades, that must be controlled. The first part of this thesis relates the limits of location for the target part to geometric imperfections of other parts when stacked up in parallel paths. In this section, parts are considered to be rigid (non-deformable). By understanding how much variation from the supporting parts contributes to variations of the target feature, a designer can better utilize the tolerance budget when assigning values to individual tolerances. In this work, the T-Map®, a spatial math model, is used to model the tolerance accumulation in parallel assemblies. In other applications where parts are flexible, deformations are induced when parts in parallel are clamped together during assembly. Presuming that perfectly manufactured parts have been designed to fit perfectly together and produce zero deformations, the clamping-induced deformations result entirely from the imperfect geometry produced during manufacture. The magnitudes and types of these deformations are a function of part dimensions and material stiffnesses, and they are limited by the design tolerances that control manufacturing variations. These manufacturing variations, if uncontrolled, may produce stresses high enough when the parts are assembled that premature failure can occur before the design life is reached. The last part of the thesis relates the limits on the largest von Mises stress in one part to the functional tolerance limits that must be set at the beginning of a tolerance analysis of parts in such an assembly.
Contributors: Jaishankar, Lupin Niranjan (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Mignolet, Marc P. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Tolerances on line profiles are used to control cross-sectional shapes of parts, such as turbine blades. A full life cycle for many mechanical devices depends (i) on a wise assignment of tolerances during design and (ii) on careful quality control of the manufacturing process to ensure adherence to the specified tolerances. This thesis describes a new method for quality control of a manufacturing process that improves the conversion of measured points on a part into a geometric entity that can be compared directly with tolerance specifications. The focus of this work is the development of a new computational method for obtaining the least-squares fit of a set of points that have been measured with a coordinate measuring machine along a line profile. The pseudo-inverse of a rectangular matrix is used to convert the measured points to the least-squares fit of the profile. Numerical examples are included for convex and concave line profiles that are formed from line and circular-arc segments.
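The pseudo-inverse idea can be illustrated for the simplest profile element, a straight line segment. This is a minimal sketch with hypothetical measured points, not the thesis implementation, which handles general line- and arc-segment profiles.

```python
import numpy as np

# Hedged sketch: least-squares fit of CMM-style measured points to a
# line y = a + b*x using the pseudo-inverse of a rectangular matrix.
# The measured points below are hypothetical.
pts = np.array([[0.0, 0.02], [1.0, 1.01], [2.0, 1.98], [3.0, 3.03]])

A = np.column_stack([np.ones(len(pts)), pts[:, 0]])  # rectangular design matrix
a, b = np.linalg.pinv(A) @ pts[:, 1]                 # least-squares coefficients
residuals = pts[:, 1] - (a + b * pts[:, 0])          # deviations from the fit

print(round(b, 2))  # slope close to 1 for these points
```

The same pattern extends to other substitute features: build a design matrix from the profile's parameterization, then apply the pseudo-inverse to the measured coordinates.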
Contributors: Savaliya, Samir (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Santos, Veronica J. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

A method has been developed that employs both procedural and optimization algorithms to adaptively slice CAD models for large-scale additive manufacturing (AM) applications. AM, the process of joining material layer by layer to create parts based on 3D model data, has been shown to be an effective method for quickly producing parts of a high geometric complexity in small quantities. 3D printing, a popular and successful implementation of this method, is well-suited to creating small-scale parts that require a fine layer resolution. However, it starts to become impractical for large-scale objects due to build volume and print speed limitations. The proposed layered manufacturing technique builds up models from layers of much thicker sheets of material that can be cut on three-axis CNC machines and assembled manually. Adaptive slicing techniques were utilized to vary layer thickness based on surface complexity to minimize both the cost and error of the layered model. This was realized as a multi-objective optimization problem where the number of layers used represented the cost and the geometric difference between the sliced model and the CAD model defined the error. This problem was approached with two different methods, one of which was a procedural process of placing layers from a set of discrete thicknesses based on the Boolean Exclusive OR (XOR) area difference between adjacent layers. The other method implemented an optimization solver to calculate the precise thickness of each layer to minimize the overall volumetric XOR difference between the sliced and original models. Both methods produced results that help validate the efficiency and practicality of the proposed layered manufacturing technique over existing AM technologies for large-scale applications.
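The procedural variant described above can be sketched with a toy example. Everything here is hypothetical (the discrete sheet thicknesses, the per-layer XOR error budget, and a hemisphere as the test part); it is not the thesis implementation, only an illustration of picking the thickest sheet whose XOR area difference stays under budget.

```python
import math

# Hypothetical inputs for the procedural adaptive-slicing sketch.
SHEETS = [25.4, 12.7, 6.35]   # available thicknesses, mm (thickest first)
ERROR_BUDGET = 4000.0         # max XOR area per layer, mm^2
R = 150.0                     # hemisphere radius, mm

def radius(z):
    """Cross-section radius of the hemisphere at height z."""
    return math.sqrt(max(R * R - z * z, 0.0))

def xor_area(z0, z1):
    # For nested circular cross-sections, the XOR region between the
    # layer's bottom and top outlines is an annulus.
    r0, r1 = radius(z0), radius(z1)
    return math.pi * abs(r0 * r0 - r1 * r1)

def slice_part():
    layers, z = [], 0.0
    while z < R:
        for t in SHEETS:  # try thickest first; fall back to thinnest
            if xor_area(z, min(z + t, R)) <= ERROR_BUDGET or t == SHEETS[-1]:
                layers.append(t)
                z += t
                break
    return layers

layers = slice_part()
print(len(layers))  # thick sheets where the surface is gentle, thin elsewhere
```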
Contributors: Stobinske, Paul Anthony (Author) / Ren, Yi (Thesis director) / Bucholz, Leonard (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

The essence of this research is the reconciliation and standardization of the feature fitting algorithms used in Coordinate Measuring Machine (CMM) software and the development of Inspection Maps (i-Maps) for representing geometric tolerances in the inspection stage based on these standardized algorithms. The i-Map is a hypothetical point-space that represents the substitute feature evaluated for an actual part in the inspection stage. The first step in this research is to investigate the algorithms used for evaluating substitute features in current CMM software. For this, a survey of the feature fitting algorithms available in the literature was performed, and a case study was then done to reverse-engineer the feature fitting algorithms used in commercial CMM software. The experiments showed that algorithms based on the least-squares technique are mostly used for GD&T inspection, and that this wrong choice of fitting algorithm results in errors and deficiencies in the inspection process. Based on these results, a standardization of fitting algorithms is proposed in light of the definition provided in the ASME Y14.5 standard and an interpretation of manual inspection practices. Standardized algorithms for evaluating substitute features from CMM data, consistent with the ASME Y14.5 standard and manual inspection practices, are developed for each tolerance type applicable to planar features. Second, these standardized substitute-feature fitting algorithms are used to develop i-Maps for the size, orientation, and flatness tolerances that apply to their respective feature types. Third, a methodology for Statistical Process Control (SPC) using the i-Maps is proposed by directly fitting i-Maps into the parent T-Maps. Different methods of computing i-Maps, namely finding the mean, computing the convex hull, and principal component analysis, are explored.
The control limits for the process are derived from inspection samples, and a framework for statistical control of the process is developed. This also includes computation of basic SPC and process capability metrics.
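Of the i-Map computation methods mentioned (mean, convex hull, principal component analysis), the first and last can be sketched briefly. The inspection-sample deviations below are synthetic stand-ins, not data from this research.

```python
import numpy as np

# Hedged sketch: summarize a batch of inspection samples by their mean
# and principal components (via SVD). The two deviation components and
# their spreads are hypothetical.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, [0.02, 0.005], size=(50, 2))  # e.g., (tilt, offset)

mean = samples.mean(axis=0)                       # center of the sample cloud
_, s, vt = np.linalg.svd(samples - mean, full_matrices=False)
principal_axes = vt                               # rows: directions of max variance
variances = s**2 / (len(samples) - 1)             # variance along each axis

print(variances[0] > variances[1])  # first axis carries the larger spread
```

A convex hull of the same point cloud (e.g., via scipy.spatial.ConvexHull, if available) would give the tightest bounding region instead of a statistical summary.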
Contributors: Mani, Neelakantan (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

This thesis presents a process by which a controller used for collective transport tasks is qualitatively studied and probed for the presence of undesirable equilibrium states that could entrap the system and prevent it from converging to a target state. Fields of study relevant to this project include dynamic system modeling, modern control theory, script-based system simulation, and autonomous systems design. The simulation and computation software MATLAB and Simulink® were used in this thesis.
To achieve this goal, a model of a swarm performing a collective transport task in a bounded domain featuring convex obstacles was simulated in MATLAB/Simulink®. The closed-loop dynamic equations of this model were linearized about an equilibrium state with angular acceleration and linear acceleration set to zero. The simulation was run over 30 times to confirm the system's ability to transport the payload to a goal point without colliding with obstacles, and to determine ideal operating conditions by testing various orientations of objects in the bounded domain. An additional, purely MATLAB simulation was run to identify local minima of the Hessian of the navigation-like potential function. By calculating this Hessian periodically throughout the system's progress and determining the signs of its eigenvalues, a system could check whether it is trapped in a local minimum and potentially dislodge itself through the implementation of a stochastic term in the robot controllers. The eigenvalues of the Hessian calculated in this research suggested the model's local minima were degenerate, indicating an error in the mathematical model for this system, likely introduced during linearization of this highly nonlinear system.
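The eigenvalue test described above can be sketched generically: evaluate the Hessian of a potential at a candidate point and inspect the signs of its eigenvalues (all positive means a local minimum; an eigenvalue near zero means a degenerate critical point). This is a minimal Python sketch with a hypothetical quadratic potential, not the thesis's MATLAB code or its navigation-like potential.

```python
import numpy as np

def potential(p):
    x, y = p
    return x**2 + 0.5 * y**2   # hypothetical stand-in potential

def hessian(f, p, h=1e-4):
    """Central-difference Hessian of scalar function f at point p."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(p + e_i + e_j) - f(p + e_i - e_j)
                       - f(p - e_i + e_j) + f(p - e_i - e_j)) / (4 * h * h)
    return H

eigvals = np.linalg.eigvalsh(hessian(potential, [0.0, 0.0]))
is_local_min = np.all(eigvals > 1e-6)            # all positive -> local minimum
is_degenerate = np.any(np.abs(eigvals) < 1e-6)   # near-zero -> degenerate
print(is_local_min, is_degenerate)  # -> True False
```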
Created: 2020-12
Description

A defense-by-randomization framework is proposed as an effective defense mechanism against different types of adversarial attacks on neural networks. Experiments were conducted by selecting combinations of differently constructed image classification neural networks to observe which combinations applied to this framework were most effective in maximizing classification accuracy. Furthermore, the reasons why particular combinations were more effective than others are explored.
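The core mechanism of such a framework can be sketched as follows: at inference time, one classifier is drawn at random from a pool, so an attacker cannot tailor a perturbation to a single fixed model. The stand-in "models" below are hypothetical toy functions, not the networks used in this thesis.

```python
import random

# Hypothetical stand-in classifiers; in the real setting these would be
# differently constructed image classification networks.
def model_a(x):
    return "cat" if sum(x) > 0 else "dog"

def model_b(x):
    return "cat" if x[0] > -1 else "dog"

POOL = [model_a, model_b]

def randomized_predict(x, rng=random):
    # The attacker cannot know in advance which model will run.
    return rng.choice(POOL)(x)

random.seed(0)
preds = [randomized_predict([1.0, 2.0]) for _ in range(5)]
print(preds)
```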
Contributors: Mazboudi, Yassine Ahmad (Author) / Yang, Yezhou (Thesis director) / Ren, Yi (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05