Matching Items (62)
Description
The main objective of this project was to create a framework for holistic ideation and to research the technical issues involved in creating a holistic approach. Toward that goal, we explored different components of ideation (both logical and intuitive), characterized ideation states, and found new ideation blocks along with strategies used to overcome them. One of the major contributions of this research is the method by which easy traversal between different ideation methods with different components was facilitated, to support both creativity and functional quality. Another important part of the framework is the sensing of ideation states (blocks / unfettered ideation) and investigation of the matching ideation strategies most likely to facilitate progress. Some of the ideation methods embedded in the initial holistic test bed are a physical-effects catalog, a working-principles catalog, TRIZ, Bio-TRIZ, and an artifacts catalog. Repositories were created for each of those. This framework will also be used as a research tool to collect large amounts of data from designers about their choice of ideation strategies and their effectiveness. Effective documentation of design ideation paths is also facilitated by this holistic approach. A computer tool facilitating holistic ideation was developed. Case studies were run with different designers to document their ideation states and their choice of ideation strategies in arriving at a good solution to the same design problem.
Contributors: Mohan, Manikandan (Author) / Shah, Jami J. (Thesis advisor) / Huebner, Kenneth (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The research presented in this Honors Thesis develops machine learning models that predict future states of a system with unknown dynamics, based on observations of the system. Two case studies are presented for (1) a non-conservative pendulum and (2) a differential game dictating a two-car uncontrolled intersection scenario. In the paper we investigate how learning architectures can be manipulated for problem-specific geometry. The result of this research is that these problem-specific models are valuable for accurately learning and predicting the dynamics of physical systems.

In order to properly model the physics of a real pendulum, modifications were made to a prior architecture that was sufficient for modeling an ideal pendulum. The necessary modifications to the previous network [13] were problem-specific and not transferable to all other non-conservative physics scenarios. The modified architecture successfully models real pendulum dynamics. This case study provides a basis for future research in augmenting the symplectic gradient of a Hamiltonian energy function to provide a generalized, non-conservative physics model.

A problem-specific architecture was also utilized to create an accurate model for the two-car intersection case. The Costate Network proved to be an improvement over the previously used Value Network [17]. Note that this comparison is applied lightly due to slight implementation differences. The development of the Costate Network provides a basis for using characteristics to decompose functions and create a simplified learning problem.

This paper is successful in creating new opportunities to develop physics models, in which the sample cases should be used as a guide for modeling other real and pseudo physics. Although the focused models in this paper are not generalizable, it is important to note that these cases provide direction for future research.
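The non-conservative dynamics at the heart of the first case study can be written down directly. The sketch below (hypothetical parameters, not the thesis's learned network) rolls out a damped pendulum with semi-implicit Euler; a model like the one described would be trained to reproduce this kind of energy-draining trajectory:

```python
import math

# A damped ("real") pendulum: gravity plus viscous damping, so energy is not
# conserved. Semi-implicit Euler keeps the rollout stable at small time steps.
def simulate_pendulum(theta0, omega0, g=9.81, L=1.0, damping=0.3,
                      dt=0.01, steps=1000):
    theta, omega = theta0, omega0
    for _ in range(steps):
        omega += (-(g / L) * math.sin(theta) - damping * omega) * dt
        theta += omega * dt
    return theta, omega

# Released from 1 rad at rest: after 10 s the damping has drained most of the
# mechanical energy, unlike the ideal (conservative) pendulum.
theta, omega = simulate_pendulum(theta0=1.0, omega0=0.0)
```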

Contributors: Merry, Tanner (Author) / Ren, Yi (Thesis director) / Zhang, Wenlong (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

High-entropy alloys possessing mechanical, chemical, and electrical properties that far exceed those of conventional alloys have the potential to make a significant impact on many areas of engineering. Identifying element combinations and configurations to form these alloys, however, is a difficult, time-consuming, computationally intensive task. Machine learning has revolutionized many different fields due to its ability to generalize well to different problems and produce computationally efficient, accurate predictions regarding the system of interest. In this thesis, we demonstrate the effectiveness of machine learning models applied to toy cases representative of simplified physics that are relevant to high-entropy alloy simulation. We show these models are effective at learning nonlinear dynamics for single and multi-particle cases and that more work is needed to accurately represent complex cases in which the system dynamics are chaotic. This thesis serves as a demonstration of the potential benefits of machine learning applied to high-entropy alloy simulations to generate fast, accurate predictions of nonlinear dynamics.

Contributors: Daly, John H (Author) / Ren, Yi (Thesis director) / Zhuang, Houlong (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2021-05
Description

This thesis concerns the role of geometric imperfections in assemblies in which the location of a target part depends on supports at two features. In some applications, such as a turbo-machine rotor that is supported by a series of parts at each bearing, it is the interference or clearance at a functional target feature, such as at the blades, that must be controlled. The first part of this thesis relates the limits of location for the target part to geometric imperfections of other parts when stacked up in parallel paths. In this section parts are considered to be rigid (non-deformable). By understanding how much of the variation from the supporting parts contributes to variations of the target feature, a designer can better utilize the tolerance budget when assigning values to individual tolerances. In this work, the T-Map®, a spatial math model, is used to model the tolerance accumulation in parallel assemblies. In other applications where parts are flexible, deformations are induced when parts in parallel are clamped together during assembly. Presuming that perfectly manufactured parts have been designed to fit perfectly together and produce zero deformations, the clamping-induced deformations result entirely from the imperfect geometry that is produced during manufacture. The magnitudes and types of these deformations are a function of part dimensions and material stiffnesses, and they are limited by design tolerances that control manufacturing variations. These manufacturing variations, if uncontrolled, may produce stresses high enough when the parts are assembled that premature failure can occur before the design life. The last part of the thesis relates the limits on the largest von Mises stress in one part to functional tolerance limits that must be set at the beginning of a tolerance analysis of parts in such an assembly.
Contributors: Jaishankar, Lupin Niranjan (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Mignolet, Marc P (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Tolerances on line profiles are used to control cross-sectional shapes of parts, such as turbine blades. A full life cycle for many mechanical devices depends (i) on a wise assignment of tolerances during design and (ii) on careful quality control of the manufacturing process to ensure adherence to the specified tolerances. This thesis describes a new method for quality control of a manufacturing process by improving the method used to convert measured points on a part to a geometric entity that can be compared directly with tolerance specifications. The focus of this thesis is the development of a new computational method for obtaining the least-squares fit of a set of points that have been measured with a coordinate measuring machine along a line-profile. The pseudo-inverse of a rectangular matrix is used to convert the measured points to the least-squares fit of the profile. Numerical examples are included for convex and concave line-profiles that are formed from line- and circular-arc segments.
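As a minimal illustration of the linear-algebra device named above (not the thesis's actual profile code), the Moore-Penrose pseudo-inverse can recover the least-squares fit of measured points to a line y = a·x + b; the function name and sample data here are assumptions:

```python
import numpy as np

# Hypothetical illustration: least-squares fit of measured 2-D points to a
# line y = a*x + b via the pseudo-inverse of the rectangular design matrix.
def fit_line_pinv(points):
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], np.ones(len(pts))])  # rectangular design matrix
    a, b = np.linalg.pinv(A) @ pts[:, 1]                 # least-squares coefficients
    return a, b

# Noisy samples of y = 2x + 1:
a, b = fit_line_pinv([(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.0)])
```

The same construction extends to circular-arc segments by changing the columns of the design matrix.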
Contributors: Savaliya, Samir (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Creative design lies at the intersection of novelty and technical feasibility. These objectives can be achieved through cycles of divergence (idea generation) and convergence (idea evaluation) in conceptual design. The focus of this thesis is on the latter aspect. The evaluation may involve any aspect of technical feasibility and may be desired at component, sub-system or full system level. Two issues that are considered in this work are: 1. Information about design ideas is incomplete, informal and sketchy 2. Designers often work at multiple levels; different aspects or subsystems may be at different levels of abstraction Thus, high fidelity analysis and simulation tools are not appropriate for this purpose. This thesis looks at the requirements for a simulation tool and how it could facilitate concept evaluation. The specific tasks reported in this thesis are: 1. The typical types of information available after an ideation session 2. The typical types of technical evaluations done in early stages 3. How to conduct low fidelity design evaluation given a well-defined feasibility question A computational tool for supporting idea evaluation was designed and implemented. It was assumed that the results of the ideation session are represented as a morphological chart and each entry is expressed as some combination of a sketch, text and references to physical effects and machine components. Approximately 110 physical effects were identified and represented in terms of algebraic equations, physical variables and a textual description. A common ontology of physical variables was created so that physical effects could be networked together when variables are shared. This allows users to synthesize complex behaviors from simple ones, without assuming any solution sequence. A library of 16 machine elements was also created and users were given instructions about incorporating them. 
To support quick analysis, differential equations are transformed to algebraic equations (by replacing differential terms with steady-state differences), only steady-state behavior is considered, and interval arithmetic is used for modeling. The tool was implemented in MATLAB, and a number of case studies were done to show how the tool works.
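The interval-arithmetic idea can be sketched briefly. This is a hypothetical Python illustration, not the MATLAB tool itself: each physical variable is carried as a [lo, hi] range, so a single algebraic physical effect (here Ohm's law, V = I·R) yields the whole feasible band of an output rather than one point value:

```python
# Minimal interval arithmetic: variables are ranges, and operations propagate
# the ranges, so a feasibility question is answered without fixed values.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The extremes of a product lie among the four endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

# Ohm's law V = I * R as one algebraic "physical effect":
I = Interval(0.1, 0.2)   # current, A
R = Interval(40, 60)     # resistance, ohms
V = I * R                # feasible voltage band
```

Chaining such effects through shared variables is what lets simple behaviors compose into complex ones without assuming a solution sequence.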
Contributors: Khorshidi, Maryam (Author) / Shah, Jami J. (Thesis advisor) / Wu, Teresa (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

A method has been developed that employs both procedural and optimization algorithms to adaptively slice CAD models for large-scale additive manufacturing (AM) applications. AM, the process of joining material layer by layer to create parts based on 3D model data, has been shown to be an effective method for quickly producing parts of a high geometric complexity in small quantities. 3D printing, a popular and successful implementation of this method, is well-suited to creating small-scale parts that require a fine layer resolution. However, it starts to become impractical for large-scale objects due to build volume and print speed limitations. The proposed layered manufacturing technique builds up models from layers of much thicker sheets of material that can be cut on three-axis CNC machines and assembled manually. Adaptive slicing techniques were utilized to vary layer thickness based on surface complexity to minimize both the cost and error of the layered model. This was realized as a multi-objective optimization problem where the number of layers used represented the cost and the geometric difference between the sliced model and the CAD model defined the error. This problem was approached with two different methods, one of which was a procedural process of placing layers from a set of discrete thicknesses based on the Boolean Exclusive OR (XOR) area difference between adjacent layers. The other method implemented an optimization solver to calculate the precise thickness of each layer to minimize the overall volumetric XOR difference between the sliced and original models. Both methods produced results that help validate the efficiency and practicality of the proposed layered manufacturing technique over existing AM technologies for large-scale applications.
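The discrete-thickness procedural approach can be sketched as a greedy loop (a hypothetical simplification, not the thesis's implementation): at each height, try the thickest available sheet first and accept it only if its stair-step mismatch against the profile stays under a tolerance, falling back to the thinnest sheet otherwise:

```python
# Hypothetical greedy slicer for an axisymmetric part described by its
# cross-section radius as a function of height.
def adaptive_slice(radius_at, height, thicknesses, tol):
    """Pick one sheet thickness per layer; thick sheets where the profile is gentle."""
    layers, z = [], 0.0
    while z < height - 1e-9:
        for t in sorted(thicknesses, reverse=True):      # try thickest sheet first
            t = min(t, height - z)                       # clamp the final layer
            if abs(radius_at(z + t) - radius_at(z)) <= tol:
                break                                    # stair-step error acceptable
        layers.append(t)                                 # else falls back to thinnest
        z += t
    return layers

# A cone tapers uniformly, so the chosen thickness is uniform; a sphere would
# demand thinner sheets near its poles where the surface turns quickly.
cone = adaptive_slice(lambda z: 10 - z, height=10, thicknesses=[0.5, 1, 2], tol=1.0)
```

The optimization variant described above would instead treat each thickness as a continuous variable and minimize the total volumetric XOR difference directly.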
Contributors: Stobinske, Paul Anthony (Author) / Ren, Yi (Thesis director) / Bucholz, Leonard (Committee member) / Mechanical and Aerospace Engineering Program (Contributor) / Barrett, The Honors College (Contributor)
Created: 2016-05
Description

Current trends in Computer Aided Engineering (CAE) involve the integration of legacy mesh-based finite element software with newer solid-modeling kernels or full CAD systems in order to simplify laborious or highly specialized tasks in engineering analysis. In particular, mesh generation is becoming increasingly automated. In addition, emphasis is increasingly placed on full assembly (multi-part) models, which in turn necessitates an automated approach to contact analysis. This task is challenging due to increases in algebraic system size, as well as increases in the number of distorted elements, both of which necessitate manual intervention to maintain accuracy and conserve computer resources. In this investigation, it is demonstrated that the use of a mesh-free B-Spline finite element basis for structural contact problems results in significantly smaller algebraic systems than mesh-based approaches for similar grid spacings. The relative error in calculated contact pressure is evaluated for simple two-dimensional smooth domains at discrete points within the contact zone and compared to the analytical Hertz solution, as well as traditional mesh-based finite element solutions for similar grid spacings. For smooth curved domains, the relative error in contact pressure is shown to be less than for bi-quadratic Serendipity elements. The finite element formulation draws on some recent innovations, in which the domain to be analyzed is integrated with the use of transformed Gauss points within the domain, and boundary conditions are applied via distance functions (R-functions). However, the basis is stabilized through a novel selective normalization procedure. In addition, a novel contact algorithm is presented in which the B-Spline support grid is re-used for contact detection. The algorithm is demonstrated for two simple two-dimensional assemblies. Finally, a modified Penalty Method is demonstrated for connecting elements with incompatible bases.
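The B-Spline basis underlying such a formulation is standard and can be evaluated with the Cox-de Boor recursion; the short sketch below (an illustration, not the dissertation's code) checks the partition-of-unity property on a clamped quadratic knot vector:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th degree-p B-spline basis function at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:            # skip zero-length spans (clamped ends)
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

# Quadratic basis on a clamped knot vector: the five basis functions sum to
# one (partition of unity) everywhere in the interior of the parameter range.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
vals = [bspline_basis(i, 2, 1.5, knots) for i in range(5)]
```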
Contributors: Grishin, Alexander (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joe (Committee member) / Hjelmstad, Keith (Committee member) / Huebner, Ken (Committee member) / Farin, Gerald (Committee member) / Peralta, Pedro (Committee member) / Arizona State University (Publisher)
Created: 2010
Description

The essence of this research is the reconciliation and standardization of feature fitting algorithms used in Coordinate Measuring Machine (CMM) software and the development of Inspection Maps (i-Maps) for representing geometric tolerances in the inspection stage based on these standardized algorithms. The i-Map is a hypothetical point-space that represents the substitute feature evaluated for an actual part in the inspection stage. The first step in this research is to investigate the algorithms used for evaluating substitute features in current CMM software. For this, a survey of feature fitting algorithms available in the literature was performed, and then a case study was done to reverse engineer the feature fitting algorithms used in commercial CMM software. The experiments showed that algorithms based on the least-squares technique are mostly used for GD&T inspection, and that this incorrect choice of fitting algorithm results in errors and deficiencies in the inspection process. Based on the results, a standardization of fitting algorithms is proposed in light of the definition provided in the ASME Y14.5 standard and an interpretation of manual inspection practices. Standardized algorithms for evaluating substitute features from CMM data, consistent with the ASME Y14.5 standard and manual inspection practices, are developed for each tolerance type applicable to planar features. Second, these standardized algorithms for substitute feature fitting are used to develop i-Maps for the size, orientation and flatness tolerances that apply to their respective feature types. Third, a methodology for Statistical Process Control (SPC) using the i-Maps is proposed by direct fitting of i-Maps into the parent T-Maps. Different methods of computing i-Maps, namely finding the mean, computing the convex hull, and principal component analysis, are explored.
The control limits for the process are derived from inspection samples, and a framework for statistical control of the process is developed. This also includes computation of basic SPC and process capability metrics.
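One fitting step of the kind discussed, a least-squares substitute plane followed by a flatness evaluation as the band spanned by the residuals, can be sketched as follows (a hypothetical illustration; the function name and sample points are assumptions, not the standardized algorithms themselves):

```python
import numpy as np

# Hypothetical sketch: fit a least-squares substitute plane z = a*x + b*y + c
# to CMM points, then report flatness as the total band spanned by residuals.
def flatness_ls(points):
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)  # substitute plane
    residuals = pts[:, 2] - A @ coeffs                      # deviations from it
    return residuals.max() - residuals.min()                # width of the band

# Four corners on a tilted plane plus one raised center point:
pts = [(0, 0, 0.0), (1, 0, 0.1), (0, 1, 0.1), (1, 1, 0.2), (0.5, 0.5, 0.15)]
f = flatness_ls(pts)
```

A minimum-zone fit, by contrast, would search for the plane orientation that makes this band as narrow as possible, which is one reason the choice of fitting algorithm matters.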
Contributors: Mani, Neelakantan (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

A defense-by-randomization framework is proposed as an effective defense mechanism against different types of adversarial attacks on neural networks. Experiments were conducted by selecting combinations of differently constructed image classification neural networks to observe which combinations applied to this framework were most effective in maximizing classification accuracy. Furthermore, the reasons why particular combinations were more effective than others are explored.
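The core of a defense-by-randomization scheme is small enough to sketch (hypothetical names, toy stand-ins for the actual networks): each query is answered by a classifier drawn at random from a pool, so an attacker cannot tailor a perturbation to one fixed network:

```python
import random

# Toy stand-ins for differently constructed classifiers: both agree on a
# clean input, but an adversarial input crafted against model 0 fools only it.
models = [lambda x: 1 if x == "adv" else 0,   # vulnerable to this attack
          lambda x: 0]                        # unaffected by it

def randomized_predict(models, x, rng=random):
    """Answer each query with a classifier drawn at random from the pool."""
    return rng.choice(models)(x)

# Clean inputs are classified consistently regardless of which model is drawn;
# an attack tuned to one fixed model now succeeds only part of the time.
clean_labels = {randomized_predict(models, "clean") for _ in range(20)}
```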
Contributors: Mazboudi, Yassine Ahmad (Author) / Yang, Yezhou (Thesis director) / Ren, Yi (Committee member) / School of Mathematical and Statistical Sciences (Contributor) / Economics Program in CLAS (Contributor) / Barrett, The Honors College (Contributor)
Created: 2019-05