Description
The main objective of this project was to create a framework for holistic ideation and to investigate the technical issues involved in building such an approach. Towards that goal, we explored different components of ideation (both logical and intuitive), characterized ideation states, and identified new ideation blocks along with strategies for overcoming them. One major contribution of this research is a method that facilitates easy traversal between ideation methods with different components, in order to support both creativity and functional quality. Another important part of the framework is the sensing of ideation states (blocks versus unfettered ideation) and the selection of matching ideation strategies most likely to facilitate progress. The ideation methods embedded in the initial holistic test bed include a physical effects catalog, a working principles catalog, TRIZ, Bio-TRIZ, and an artifacts catalog; repositories were created for each. The framework will also serve as a research tool for collecting large amounts of data from designers about their choice of ideation strategies and the strategies' effectiveness, and it facilitates effective documentation of design ideation paths. A computer tool supporting holistic ideation was developed, and case studies were run in which different designers solved the same design problem while their ideation states and choices of ideation strategy were documented.
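As a rough illustration of the state-sensing and strategy-matching idea described above, the sketch below pairs a sensed ideation block with candidate methods from the test bed's repositories. The block categories, pairings, and function names are entirely hypothetical; the thesis derives its matches empirically from designer studies.

```python
# Hypothetical mapping from sensed ideation blocks to candidate strategies.
# The categories and pairings are illustrative only, not the thesis's schema.
STRATEGY_MAP = {
    "fixation":      ["TRIZ", "Bio-TRIZ"],
    "knowledge_gap": ["Physical effects catalog", "Working principles catalog"],
    "no_block":      ["Artifacts catalog"],   # unfettered ideation: keep browsing
}

def suggest_strategies(sensed_state: str) -> list[str]:
    """Return the ideation methods most likely to help in the sensed state."""
    return STRATEGY_MAP.get(sensed_state, ["TRIZ"])   # assumed default fallback

print(suggest_strategies("fixation"))   # ['TRIZ', 'Bio-TRIZ']
```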
Contributors: Mohan, Manikandan (Author) / Shah, Jami J. (Thesis advisor) / Huebner, Kenneth (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Manufacturing tolerance charts are the most common tool for manufacturing tolerance transfer today, but they are limited to one dimension. Some research has addressed three-dimensional geometric tolerances, but it remains too theoretical for operator-level use. This research presents a new three-dimensional model for tolerance transfer in manufacturing process planning that is user friendly in the sense that it is built upon Coordinate Measuring Machine (CMM) readings, which are readily available in any modern manufacturing facility. The model can handle datum reference changes between non-orthogonal datums (squeezed datums), non-linearly oriented datums (twisted datums), etc. A graph-theoretic approach based upon ACIS, C++, and MFC is laid out to facilitate automation of the model, and a completely new approach to determining dimensions and tolerances for the manufacturing process plan is presented. Secondly, a new statistical model for tolerance analysis is presented, based upon the joint probability distribution of trivariate normally distributed variables. 4-D probability maps have been developed in which the probability value of a point in space is represented by the size and color of the marker; points inside the part map represent the pass percentage for manufactured parts. The effect of refinement with form and orientation tolerances is highlighted by comparing the resulting pass percentage with the pass percentage for size tolerance alone. Delaunay triangulation and ray-tracing algorithms automate the identification of points inside and outside the part map, and proof-of-concept software demonstrates the model by computing pass percentages for various cases. The model is further extended to assemblies by employing convolution algorithms on two trivariate statistical distributions to arrive at the statistical distribution of the assembly. A map generated by applying Minkowski sum techniques to the individual part maps is superimposed on the probability point cloud resulting from the convolution, and the same Delaunay triangulation and ray-tracing algorithms determine the assembleability percentage for the assembly.
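To make the pass-percentage idea concrete, here is a minimal Monte Carlo sketch (the thesis uses Delaunay triangulation with ray tracing; this sketch uses SciPy's Delaunay find_simplex as the inside/outside test). The covariance matrix and the cubic stand-in for the part map are assumptions for illustration, not values from the thesis.

```python
import numpy as np
from scipy.spatial import Delaunay

# Sample deviations of three correlated dimensional variables and count how
# many land inside a tolerance map. Covariance and map are stand-ins.
rng = np.random.default_rng(0)
cov = 1e-4 * np.array([[1.0, 0.3, 0.1],
                       [0.3, 1.0, 0.2],
                       [0.1, 0.2, 1.0]])         # illustrative covariance, mm^2
samples = rng.multivariate_normal(np.zeros(3), cov, size=100_000)

# Placeholder part map: a +/-0.02 mm cube; the thesis builds the real map
# from the size/form/orientation tolerance limits.
verts = 0.02 * np.array([[x, y, z] for x in (-1, 1)
                                   for y in (-1, 1)
                                   for z in (-1, 1)], dtype=float)
tri = Delaunay(verts)                            # triangulate the map once
inside = tri.find_simplex(samples) >= 0          # vectorized point-in-map test
print(f"pass percentage: {100.0 * inside.mean():.2f}%")
```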
Contributors: Khan, M Nadeem Shafi (Author) / Phelan, Patrick E (Thesis advisor) / Montgomery, Douglas C. (Committee member) / Farin, Gerald (Committee member) / Roberts, Chell (Committee member) / Henderson, Mark (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This thesis concerns the role of geometric imperfections in assemblies in which the location of a target part depends on supports at two features. In some applications, such as a turbo-machine rotor that is supported by a series of parts at each bearing, it is the interference or clearance at a functional target feature, such as at the blades, that must be controlled. The first part of this thesis relates the limits of location for the target part to the geometric imperfections of other parts stacked up in parallel paths; in this section, parts are considered rigid (non-deformable). By understanding how much variation from the supporting parts contributes to variation at the target feature, a designer can better utilize the tolerance budget when assigning values to individual tolerances. In this work, the T-Map®, a spatial math model, is used to model tolerance accumulation in parallel assemblies. In other applications where parts are flexible, deformations are induced when parts in parallel are clamped together during assembly. Presuming that perfectly manufactured parts have been designed to fit perfectly together and produce zero deformations, the clamping-induced deformations result entirely from the imperfect geometry produced during manufacture. The magnitudes and types of these deformations are a function of part dimensions and material stiffnesses, and they are limited by the design tolerances that control manufacturing variations. If uncontrolled, these manufacturing variations may produce stresses high enough at assembly that premature failure occurs before the end of the design life. The last part of the thesis relates the limit on the largest von Mises stress in one part to the functional tolerance limits that must be set at the beginning of a tolerance analysis of parts in such an assembly.
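Since the last part of the thesis bounds the largest von Mises stress, a minimal computation of that quantity from a 3x3 Cauchy stress tensor may help fix ideas; the numerical stress state below is invented for illustration and does not come from the thesis.

```python
import numpy as np

def von_mises(s: np.ndarray) -> float:
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor."""
    dev = s - np.trace(s) / 3.0 * np.eye(3)       # deviatoric part
    return float(np.sqrt(1.5 * np.sum(dev * dev)))

# Illustrative clamping-induced stress state (MPa); values are made up.
sigma = np.array([[120.0, 30.0,  0.0],
                  [ 30.0, 80.0,  0.0],
                  [  0.0,  0.0, 10.0]])
print(f"von Mises stress: {von_mises(sigma):.1f} MPa")
```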
Contributors: Jaishankar, Lupin Niranjan (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Mignolet, Marc P (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Tolerances on line profiles are used to control the cross-sectional shapes of parts, such as turbine blades. A full life cycle for many mechanical devices depends (i) on a wise assignment of tolerances during design and (ii) on careful quality control of the manufacturing process to ensure adherence to the specified tolerances. This thesis describes a new method for quality control of a manufacturing process that improves the conversion of points measured on a part into a geometric entity that can be compared directly with tolerance specifications. The focus of the thesis is the development of a new computational method for obtaining the least-squares fit of a set of points that have been measured with a coordinate measuring machine along a line profile. The pseudo-inverse of a rectangular matrix is used to convert the measured points to the least-squares fit of the profile. Numerical examples are included for convex and concave line profiles formed from line segments and circular arc segments.
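A minimal sketch in this spirit: fitting a circular arc segment (one of the two segment types mentioned) to noisy measured points via the Kasa linearization, so that the design matrix is rectangular and the pseudo-inverse does the work. The data are synthetic, and the thesis's actual formulation for whole multi-segment profiles is not reproduced here.

```python
import numpy as np

# Least-squares circle fit (Kasa linearization) using the pseudo-inverse of a
# rectangular matrix. The measured points below are synthetic, not CMM data.
rng = np.random.default_rng(1)
theta = np.linspace(0.2, 1.2, 15)                 # a partial arc
pts = np.c_[10 + 25 * np.cos(theta), 5 + 25 * np.sin(theta)]
pts += rng.normal(scale=0.01, size=pts.shape)     # measurement noise

# Linearized model: x^2 + y^2 = 2*cx*x + 2*cy*y + k, with k = r^2 - cx^2 - cy^2
A = np.c_[2 * pts, np.ones(len(pts))]             # rectangular design matrix
b = (pts ** 2).sum(axis=1)
cx, cy, k = np.linalg.pinv(A) @ b                 # least squares via pseudo-inverse
r = np.sqrt(k + cx ** 2 + cy ** 2)
print(f"center = ({cx:.3f}, {cy:.3f}), radius = {r:.3f}")
```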
Contributors: Savaliya, Samir (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Creative design lies at the intersection of novelty and technical feasibility. These objectives can be achieved through cycles of divergence (idea generation) and convergence (idea evaluation) in conceptual design; the focus of this thesis is on the latter. The evaluation may involve any aspect of technical feasibility and may be desired at the component, sub-system, or full-system level. Two issues are considered in this work: (1) information about design ideas is incomplete, informal, and sketchy; and (2) designers often work at multiple levels, so different aspects or subsystems may be at different levels of abstraction. High-fidelity analysis and simulation tools are therefore not appropriate for this purpose. This thesis examines the requirements for a simulation tool and how it could facilitate concept evaluation. The specific tasks reported are: (1) identifying the typical types of information available after an ideation session; (2) identifying the typical types of technical evaluations done in the early stages; and (3) determining how to conduct low-fidelity design evaluation given a well-defined feasibility question. A computational tool for supporting idea evaluation was designed and implemented. It was assumed that the results of the ideation session are represented as a morphological chart in which each entry is expressed as some combination of a sketch, text, and references to physical effects and machine components. Approximately 110 physical effects were identified and represented in terms of algebraic equations, physical variables, and a textual description. A common ontology of physical variables was created so that physical effects can be networked together when variables are shared; this allows users to synthesize complex behaviors from simple ones without assuming any solution sequence. A library of 16 machine elements was also created, and users were given instructions for incorporating them. To support quick analysis, differential equations are transformed into algebraic equations by replacing differential terms with steady-state differences, only steady-state behavior is considered, and interval arithmetic is used for modeling. The tool is implemented in MATLAB, and a number of case studies show how it works.
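A minimal sketch of the interval-arithmetic evaluation style described above, assuming a toy pair of chained effects (Ohm's law and Joule heating) standing in for the ~110 effects in the repository; the Interval class implements only multiplication, which is enough to show how intervals propagate across a shared variable.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __mul__(self, other: "Interval") -> "Interval":
        # Interval product: take the extremes over all endpoint combinations.
        c = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(c), max(c))

# Shared-variable chaining of two toy effects: V = I * R, then P = V * I.
I = Interval(0.9, 1.1)        # current, A (designer's rough estimate)
R = Interval(95.0, 105.0)     # resistance, ohm
V = I * R                     # voltage interval propagates automatically
P = V * I                     # power via the shared variable V (note: naive
                              # interval arithmetic gives a conservative bound)
print(f"V in [{V.lo:.1f}, {V.hi:.1f}] V, P in [{P.lo:.1f}, {P.hi:.1f}] W")
```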
Contributors: Khorshidi, Maryam (Author) / Shah, Jami J. (Thesis advisor) / Wu, Teresa (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Current trends in Computer Aided Engineering (CAE) involve the integration of legacy mesh-based finite element software with newer solid-modeling kernels or full CAD systems in order to simplify laborious or highly specialized tasks in engineering analysis. In particular, mesh generation is becoming increasingly automated. In addition, emphasis is increasingly placed on full assembly (multi-part) models, which in turn necessitates an automated approach to contact analysis. This task is challenging due to increases in algebraic system size, as well as increases in the number of distorted elements, both of which necessitate manual intervention to maintain accuracy and conserve computer resources. In this investigation, it is demonstrated that the use of a mesh-free B-spline finite element basis for structural contact problems results in significantly smaller algebraic systems than mesh-based approaches with similar grid spacings. The relative error in calculated contact pressure is evaluated for simple two-dimensional smooth domains at discrete points within the contact zone and compared to the analytical Hertz solution, as well as to traditional mesh-based finite element solutions with similar grid spacings. For smooth curved domains, the relative error in contact pressure is shown to be less than that for bi-quadratic Serendipity elements. The finite element formulation draws on some recent innovations in which the domain to be analyzed is integrated with the use of transformed Gauss points within the domain and boundary conditions are applied via distance functions (R-functions). However, the basis is stabilized through a novel selective normalization procedure. In addition, a novel contact algorithm is presented in which the B-spline support grid is re-used for contact detection. The algorithm is demonstrated for two simple two-dimensional assemblies. Finally, a modified penalty method is demonstrated for connecting elements with incompatible bases.
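For readers unfamiliar with the basis in question, the sketch below evaluates univariate B-spline basis functions by the standard Cox-de Boor recursion and checks their partition-of-unity property, the property that boundary truncation destroys and that a normalization procedure can restore. This is generic textbook machinery, not the thesis's stabilized formulation or contact algorithm.

```python
import numpy as np

def bspline_basis(i: int, p: int, t: float, knots: np.ndarray) -> float:
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, p - 1, t, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + p + 1] - t) / d2 * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

# Quadratic basis on a uniform knot grid; inside the domain the active
# functions sum to 1 (partition of unity).
knots = np.arange(8.0)
t = 3.4
vals = [bspline_basis(i, 2, t, knots) for i in range(5)]
print(vals, "sum =", sum(vals))   # sum = 1.0
```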
Contributors: Grishin, Alexander (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joe (Committee member) / Hjelmstad, Keith (Committee member) / Huebner, Ken (Committee member) / Farin, Gerald (Committee member) / Peralta, Pedro (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
The essence of this research is the reconciliation and standardization of the feature-fitting algorithms used in Coordinate Measuring Machine (CMM) software, and the development of Inspection Maps (i-Maps) for representing geometric tolerances in the inspection stage based on these standardized algorithms. The i-Map is a hypothetical point-space that represents the substitute feature evaluated for an actual part in the inspection stage. The first step in this research is to investigate the algorithms used for evaluating substitute features in current CMM software. For this, a survey of feature-fitting algorithms available in the literature was performed, and then a case study was done to reverse engineer the feature-fitting algorithms used in commercial CMM software. The experiments showed that algorithms based on the least-squares technique are mostly used for GD&T inspection, and that this wrong choice of fitting algorithm results in errors and deficiencies in the inspection process. Based on these results, a standardization of fitting algorithms is proposed in light of the definitions provided in the ASME Y14.5 standard and an interpretation of manual inspection practices. Standardized algorithms for evaluating substitute features from CMM data, consistent with the ASME Y14.5 standard and with manual inspection practices, are developed for each tolerance type applicable to planar features. Second, these standardized substitute-feature-fitting algorithms are used to develop i-Maps for size, orientation, and flatness tolerances that apply to their respective feature types. Third, a methodology for Statistical Process Control (SPC) using the i-Maps is proposed, based on directly fitting i-Maps into the parent T-Maps. Different methods of computing i-Maps, namely finding the mean, computing the convex hull, and principal component analysis, are explored. The control limits for the process are derived from inspection samples, and a framework for statistical control of the process is developed, including computation of basic SPC and process capability metrics.
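As a concrete instance of the least-squares substitute-feature fit discussed above, here is a plane fit to synthetic CMM-style points via SVD (numerically the same principal component analysis mentioned later), together with the flatness band it implies. The point data are invented; note that the thesis's argument is precisely that such least-squares fits often diverge from the ASME Y14.5 definitions.

```python
import numpy as np

# Least-squares substitute plane for measured points via SVD/PCA.
# The points are synthetic, not inspection data from the thesis.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 50, size=(200, 2))                      # sampled x, y (mm)
z = 0.002 * xy[:, 0] - 0.001 * xy[:, 1] + rng.normal(scale=0.005, size=200)
P = np.c_[xy, z]

centroid = P.mean(axis=0)
_, _, vt = np.linalg.svd(P - centroid)                      # principal directions
normal = vt[-1]                                             # smallest-variance axis
dists = (P - centroid) @ normal                             # signed deviations
print("flatness band (max - min distance):", dists.max() - dists.min())
```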
Contributors: Mani, Neelakantan (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
There is very little in the way of prescriptive procedures to guide designers in tolerance specification. This shortcoming motivated the group at the Design Automation Lab to automate the tolerancing of mechanical assemblies. GD&T data generated by the Auto-Tolerancing software is semantically represented using a neutral Constraint Tolerance Feature (CTF) graph file format that is consistent with the ASME Y14.5 standard and the ISO STEP Part 21 file format. The primary objective of this research is to communicate GD&T information from the CTF file in a neutral, machine-readable format. The latest STEP AP 242 (ISO 10303-242), "Managed model based 3D engineering", aims to support smart manufacturing by capturing semantic Product Manufacturing Information (PMI) within the 3D model and by helping with long-term archiving of product information. In line with the recommended practices published by the CAx Implementor Forum, this research discusses the implementation of a CTF-to-AP 242 translator. The input geometry, available in STEP AP 203 format, is pre-processed using the STEP-NC DLL and 3D InterOp: the former is used to attach persistent IDs to the topological entities in the STEP file, while the latter retains those IDs during translation to ACIS entities for consumption by other modules of the Auto-Tolerancing software. The GD&T in the CTF file is associated with the input geometry through these persistent IDs. The C++ libraries used for the translation to STEP AP 242 are provided by StepTools Inc through the STEP-NC DLL. Finally, the output STEP file was tested using available AP 242 readers and shows full conformance with the STEP standard. Using the output AP 242 file, semantic GD&T data can now be automatically consumed by downstream applications such as Computer Aided Process Planning (CAPP), Computer Aided Inspection (CAI), Computer Aided Tolerance Systems (CATS), and Coordinate Measuring Machines (CMMs).
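The sketch below models only the persistent-ID association step in plain Python; the CTF schema and the STEP-NC DLL and 3D InterOp APIs are not reproduced here, so every identifier, entity handle, and field name shown is hypothetical.

```python
# Hypothetical sketch: re-attaching GD&T callouts from a CTF graph to geometry
# through persistent IDs that survive the AP 203 -> ACIS -> AP 242 pipeline.
ctf_callouts = [
    {"tolerance": "position", "value": 0.10, "datums": ["A", "B"], "face_id": 42},
    {"tolerance": "flatness", "value": 0.02, "datums": [],         "face_id": 17},
]

# face persistent-ID -> output entity handle produced by the geometry translator
id_to_entity = {42: "#1205 ADVANCED_FACE", 17: "#1377 ADVANCED_FACE"}

for c in ctf_callouts:
    entity = id_to_entity[c["face_id"]]    # association survives the translation
    print(f"{c['tolerance']} {c['value']} | datums {c['datums']} -> {entity}")
```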
Contributors: Venkiteswaran, Adarsh (Author) / Shah, Jami J. (Thesis advisor) / Hardwick, Martin (Committee member) / Davidson, Joseph K. (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
A process plan is an instruction set for the manufacture of parts, generated from detailed design drawings or CAD models. While these plans are highly detailed about machines, tools, fixtures, and operation parameters, tolerances typically show up in a less formal manner in such plans, if at all. It is not uncommon to see only dimensional plus/minus values on rough sketches accompanying the instructions. Design drawings, on the other hand, use standard GD&T (Geometric Dimensioning and Tolerancing) symbols with datums and DRFs (Datum Reference Frames) clearly specified. This is not to say that process planners do not consider tolerances; tolerances are implied by the choices of fixtures, tools, machines, and operations. When converting design tolerances to the manufacturing datum flow, process planners perform tolerance charting, which is based on the operation sequence, but the resulting plans cannot be audited for conformance to the design specification.

In this thesis, I present a framework for explicating the GD&T schema implied by machining process plans. The first step is to derive the DRFs from the fixturing method in each set-up. Basic dimensions for the features to be machined in each set-up are then determined with respect to the extracted DRF. Using shop data for the machines and operations involved, the range of possible geometric variations is estimated for each type of tolerance (form, size, orientation, and position), and the sequence of manufacturing operations determines the datum flow chain. Once we have a formal manufacturing GD&T schema, we can analyze it and compare it to the tolerance specifications from design using the T-map math model. Since this model is based on the manufacturing process plan, it is called the resulting T-map, or m-map. The process plan can then be validated by adjusting parameters so that the m-map lies within the T-map created for the design drawing. How the m-map is created so that it can be compared with the T-map is the focus of this research.
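A minimal sketch of the containment test that validation reduces to: because both maps are convex, the m-map lies within the T-map exactly when every m-map vertex satisfies every half-space of the T-map's convex hull. Both vertex sets below are placeholders, not maps computed from a real process plan.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Placeholder T-map: a unit cube of allowable variation.
t_map = ConvexHull(np.array([[x, y, z] for x in (-1, 1)
                                       for y in (-1, 1)
                                       for z in (-1, 1)], dtype=float))
# Placeholder m-map: a smaller octahedron of variation the plan can produce.
m_map_vertices = 0.6 * np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)

# hull.equations rows are [n, b] with n @ x + b <= 0 for interior points.
A, b = t_map.equations[:, :-1], t_map.equations[:, -1]
contained = bool(np.all(A @ m_map_vertices.T + b[:, None] <= 1e-9))
print("process plan conforms to design spec:", contained)
```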
Contributors: Haghighi, Payam (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Metal castings are selectively machined based on dimensional control requirements. To ensure that all the finished surfaces are fully machined, each as-cast part needs to be measured and then adjusted optimally in its fixture. The topics of this thesis address two parts of this process: data translation, and fitting features to the clouds of points measured on each cast part. For the first, a CAD model of the finished part must be communicated to the machine shop for performing the various machining operations on the metal casting. The data flow must include GD&T specifications along with any other special notes that need to be communicated to the machinist. Current data exchange among various digital applications is limited to translation of CAD geometry only, via STEP AP203. Therefore, an algorithm was developed to read, store, and translate the data from a CAD file (for example, SolidWorks or CREO) to a standard, machine-readable format (the ACIS format, *.sat). Second, the geometry of cast parts varies from piece to piece, and hence the fixture set-up parameters for each part must be adjusted individually. To predictively determine these adjustments, the datum surfaces and to-be-machined surfaces are scanned individually and the point clouds reduced to feature fits. The scanned data are stored as separate point cloud files. The labels associated with the datum and to-be-machined (TBM) features are extracted from the *.sat file and matched with the file names of the point cloud data to identify the data for the respective features. The point cloud data and the CAD model are then used to fit the appropriate features (features at maximum material condition (MMC) for datums and features at least material condition (LMC) for TBM features) using the existing normative feature fitting (nFF) algorithm. Once the feature fitting is complete, a global datum reference frame (GDRF) is constructed based on the locating method that will be used to machine the part; the locating method is extracted from a fixture library that specifies the type of fixturing used. All entities are transformed from their local coordinate systems into the GDRF. The nominal geometry, the fitted features, and the GD&T information are then stored in a neutral file format called the Constraint Tolerance Feature (CTF) graph. The final outputs are used to identify the locations of the critical features on each part, and these in turn establish the adjustments for the part's set-up prior to machining, in another module that is not part of this thesis.
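A minimal sketch of the final transformation step, moving a fitted feature from its local frame into the GDRF with a 4x4 homogeneous transform; the rotation angle and offsets are assumed values, since the real ones come from the fixture library's locating method.

```python
import numpy as np

def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

theta = np.radians(30.0)                  # assumed set-up rotation about z
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
T_local_to_gdrf = make_transform(R, np.array([120.0, 40.0, 15.0]))  # mm offsets

# Three points of a fitted feature in its local frame, moved into the GDRF.
feature_pts_local = np.array([[0, 0, 0], [25, 0, 0], [25, 10, 0]], dtype=float)
h = np.c_[feature_pts_local, np.ones(len(feature_pts_local))]  # homogeneous coords
feature_pts_gdrf = (T_local_to_gdrf @ h.T).T[:, :3]
print(feature_pts_gdrf)
```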
Contributors: Ramnath, Satchit (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph (Committee member) / Hansford, Dianne (Committee member) / Arizona State University (Publisher)
Created: 2016