Matching Items (14)
Description

The friction condition is an important factor in controlling the compression process in metal forming. Friction calibration maps (FCMs) are widely used to estimate the friction between the workpiece and the die. However, in standard FEA the friction condition is defined by a friction coefficient (µ), while the FCM uses a constant shear friction factor (m) to describe the friction condition. The purpose of this research is to find a method to convert the m factor to a µ factor, so that FEA can be used to simulate ring tests with µ. The research is carried out with FEA and Design of Experiments (DOE). FEA is used to simulate the ring compression test. A 2D quarter model is adopted as the geometric model, and a bilinear material model is used in the nonlinear FEA. After the model is established, validation tests are conducted by examining the influence of Poisson's ratio on the ring compression test. It is shown that the established FEA model is valid, especially when Poisson's ratio is set close to 0.5 in the FEA. Material folding is present in this model, and µ factors are applied to all surfaces of the ring. It is also found that the reduction ratio of the ring and the slopes of the FCM can be used to describe the deformation of the ring specimen. With the baseline FEA model, formulas relating the deformation parameters, material mechanical properties, and µ factors are generated through statistical analysis of the ring-compression simulation results. Based on these formulas, a method is found to substitute the m factor with µ factors for a particular material by selecting and applying the µ factor in time sequence. By converting the m factor into a µ factor, cold forging can be simulated.
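As a rough illustration of the DOE-plus-regression step (the thesis's actual formulas and data are not reproduced here), a minimal Python sketch with placeholder simulation results might fit a response surface for µ from the two deformation descriptors named above:

```python
# Minimal sketch: fit a response-surface formula relating ring-test
# deformation parameters to the friction coefficient, as in a DOE study.
# The data arrays are placeholders, not results from the thesis.
import numpy as np

# Hypothetical DOE results: each row is one ring-compression simulation.
# reduction = height reduction ratio, slope = local FCM slope, mu = input µ.
reduction = np.array([0.10, 0.20, 0.30, 0.40, 0.50, 0.60])
slope     = np.array([0.05, 0.12, 0.22, 0.35, 0.51, 0.70])
mu        = np.array([0.08, 0.10, 0.13, 0.17, 0.22, 0.28])

# Quadratic response surface: mu ~ c0 + c1*r + c2*s + c3*r*s + c4*r^2 + c5*s^2
X = np.column_stack([np.ones_like(reduction), reduction, slope,
                     reduction * slope, reduction**2, slope**2])
coeffs, *_ = np.linalg.lstsq(X, mu, rcond=None)

def predict_mu(r, s):
    """Evaluate the fitted formula at a new deformation state."""
    x = np.array([1.0, r, s, r * s, r**2, s**2])
    return float(x @ coeffs)

print(predict_mu(0.25, 0.17))
```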
Contributors: Kexiang (Author) / Shah, Jami (Thesis advisor) / Davidson, Joseph (Committee member) / Trimble, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The goal of this research project is to develop a DOF (degree of freedom) algebra for entity clusters to support tolerance specification, validation, and tolerance automation. This representation is required to capture the relation between geometric entities, metric constraints, and tolerance specifications. The project is part of an ongoing effort to create a bi-level model of GD&T (Geometric Dimensioning and Tolerancing). This thesis presents the systematic derivation of the degrees of freedom of entity clusters corresponding to tolerance classes. The clusters can be datum reference frames (DRFs) or targets. A binary vector representation of degrees of freedom and operations for combining them are proposed, and an algebraic method is developed using this DOF representation. The ASME Y14.5.1 companion to the GD&T standard gives an exhaustive tabulation of active and invariant degrees of freedom (DOF) for datum reference frames (DRFs); the algebra is validated by checking it against all cases in the Y14.5.1 tabulation. The algebra allows the derivation of general rules for tolerance specification and validation. A computer tool is implemented to support GD&T specification and validation; it outputs the geometric and tolerance information in the form of a CTF (Constraint-Tolerance-Feature) file which can be used for tolerance stack analysis.
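The binary-vector idea can be sketched in a few lines. The 6-bit encoding and the bitwise-AND combination rule below are an illustration of the general approach, not the thesis's full algebra:

```python
# Minimal sketch of a binary DOF vector: 6 bits for (Tx, Ty, Tz, Rx, Ry, Rz),
# where 1 means the DOF is still invariant (unconstrained). Combining two
# entities into a cluster is modeled here as a bitwise AND: only DOFs left
# invariant by both survive.
AXES = ("Tx", "Ty", "Tz", "Rx", "Ry", "Rz")

def dof(*names: str) -> int:
    """Build a DOF bit vector from axis names, e.g. dof('Tz', 'Rz')."""
    v = 0
    for n in names:
        v |= 1 << AXES.index(n)
    return v

def combine(a: int, b: int) -> int:
    """Invariant DOFs of a cluster: those invariant for both entities."""
    return a & b

def show(v: int) -> list:
    return [n for i, n in enumerate(AXES) if v >> i & 1]

plane = dof("Tx", "Ty", "Rz")      # plane normal to z: in-plane motion invariant
axis  = dof("Tz", "Rz")            # axis along z: slide and spin invariant
print(show(combine(plane, axis)))  # -> ['Rz'] under this toy encoding
```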
Contributors: Shen, Yadong (Author) / Shah, Jami (Thesis advisor) / Davidson, Joseph (Committee member) / Huebner, Kenneth (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The objective of this research is to develop methods for generating the Tolerance-Map for a line-profile that is specified by a designer to control the geometric profile shape of a surface. After development, the aim is to find one that can be easily implemented in computer software using existing libraries. Two methods were explored: the parametric modeling method and the decomposed modeling method. The Tolerance-Map (T-Map) is a hypothetical point-space, each point of which represents one geometric variation of a feature in its tolerance-zone. T-Maps have been produced for most of the tolerance classes that are used by designers, but, prior to the work of this project, the method of construction required considerable intuitive input, rather than being based primarily on automated computer tools. Tolerances on line-profiles are used to control cross-sectional shapes of parts, such as every cross-section of a mildly twisted compressor blade. Such tolerances constrain geometric manufacturing variations within a specified two-dimensional tolerance-zone. A single profile tolerance may be used to control position, orientation, and form of the cross-section. Four independent variables capture all of the profile deviations: two independent translations in the plane of the profile, one rotation in that plane, and the size-increment necessary to identify one of the allowable parallel profiles. For the selected method of generation, the line profile is decomposed into three types of segments, a primitive T-Map is produced for each segment, and finally the T-Maps from all the segments are combined to obtain the T-Map for the given profile. The types of segments are the (straight) line-segment, circular arc-segment, and the freeform-curve segment. The primitive T-Maps are generated analytically, and, for freeform-curves, they are built approximately with the aid of the computer. A deformation matrix is used to transform the primitive T-Maps to a single coordinate system for the whole profile. The T-Map for the whole line profile is generated by the Boolean intersection of the primitive T-Maps for the individual profile segments. This computer-implemented method can generate T-Maps for open profiles, closed ones, and those containing concave shapes.
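The final Boolean-intersection step lends itself to existing libraries, as the abstract suggests. A minimal sketch, assuming each primitive T-Map has already been transformed to the common profile frame and expressed as linear halfspaces A·x + b ≤ 0 in the 4D space (two translations, one rotation, the size increment); the numbers are placeholders:

```python
# Stacking the halfspace rows of all primitive T-Maps and running Qhull once
# realizes their Boolean intersection. Values are illustrative only.
import numpy as np
from scipy.spatial import HalfspaceIntersection

t = 0.1  # hypothetical profile tolerance band

def box_halfspaces(limits):
    """Axis-aligned limits |x_i| <= limits[i] as rows [A | b], A.x + b <= 0."""
    rows = []
    for i, lim in enumerate(limits):
        a = np.zeros(5)
        a[i], a[4] = 1.0, -lim     #  x_i - lim <= 0
        rows.append(a.copy())
        a[i] = -1.0                # -x_i - lim <= 0
        rows.append(a)
    return np.array(rows)

# Two primitive T-Maps (e.g. from two profile segments), already expressed
# in the common coordinate system for the whole profile.
primitive_a = box_halfspaces([t, t, 0.5 * t, t])
primitive_b = box_halfspaces([0.8 * t, 1.2 * t, 0.4 * t, t])

stacked = np.vstack([primitive_a, primitive_b])
interior = np.zeros(4)             # the nominal profile lies inside every map
tmap = HalfspaceIntersection(stacked, interior)
print(len(tmap.intersections), "vertices in the combined T-Map")
```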
Contributors: He, Yifei (Author) / Davidson, Joseph (Thesis advisor) / Shah, Jami (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The focus of this research is to investigate methods for material substitution for the purpose of re-engineering legacy systems when information about the form, fit, and function of replacement parts is incomplete. The primary motive is to extract as much useful information about a failed legacy part as possible and to use fuzzy-logic rules to identify the unknown parameter values. Machine elements can fail by any number of failure modes, but the most probable failure modes given the service conditions are considered the critical failure modes. Three main parameters are of key interest in identifying the critical failure mode of the part. Critical failure modes are then mapped directly to material properties. Target material-property values are calculated from the property values of the originally used material and from the design goals. The material database is then searched for new candidate materials that satisfy the goals as well as constraints on manufacturing and raw-stock availability. Uncertainty in the extracted data is modeled using fuzzy logic: membership functions model the imprecise nature of each available parameter, and rule sets characterize the imprecise dependencies between the parameters and decide the unknown parameter values despite the incompleteness. A final confidence level for each material in the pool of candidates is a direct indication of uncertainty. All the candidates satisfy the goals and constraints to varying degrees, and the final selection is left to the designer's discretion. The process is automated by software that takes incomplete data as input, uses fuzzy logic to extract more information, and queries the material database with a constrained search to find candidate alternatives.
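A minimal sketch of the fuzzy machinery described above: a triangular membership function and one hypothetical rule that grades a candidate material. The parameter names, thresholds, and material values are illustrative, not from the thesis:

```python
def tri(x, a, b, c):
    """Triangular membership: rises from a to peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rule_fatigue_candidate(yield_mpa, hardness_hb):
    """IF yield strength is 'adequate' AND hardness is 'moderate'
    THEN confidence the part survives fatigue. AND is modeled as min()."""
    adequate = tri(yield_mpa, 250.0, 400.0, 600.0)
    moderate = tri(hardness_hb, 120.0, 180.0, 260.0)
    return min(adequate, moderate)

# Hypothetical candidate pool: (yield strength MPa, Brinell hardness).
candidates = {"AISI 1045": (450.0, 170.0), "6061-T6": (276.0, 95.0)}
for name, (ys, hb) in candidates.items():
    print(name, round(rule_fatigue_candidate(ys, hb), 2))
```

The per-rule degrees would then be aggregated across all rules into the final confidence level that ranks the candidate pool.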
Contributors: Balaji, Srinath (Author) / Shah, Jami (Thesis advisor) / Davidson, Joseph (Committee member) / Huebner, Kenneth (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The focus of this investigation is a renewed assessment of nonlinear reduced order models (ROMs) for the accurate prediction of the geometrically nonlinear response of a curved beam. In light of difficulties encountered in an earlier modeling effort, the various steps involved in the construction of the reduced order model are carefully reassessed. The selection of the basis functions is addressed first by comparison with the results of proper orthogonal decomposition (POD) analysis. The normal basis functions suggested earlier, i.e. the transverse linear modes of the corresponding flat beam, are shown in fact to be very close to the POD eigenvectors of the normal displacements and are thus retained in the present effort. A strong connection is similarly established between the POD eigenvectors of the tangential displacements and the dual modes, which are accordingly selected to complement the normal basis functions. The identification of the parameters of the reduced order model is revisited next, and it is observed that the standard identification approach does not capture well the occurrence of snap-throughs. On this basis, a revised approach is proposed and assessed first on the static, symmetric response of the beam to a uniform load. A very good to excellent match between the full finite element and ROM-predicted responses validates the new identification procedure and motivates its application to the dynamic response of the beam, which exhibits both symmetric and antisymmetric motions. While not quite as accurate as in the static case, the reduced order model predictions match their full Nastran counterparts well and support the reduced order model development strategy.
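The POD comparison step can be sketched compactly: displacement snapshots are stacked column-wise and the left singular vectors give the POD eigenvectors against which the chosen ROM basis is checked. A minimal sketch with synthetic data standing in for the finite element snapshots:

```python
# POD via SVD of a snapshot matrix. The data here are synthetic placeholders,
# not the curved-beam results from the thesis.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snap = 200, 40
# Hypothetical snapshot matrix: a few coherent "modes" plus small noise.
modes = rng.standard_normal((n_dof, 3))
amps = rng.standard_normal((3, n_snap))
snapshots = modes @ amps + 0.01 * rng.standard_normal((n_dof, n_snap))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1
pod_basis = U[:, :k]   # compare these vectors against the candidate ROM basis
print("retained POD modes:", k)
```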
Contributors: Zhang, Yaowen (Author) / Mignolet, Marc P (Thesis advisor) / Davidson, Joseph (Committee member) / Spottswood, Stephen M (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

A low-cost expander-combustor device that takes compressed air, adds thermal energy, and then expands the gas to drive an electrical generator is to be designed by modifying an existing reciprocating spark-ignition engine. The engine used is the 6.5 hp Briggs and Stratton series 122600 engine. Compressed air stored in a tank at a given pressure is introduced during the compression stage of the engine cycle to reduce pump work; in the modified design, the intake- and exhaust-valve timings are changed to achieve this process. The time required to fill the combustion chamber with compressed air to the storage pressure immediately before spark, and the state of the air with respect to crank angle, are modeled numerically using a crank-step energy and mass balance. The results are used to complete the engine-cycle analysis based on air-standard assumptions and an air-fuel ratio of 15 for gasoline. It is found that at the baseline storage conditions (280 psi, 70 °F) the modified engine does not satisfy the imposed constraint of staying below the maximum pressure of the unmodified engine. A new storage pressure of 235 psi is recommended, which provides only a 7.7% increase in thermal efficiency for the same work output. Modifying the engine for this low efficiency gain is not recommended.
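A crank-step fill model of this kind can be sketched as a marching loop: update cylinder volume from slider-crank kinematics, add mass through an orifice from the storage tank, and recompute the chamber state. The geometry, orifice area, and the isothermal update below are illustrative assumptions, not the thesis's model; only the tank conditions (280 psi, ~70 °F) come from the abstract:

```python
import math

R = 287.0            # J/(kg K), air
T_tank = 294.0       # K (~70 F)
p_tank = 1.93e6      # Pa (~280 psia)
bore, stroke, conrod, cr = 0.066, 0.048, 0.085, 8.0  # m, m, m, compression ratio
cd_area = 2e-6       # m^2, effective valve orifice (hypothetical)

def volume(theta):
    """Cylinder volume (m^3) from slider-crank kinematics."""
    a = stroke / 2.0
    x = a * math.cos(theta) + math.sqrt(conrod**2 - (a * math.sin(theta))**2)
    piston = (conrod + a) - x
    v_swept = math.pi / 4.0 * bore**2 * stroke
    return v_swept / (cr - 1.0) + math.pi / 4.0 * bore**2 * piston

def orifice_mdot(p_up, p_dn, T_up):
    """Isentropic orifice mass flow, choked below the critical pressure ratio."""
    g = 1.4
    pr = max(p_dn / p_up, 0.0)
    crit = (2.0 / (g + 1.0)) ** (g / (g - 1.0))
    pr = max(pr, crit)  # clamp to the choked value
    term = (2.0 * g / (g - 1.0)) * (pr ** (2.0 / g) - pr ** ((g + 1.0) / g))
    return cd_area * p_up / math.sqrt(R * T_up) * math.sqrt(max(term, 0.0))

rpm, dtheta = 3600.0, math.radians(0.5)
omega = rpm * 2.0 * math.pi / 60.0
theta, p, T = math.pi, 1.0e5, 300.0          # start of compression, ambient air
m = p * volume(theta) / (R * T)
while theta < 2.0 * math.pi and p < p_tank:
    dt = dtheta / omega
    m += orifice_mdot(p_tank, p, T_tank) * dt    # valve open to the tank
    theta += dtheta
    p = m * R * T / volume(theta)                # isothermal update (crude)
print(f"crank angle {math.degrees(theta):.0f} deg, p = {p/1e5:.1f} bar")
```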
Contributors: Joy, Lijin (Author) / Trimble, Steve (Thesis advisor) / Davidson, Joseph (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Modern automotive and aerospace products are large cyber-physical systems involving both software and hardware, composed of mechanical, electrical, and electronic components. The increasing complexity of such systems is a major concern, as it impacts development time and effort as well as initial and operational costs. Toward the goal of measuring complexity, the first step is to determine the factors that contribute to it and metrics to quantify it. These complexity components can further be used to (a) estimate the cost of a cyber-physical system, (b) develop methods that reduce that cost, and (c) support decisions such as selecting one design from a set of possible solutions or variants. To determine the contributors to complexity, a survey was conducted at an aerospace company. Three types of contributions to system complexity were found: artifact complexity, design-process complexity, and manufacturing complexity. In all three domains, three types of metrics were found: size complexity, numeric complexity (degree of coupling), and technological complexity (solvability). A formal representation of all three domains as graphs is proposed, but with different interpretations of entity (node) and relation (link) corresponding to the three aspects above. Complexities of these components are measured using algorithms from graph theory. Two experiments were conducted to check the meaningfulness and feasibility of the complexity metrics. The first experiment was on a mechanical transmission, with component-level scope; all design stages, from concept to manufacturing, were considered. The second experiment was conducted on hybrid powertrains, with assembly-level scope; only artifact complexity was considered because of limited resources. Finally, calibration of these complexity measures was conducted at an aerospace company, but the results cannot be included in this thesis.
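A minimal sketch of graph-based metrics of the three kinds named above: size (counts), coupling (average degree), and a stand-in for solvability as a node attribute. The product graph is a toy, not one of the thesis case studies:

```python
import networkx as nx

g = nx.Graph()
# Nodes are components; 'solvability' is a hypothetical 0-1 difficulty score.
g.add_nodes_from([
    ("housing", {"solvability": 0.2}),
    ("gear_a",  {"solvability": 0.5}),
    ("gear_b",  {"solvability": 0.5}),
    ("shaft",   {"solvability": 0.3}),
    ("ecu",     {"solvability": 0.9}),
])
g.add_edges_from([("housing", "shaft"), ("shaft", "gear_a"),
                  ("gear_a", "gear_b"), ("ecu", "gear_b")])

size_complexity = g.number_of_nodes() + g.number_of_edges()
coupling = 2.0 * g.number_of_edges() / g.number_of_nodes()   # average degree
technological = sum(d["solvability"] for _, d in g.nodes(data=True))

print(size_complexity, round(coupling, 2), technological)
```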
Contributors: Singh, Gurpreet (Author) / Shah, Jami (Thesis advisor) / Runger, George C. (Committee member) / Davidson, Joseph (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The main objective of this project was to create a framework for holistic ideation and to investigate the technical issues involved in its implementation. In previous research, logical ideation methods were explored, ideation states were identified, and a tentative set of ideation blocks with strategies was incorporated in an interactive software testbed. In this subsequent study, intuitive methods and their strategies were investigated and characterized, a framework to organize the components of ideation (both logical and intuitive) was devised, and different ideation methods were implemented based on the framework. One of the major contributions of this research is the method by which information passes between different ideation methods; another is a framework to organize the ideas found by different methods. The intuitive ideation strategies added to the holistic testbed are reframing, restructuring, random connection, forced connection, and analogical reasoning. A computer tool facilitating holistic ideation was developed. The framework can also be used as a research tool to collect large amounts of data from designers about their choice of ideation strategies and to assess their effectiveness.
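One way such a framework could expose strategies behind a common interface, with each method's output feeding the next (the information passing mentioned above), is sketched below. The strategy names come from the abstract; the interfaces and behaviors are hypothetical:

```python
from typing import Callable

Strategy = Callable[[list], list]
REGISTRY: dict = {}

def strategy(name: str):
    """Register an ideation strategy under a common interface."""
    def wrap(fn: Strategy) -> Strategy:
        REGISTRY[name] = fn
        return fn
    return wrap

@strategy("random_connection")
def random_connection(ideas: list) -> list:
    # Pair up existing ideas to spark combinations.
    return ideas + [f"{a} + {b}" for a, b in zip(ideas, reversed(ideas))]

@strategy("reframing")
def reframing(ideas: list) -> list:
    return [f"what if: {i}?" for i in ideas]

def run(pipeline: list, seed: list) -> list:
    """Feed the output of each strategy into the next."""
    for name in pipeline:
        seed = REGISTRY[name](seed)
    return seed

print(run(["random_connection", "reframing"], ["lever", "spring"]))
```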
Contributors: Chen, Ying (Author) / Shah, Jami (Thesis advisor) / Huebner, Kenneth (Committee member) / Davidson, Joseph (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Geometrical tolerances define allowable manufacturing variations in the features of mechanical parts. For a given feature (planar face, cylindrical hole), the variations may be modeled with a T-Map, a hypersolid in a 6D small-displacement coordinate space. A general method for constructing T-Maps is to decompose a feature into points, identify the variational limits allowed to these points by the feature's tolerance zone, represent these limits using linear halfspaces, transform them to the central local reference frame, and intersect them to form the T-Map for the entire feature. The method is explained and validated against existing T-Map models, and it is further used to model manufacturing variations for the positions of axes in patterns of cylindrical features.
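A minimal sketch of this point-decomposition recipe for the simplest case, a rectangular planar face under a tolerance band of width t: each sampled point contributes two halfspaces in the small-displacement space (dz, rx, ry), since its normal deviation is dz + ry·x − rx·y. The face dimensions are illustrative:

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection

t, L, W = 0.1, 4.0, 2.0          # tolerance band, face length and width
corners = [(-L/2, -W/2), (L/2, -W/2), (L/2, W/2), (-L/2, W/2)]

rows = []
for x, y in corners:              # corner points bound the whole face here
    # dz + ry*x - rx*y <= t/2   and   -(dz + ry*x - rx*y) <= t/2,
    # written as rows [a1, a2, a3, b] with a.(dz, rx, ry) + b <= 0.
    rows.append([ 1.0, -y,  x, -t / 2.0])
    rows.append([-1.0,  y, -x, -t / 2.0])

tmap = HalfspaceIntersection(np.array(rows), np.zeros(3))
print("T-Map vertices (dz, rx, ry):")
print(np.round(tmap.intersections, 4))
```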

When parts are assembled together, feature-level manufacturing variations accumulate (stack up) to cause variations in one or more critical dimensions, e.g. one or more clearances. When the T-Map model is applied to complex assemblies, it is possible to obtain a stack-up relation in as many as six dimensions, instead of the one or two typical of 1D or 2D charts. The sensitivity of the critical assembly dimension to the manufacturing variations at each feature can be evaluated by fitting a functional T-Map over a kinematically transformed T-Map of the feature. By considering the individual features and tolerance specifications one by one, the sensitivity of a critical assembly-level dimension to each tolerance can be evaluated. The sum of the products of the tolerance values and their respective sensitivities gives the worst-case functional variation. The same sensitivity equation can be used for statistical tolerance analysis by fitting a Gaussian normal distribution to each tolerance range and forming an equation of variances from all the contributors. The method for evaluating sensitivities and variances for each contributing feature is explained with engineering examples.
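A minimal sketch of the two stack-up formulas in the paragraph above: worst case as the sum of |sensitivity| × tolerance, and the statistical case from the equation of variances, here assuming the common convention σ = t/6 for the fitted Gaussians. The sensitivities and tolerances are placeholders:

```python
import math

sensitivities = [1.0, -0.5, 2.0]   # s_i from the fitted functional T-Maps
tolerances    = [0.10, 0.20, 0.05] # t_i, full tolerance ranges

# Worst case: sum of |s_i| * t_i over all contributors.
worst_case = sum(abs(s) * t for s, t in zip(sensitivities, tolerances))

# Statistical: variance = sum of (s_i * sigma_i)^2, with sigma_i = t_i / 6.
sigmas = [t / 6.0 for t in tolerances]
variance = sum((s * sg) ** 2 for s, sg in zip(sensitivities, sigmas))
statistical = 3.0 * math.sqrt(variance)   # 3-sigma functional variation

print(f"worst case: {worst_case:.3f}, statistical 3-sigma: {statistical:.3f}")
```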

The overall objective of this research is to develop a method for automation-friendly and efficient T-Map generation and statistical tolerance analysis.
Contributors: Chitale, Aniket (Author) / Davidson, Joseph (Thesis advisor) / Sugar, Thomas (Thesis advisor) / Shah, Jami (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

Tolerance specification for manufacturing components from 3D models is a tedious task and often requires the expertise of “detailers”. The work presented here is part of a larger ongoing project aimed at automating tolerance specification to aid less-experienced designers by producing consistent geometric dimensioning and tolerancing (GD&T). Tolerance specification can be separated into two major tasks: tolerance schema generation and tolerance value specification. This thesis focuses on the latter, namely tolerance value allocation and analysis. The tolerance schema (sans values) required prior to these tasks has already been generated by the auto-tolerancing software. This information is communicated through a constraint-tolerance-feature graph file developed previously at the Design Automation Lab (DAL) and is consistent with the ASME Y14.5 standard.

The objective of this research is to allocate tolerance values that ensure the assemblability conditions are satisfied. Assemblability refers to “the ability to assemble/fit a set of parts in a specified configuration given a nominal geometry and its corresponding tolerances”, and is determined by the clearances between the mating features. These clearances are affected by the accumulation of tolerances in tolerance loops; hence, the tolerance loops are extracted first. Once the tolerance loops have been identified, initial tolerance values are allocated to the contributors in these loops. It is highly unlikely that the initial allocation will satisfy the assemblability requirements, and overlapping loops have to be satisfied simultaneously and progressively; hence, tolerances need to be re-allocated iteratively. This is done with the help of the tolerance analysis module.

The tolerance allocation and analysis module receives the constraint graph, which contains all basic dimensions and mating constraints from the generated schema. The tolerance loops are detected by traversing the constraint graph. The initial allocation distributes the tolerance budget, computed from the clearance available in the loop, among the loop's contributors in proportion to their associated nominal dimensions. The analysis module subjects the loops to 3D parametric variation analysis and estimates the variation parameters for the clearances. The re-allocation module uses hill-climbing heuristics derived from the distribution parameters to select a loop; re-allocation of the tolerance values is done using the sensitivities and weights associated with the contributors in the stack.
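A minimal sketch of two of these steps, loop detection on the constraint graph and proportional initial allocation; the graph, the use of networkx's cycle basis, and all numbers are illustrative, not the CTF schema or the module's actual traversal:

```python
import networkx as nx

g = nx.Graph()
# Edges are dimensioned constraints between features; 'nominal' in mm.
g.add_edge("A", "B", nominal=40.0)
g.add_edge("B", "C", nominal=25.0)
g.add_edge("C", "A", nominal=60.0)   # closes a loop through the clearance

loops = nx.cycle_basis(g)            # each loop is a list of features

def allocate(loop, budget):
    """Split a loop's tolerance budget across its edges,
    in proportion to the nominal dimensions of the contributors."""
    edges = list(zip(loop, loop[1:] + loop[:1]))
    total = sum(g.edges[e]["nominal"] for e in edges)
    return {e: budget * g.edges[e]["nominal"] / total for e in edges}

print(allocate(loops[0], budget=0.30))   # 0.30 mm of available clearance
```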

Several test cases were run with this software, and the desired user-input acceptance rates were achieved. Three test cases are presented, and the output of each module is discussed.
Contributors: Biswas, Deepanjan (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph (Committee member) / Ren, Yi (Committee member) / Arizona State University (Publisher)
Created: 2016