Matching Items (9)

FE simulation based friction coefficient factors for metal forming

Description

The friction condition is an important factor in controlling the compression process in metal forming. Friction calibration maps (FCMs) are widely used in estimating friction factors between the workpiece and the die. However, in standard FEA the friction condition is defined by a friction coefficient factor (µ), while the FCM uses a constant shear friction factor (m) to describe the friction condition. The purpose of this research is to find a method to convert the m factor to the µ factor, so that FEA can be used to simulate ring tests with µ. The research is carried out with FEA and Design of Experiments (DOE). FEA is used to simulate the ring compression test. A 2D quarter model is adopted as the geometry model, and a bilinear material model is used in the nonlinear FEA. After the model is established, validation tests are conducted by examining the influence of Poisson's ratio on the ring compression test. It is shown that the established FEA model is valid, especially when Poisson's ratio is set close to 0.5 in the FEA. The material folding phenomenon is present in this model, and µ factors are applied to all surfaces of the ring. It is also found that the reduction ratio of the ring and the slopes of the FCM can be used to describe the deformation of the ring specimen. With the baseline FEA model, formulas relating the deformation parameters, material mechanical properties, and µ factors are generated through statistical analysis of the simulation results of the ring compression test. Based on these formulas, a method is found to substitute µ factors for the m factor for a particular material by selecting and applying the µ factor in a time sequence. By converting the m factor into the µ factor, cold forging can be simulated.
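
The relation between the two friction descriptions can be illustrated with a small sketch. This is not from the thesis: it only shows the standard pointwise equivalence, assuming the von Mises shear yield stress k = σ_y/√3. The constant-shear model sets the friction stress τ = m·k, while the Coulomb model sets τ = µ·p, so equating the two gives µ = m·k/p for a given contact pressure p.

```python
import math

def equivalent_mu(m, sigma_y, p):
    """Pointwise Coulomb coefficient equivalent to a shear friction factor.

    Constant-shear model: tau = m * k, with k = sigma_y / sqrt(3) (von Mises).
    Coulomb model:        tau = mu * p, with p the local contact pressure.
    Equating the two gives mu = m * k / p.
    """
    k = sigma_y / math.sqrt(3.0)  # shear yield stress (assumed von Mises)
    return m * k / p

# Illustrative values: m = 0.3, yield stress 200 MPa, contact pressure 300 MPa
mu = equivalent_mu(0.3, 200.0, 300.0)
print(round(mu, 4))  # → 0.1155
```

Because the contact pressure varies over the die face and through the stroke, no single µ reproduces a given m everywhere, which is why the thesis selects and applies µ factors in a time sequence.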

Date Created
  • 2013

Cascading evolutionary morphological charts for holistic ideation framework

Description

The main objective of this project was to create a framework for holistic ideation and investigate the technical issues involved in its implementation. In previous research, logical ideation methods were explored, ideation states were identified, and a tentative set of ideation blocks with strategies was incorporated in an interactive software testbed. As a subsequent study, in this research, intuitive methods and their strategies were investigated and characterized, a framework to organize the components of ideation (both logical and intuitive) was devised, and different ideation methods were implemented based on the framework. One of the major contributions of this research is the method by which information passes between different ideation methods. Another important contribution is a framework to organize ideas found by different methods. The intuitive ideation strategies added to the holistic testbed are reframing, restructuring, random connection, force connection, and analogical reasoning. A computer tool facilitating holistic ideation was developed. This framework can also be used as a research tool to collect large amounts of data from designers about their choice of ideation strategies and the assessment of their effectiveness.

Date Created
  • 2012

Quantifying Deformations in Flexible Assemblies Using Least Square Fit and Capture Zone Techniques

Description

Almost all mechanical and electro-mechanical products are assemblies of multiple parts, whether because of requirements for relative motion, the use of different materials, or shape/size differences. Thus, assembly design is the very crux of engineering design. In addition to the nominal design of an assembly, there is also tolerance design to determine allowable manufacturing variations that ensure proper functioning and assemblability. Most flexible assemblies are made by stamping sheet metal. The sheet-metal stamping process involves plastically deforming sheet metal using dies. Sub-assemblies of two or more components are made with either spot-welding or riveting operations. Various sub-assemblies are finally joined, using spot welds or rivets, to create the desired assembly. When two components are brought together for assembly, they do not align exactly; this causes gaps and irregularities in assemblies. As multiple parts are stacked, errors accumulate further. Stamping leads to variable deformations due to residual stresses and elastic recovery from the plastic strain of the metal; this is called the 'spring-back' effect. When multiple components are stacked or assembled using spot welds, variations in input parameters, such as sheet-metal thickness and the number and order of spot welds, cause variations in the exact shape of the final assembly in its free state. It is essential to understand the influence of these input parameters on the geometric variations of both the individual components and the assembly created from them. Design of Experiments is used to generate a principal-effects study, which evaluates the influence of input parameters on output parameters. The scope of this study is to quantify the geometric variations of a flexible assembly and evaluate their dependence on specific input variables. The three input variables considered are the thickness of the sheet material, the number of spot welds used, and the spot-welding order used to create the assembly.
To quantify the geometric variations, sprung-back nodal points along lines, circular arcs, combinations of these, and a specific profile are reduced to metrologically simulated features.
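
As a rough illustration of the least-squares fitting named in the title, the following sketch fits a circle to 2D nodal points using the algebraic (Kasa) method. The sample points and circle parameters are invented for the example and are not from the thesis.

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method).

    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense,
    then recovers the center (cx, cy) and radius r.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Invented sample: points on a circular arc of radius 5 centered at (1, 2)
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([1 + 5 * np.cos(theta), 2 + 5 * np.sin(theta)])
cx, cy, r = fit_circle(pts)
print(round(cx, 3), round(cy, 3), round(r, 3))  # → 1.0 2.0 5.0
```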

Date Created
  • 2020

Complexity measurement of cyber physical systems

Description

Modern automotive and aerospace products are large cyber-physical systems involving both software and hardware, composed of mechanical, electrical, and electronic components. The increasing complexity of such systems is a major concern, as it impacts development time and effort as well as initial and operational costs. Toward the goal of measuring complexity, the first step is to determine the factors that contribute to it and metrics to quantify it. These complexity components can further be used to (a) estimate the cost of a cyber-physical system, (b) develop methods that can reduce the cost of a cyber-physical system, and (c) make decisions such as selecting one design from a set of possible solutions or variants. To determine the contributors to complexity, we conducted a survey at an aerospace company. We found three types of contributions to the complexity of the system: artifact complexity, design process complexity, and manufacturing complexity. In all three domains, we found three types of metrics: size complexity, numeric complexity (degree of coupling), and technological complexity (solvability). We propose a formal representation of all three domains as graphs, but with different interpretations of entity (node) and relation (link) corresponding to the above three aspects. Complexities of these components are measured using algorithms defined in graph theory. Two experiments were conducted to check the meaningfulness and feasibility of the complexity metrics. The first experiment was on a mechanical transmission, and its scope was the component level. All design stages, from concept to manufacturing, were considered in this experiment. The second experiment was conducted on hybrid powertrains. Its scope was the assembly level, and only artifact complexity was considered because of limited resources. Finally, calibration of these complexity measures was conducted at an aerospace company, but the results cannot be included in this thesis.
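
A minimal sketch of how graph-based size and coupling metrics might be computed. The component graph and the two metric definitions here are illustrative assumptions, not the thesis's exact algorithms.

```python
def size_complexity(graph):
    """Size complexity: the number of entities (nodes) in the graph."""
    return len(graph)

def coupling_complexity(graph):
    """Numeric complexity: average degree, a simple measure of coupling."""
    n = len(graph)
    edges = sum(len(nbrs) for nbrs in graph.values()) / 2  # undirected
    return 2 * edges / n if n else 0.0

# Hypothetical artifact graph: nodes are components, links are interfaces
artifact = {
    "gear": {"shaft", "housing"},
    "shaft": {"gear", "bearing"},
    "bearing": {"shaft", "housing"},
    "housing": {"gear", "bearing"},
}
print(size_complexity(artifact))      # → 4
print(coupling_complexity(artifact))  # → 2.0
```

The same graph representation applies to the design-process and manufacturing domains, with nodes and links reinterpreted accordingly.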

Date Created
  • 2011

Material substitution in legacy system engineering (LSE) with fuzzy logic principles

Description

The focus of this research is to investigate methods for material substitution for the purpose of re-engineering legacy systems that involve incomplete information about the form, fit, and function of replacement parts. The primary motive is to extract as much useful information about a failed legacy part as possible and use fuzzy logic rules to identify the unknown parameter values. Machine elements can fail by any number of failure modes, but the most probable failure modes given the service conditions are considered the critical failure modes. Three main parameters are of key interest in identifying the critical failure mode of the part. Critical failure modes are then mapped directly to material properties. Target material property values are calculated from the property values of the originally used material and from the design goals. The material database is searched for new candidate materials that satisfy the goals and the constraints on manufacturing and raw-stock availability. Uncertainty in the extracted data is modeled using fuzzy logic. Fuzzy membership functions model the imprecise nature of the data in each available parameter, and rule sets characterize the imprecise dependencies between the parameters and make decisions in identifying an unknown parameter value despite the incompleteness. A final confidence level for each material in the pool of candidate materials is a direct indication of uncertainty. All the candidates satisfy the goals and constraints to varying degrees, and the final selection is left to the designer's discretion. The process is automated by software that accepts incomplete data, uses fuzzy logic to extract more information, and queries the material database with a constrained search to find candidate alternatives.
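
A toy sketch of the fuzzy machinery described above, assuming triangular membership functions and min as the fuzzy AND. The parameter ranges and the candidate material's values are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def candidate_confidence(strength_mu, cost_mu):
    """Combine rule antecedents with min (fuzzy AND) into one confidence."""
    return min(strength_mu, cost_mu)

# Hypothetical candidate: how well does its yield strength fit the
# "adequate strength" set, and its cost the "affordable" set?
strength = tri(320.0, 250.0, 400.0, 550.0)   # MPa, invented range
cost = tri(4.0, 0.0, 2.0, 8.0)               # $/kg, invented range
print(round(candidate_confidence(strength, cost), 3))  # → 0.467
```

Ranking all candidates by such a confidence value, while leaving the final choice to the designer, mirrors the selection process described in the abstract.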

Date Created
  • 2011

Development and verification of a library of feature fitting algorithms for CMMs

Description

Conformance of a manufactured feature to the applied geometric tolerances is verified by analyzing the point cloud that is measured on the feature. To that end, a geometric feature is fitted to the point cloud and the results are assessed to see whether the fitted feature lies within the specified tolerance limits or not. Coordinate Measuring Machines (CMMs) use feature fitting algorithms that incorporate least-squares estimates as a basis for obtaining minimum, maximum, and zone fits. However, a comprehensive set of algorithms addressing the fitting procedure (all datums, targets) for every tolerance class is not available. Therefore, a library of algorithms is developed to aid the process of feature fitting and tolerance verification. This paper addresses linear, planar, circular, and cylindrical features only. The set of algorithms described conforms to the international standards for GD&T. In order to reduce the number of points to be analyzed, and to identify the possible candidate points for linear, circular, and planar features, 2D and 3D convex hulls are used. For minimum, maximum, and Chebyshev cylinders, geometric search algorithms are used. The algorithms are divided into three major categories: least-squares, unconstrained, and constrained fits. Primary datums require one-sided unconstrained fits for their verification. Secondary datums require one-sided constrained fits for their verification. For size and other tolerance verifications, both unconstrained and constrained fits are required.
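
As an illustration of the simplest kind of fit such a library would include, the following sketch fits a line to 2D measured points by orthogonal least squares (via SVD). The sample data are invented; the thesis's actual library also covers constrained and one-sided fits, which this sketch does not.

```python
import numpy as np

def fit_line(points):
    """Orthogonal least-squares line fit to a 2D point cloud.

    Returns a point on the line (the centroid) and a unit direction
    vector, obtained as the principal component of the points.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]  # first right singular vector = line direction

# Invented measured points scattered about the line y = 0.5 * x
pts = [(0.0, 0.01), (1.0, 0.49), (2.0, 1.02), (3.0, 1.48), (4.0, 2.01)]
centroid, direction = fit_line(pts)
print(np.round(centroid, 3), np.round(abs(direction[1] / direction[0]), 3))
```

The residuals of the points about the fitted feature are what the tolerance-verification step compares against the specified zone.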

Date Created
  • 2014

Generalized T-Map Modelling Procedure & Tolerance Sensitivity Analysis Using T-Maps

Description

Geometrical tolerances define allowable manufacturing variations in the features of mechanical parts. For a given feature (planar face, cylindrical hole), the variations may be modeled with a T-Map, a hypersolid in a 6D small-displacement coordinate space. A general method for constructing T-Maps is to decompose a feature into points, identify the variational limits allowed to these points by the feature's tolerance zone, represent these limits using linear halfspaces, transform these to the central local reference frame, and intersect them to form the T-Map for the entire feature. The method is explained and validated against existing T-Map models. The method is further used to model manufacturing variations for the positions of axes in patterns of cylindrical features.

When parts are assembled together, feature-level manufacturing variations accumulate (stack up) to cause variations in one or more critical dimensions, e.g. one or more clearances. When the T-Map model is applied to complex assemblies, it is possible to obtain a stack-up relation in as many as six dimensions, instead of the one or two typical of 1D or 2D charts. The sensitivity of the critical assembly dimension to the manufacturing variations at each feature can be evaluated by fitting a functional T-Map over a kinematically transformed T-Map of the feature. By considering the individual features and their tolerance specifications one by one, the sensitivity of each tolerance to variations of a critical assembly-level dimension can be evaluated. The sum of the products of the tolerance values and their respective sensitivities gives the worst-case value of the functional variation. The same sensitivity equation can be used for statistical tolerance analysis by fitting a Gaussian normal distribution to each tolerance range and forming an equation of variances from all the contributors. The method for evaluating sensitivities and variances for each contributing feature is explained with engineering examples.

The overall objective of this research is to develop a method for automation-friendly and efficient T-Map generation and statistical tolerance analysis.
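
Once sensitivities are known, the worst-case and statistical stack-up computations described above reduce to a few lines. In this sketch the sensitivities and tolerance values are invented, and a ±t tolerance range is assumed to correspond to 3σ for the statistical case.

```python
import math

def worst_case(sensitivities, tolerances):
    """Worst-case variation: sum of |sensitivity| * tolerance value."""
    return sum(abs(s) * t for s, t in zip(sensitivities, tolerances))

def statistical(sensitivities, tolerances, k=3.0):
    """Root-sum-square variation, treating each tolerance range as
    +/- t about nominal with a normal distribution of sigma = t / k."""
    var = sum((s * t / k) ** 2 for s, t in zip(sensitivities, tolerances))
    return k * math.sqrt(var)

# Hypothetical three-feature stack contributing to one clearance
s = [1.0, 0.5, 2.0]   # sensitivities of the clearance to each tolerance
t = [0.1, 0.2, 0.05]  # tolerance values
print(round(worst_case(s, t), 4))   # → 0.3
print(round(statistical(s, t), 4))  # → 0.1732
```

As expected, the statistical (RSS) estimate is tighter than the worst-case sum, since it is unlikely that all features sit at their extreme limits simultaneously.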

Date Created
  • 2018

Feature cluster algebra and its application for geometric tolerancing

Description

The goal of this research project is to develop a DOF (degree of freedom) algebra for entity clusters to support tolerance specification, validation, and tolerance automation. This representation is required to capture the relations between geometric entities, metric constraints, and tolerance specifications. This research project is part of an on-going project on creating a bi-level model of GD&T (Geometric Dimensioning and Tolerancing). This thesis presents the systematic derivation of the degrees of freedom of entity clusters corresponding to tolerance classes. The clusters can be datum reference frames (DRFs) or targets. A binary vector representation of degrees of freedom and operations for combining them are proposed. An algebraic method is developed using the DOF representation. The ASME Y14.5.1 companion to the GD&T standard gives an exhaustive tabulation of active and invariant degrees of freedom (DOF) for Datum Reference Frames (DRFs). The algebra is validated by checking it against all cases in the Y14.5.1 tabulation. The algebra allows the derivation of general rules for tolerance specification and validation. A computer tool is implemented to support GD&T specification and validation. The computer implementation outputs the geometric and tolerance information in the form of a CTF (Constraint-Tolerance-Feature) file, which can be used for tolerance stack analysis.
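
A binary-vector DOF representation lends itself to bitwise operations. The following sketch (with invented feature invariances as the example, not the thesis's actual tables) encodes invariant DOFs as bits and combines two clusters under the rule that a DOF stays invariant only if it is invariant for both:

```python
# Bit order for the six DOFs: three translations, three rotations.
DOF = ("Tx", "Ty", "Tz", "Rx", "Ry", "Rz")

def encode(names):
    """Pack a set of invariant DOFs into a 6-bit vector."""
    return sum(1 << DOF.index(n) for n in names)

def decode(bits):
    """Unpack a 6-bit vector back into a set of DOF names."""
    return {n for i, n in enumerate(DOF) if bits >> i & 1}

# Illustrative example: a planar face with normal along z is invariant
# under Tx, Ty, Rz; a cylindrical hole with axis along z under Tz, Rz.
plane = encode({"Tx", "Ty", "Rz"})
cylinder = encode({"Tz", "Rz"})

# Combining the two features: bitwise AND keeps DOFs invariant for BOTH.
combined = plane & cylinder
print(sorted(decode(combined)))  # → ['Rz']
```

Checking such combinations against the Y14.5.1 tabulation of active and invariant DOFs is exactly the kind of exhaustive validation the abstract describes.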

Date Created
  • 2013

Generation of tolerance maps for line profile by primitive T-Map elements

Description

The objective of this research is to develop methods for generating the Tolerance-Map for a line-profile that is specified by a designer to control the geometric profile shape of a surface. After development, the aim is to find one that can be easily implemented in computer software using existing libraries. Two methods were explored: the parametric modeling method and the decomposed modeling method. The Tolerance-Map (T-Map) is a hypothetical point-space, each point of which represents one geometric variation of a feature in its tolerance-zone. T-Maps have been produced for most of the tolerance classes that are used by designers, but, prior to the work of this project, the method of construction required considerable intuitive input, rather than being based primarily on automated computer tools. Tolerances on line-profiles are used to control cross-sectional shapes of parts, such as every cross-section of a mildly twisted compressor blade. Such tolerances constrain geometric manufacturing variations within a specified two-dimensional tolerance-zone. A single profile tolerance may be used to control position, orientation, and form of the cross-section. Four independent variables capture all of the profile deviations: two independent translations in the plane of the profile, one rotation in that plane, and the size-increment necessary to identify one of the allowable parallel profiles. For the selected method of generation, the line profile is decomposed into three types of segments, a primitive T-Map is produced for each segment, and finally the T-Maps from all the segments are combined to obtain the T-Map for the given profile. The types of segments are the (straight) line-segment, circular arc-segment, and the freeform-curve segment. The primitive T-Maps are generated analytically, and, for freeform-curves, they are built approximately with the aid of the computer. 
A deformation matrix is used to transform the primitive T-Maps to a single coordinate system for the whole profile. The T-Map for the whole line profile is generated by the Boolean intersection of the primitive T-Maps for the individual profile segments. This computer-implemented method can generate T-Maps for open profiles, closed ones, and those containing concave shapes.
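
Since a T-Map built this way is an intersection of linear halfspaces, membership of a given profile deviation can be tested by checking every halfspace of every primitive segment. A sketch, with invented deviation parameters and limits (a real profile T-Map uses the four variables named above):

```python
def in_tmap(point, halfspaces):
    """Membership test for a T-Map represented as an intersection of
    linear halfspaces a . x <= b (listed per primitive segment)."""
    return all(
        sum(a_i * x_i for a_i, x_i in zip(a, point)) <= b
        for a, b in halfspaces
    )

# Hypothetical 3-parameter deviation space (dx, dy, dtheta) with the
# T-Maps of two segments; intersecting them = concatenating halfspaces.
seg1 = [((1, 0, 0), 0.1), ((-1, 0, 0), 0.1), ((0, 1, 0), 0.1), ((0, -1, 0), 0.1)]
seg2 = [((0, 0, 1), 0.05), ((0, 0, -1), 0.05)]
profile_tmap = seg1 + seg2

print(in_tmap((0.05, -0.02, 0.01), profile_tmap))  # → True
print(in_tmap((0.05, -0.02, 0.08), profile_tmap))  # → False
```

The Boolean intersection of the primitive T-Maps is thus simply the conjunction of all their halfspace constraints, which is what makes the decomposed method computer-friendly.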

Date Created
  • 2013