Statics with Robotics to Get the Least-squares Fit of Profiles for Evaluating FEA Simulations of Flexible Components and Assemblies

Description
Least squares fitting in 3D is applied to produce higher-level geometric parameters that describe the optimum location of a line-profile through many nodal points derived from Finite Element Analysis (FEA) simulations of elastic spring-back of features, both on stamped sheet-metal components after they have been plastically deformed in a press and released, and on simple assemblies made from them. Although the traditional Moore-Penrose inverse was used to solve the superabundant linear equations, the formulation of these equations was distinct: it was based on virtual work and statics applied to parallel-actuated robots, in order to accommodate both more complex profiles and a change in profile size. The output, a small displacement torsor (SDT), is used to describe the displacement of the profile from its nominal location. It may be regarded as a generalization of the slope and intercept parameters of a line that result from a Gauss-Markov regression fit of points in a plane. Additionally, minimum-zone magnitudes were computed that just capture the points along the profile. Finally, algorithms were created to compute simple parameters for the cross-sectional shapes of components from the sprung-back data points, following the protocol of simulations and benchmark experiments conducted by the metal-forming community 30 years ago, although it was necessary to modify that protocol for some geometries that differed from the benchmark.
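As a rough illustration of the least-squares step described above (not the thesis's virtual-work, parallel-robot formulation; the variable names and geometry are assumptions), the sketch below fits the in-plane small-displacement-torsor components and a size increment to normal deviations at profile points using the Moore-Penrose pseudoinverse:

```python
# A minimal sketch (assumed geometry and variable names): least-squares fit of
# a small displacement torsor (SDT) plus size increment to normal deviations
# of profile points, solved with the Moore-Penrose pseudoinverse.
import numpy as np

def fit_profile_sdt(points, normals, deviations):
    """Fit (dx, dy, d_theta, ds) that best explains deviations along the normals.

    points     : (N, 2) nominal profile points in the profile plane
    normals    : (N, 2) unit outward normals at those points
    deviations : (N,)   measured/simulated deviations along the normals
    """
    x, y = points[:, 0], points[:, 1]
    nx, ny = normals[:, 0], normals[:, 1]
    # Sensitivity of each point's normal deviation to the four profile
    # parameters: two in-plane translations, one in-plane rotation about the
    # origin, and a uniform size increment along the normal.
    A = np.column_stack([nx, ny, x * ny - y * nx, np.ones_like(nx)])
    # The pseudoinverse solves the superabundant (overdetermined) linear system.
    params = np.linalg.pinv(A) @ deviations
    residuals = deviations - A @ params
    return params, residuals
```

The spread of the residuals from such a fit is an upper bound on, but not in general equal to, the minimum-zone magnitude, which requires a Chebyshev-type fit.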
Date Created
2023

Quantifying Deformations in Flexible Assemblies Using Least Square Fit and Capture Zone Techniques

Description
Almost all mechanical and electro-mechanical products are assemblies of multiple parts, either because of requirements for relative motion or because of differences in materials, shapes, or sizes. Thus, assembly design is the very crux of engineering design. In addition to the nominal design of an assembly, there is also tolerance design to determine allowable manufacturing variations that ensure proper functioning and assemblability. Most flexible assemblies are made from stamped sheet metal. The sheet-metal stamping process plastically deforms sheet metal using dies. Sub-assemblies of two or more components are made with either spot-welding or riveting operations, and the various sub-assemblies are finally joined, using spot welds or rivets, to create the desired assembly. When two components are brought together for assembly, they do not align exactly; this causes gaps and irregularities in assemblies, and as multiple parts are stacked, the errors accumulate further. Stamping leads to variable deformations due to residual stresses and elastic recovery from the plastic strain of the metal; this is called the 'spring-back' effect. When multiple components are stacked or assembled using spot welds, variations in input parameters, such as sheet-metal thickness and the number and order of spot welds, cause variations in the exact shape of the final assembly in its free state. It is essential to understand the influence of these input parameters on the geometric variations of both the individual components and the assembly created from them. Design of Experiments is used to generate a principal-effects study that evaluates the influence of input parameters on output parameters. The scope of this study is to quantify the geometric variations of a flexible assembly and evaluate their dependence on specific input variables. The three input variables considered are the thickness of the sheet material, the number of spot welds used, and the spot-welding order used to create the assembly. To quantify the geometric variations, sprung-back nodal points along lines, circular arcs, combinations of these, and a specific profile are reduced to metrologically simulated features.
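As a rough sketch of the principal-effect computation mentioned above (the factor names, coded levels, and response values are illustrative, not taken from the study), main effects from a two-level full-factorial design can be computed as follows:

```python
# A minimal sketch (illustrative factors and hypothetical responses): main
# effects from a two-level full-factorial Design of Experiments.
import itertools
import numpy as np

factors = ["thickness", "num_welds", "weld_order"]              # coded -1 / +1 levels
design = np.array(list(itertools.product([-1, 1], repeat=3)))   # 2^3 = 8 runs

def main_effects(design, response):
    """Main effect of each factor = mean(response at +1) - mean(response at -1)."""
    return {name: response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
            for j, name in enumerate(factors)}

# 'response' would hold a geometric output (e.g. a fitted SDT component or a
# zone magnitude) for each of the 8 runs; these values are hypothetical.
response = np.array([0.42, 0.37, 0.45, 0.40, 0.31, 0.28, 0.34, 0.30])
print(main_effects(design, response))
```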
Date Created
2020

Generalized T-Map Modelling Procedure & Tolerance Sensitivity Analysis Using T-Maps

Description
Geometrical tolerances define allowable manufacturing variations in the features of mechanical parts. For a given feature (planar face, cylindrical hole), the variations may be modeled with a T-Map, a hypersolid in a 6D small-displacement coordinate space. A general method for constructing T-Maps is to decompose a feature into points, identify the variational limits on these points allowed by the feature's tolerance zone, represent these limits using linear halfspaces, transform them to the central local reference frame, and intersect them to form the T-Map for the entire feature. The method is explained and validated against existing T-Map models. The method is further used to model manufacturing variations for the positions of axes in patterns of cylindrical features.
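A minimal sketch of the halfspace-intersection step, under simplifying assumptions (a square planar face, only the three out-of-plane variations rather than all six, and illustrative dimensions), might look like this with SciPy:

```python
# A minimal sketch (simplified 3D case, assumed names): building a T-Map by
# intersecting linear halfspaces, here for a 2a x 2a square planar face with
# tolerance-zone thickness t.  Variables: dz (translation), p, q (tilts).
import numpy as np
from scipy.spatial import HalfspaceIntersection, ConvexHull

a, t = 10.0, 0.2
corners = [(-a, -a), (-a, a), (a, -a), (a, a)]

# Each corner's normal displacement dz + p*y - q*x must stay inside +/- t/2.
# SciPy expects halfspaces as rows [A | b] with A @ v + b <= 0, v = (dz, p, q).
halfspaces = []
for x, y in corners:
    halfspaces.append([ 1.0,  y, -x, -t / 2])   #  (dz + p*y - q*x) <= t/2
    halfspaces.append([-1.0, -y,  x, -t / 2])   # -(dz + p*y - q*x) <= t/2
halfspaces = np.array(halfspaces)

hs = HalfspaceIntersection(halfspaces, interior_point=np.zeros(3))
tmap_vertices = hs.intersections            # vertices of the T-Map polytope
print(ConvexHull(tmap_vertices).volume)     # hyper-volume of the T-Map
```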

When parts are assembled together, feature-level manufacturing variations accumulate (stack up) to cause variations in one or more critical dimensions, e.g. one or more clearances. When the T-Map model is applied to complex assemblies, it is possible to obtain a stack-up relation in as many as six dimensions, instead of the one or two typical of 1D or 2D charts. The sensitivity of the critical assembly dimension to the manufacturing variations at each feature can be evaluated by fitting a functional T-Map over a kinematically transformed T-Map of the feature. By considering the individual features and their tolerance specifications one by one, the sensitivity of a critical assembly-level dimension to each tolerance can be evaluated. The sum of the products of the tolerance values and their respective sensitivities gives the worst-case functional variation. The same sensitivity equation can be used for statistical tolerance analysis by fitting a Gaussian normal distribution to each tolerance range and forming an equation of variances from all the contributors. The method for evaluating sensitivities and variances for each contributing feature is explained with engineering examples.
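A minimal numerical sketch of the worst-case and statistical combinations described above, with hypothetical sensitivities and tolerance values, is:

```python
# A minimal sketch (hypothetical numbers): combining per-feature sensitivities
# into worst-case and statistical (root-sum-square) estimates of the variation
# of a critical assembly dimension.
import math

contributors = [   # (tolerance value t_i, sensitivity s_i) -- hypothetical
    (0.10, 1.00),
    (0.05, 0.75),
    (0.20, 0.40),
]

# Worst case: sum of products of tolerance values and sensitivities.
worst_case = sum(t * s for t, s in contributors)

# Statistical: treat each tolerance range as +/- 3 sigma of a normal
# distribution and sum the variances of the contributions.
variance = sum((s * t / 3.0) ** 2 for t, s in contributors)
statistical = 3.0 * math.sqrt(variance)

print(f"worst case: {worst_case:.4f}, statistical (3-sigma): {statistical:.4f}")
```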

The overall objective of this research is to develop a method for automation-friendly and efficient T-Map generation and statistical tolerance analysis.
Date Created
2018

Development and verification of a library of feature fitting algorithms for CMMs

Description
Conformance of a manufactured feature to the applied geometric tolerances is verified by analyzing the point cloud that is measured on the feature. To that end, a geometric feature is fitted to the point cloud, and the results are assessed to see whether the fitted feature lies within the specified tolerance limits. Coordinate Measuring Machines (CMMs) use feature-fitting algorithms that incorporate least-squares estimates as a basis for obtaining minimum, maximum, and zone fits. However, a comprehensive set of algorithms addressing the fitting procedure (all datums, targets) for every tolerance class is not available. Therefore, a library of algorithms is developed to aid the process of feature fitting and tolerance verification. This work addresses linear, planar, circular, and cylindrical features only. The set of algorithms described conforms to the international standards for GD&T. In order to reduce the number of points to be analyzed, and to identify the possible candidate points for linear, circular, and planar features, 2D and 3D convex hulls are used. For minimum, maximum, and Chebyshev cylinders, geometric search algorithms are used. The algorithms are divided into three major categories: least-squares, unconstrained, and constrained fits. Primary datums require one-sided unconstrained fits for their verification; secondary datums require one-sided constrained fits. For size and other tolerance verifications, both unconstrained and constrained fits are required.
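As a rough sketch of two of the ingredients above (assumed data layout, not the library's actual code): a least-squares plane fit, and the use of a 3D convex hull to retain only the points that can limit one-sided fits:

```python
# A minimal sketch (assumed data layout): least-squares plane fit to a measured
# point cloud, plus convex-hull reduction of candidate points for one-sided fits.
import numpy as np
from scipy.spatial import ConvexHull

def candidate_points(points):
    """Keep only convex-hull vertices: the only points that can limit a
    one-sided (min/max) or zone fit of a plane."""
    return points[ConvexHull(points).vertices]

def lsq_plane(points):
    """Least-squares plane through a point cloud: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The normal is the singular vector of the centered cloud with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

cloud = np.random.default_rng(0).normal(size=(500, 3)) * [10, 10, 0.05]  # synthetic
c, n = lsq_plane(cloud)                 # least-squares (Gaussian) plane
cands = candidate_points(cloud)         # candidates for one-sided / zone fits
print("LSQ normal:", n, "| candidate points kept:", len(cands))
```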
Date Created
2014

FE simulation based friction coefficient factors for metal forming

Description
The friction condition is an important factor in controlling the compression process in metal forming. Friction calibration maps (FCMs) are widely used to estimate friction factors between the workpiece and die. However, in standard FEA the friction condition is defined by a friction coefficient (µ), whereas the FCM uses a constant shear friction factor (m) to describe the friction condition. The purpose of this research is to find a method to convert the m factor to a µ factor, so that FEA can be used to simulate ring tests with µ. The research is carried out with FEA and Design of Experiments (DOE). FEA is used to simulate the ring compression test; a 2D quarter model is adopted as the geometric model, and a bilinear material model is used in the nonlinear FEA. After the model is established, validation tests are conducted by examining the influence of Poisson's ratio on the ring compression test. It is shown that the established FEA model is valid, especially when Poisson's ratio is set close to 0.5 in the FEA. The material-folding phenomenon is present in this model, and µ factors are applied at each surface of the ring. It is also found that the reduction ratio of the ring and the slopes of the FCM can be used to describe the deformation of the ring specimen. With the baseline FEA model, formulas relating the deformation parameters, material mechanical properties, and µ factors are generated through statistical analysis of the simulation results of the ring compression test. Based on these formulas, a method is found to substitute the m factor with µ factors for a particular material by selecting and applying the µ factor in time sequence. By converting the m factor into a µ factor, cold forging can be simulated.
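The fitted formulas themselves are not reproduced in the abstract; the sketch below (with hypothetical dimensions) shows only the standard deformation measures read from a ring compression test and compared against an FCM:

```python
# A minimal sketch (hypothetical dimensions): the two deformation measures
# normally taken from a ring compression test and located on a friction
# calibration map (FCM).
def ring_deformation(h0, di0, h, di):
    """Percent height reduction and percent decrease of internal diameter."""
    height_reduction = 100.0 * (h0 - h) / h0
    id_decrease = 100.0 * (di0 - di) / di0   # negative if the bore grows (low friction)
    return height_reduction, id_decrease

# A standard 6:3:2 (OD:ID:height) ring, compressed in an assumed FEA run:
print(ring_deformation(h0=2.0, di0=3.0, h=1.4, di=2.7))
```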
Date Created
2013

Feature cluster algebra and its application for geometric tolerancing

Description
The goal of this research project is to develop a DOF (degree of freedom) algebra for entity clusters to support tolerance specification, validation, and tolerance automation. This representation is required to capture the relations between geometric entities, metric constraints, and tolerance specifications. This research project is part of an ongoing project on creating a bi-level model of GD&T (Geometric Dimensioning and Tolerancing). This thesis presents the systematic derivation of the degrees of freedom of entity clusters corresponding to tolerance classes; the clusters can be datum reference frames (DRFs) or targets. A binary vector representation of degrees of freedom, and operations for combining them, are proposed, and an algebraic method is developed using this DOF representation. The ASME Y14.5.1 companion to the GD&T standard gives an exhaustive tabulation of active and invariant degrees of freedom (DOF) for datum reference frames; this algebra is validated by checking it against all cases in the Y14.5.1 tabulation. The algebra allows the derivation of general rules for tolerance specification and validation. A computer tool is implemented to support GD&T specification and validation. The computer implementation outputs the geometric and tolerance information in the form of a CTF (Constraint-Tolerance-Feature) file, which can be used for tolerance stack analysis.
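A minimal sketch of the binary-vector idea (the encoding, names, and combination rule here are assumptions, not the thesis's exact algebra) is:

```python
# A minimal sketch (assumed encoding and combination rule): DOFs of a feature
# or cluster as a 6-bit vector in the order [Tx, Ty, Tz, Rx, Ry, Rz],
# 1 = DOF controlled (active), 0 = invariant.
DOF_NAMES = ["Tx", "Ty", "Tz", "Rx", "Ry", "Rz"]

def dof(*names):
    """Build a 6-bit DOF mask from DOF names."""
    mask = 0
    for n in names:
        mask |= 1 << DOF_NAMES.index(n)
    return mask

def combine(a, b):
    """Cluster two entities: the cluster controls the union of their active DOFs."""
    return a | b

def pretty(mask):
    return [n for i, n in enumerate(DOF_NAMES) if mask >> i & 1]

plane = dof("Tz", "Rx", "Ry")        # a planar face controls 1 translation + 2 rotations
pin   = dof("Tx", "Ty")              # a pin normal to the plane adds 2 translations
print(pretty(combine(plane, pin)))   # Rz (rotation about the pin axis) stays invariant
```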
Date Created
2013

Generation of tolerance maps for line profile by primitive T-map elements

Description
The objective of this research is to develop methods for generating the Tolerance-Map for a line-profile that is specified by a designer to control the geometric profile shape of a surface. After development, the aim is to find one that can be easily implemented in computer software using existing libraries. Two methods were explored: the parametric modeling method and the decomposed modeling method. The Tolerance-Map (T-Map) is a hypothetical point-space, each point of which represents one geometric variation of a feature in its tolerance-zone. T-Maps have been produced for most of the tolerance classes that are used by designers, but, prior to the work of this project, the method of construction required considerable intuitive input, rather than being based primarily on automated computer tools. Tolerances on line-profiles are used to control cross-sectional shapes of parts, such as every cross-section of a mildly twisted compressor blade. Such tolerances constrain geometric manufacturing variations within a specified two-dimensional tolerance-zone. A single profile tolerance may be used to control position, orientation, and form of the cross-section. Four independent variables capture all of the profile deviations: two independent translations in the plane of the profile, one rotation in that plane, and the size-increment necessary to identify one of the allowable parallel profiles. For the selected method of generation, the line profile is decomposed into three types of segments, a primitive T-Map is produced for each segment, and finally the T-Maps from all the segments are combined to obtain the T-Map for the given profile. The types of segments are the (straight) line-segment, circular arc-segment, and the freeform-curve segment. The primitive T-Maps are generated analytically, and, for freeform-curves, they are built approximately with the aid of the computer. A deformation matrix is used to transform the primitive T-Maps to a single coordinate system for the whole profile. The T-Map for the whole line profile is generated by the Boolean intersection of the primitive T-Maps for the individual profile segments. This computer-implemented method can generate T-Maps for open profiles, closed ones, and those containing concave shapes.
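As a rough sketch (with assumed sign conventions), the four profile-deviation variables act on a nominal profile point as follows; these are the same variables from which the primitive T-Maps are built:

```python
# A minimal sketch (assumed sign conventions): the four line-profile variables
# -- two in-plane translations dx, dy, one in-plane rotation dtheta, and a size
# increment ds along the local normal -- applied to a nominal profile point.
import numpy as np

def displaced_point(p, n, dx, dy, dtheta, ds):
    """p: nominal point (2,), n: unit outward normal (2,). Small-displacement model."""
    rot = np.array([[0.0, -dtheta], [dtheta, 0.0]])   # linearized rotation about the origin
    return p + np.array([dx, dy]) + rot @ p + ds * n

p = np.array([30.0, 5.0])
n = np.array([0.0, 1.0])
print(displaced_point(p, n, dx=0.01, dy=-0.02, dtheta=0.001, ds=0.015))
```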
Date Created
2013

Cascading evolutionary morphological charts for holistic ideation framework

Description
The main objective of this project was to create a framework for holistic ideation and investigate the technical issues involved in its implementation. In previous research, logical ideation methods were explored, ideation states were identified, and a tentative set of ideation blocks with strategies was incorporated in an interactive software testbed. As a subsequent study, this research investigated and characterized intuitive methods and their strategies, devised a framework to organize the components of ideation (both logical and intuitive), and implemented different ideation methods based on the framework. One of the major contributions of this research is the method by which information passes between different ideation methods; another is a framework to organize the ideas found by different methods. The intuitive ideation strategies added to the holistic testbed are reframing, restructuring, random connection, forced connection, and analogical reasoning. A computer tool facilitating holistic ideation was developed. This framework can also be used as a research tool to collect large amounts of data from designers about their choice of ideation strategies and assessments of their effectiveness.
Date Created
2012

Complexity measurement of cyber physical systems

Description
Modern automotive and aerospace products are large cyber-physical systems involving both software and hardware, composed of mechanical, electrical, and electronic components. The increasing complexity of such systems is a major concern, as it impacts development time and effort as well as initial and operational costs. Towards the goal of measuring complexity, the first step is to determine the factors that contribute to it and metrics to quantify it. These complexity components can further be used to (a) estimate the cost of a cyber-physical system, (b) develop methods that can reduce that cost, and (c) support decisions such as selecting one design from a set of possible solutions or variants. To determine the contributions to complexity, we conducted a survey at an aerospace company and found three types of contributions to the complexity of the system: artifact complexity, design-process complexity, and manufacturing complexity. In all three domains, we found three types of metrics: size complexity, numeric complexity (degree of coupling), and technological complexity (solvability). We propose a formal representation of all three domains as graphs, but with different interpretations of entity (node) and relation (link) corresponding to the three aspects above. The complexities of these components are measured using algorithms defined in graph theory. Two experiments were conducted to check the meaningfulness and feasibility of the complexity metrics. The first experiment was on a mechanical transmission; its scope was the component level, and all design stages, from concept to manufacturing, were considered. The second experiment was conducted on hybrid powertrains; its scope was the assembly level, and only artifact complexity was considered because of limited resources. Finally, the calibration of these complexity measures was conducted at an aerospace company, but the results cannot be included in this thesis.
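The exact metric definitions are not given in the abstract; the sketch below (with a hypothetical component graph and illustrative measures) shows the graph-based style of computation:

```python
# A minimal sketch (illustrative metrics, hypothetical graph): an artifact as a
# graph of components (nodes) and interactions (links), with size and coupling
# measures computed from the graph.
import networkx as nx

g = nx.Graph()
g.add_edges_from([                      # hypothetical hybrid-powertrain fragment
    ("engine", "clutch"), ("clutch", "gearbox"), ("gearbox", "motor"),
    ("motor", "battery"), ("battery", "controller"), ("controller", "motor"),
])

size_complexity = g.number_of_nodes() + g.number_of_edges()
coupling = nx.density(g)                # one possible degree-of-coupling measure
avg_degree = sum(d for _, d in g.degree()) / g.number_of_nodes()

print(size_complexity, round(coupling, 3), round(avg_degree, 3))
```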
Date Created
2011

Material substitution in legacy system engineering (LSE) with fuzzy logic principles

Description
The focus of this research is to investigate methods of material substitution for the purpose of re-engineering legacy systems, which involves incomplete information about the form, fit, and function of replacement parts. The primary motive is to extract as much useful information about a failed legacy part as possible and to use fuzzy logic rules to identify the unknown parameter values. Machine elements can fail by any number of failure modes, but the most probable failure modes, based on the service conditions, are considered the critical failure modes. Three main parameters are of key interest in identifying the critical failure mode of a part. Critical failure modes are then directly mapped to material properties. Target material-property values are calculated from the property values of the originally used material and from the design goals. The material database is then searched for new candidate materials that satisfy the goals and the constraints on manufacturing and raw-stock availability. Uncertainty in the extracted data is modeled using fuzzy logic: fuzzy membership functions model the imprecise nature of the data in each available parameter, and rule sets characterize the imprecise dependencies between the parameters and make decisions in identifying the unknown parameter values based on the incompleteness. A final confidence level for each material in the pool of candidates is a direct indication of the uncertainty. All the candidates satisfy the goals and constraints to varying degrees, and the final selection is left to the designer's discretion. The process is automated by software that inputs the incomplete data, uses fuzzy logic to extract more information, and queries the material database with a constrained search to find candidate alternatives.
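A minimal sketch of the fuzzy-logic step (the membership functions, thresholds, and the single rule are illustrative, not the thesis's rule base) is:

```python
# A minimal sketch (illustrative membership functions and one rule): fuzzy
# estimation of a confidence level for a single candidate material from
# imprecise input data.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def candidate_confidence(yield_strength, hardness):
    # Degrees of membership in the (hypothetical) target property sets.
    strength_ok = tri(yield_strength, 250, 350, 450)   # MPa
    hardness_ok = tri(hardness, 150, 200, 250)         # HV
    # Rule: IF strength is adequate AND hardness is adequate THEN candidate fits.
    # min() is the usual fuzzy-AND (t-norm).
    return min(strength_ok, hardness_ok)

print(candidate_confidence(yield_strength=320.0, hardness=210.0))
```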
Date Created
2011