Matching Items (13)

FE simulation based friction coefficient factors for metal forming

Description

The friction condition is an important factor in controlling the compression process in metal forming. Friction calibration maps (FCM) are widely used to estimate friction factors between the workpiece and die. However, in standard FEA the friction condition is defined by a friction coefficient (µ), while the FCM uses a constant shear friction factor (m) to describe the friction condition. The purpose of this research is to find a method to convert the m factor to a µ factor, so that FEA can be used to simulate ring tests with µ. The research is carried out with FEA and Design of Experiments (DOE). FEA is used to simulate the ring compression test. A 2D quarter model is adopted as the geometric model, and a bilinear material model is used in the nonlinear FEA. After the model is established, validation tests are conducted by examining the influence of Poisson's ratio on the ring compression test. It is shown that the established FEA model is valid, especially when the Poisson's ratio in the FEA settings is close to 0.5. The material folding phenomenon is present in this model, and µ factors are applied to all surfaces of the ring. It is also found that the reduction ratio of the ring and the slopes of the FCM can be used to describe the deformation of the ring specimen. With the baseline FEA model, formulas relating the deformation parameters, material mechanical properties and µ factors are generated through statistical analysis of the simulation results of the ring compression test. Based on these formulas, a method is found to substitute the m factor with µ factors for a particular material by selecting and applying the µ factor in time sequence. By converting the m factor into a µ factor, cold forging can be simulated.

Date Created
  • 2013

Automating fixture setups based on point cloud data & CAD model

Description

Metal castings are selectively machined based on dimensional control requirements. To ensure that all the finished surfaces are fully machined, each as-cast part needs to be measured and then adjusted optimally in its fixture. The topics of this thesis address two parts of this process: data translation and feature-fitting of clouds of points measured on each cast part. For the first, a CAD model of the finished part must be communicated to the machine shop for performing the various machining operations on the metal casting. The data flow must include GD&T specifications along with any other special notes that may need to be communicated to the machinist. Current data exchange among various digital applications is limited to the translation of CAD geometry only, via STEP AP203. Therefore, an algorithm is developed to read, store and translate the data from a CAD file (for example, SolidWorks or CREO) to a standard, machine-readable format (the ACIS format, *.sat). Second, the geometry of cast parts varies from piece to piece, and hence the fixture set-up parameters for each part must be adjusted individually. To determine these adjustments predictively, the datum surfaces and to-be-machined surfaces are scanned individually and the point clouds are reduced to feature fits. The scanned data are stored as separate point cloud files. The labels associated with the datum and to-be-machined (TBM) features are extracted from the *.sat file. These labels are then matched with the file names of the point cloud data to identify the data for the respective features. The point cloud data and the CAD model are then used to fit the appropriate features (features at maximum material condition (MMC) for datums and features at least material condition (LMC) for TBM features) using the existing normative feature fitting (nFF) algorithm.
Once the feature fitting is complete, a global datum reference frame (GDRF) is constructed based on the locating method that will be used to machine the part. The locating method is extracted from a fixture library that specifies the type of fixturing used to machine the part. All entities are transformed from their local coordinate systems into the GDRF. The nominal geometry, fitted features, and GD&T information are then stored in a neutral file format called the Constraint Tolerance Feature (CTF) graph. The final outputs are used to identify the locations of the critical features on each part, and these locations establish the adjustments for its setup prior to machining, in another module that is not part of this thesis.
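The normative feature-fitting (nFF) algorithm itself is not reproduced here, but the idea of reducing a scanned point cloud to a fitted feature can be sketched with an ordinary least-squares plane fit, a hypothetical stand-in for nFF that uses the SVD of the centered point matrix:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal).

    The plane passes through the centroid of the points; the normal is
    the right singular vector associated with the smallest singular
    value of the centered point matrix (direction of least variance).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal

# Noisy samples of a nearly flat datum surface (the plane z = 0)
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.001 * rng.standard_normal(200)          # small scan noise
c, n = fit_plane(np.column_stack([xy, z]))     # normal ~ (0, 0, ±1)
```

A fit like this yields the substitute geometry (here a plane) whose pose can then be transformed into the GDRF.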

Date Created
  • 2016

Cascading evolutionary morphological charts for holistic ideation framework

Description

The main objective of this project was to create a framework for holistic ideation and investigate the technical issues involved in its implementation. In previous research, logical ideation methods were explored, ideation states were identified, and a tentative set of ideation blocks with strategies was incorporated into an interactive software testbed. As a subsequent study, this research investigated and characterized intuitive methods and their strategies, devised a framework to organize the components of ideation (both logical and intuitive), and implemented different ideation methods based on the framework. One of the major contributions of this research is the method by which information passes between different ideation methods. Another important contribution is a framework for organizing the ideas found by different methods. The intuitive ideation strategies added to the holistic testbed are reframing, restructuring, random connection, forced connection, and analogical reasoning. A computer tool facilitating holistic ideation was developed. This framework can also be used as a research tool to collect large amounts of data from designers about their choice of ideation strategies and to assess the strategies' effectiveness.

Date Created
  • 2012

Quantifying Deformations in Flexible Assemblies Using Least Square Fit and Capture Zone Techniques

Description

Almost all mechanical and electro-mechanical products are assemblies of multiple parts, whether because of requirements for relative motion, the use of different materials, or shape/size differences. Thus, assembly design is the very crux of engineering design. In addition to the nominal design of an assembly, there is also tolerance design to determine the allowable manufacturing variations that ensure proper functioning and assemblability. Most flexible assemblies are made by stamping sheet metal. The sheet metal stamping process plastically deforms sheet metal using dies. Sub-assemblies of two or more components are made with either spot-welding or riveting operations, and the various sub-assemblies are finally joined, using spot welds or rivets, to create the desired assembly. When two components are brought together for assembly, they do not align exactly; this causes gaps and irregularities in assemblies. As multiple parts are stacked, errors accumulate further. Stamping leads to variable deformations due to residual stresses and elastic recovery from the plastic strain of metals; this is called the ‘spring-back’ effect. When multiple components are stacked or assembled using spot welds, variations in input parameters, such as sheet metal thickness and the number and order of spot welds, cause variations in the exact shape of the final assembly in its free state. It is essential to understand the influence of these input parameters on the geometric variations of both the individual components and the assembly created from them. Design of Experiments is used to generate a main-effects study that evaluates the influence of the input parameters on the output parameters. The scope of this study is to quantify the geometric variations of a flexible assembly and evaluate their dependence on specific input variables. The three input variables considered are the thickness of the sheet material, the number of spot welds used, and the spot-welding order used to create the assembly.

To quantify the geometric variations, sprung-back nodal points along lines, circular arcs, combinations of these, and a specific profile are reduced to metrologically simulated features.
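As an illustration of reducing sprung-back nodal points to a metrologically simulated feature, the sketch below fits a circle to measured 2D points with the algebraic (Kasa) least-squares method. This is an assumed stand-in for the thesis's fitting procedure, not its exact algorithm:

```python
import numpy as np

def fit_circle_2d(x, y):
    """Algebraic (Kasa) least-squares circle fit.

    A circle satisfies x^2 + y^2 + a*x + b*y + c = 0. Solving
    a*x + b*y + c = -(x^2 + y^2) in the least-squares sense recovers
    the center (xc, yc) = (-a/2, -b/2) and radius sqrt(xc^2 + yc^2 - c).
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    xc, yc = -a / 2.0, -b / 2.0
    r = np.sqrt(xc**2 + yc**2 - c)
    return xc, yc, r

# Nodal points near a circular arc of radius 5 centered at (1, 2),
# with a small "sprung-back" perturbation
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
x = 1.0 + 5.0 * np.cos(t) + 0.01 * np.sin(7 * t)
y = 2.0 + 5.0 * np.sin(t) + 0.01 * np.cos(7 * t)
xc, yc, r = fit_circle_2d(x, y)
```

The deviations of the fitted center and radius from their nominal values are the kind of geometric variation the study quantifies.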

Date Created
  • 2020

Advancements in Prosthetics and Joint Mechanisms

Description

Robotic joints can be either powered or passive. This work discusses the creation of a passive and a powered joint system, as well as a combination system that is both powered and passive, along with its benefits. A novel approach to the analysis and control of the combination system is presented.

A passive and a powered ankle joint system are developed and fitted to the field of prosthetics, specifically ankle joint replacement for able-bodied gait. The general 1-DOF robotic joint designs are examined and the results from testing are discussed. Achievements in this area include able-bodied-gait-like behavior of passive systems at slow walking speeds. For higher walking speeds, the powered ankle system is capable of adding the energy necessary to propel the user forward while remaining similar to able-bodied gait, effectively replacing the calf muscle. While running has not been fully achieved by past powered ankle devices, the full power necessary for running and sprinting is reached in this work, with 4× power amplification through the powered ankle mechanism.

A theoretical approach to robotic joints is then analyzed in order to combine the advantages of both passive and powered systems. Energy methods are shown to provide a correct behavioral analysis of any robotic joint system. Manipulation of the energy curves and mechanism coupler curves allows real-time joint behavioral adjustment. Such a powered joint can be adjusted to passively achieve the desired behavior for different speeds and environmental needs. The effects on joint moment and stiffness of adjusting one type of mechanism are presented.

Date Created
  • 2017

Automated iterative tolerance value allocation and analysis

Description

Tolerance specification for manufacturing components from 3D models is a tedious task and often requires the expertise of “detailers”. The work presented here is part of a larger ongoing project aimed at automating tolerance specification to aid less experienced designers by producing consistent geometric dimensioning and tolerancing (GD&T). Tolerance specification can be separated into two major tasks: tolerance schema generation and tolerance value specification. This thesis focuses on the latter part of automated tolerance specification, namely tolerance value allocation and analysis. The tolerance schema (sans values) required prior to these tasks has already been generated by the auto-tolerancing software. This information is communicated through a constraint tolerance feature graph file developed previously at the Design Automation Lab (DAL) and is consistent with the ASME Y14.5 standard.

The objective of this research is to allocate tolerance values that ensure the assemblability conditions are satisfied. Assemblability refers to “the ability to assemble/fit a set of parts in a specified configuration given a nominal geometry and its corresponding tolerances”. Assemblability is determined by the clearances between the mating features. These clearances are affected by the accumulation of tolerances in tolerance loops, and hence the tolerance loops are extracted first. Once tolerance loops have been identified, initial tolerance values are allocated to the contributors in these loops. It is highly unlikely that the initial allocation will satisfy the assemblability requirements, and overlapping loops have to be satisfied simultaneously and progressively; hence, tolerances need to be re-allocated iteratively. This is done with the help of the tolerance analysis module.

The tolerance allocation and analysis module receives the constraint graph, which contains all basic dimensions and mating constraints from the generated schema. The tolerance loops are detected by traversing the constraint graph. The initial allocation distributes the tolerance budget, computed from the clearance available in the loop, among its contributors in proportion to the associated nominal dimensions. The analysis module subjects the loops to 3D parametric variation analysis and estimates the variation parameters for the clearances. The re-allocation module uses hill-climbing heuristics derived from the distribution parameters to select a loop. Re-allocation of the tolerance values is done using the sensitivities and the weights associated with the contributors in the stack.
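The proportional initial allocation described above can be sketched as follows. This is a minimal illustration of the proportioning rule only; the actual module also handles overlapping loops and iterative re-allocation:

```python
def allocate_tolerances(budget, nominal_dims):
    """Distribute a clearance-derived tolerance budget among the
    contributors of a loop, in proportion to their nominal dimensions."""
    total = sum(nominal_dims)
    return [budget * d / total for d in nominal_dims]

# A loop with three contributors of nominal sizes 10, 20 and 30 mm
# sharing a 0.6 mm budget gets shares in the ratio 1:2:3
tols = allocate_tolerances(0.6, [10.0, 20.0, 30.0])
```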

Several test cases have been run with this software, and the desired acceptance rates for user input are achieved. Three test cases are presented and the output of each module is discussed.

Date Created
  • 2016

Complexity measurement of cyber physical systems

Description

Modern automotive and aerospace products are large cyber-physical systems involving both software and hardware, composed of mechanical, electrical and electronic components. The increasing complexity of such systems is a major concern, as it impacts development time and effort, as well as initial and operational costs. Toward the goal of measuring complexity, the first step is to determine the factors that contribute to it and metrics to quantify it. These complexity components can be further used to (a) estimate the cost of a cyber-physical system, (b) develop methods that can reduce that cost, and (c) support decisions such as selecting one design from a set of possible solutions or variants. To determine the contributors to complexity, we conducted a survey at an aerospace company. We found three types of contributions to the complexity of a system: artifact complexity, design process complexity and manufacturing complexity. In all three domains, we found three types of metrics: size complexity, numeric complexity (degree of coupling) and technological complexity (solvability). We propose a formal representation of all three domains as graphs, but with different interpretations of entity (node) and relation (link) corresponding to the three aspects above. The complexities of these components are measured using algorithms from graph theory. Two experiments were conducted to check the meaningfulness and feasibility of the complexity metrics. The first experiment involved a mechanical transmission, and its scope was the component level; all design stages, from concept to manufacturing, were considered. The second experiment was conducted on hybrid powertrains; its scope was the assembly level, and only artifact complexity was considered because of limited resources. Finally, calibration of these complexity measures was conducted at an aerospace company, but the results cannot be included in this thesis.
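As a rough illustration of graph-based complexity measurement, the sketch below computes simple size and coupling metrics on a component graph. The metric choices (node/edge counts for size, average degree for coupling) are hypothetical stand-ins, not the thesis's exact definitions:

```python
from collections import defaultdict

def complexity_metrics(edges):
    """Illustrative graph metrics for an artifact graph.

    Size complexity is taken as the node and edge counts; numeric
    complexity (degree of coupling) as the average node degree.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n_nodes = len(adj)
    n_edges = len(edges)
    avg_degree = 2.0 * n_edges / n_nodes if n_nodes else 0.0
    return {"nodes": n_nodes, "edges": n_edges, "avg_degree": avg_degree}

# A toy transmission graph: components as nodes, physical connections as links
m = complexity_metrics([("gear", "shaft"),
                        ("shaft", "bearing"),
                        ("shaft", "housing")])
```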

Date Created
  • 2011

Material substitution in legacy system engineering (LSE) with fuzzy logic principles

Description

The focus of this research is to investigate methods for material substitution for the purpose of re-engineering legacy systems when information about the form, fit and function of replacement parts is incomplete. The primary motive is to extract as much useful information about a failed legacy part as possible and to use fuzzy logic rules to identify the unknown parameter values. Machine elements can fail by any number of failure modes, but the most probable failure modes given the service conditions are considered the critical failure modes. Three main parameters are of key interest in identifying the critical failure mode of a part. Critical failure modes are then mapped directly to material properties. Target material property values are calculated from the property values of the originally used material and from the design goals. The material database is searched for new candidate materials that satisfy the goals and the constraints on manufacturing and raw stock availability. Uncertainty in the extracted data is modeled using fuzzy logic: fuzzy membership functions model the imprecise nature of the data in each available parameter, and rule sets characterize the imprecise dependencies between the parameters and make decisions in identifying unknown parameter values despite the incompleteness. A final confidence level for each material in the pool of candidate materials is a direct indication of uncertainty. All the candidates satisfy the goals and constraints to varying degrees, and the final selection is left to the designer's discretion. The process is automated by software that inputs incomplete data, uses fuzzy logic to extract more information, and queries the material database with a constrained search to find candidate alternatives.
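The imprecise nature of an extracted parameter can be modeled with a membership function. The sketch below shows a standard triangular membership function as a minimal example; the parameter name and range are hypothetical, not taken from the thesis:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 at or beyond a and c, 1 at the peak b,
    linear in between."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical example: degree to which a measured yield strength of
# 310 MPa belongs to the fuzzy set "high strength" (250-450 MPa, peak 350)
mu_high = tri_membership(310.0, a=250.0, b=350.0, c=450.0)
```

Rule sets then combine such membership degrees across parameters to rank candidate materials by confidence.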

Date Created
  • 2011

Generalized T-Map Modelling Procedure & Tolerance Sensitivity Analysis Using T-Maps

Description

Geometrical tolerances define allowable manufacturing variations in the features of mechanical parts. For a given feature (planar face, cylindrical hole), the variations may be modeled with a T-Map, a hypersolid in a 6D small-displacement coordinate space. A general method for constructing T-Maps is to decompose a feature into points, identify the variational limits allowed to these points by the feature's tolerance zone, represent these limits using linear halfspaces, transform these to the central local reference frame, and intersect them to form the T-Map for the entire feature. The method is explained and validated against existing T-Map models. It is further used to model manufacturing variations in the positions of axes in patterns of cylindrical features.

When parts are assembled together, feature-level manufacturing variations accumulate (stack up) to cause variations in one or more critical dimensions, e.g. one or more clearances. When the T-Map model is applied to complex assemblies, it is possible to obtain a stack-up relation in as many as six dimensions, instead of the one or two typical of 1D or 2D charts. The sensitivity of the critical assembly dimension to the manufacturing variations at each feature can be evaluated by fitting a functional T-Map over a kinematically transformed T-Map of the feature. By considering the individual features and their tolerance specifications one by one, the sensitivity of a critical assembly-level dimension to each tolerance can be evaluated. The sum of the products of the tolerance values and their respective sensitivities gives the worst-case functional variation. The same sensitivity equation can be used for statistical tolerance analysis by fitting a Gaussian normal distribution to each tolerance range and forming an equation of variances from all the contributors. The method for evaluating sensitivities and variances for each contributing feature is explained with engineering examples.
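The worst-case and statistical stack-up computations described above can be sketched as follows, assuming (as is conventional) that each tolerance band is treated as ±kσ of a normal distribution with k = 3; the sensitivity and tolerance values are illustrative:

```python
import math

def worst_case(sensitivities, tolerances):
    """Worst-case functional variation: sum of |sensitivity| x tolerance."""
    return sum(abs(s) * t for s, t in zip(sensitivities, tolerances))

def statistical_rss(sensitivities, tolerances, k=3.0):
    """Statistical (root-sum-square) variation: each tolerance band is
    treated as +/- k sigma of a normal distribution, the variances of the
    contributions are summed, and the result is reported at k sigma."""
    var = sum((s * t / k) ** 2 for s, t in zip(sensitivities, tolerances))
    return k * math.sqrt(var)

s = [1.0, 0.5, 2.0]    # sensitivities of the critical dimension (example)
t = [0.1, 0.2, 0.05]   # tolerance values of the contributors (example)
wc = worst_case(s, t)
rss = statistical_rss(s, t)   # always <= worst case for 2+ contributors
```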

The overall objective of this research is to develop a method for automation-friendly and efficient T-Map generation and statistical tolerance analysis.

Date Created
  • 2018

Feature cluster algebra and its application for geometric tolerancing

Description

The goal of this research project is to develop a DOF (degree of freedom) algebra for entity clusters to support tolerance specification, validation, and tolerance automation. This representation is required to capture the relations between geometric entities, metric constraints and tolerance specifications. The project is part of an ongoing effort to create a bi-level model of GD&T (Geometric Dimensioning and Tolerancing). This thesis presents the systematic derivation of the degrees of freedom of entity clusters corresponding to tolerance classes. The clusters can be datum reference frames (DRFs) or targets. A binary vector representation of degrees of freedom, and operations for combining them, are proposed, and an algebraic method is developed using this DOF representation. The ASME Y14.5.1 companion to the GD&T standard gives an exhaustive tabulation of active and invariant degrees of freedom (DOF) for datum reference frames; the algebra is validated by checking it against all cases in the Y14.5.1 tabulation. The algebra allows the derivation of general rules for tolerance specification and validation. A computer tool is implemented to support GD&T specification and validation; it outputs the geometric and tolerance information in the form of a CTF (Constraint-Tolerance-Feature) file, which can be used for tolerance stack analysis.
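A minimal sketch of a binary DOF vector and one plausible combination operation is shown below. The free-DOF sets for the example features and the AND-combination rule are illustrative assumptions, not the algebra derived in the thesis:

```python
# Six DOFs encoded as one bit each: translations Tx, Ty, Tz and
# rotations Rx, Ry, Rz
DOF = {"Tx": 1, "Ty": 2, "Tz": 4, "Rx": 8, "Ry": 16, "Rz": 32}

def dof_vector(*names):
    """Build a 6-bit DOF vector from DOF names."""
    v = 0
    for name in names:
        v |= DOF[name]
    return v

def combine(a, b):
    """Invariant DOFs of a cluster of two features: a transformation
    leaves the cluster invariant only if it leaves both features
    invariant, so the combined invariant set is the bitwise AND."""
    return a & b

# Assumed example: a planar datum (normal along z) is invariant under
# Tx, Ty, Rz; a cylindrical axis along z is invariant under Tz, Rz.
plane = dof_vector("Tx", "Ty", "Rz")
axis = dof_vector("Tz", "Rz")
free = combine(plane, axis)   # only Rz remains invariant
```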

Date Created
  • 2013