Description
The focus of this research is to investigate methods of material substitution for re-engineering legacy systems when information about the form, fit, and function of replacement parts is incomplete. The primary motive is to extract as much useful information as possible about a failed legacy part and to use fuzzy logic rules to identify the unknown parameter values. Machine elements can fail by any number of failure modes, but the most probable failure modes under the given service conditions are considered the critical failure modes. Three main parameters are of key interest in identifying the critical failure mode of the part. Critical failure modes are then mapped directly to material properties. Target material property values are calculated from the property values of the originally used material and from the design goals. The material database is searched for new candidate materials that satisfy the goals as well as constraints on manufacturing and raw stock availability. Uncertainty in the extracted data is modeled using fuzzy logic: fuzzy membership functions model the imprecise nature of each available parameter, and rule sets characterize the imprecise dependencies between the parameters and make decisions in identifying unknown parameter values despite the incompleteness. A final confidence level for each material in the pool of candidates is a direct indication of this uncertainty. All candidates satisfy the goals and constraints to varying degrees, and the final selection is left to the designer's discretion. The process is automated by software that accepts incomplete data, uses fuzzy logic to extract more information, and queries the material database with a constrained search to find candidate alternatives.
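As a hedged illustration of the fuzzy-rule step described above (not the thesis software), the sketch below uses triangular membership functions over two hypothetical service-condition parameters and a small min/max rule set to pick a critical failure mode with a confidence level. All ranges, rule choices, and mode names here are assumptions for illustration only.

```python
# Illustrative sketch: triangular fuzzy membership functions and a small
# rule set (AND = min) for identifying a critical failure mode from
# service conditions. All numeric ranges and rules are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising over [a, b], falling over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets over a normalized load scale (0-100) and speed scale (rpm)
load  = {"low":  lambda x: tri(x, 0, 0, 50),
         "high": lambda x: tri(x, 30, 100, 100)}
speed = {"slow": lambda x: tri(x, 0, 0, 500),
         "fast": lambda x: tri(x, 300, 1000, 1000)}

def infer_failure_mode(load_val, speed_val):
    """Evaluate each rule and return the strongest mode with its confidence."""
    rules = {
        "surface fatigue": min(load["high"](load_val), speed["fast"](speed_val)),
        "adhesive wear":   min(load["low"](load_val),  speed["fast"](speed_val)),
        "yielding":        min(load["high"](load_val), speed["slow"](speed_val)),
    }
    mode = max(rules, key=rules.get)
    return mode, rules[mode]   # mode plus its confidence level

mode, confidence = infer_failure_mode(80, 900)
print(mode, round(confidence, 3))
```

The final confidence level plays the same role as in the abstract: every candidate satisfies the rules to some degree, and the designer sees that degree directly.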
Contributors: Balaji, Srinath (Author) / Shah, Jami (Thesis advisor) / Davidson, Joseph (Committee member) / Huebner, Kenneth (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The objective of this research is to develop methods for generating the Tolerance-Map for a line-profile that is specified by a designer to control the geometric profile shape of a surface. After development, the aim is to find one that can be easily implemented in computer software using existing libraries. Two methods were explored: the parametric modeling method and the decomposed modeling method. The Tolerance-Map (T-Map) is a hypothetical point-space, each point of which represents one geometric variation of a feature in its tolerance-zone. T-Maps have been produced for most of the tolerance classes that are used by designers, but, prior to the work of this project, the method of construction required considerable intuitive input, rather than being based primarily on automated computer tools. Tolerances on line-profiles are used to control cross-sectional shapes of parts, such as every cross-section of a mildly twisted compressor blade. Such tolerances constrain geometric manufacturing variations within a specified two-dimensional tolerance-zone. A single profile tolerance may be used to control position, orientation, and form of the cross-section. Four independent variables capture all of the profile deviations: two independent translations in the plane of the profile, one rotation in that plane, and the size-increment necessary to identify one of the allowable parallel profiles. For the selected method of generation, the line profile is decomposed into three types of segments, a primitive T-Map is produced for each segment, and finally the T-Maps from all the segments are combined to obtain the T-Map for the given profile. The types of segments are the (straight) line-segment, circular arc-segment, and the freeform-curve segment. The primitive T-Maps are generated analytically, and, for freeform-curves, they are built approximately with the aid of the computer. 
A deformation matrix is used to transform the primitive T-Maps to a single coordinate system for the whole profile. The T-Map for the whole line profile is generated by the Boolean intersection of the primitive T-Maps for the individual profile segments. This computer-implemented method can generate T-Maps for open profiles, closed ones, and those containing concave shapes.
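The Boolean-intersection step above can be sketched in code under a simplifying assumption: suppose each primitive T-Map has been linearized into half-spaces a·x ≤ b over the four profile variables (two translations, rotation, size increment). Intersecting T-Maps then amounts to pooling their constraint sets. The thesis constructs the exact maps analytically and with CAD libraries, so treat this only as the combination idea; the constraint values are toy numbers.

```python
# Hypothetical sketch of the Boolean intersection of primitive T-Maps,
# with each map approximated by linear half-spaces a . x <= b over the
# four profile variables (dx, dy, rotation, size increment).

def in_tmap(point, halfspaces):
    """True if the 4D point satisfies every half-space (a_vec, b)."""
    return all(sum(a * x for a, x in zip(a_vec, point)) <= b
               for a_vec, b in halfspaces)

# Toy primitive T-Maps for two profile segments
seg1 = [((1, 0, 0, 0), 0.10), ((-1, 0, 0, 0), 0.10)]   # |dx| <= 0.10
seg2 = [((0, 1, 0, 0), 0.05), ((0, -1, 0, 0), 0.05)]   # |dy| <= 0.05

# Boolean intersection: a variation is allowable only if every segment's
# constraints hold, i.e. the pooled constraint set is satisfied.
profile = seg1 + seg2

inside  = in_tmap((0.05, 0.00, 0.0, 0.0), profile)
outside = in_tmap((0.05, 0.08, 0.0, 0.0), profile)
```

The same pooling works for open and closed profiles because intersection of convex constraint sets is order-independent.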
Contributors: He, Yifei (Author) / Davidson, Joseph (Thesis advisor) / Shah, Jami (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The friction condition is an important factor in controlling the compression process in metalforming. Friction calibration maps (FCM) are widely used for estimating friction factors between the workpiece and the die. However, in standard FEA the friction condition is defined by a friction coefficient (µ), while the FCM uses a constant shear friction factor (m) to describe the friction condition. The purpose of this research is to find a method to convert the m factor to a µ factor, so that FEA can be used to simulate ring tests with µ. The research is carried out with FEA and Design of Experiments (DOE). FEA is used to simulate the ring compression test. A 2D quarter model is adopted as the geometry model, and a bilinear material model is used in the nonlinear FEA. After the model is established, validation tests are conducted by examining the influence of Poisson's ratio on the ring compression test. It is shown that the established FEA model is valid, especially when Poisson's ratio is set close to 0.5 in the FEA. The material folding phenomenon is present in this model, and µ factors are applied to all surfaces of the ring. It is also found that the reduction ratio of the ring and the slopes of the FCM can be used to describe the deformation of the ring specimen. With the baseline FEA model, formulas relating the deformation parameters, material mechanical properties, and µ factors are generated through statistical analysis of the simulated ring compression tests. Based on these formulas, a method is found to substitute the m factor with µ factors for a particular material by selecting and applying the µ factor in time sequence. By converting the m factor into a µ factor, cold forging can be simulated.
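The thesis derives its own substitution formulas statistically; purely as a generic illustration of that last step, the sketch below fits a linear map from m to µ by least squares over hypothetical calibration pairs (as might be obtained by matching FEA ring-test simulations to the FCM). The linear form and the data are assumptions, not results from the work.

```python
# Illustrative sketch (assumed linear form, hypothetical data): fit
# mu = c0 + c1 * m by ordinary least squares over calibration pairs,
# then use the fit to substitute a shear friction factor m with a
# Coulomb friction coefficient mu for FEA input.

def fit_linear(ms, mus):
    """Least-squares slope and intercept for mus against ms."""
    n = len(ms)
    mean_m, mean_mu = sum(ms) / n, sum(mus) / n
    c1 = (sum((m - mean_m) * (u - mean_mu) for m, u in zip(ms, mus))
          / sum((m - mean_m) ** 2 for m in ms))
    c0 = mean_mu - c1 * mean_m
    return c0, c1

# Hypothetical calibration pairs from matched simulations
ms  = [0.1, 0.3, 0.5, 0.7]
mus = [0.06, 0.17, 0.29, 0.40]

c0, c1 = fit_linear(ms, mus)
mu_for_m = lambda m: c0 + c1 * m   # converts an m factor to a mu factor
```

In the actual work the regression also involves the deformation parameters and material properties, not m alone.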
Contributors: Kexiang (Author) / Shah, Jami (Thesis advisor) / Davidson, Joseph (Committee member) / Trimble, Steve (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The goal of this research project is to develop a DOF (degree of freedom) algebra for entity clusters to support tolerance specification, validation, and automation. This representation is required to capture the relations between geometric entities, metric constraints, and tolerance specifications. The project is part of an on-going effort to create a bi-level model of GD&T (Geometric Dimensioning and Tolerancing). This thesis presents the systematic derivation of the degrees of freedom of entity clusters corresponding to tolerance classes; the clusters can be datum reference frames (DRFs) or targets. A binary vector representation of degrees of freedom, together with operations for combining such vectors, is proposed, and an algebraic method is developed using this representation. The ASME Y14.5.1 companion to the GD&T standard gives an exhaustive tabulation of active and invariant degrees of freedom (DOF) for datum reference frames. The algebra is validated by checking it against all cases in the Y14.5.1 tabulation, and it allows the derivation of general rules for tolerance specification and validation. A computer tool is implemented to support GD&T specification and validation; it outputs the geometric and tolerance information in the form of a CTF (Constraint-Tolerance-Feature) file, which can be used for tolerance stack analysis.
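A minimal sketch of the binary-vector idea, assuming a 6-bit encoding in the order (Tx, Ty, Tz, Rx, Ry, Rz) with 1 = free and 0 = constrained: stacking datum constraints intersects the remaining free DOFs, which is a bitwise AND. The encoding order and the combination rule shown are assumptions for illustration; the Y14.5.1-validated algebra in the thesis is richer than this.

```python
# Minimal sketch of a binary DOF vector: bit i = 1 means that degree of
# freedom is still free. Combining datums in a DRF is modeled here as a
# bitwise AND of the free-DOF vectors (an assumed, simplified rule).

TX, TY, TZ, RX, RY, RZ = (1 << i for i in range(6))

def combine(*dof_vectors):
    """Free DOFs remaining after applying all constraints together."""
    out = 0b111111
    for d in dof_vectors:
        out &= d
    return out

# A primary planar datum leaves two in-plane translations and one rotation free
plane = TX | TY | RZ
# A secondary axis normal to that plane leaves translation along and
# rotation about itself free
axis = TZ | RZ

remaining = combine(plane, axis)
print(remaining == RZ)  # only rotation about the axis is still unconstrained
```

Bitwise representations like this make exhaustive checks against a tabulation (such as Y14.5.1's) cheap, since every case is a small integer.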
Contributors: Shen, Yadong (Author) / Shah, Jami (Thesis advisor) / Davidson, Joseph (Committee member) / Huebner, Kenneth (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Modern automotive and aerospace products are large cyber-physical systems involving both software and hardware, composed of mechanical, electrical, and electronic components. The increasing complexity of such systems is a major concern because it impacts development time and effort as well as initial and operational costs. Toward the goal of measuring complexity, the first step is to determine the factors that contribute to it and metrics to quantify it. These complexity components can further be used to (a) estimate the cost of a cyber-physical system, (b) develop methods that reduce that cost, and (c) support decisions such as selecting one design from a set of possible solutions or variants. To determine the contributors to complexity, a survey was conducted at an aerospace company. Three types of contributions to system complexity were found: artifact complexity, design process complexity, and manufacturing complexity. In all three domains, three types of metrics were found: size complexity, numeric complexity (degree of coupling), and technological complexity (solvability). A formal representation of all three domains as graphs is proposed, with different interpretations of entity (node) and relation (link) corresponding to the three aspects. The complexities of these components are measured using algorithms from graph theory. Two experiments were conducted to check the meaningfulness and feasibility of the complexity metrics. The first experiment concerned a mechanical transmission, with a scope at the component level; all design stages, from concept to manufacturing, were considered. The second experiment was conducted on hybrid powertrains, with a scope at the assembly level; only artifact complexity was considered because of limited resources. Finally, calibration of these complexity measures was conducted at an aerospace company, but the results cannot be included in this thesis.
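A hedged sketch of the graph representation described above: nodes are entities (components, design tasks, or operations) and links are their couplings. Size complexity is counted directly; coupling is sketched here as average node degree, a deliberately simple stand-in for the thesis' actual graph-theoretic metrics. The example graph and metric definitions are illustrative assumptions.

```python
# Illustrative sketch: an artifact as a graph of components (nodes) and
# interfaces (edges), with two toy complexity metrics computed from it.

def complexity_metrics(nodes, edges):
    """Size = node count + edge count; coupling = average node degree."""
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    size = len(nodes) + len(edges)
    coupling = sum(degree.values()) / len(nodes) if nodes else 0.0
    return {"size": size, "coupling": coupling}

# Toy artifact graph: four components, four interfaces
nodes = ["housing", "shaft", "gear", "bearing"]
edges = [("housing", "bearing"), ("bearing", "shaft"),
         ("shaft", "gear"), ("housing", "gear")]

metrics = complexity_metrics(nodes, edges)
print(metrics)
```

The same machinery applies to the design-process and manufacturing domains by reinterpreting nodes as tasks or operations and edges as their dependencies.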
Contributors: Singh, Gurpreet (Author) / Shah, Jami (Thesis advisor) / Runger, George C. (Committee member) / Davidson, Joseph (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The main objective of this project was to create a framework for holistic ideation and to investigate the technical issues involved in its implementation. In previous research, logical ideation methods were explored, ideation states were identified, and a tentative set of ideation blocks with strategies was incorporated in an interactive software testbed. In this subsequent study, intuitive methods and their strategies were investigated and characterized, a framework to organize the components of ideation (both logical and intuitive) was devised, and different ideation methods were implemented based on the framework. One of the major contributions of this research is the method by which information passes between different ideation methods. Another important contribution is a framework for organizing the ideas found by different methods. The intuitive ideation strategies added to the holistic testbed are reframing, restructuring, random connection, forced connection, and analogical reasoning. A computer tool facilitating holistic ideation was developed. The framework can also be used as a research tool to collect large amounts of data from designers about their choice of ideation strategies and to assess their effectiveness.
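A speculative sketch of the organizing idea that information passes between ideation methods: if each strategy is modeled as a function from a pool of ideas to an expanded pool, strategies (logical or intuitive) can be chained so the output of one feeds the next. The strategy names follow the abstract, but their bodies here are trivial stand-ins, not the testbed's implementations.

```python
# Speculative sketch: ideation strategies as composable pool-to-pool
# functions, so a session chains whichever strategies the designer picks.
# Strategy bodies are illustrative stand-ins only.

def reframing(ideas):
    """Restate each idea from a different frame (stand-in transformation)."""
    return ideas + [f"reframed: {i}" for i in ideas]

def forced_connection(ideas):
    """Force a combination of each idea with the next one in the pool."""
    return ideas + [f"{a} x {b}" for a, b in zip(ideas, ideas[1:])]

def run_session(seed_ideas, strategies):
    """Apply the chosen strategies in sequence; each sees the prior output."""
    pool = list(seed_ideas)
    for strategy in strategies:
        pool = strategy(pool)
    return pool

pool = run_session(["folding chair", "tripod"], [reframing, forced_connection])
print(len(pool))
```

A framework like this also makes data collection straightforward, since the sequence of strategies a designer chose is just the list passed to the session.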
Contributors: Chen, Ying (Author) / Shah, Jami (Thesis advisor) / Huebner, Kenneth (Committee member) / Davidson, Joseph (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
In nature, animals and birds are commonly observed performing movement-based thermoregulation to regulate their body temperatures, for example, the flapping of elephant ears or plumage fluffing in birds. Taking inspiration from nature, and to explore the possibilities of such heat transfer enhancements, this research studied the augmentation of heat transfer rates induced by the vibration of solid as well as novel flexible pinned heat sinks. Enhancement of natural convection has always been very important in improving the performance of cooling mechanisms. In this research, flexible heat sinks were developed and characterized under natural convection cooling with moderate vibration. The vibration of heated surfaces such as motor surfaces, condenser surfaces, robotic arms, and exoskeletons motivated the development of heat sinks having flexible fins with improved heat transfer capacity. The performance of an inflexible, solid copper pin-fin heat sink was taken as the baseline, representing the current industry standard of thermal performance. Maximum convective heat transfer is expected at the resonance frequency of the flexible pin fins. Current experimental results, with fixed input frequency and varying amplitudes, indicate that vibration provides a moderate improvement in convective heat transfer; however, the flexibility of the fins had a negligible effect.
Contributors: Prabhu, Saurabh (Author) / Rykaczewski, Konrad (Thesis advisor) / Phelan, Patrick (Committee member) / Wang, Robert (Committee member) / Arizona State University (Publisher)
Created: 2019
Description
In convective heat transfer processes, the heat transfer rate generally increases with fluid velocity, which leads to complex flow patterns. However, numerically analyzing the complex transport processes and conjugate heat transfer requires extensive time and computing resources. Recently, data-driven approaches have emerged as an alternative way to solve physical problems in a computationally efficient manner, without the iterative computation of the governing physical equations. However, research on data-driven approaches for convective heat transfer is still at a nascent stage. This study aims to introduce data-driven approaches for modeling heat and mass convection phenomena. As the first step, it explores a deep learning approach for modeling internal forced convection heat transfer problems. Conditional generative adversarial networks (cGAN) are trained to predict the solution based on a graphical input describing fluid channel geometries and initial flow conditions. A trained cGAN model rapidly approximates the flow temperature, Nusselt number (Nu), and friction factor (f) of a flow in a heated channel over Reynolds numbers (Re) ranging from 100 to 27,750. The optimized cGAN model exhibited an accuracy of up to 97.6% when predicting the local distributions of Nu and f. Next, this research introduces a deep learning based surrogate model for three-dimensional (3D) transient mixed convection in a horizontal channel with a heated bottom surface. Conditional generative adversarial networks are trained to approximate the temperature maps at arbitrary channel locations and time steps. The model is developed for mixed convection occurring at a Re of 100, a Rayleigh number of 3.9×10^6, and a Richardson number of 88.8. The cGAN with the PatchGAN-based classifier without strided convolutions infers the temperature map with the best clarity and accuracy.
Finally, this study investigates how machine learning can analyze mass transfer in 3D-printed fluidic devices. A random forests algorithm is employed to classify flow images taken from semi-transparent 3D-printed tubes. In particular, this work focuses on the laminar-turbulent transition occurring in a 3D wavy tube and a straight tube, visualized by dye injection. The machine learning model automatically classifies the experimentally obtained flow images with an accuracy > 0.95.
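A plain-Python stand-in, not the random-forest pipeline of the thesis, showing the shape of the classification task: summarize each dye-visualization image by a single feature (here, pixel-intensity variance, chosen purely for illustration) and learn a threshold from labeled examples. Feature choice, data, and decision rule are all assumptions.

```python
# Stand-in sketch of flow-regime classification: one summary feature per
# image plus a threshold learned as the midpoint of labeled class means.
# A real pipeline (e.g., random forests over many features) is richer.

from statistics import mean, variance

def feature(image):
    """image: flat list of grayscale pixel intensities."""
    return variance(image)

def train_threshold(laminar_images, turbulent_images):
    lam = mean(feature(img) for img in laminar_images)
    tur = mean(feature(img) for img in turbulent_images)
    return (lam + tur) / 2

def classify(image, threshold):
    return "turbulent" if feature(image) > threshold else "laminar"

# Toy labeled data: smooth dye traces vs. highly fluctuating ones
laminar   = [[10, 10, 11, 10, 9], [12, 12, 13, 12, 11]]
turbulent = [[0, 30, 5, 25, 2], [3, 28, 1, 31, 6]]

t = train_threshold(laminar, turbulent)
print(classify([11, 11, 12, 10, 11], t))
```

An ensemble method improves on this mainly by combining many such weak decision rules over many features instead of one threshold.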
Contributors: Kang, Munku (Author) / Kwon, Beomjin (Thesis advisor) / Phelan, Patrick (Committee member) / Ren, Yi (Committee member) / Rykaczewski, Konrad (Committee member) / Sohn, SungMin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
The thermal conductivity of cadmium sulfide (CdS) colloidal nanocrystals (NCs) and magic-sized clusters (MSCs) has been investigated in this work. It is well documented in the literature that the thermal conductivity of colloidal nanocrystal assemblies decreases as diameter decreases. However, the extrapolation of this size dependence does not apply to magic-sized clusters: MSCs have an anomalously high thermal conductivity relative to the extrapolated size-dependence trend line for the colloidal nanocrystals. This anomalously high thermal conductivity likely results from the monodispersity of the magic-sized clusters. To support this conjecture, the monodispersity of the MSCs was deliberately eliminated by mixing them with colloidal nanocrystals. Experimental results showed that mixtures of nanocrystals and MSCs have a lower thermal conductivity, falling approximately on the extrapolated trend line for colloidal nanocrystal thermal conductivity as a function of size.
Contributors: Sun, Ming-Hsien (Author) / Wang, Robert (Thesis advisor) / Rykaczewski, Konrad (Committee member) / Wang, Liping (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Windows are one of the most significant locations of heat transfer through a building envelope. In warm climates, it is important that heat gain through windows is minimized. Heat transfer through a window glazing occurs by all major forms of heat transfer (convection, conduction, and radiation). Convection and conduction effects can be limited by manipulating the thermal properties of a window’s construction. However, radiation heat transfer into a building will always occur if a window glazing is visibly transparent. In an effort to reduce heat gain through the building envelope, a window glazing can be designed with spectrally selective properties. These spectrally selective glazings would possess high reflectivity in the near-infrared (NIR) regime (to prevent solar heat gain) and high emissivity in the atmospheric window, 8-13μm (to take advantage of the radiative sky cooling effect). The objective of this thesis is to provide a comprehensive study of the thermal performance of a visibly transparent, high-emissivity glass window. This research proposes a window constructed by coating soda lime glass in a dual layer consisting of Indium Tin Oxide (ITO) and Polyvinyl Fluoride (PVF) film. The optical properties of this experimental glazing were measured and demonstrated high reflectivity in the NIR regime and high emissivity in the atmospheric window. Outdoor field tests were performed to experimentally evaluate the glazing’s thermal performance. The thermal performance was assessed by utilizing an experimental setup intended to mimic a building with a skylight. The proposed glazing experimentally demonstrated reduced indoor air temperatures compared to bare glass, ITO coated glass, and PVF coated glass. A theoretical heat transfer model was developed to validate the experimental results. The results of the theoretical and experimental models showed good agreement. 
On average, the theoretical model demonstrated 0.44% error during the daytime and 0.52% error during the nighttime when compared to the experimentally measured temperature values.
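A minimal sketch of the kind of energy balance behind such a glazing model (an assumed gray-body form, not the thesis' full heat transfer model): net radiative cooling through the atmospheric window traded against the solar heat gain admitted by the coating's NIR reflectivity. All property values below are illustrative, not measured values from the work.

```python
# Illustrative gray-body energy balance for a spectrally selective glazing:
# net heat gain = admitted solar NIR - radiative loss to the sky.
# All temperatures, emissivities, and fluxes are assumed example values.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_heat_gain(T_glaze, T_sky, emissivity, nir_reflectivity, solar_nir):
    """Net heat flux (W/m^2) into the building through the glazing."""
    radiative_loss = emissivity * SIGMA * (T_glaze**4 - T_sky**4)
    solar_gain = (1 - nir_reflectivity) * solar_nir
    return solar_gain - radiative_loss

# High-emissivity, NIR-reflective glazing vs. bare glass (illustrative values)
coated = net_heat_gain(300.0, 280.0, 0.90, 0.80, 500.0)
bare   = net_heat_gain(300.0, 280.0, 0.84, 0.08, 500.0)
print(coated < bare)  # the selective coating admits less net heat
```

Even this crude balance reproduces the qualitative claim of the abstract: high NIR reflectivity cuts solar gain while high atmospheric-window emissivity preserves radiative sky cooling.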
Contributors: Trujillo, Antonio Jose (Author) / Phelan, Patrick (Thesis advisor) / Wang, Liping (Thesis advisor) / Rykaczewski, Konrad (Committee member) / Arizona State University (Publisher)
Created: 2022