Matching Items (52)

Description
The main objective of this project was to create a framework for holistic ideation and to investigate the technical issues involved in creating such a holistic approach. Towards that goal, we explored different components of ideation (both logical and intuitive), characterized ideation states, and identified new ideation blocks along with the strategies used to overcome them. One of the major contributions of this research is the method by which easy traversal between different ideation methods with different components was facilitated, to support both creativity and functional quality. Another important part of the framework is the sensing of ideation states (blocks/unfettered ideation) and the investigation of matching ideation strategies most likely to facilitate progress. Some of the ideation methods embedded in the initial holistic test bed are a physical-effects catalog, a working-principles catalog, TRIZ, Bio-TRIZ and an artifacts catalog; repositories were created for each of them. The framework will also be used as a research tool to collect large amounts of data from designers about their choice of ideation strategies and their effectiveness. Effective documentation of design ideation paths is also facilitated by this holistic approach. A computer tool facilitating holistic ideation was developed, and case studies were run with different designers to document their ideation states and their choice of ideation strategies in arriving at a good solution to the same design problem.
Contributors: Mohan, Manikandan (Author) / Shah, Jami J. (Thesis advisor) / Huebner, Kenneth (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This thesis concerns the role of geometric imperfections on assemblies in which the location of a target part is dependent on supports at two features. In some applications, such as a turbo-machine rotor that is supported by a series of parts at each bearing, it is the interference or clearance at a functional target feature, such as at the blades, that must be controlled. The first part of this thesis relates the limits of location for the target part to geometric imperfections of other parts when stacked up in parallel paths. In this section parts are considered to be rigid (non-deformable). By understanding how much of the variation from the supporting parts contributes to variation at the target feature, a designer can better utilize the tolerance budget when assigning values to individual tolerances. In this work, the T-Map®, a spatial math model, is used to model the tolerance accumulation in parallel assemblies. In other applications where parts are flexible, deformations are induced when parts in parallel are clamped together during assembly. Presuming that perfectly manufactured parts have been designed to fit perfectly together and produce zero deformations, the clamping-induced deformations result entirely from the imperfect geometry that is produced during manufacture. The magnitudes and types of these deformations are a function of part dimensions and material stiffnesses, and they are limited by design tolerances that control manufacturing variations. These manufacturing variations, if uncontrolled, may produce stresses high enough when the parts are assembled that premature failure can occur before the design life is reached. The last part of the thesis relates the limits on the largest von Mises stress in one part to functional tolerance limits that must be set at the beginning of a tolerance analysis of parts in such an assembly.
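As a rough, hypothetical illustration of how variation at two supporting features maps to a target feature (a 1D rigid-body lever rule, not the T-Map® model used in the thesis; all dimensions and tolerances below are invented):

```python
import numpy as np

# Simplified, hypothetical 1D illustration (not the T-Map(R) model): a rigid
# target part rests on two supports A and B. The height of a functional
# feature at x_F follows from linear interpolation (a lever rule), so its
# worst-case variation is a weighted sum of the independent support tolerances.
x_A, x_B, x_F = 0.0, 200.0, 350.0        # mm, support and feature locations
h_A, h_B = 50.0, 50.0                    # mm, nominal support heights
t_A, t_B = 0.05, 0.08                    # mm, +/- tolerances on each support stack

# Sensitivities of the feature height to each support height (lever rule).
w_B = (x_F - x_A) / (x_B - x_A)
w_A = 1.0 - w_B

h_F_nominal = w_A * h_A + w_B * h_B
worst_case = abs(w_A) * t_A + abs(w_B) * t_B          # worst-case accumulation
statistical = np.hypot(w_A * t_A, w_B * t_B)          # root-sum-square estimate

print(f"feature height: {h_F_nominal:.3f} mm "
      f"+/- {worst_case:.3f} (worst case), +/- {statistical:.3f} (RSS)")
```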
Contributors: Jaishankar, Lupin Niranjan (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Mignolet, Marc P. (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Tolerances on line profiles are used to control cross-sectional shapes of parts, such as turbine blades. A full life cycle for many mechanical devices depends (i) on a wise assignment of tolerances during design and (ii) on careful quality control of the manufacturing process to ensure adherence to the specified tolerances. This thesis describes a new method for quality control of a manufacturing process by improving the method used to convert measured points on a part to a geometric entity that can be compared directly with tolerance specifications. The focus of this thesis is the development of a new computational method for obtaining the least-squares fit of a set of points that have been measured with a coordinate measuring machine along a line-profile. The pseudo-inverse of a rectangular matrix is used to convert the measured points to the least-squares fit of the profile. Numerical examples are included for convex and concave line-profiles that are formed from line segments and circular-arc segments.
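A minimal numerical sketch of the pseudo-inverse least-squares idea mentioned above, applied to a circular-arc segment; the linearized circle model and all data are illustrative assumptions, not the thesis's formulation:

```python
import numpy as np

# Minimal sketch (not the thesis's specific profile-fitting formulation):
# least-squares fit of a circular arc to CMM-style measured points, using the
# pseudo-inverse of a rectangular matrix.
rng = np.random.default_rng(0)
theta = np.linspace(0.2, 1.2, 25)                      # sampled arc
xc, yc, r = 10.0, -4.0, 35.0                           # "true" circle (invented)
x = xc + r * np.cos(theta) + rng.normal(0, 0.01, theta.size)
y = yc + r * np.sin(theta) + rng.normal(0, 0.01, theta.size)

# Linearized circle model: x^2 + y^2 = 2*a*x + 2*b*y + c, unknowns (a, b, c).
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])   # rectangular matrix
rhs = x**2 + y**2
a, b, c = np.linalg.pinv(A) @ rhs                      # least-squares solution
radius = np.sqrt(c + a**2 + b**2)

deviations = np.hypot(x - a, y - b) - radius           # residual form errors
print(f"center=({a:.3f}, {b:.3f}), radius={radius:.3f}, "
      f"max |deviation|={np.abs(deviations).max():.4f}")
```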
Contributors: Savaliya, Samir (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Santos, Veronica J. (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Creative design lies at the intersection of novelty and technical feasibility. These objectives can be achieved through cycles of divergence (idea generation) and convergence (idea evaluation) in conceptual design. The focus of this thesis is on the latter aspect. The evaluation may involve any aspect of technical feasibility and may be desired at the component, sub-system or full-system level. Two issues considered in this work are: (1) information about design ideas is incomplete, informal and sketchy; (2) designers often work at multiple levels, so different aspects or subsystems may be at different levels of abstraction. Thus, high-fidelity analysis and simulation tools are not appropriate for this purpose. This thesis looks at the requirements for a simulation tool and how it could facilitate concept evaluation. The specific tasks reported in this thesis are: (1) the typical types of information available after an ideation session; (2) the typical types of technical evaluations done in early stages; and (3) how to conduct low-fidelity design evaluation given a well-defined feasibility question. A computational tool for supporting idea evaluation was designed and implemented. It was assumed that the results of the ideation session are represented as a morphological chart and each entry is expressed as some combination of a sketch, text and references to physical effects and machine components. Approximately 110 physical effects were identified and represented in terms of algebraic equations, physical variables and a textual description. A common ontology of physical variables was created so that physical effects could be networked together when variables are shared. This allows users to synthesize complex behaviors from simple ones, without assuming any solution sequence. A library of 16 machine elements was also created and users were given instructions about incorporating them. To support quick analysis, differential equations are transformed to algebraic equations by replacing differential terms with steady-state differences, only steady-state behavior is considered, and interval arithmetic is used for modeling. The tool is implemented in MATLAB, and a number of case studies are presented to show how it works.
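A toy sketch of the interval-arithmetic style of evaluation the abstract describes (illustrative only, not the thesis's tool); the physical effect, variable ranges, and helper function below are assumptions:

```python
# Toy sketch of interval-arithmetic evaluation in the spirit of the abstract
# (not the thesis's tool): physical variables are intervals, and simple
# algebraic "physical effect" relations are propagated without fixing values.
from itertools import product

def interval_op(a, b, op):
    """Conservative interval result of a binary operation on intervals a, b."""
    candidates = [op(x, y) for x, y in product(a, b)]
    return (min(candidates), max(candidates))

# Hypothetical early-stage ranges: supply voltage and coil resistance.
V = (11.5, 12.5)        # volts
R = (4.0, 6.0)          # ohms

I = interval_op(V, R, lambda v, r: v / r)      # Ohm's law: I = V / R
P = interval_op(V, I, lambda v, i: v * i)      # power: P = V * I

print(f"current in [{I[0]:.2f}, {I[1]:.2f}] A")
print(f"power   in [{P[0]:.2f}, {P[1]:.2f}] W")
```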
Contributors: Khorshidi, Maryam (Author) / Shah, Jami J. (Thesis advisor) / Wu, Teresa (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Current trends in Computer Aided Engineering (CAE) involve the integration of legacy mesh-based finite element software with newer solid-modeling kernels or full CAD systems in order to simplify laborious or highly specialized tasks in engineering analysis. In particular, mesh generation is becoming increasingly automated. In addition, emphasis is increasingly placed on full assembly (multi-part) models, which in turn necessitates an automated approach to contact analysis. This task is challenging due to increases in algebraic system size, as well as increases in the number of distorted elements - both of which necessitate manual intervention to maintain accuracy and conserve computer resources. In this investigation, it is demonstrated that the use of a mesh-free B-Spline finite element basis for structural contact problems results in significantly smaller algebraic systems than mesh-based approaches for similar grid spacings. The relative error in calculated contact pressure is evaluated for simple two-dimensional smooth domains at discrete points within the contact zone and compared to the analytical Hertz solution, as well as to traditional mesh-based finite element solutions for similar grid spacings. For smooth curved domains, the relative error in contact pressure is shown to be less than that for bi-quadratic Serendipity elements. The finite element formulation draws on some recent innovations, in which the domain to be analyzed is integrated with the use of transformed Gauss points within the domain, and boundary conditions are applied via distance functions (R-functions). However, the basis is stabilized through a novel selective normalization procedure. In addition, a novel contact algorithm is presented in which the B-Spline support grid is re-used for contact detection. The algorithm is demonstrated for two simple two-dimensional assemblies. Finally, a modified Penalty Method is demonstrated for connecting elements with incompatible bases.
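A small sketch of evaluating a B-spline basis on a grid with SciPy, illustrating the kind of basis referred to above; the knot vector and degree are arbitrary choices, and this is not the thesis's contact solver:

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustration of a clamped quadratic B-spline basis on [0, 1] (basis
# evaluation only; not the thesis's contact formulation).
k = 2
knots = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, 9), [1.0] * k))
n_basis = len(knots) - k - 1
x = np.linspace(0.0, 1.0, 200)

# Each basis function is a BSpline with a unit coefficient in one slot.
basis = np.empty((n_basis, x.size))
for i in range(n_basis):
    coeffs = np.zeros(n_basis)
    coeffs[i] = 1.0
    basis[i] = BSpline(knots, coeffs, k)(x)

# Partition of unity: the basis functions sum to one over the domain.
print("max |sum(N_i) - 1| =", np.abs(basis.sum(axis=0) - 1.0).max())
```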
Contributors: Grishin, Alexander (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joe (Committee member) / Hjelmstad, Keith (Committee member) / Huebner, Ken (Committee member) / Farin, Gerald (Committee member) / Peralta, Pedro (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
The essence of this research is the reconciliation and standardization of feature fitting algorithms used in Coordinate Measuring Machine (CMM) software and the development of Inspection Maps (i-Maps) for representing geometric tolerances in the inspection stage based on these standardized algorithms. The i-Map is a hypothetical point-space that represents the substitute feature evaluated for an actual part in the inspection stage. The first step in this research is to investigate the algorithms used for evaluating substitute features in current CMM software. For this, a survey of feature fitting algorithms available in the literature was performed and then a case study was done to reverse engineer the feature fitting algorithms used in commercial CMM software. The experiments showed that algorithms based on the least-squares technique are mostly used for GD&T inspection, and this incorrect choice of fitting algorithm results in errors and deficiencies in the inspection process. Based on the results, a standardization of fitting algorithms is proposed in light of the definitions provided in the ASME Y14.5 standard and an interpretation of manual inspection practices. Standardized algorithms for evaluating substitute features from CMM data, consistent with the ASME Y14.5 standard and manual inspection practices, are developed for each tolerance type applicable to planar features. Second, these standardized substitute-feature fitting algorithms are then used to develop i-Maps for size, orientation and flatness tolerances that apply to their respective feature types. Third, a methodology for Statistical Process Control (SPC) using the i-Maps is proposed by directly fitting i-Maps into the parent T-Maps. Different methods of computing i-Maps, namely finding the mean, computing the convex hull, and principal component analysis, are explored. The control limits for the process are derived from inspection samples and a framework for statistical control of the process is developed. This also includes computation of basic SPC and process capability metrics.
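A minimal sketch of the least-squares substitute-feature fit that the survey found in commercial CMM software, shown here for a planar feature with a simple flatness estimate; the data are invented and this is not the ASME-Y14.5-consistent algorithm the thesis develops:

```python
import numpy as np

# Minimal sketch of a least-squares substitute-plane fit for CMM points, the
# kind of fit the survey found in commercial software (NOT the standardized
# algorithm developed in the thesis). Points are invented.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(40, 2))                       # probe locations (mm)
z = 0.002 * xy[:, 0] - 0.001 * xy[:, 1] + 5.0 + rng.normal(0, 0.003, 40)

# Fit z = a*x + b*y + c by linear least squares.
A = np.column_stack([xy, np.ones(len(xy))])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

# Residuals normal to the fitted plane; flatness ~ peak-to-valley of residuals.
residuals = (z - A @ np.array([a, b, c])) / np.sqrt(a**2 + b**2 + 1.0)
flatness = residuals.max() - residuals.min()
print(f"plane: z = {a:.5f}x + {b:.5f}y + {c:.3f},  flatness = {flatness:.4f} mm")
```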
Contributors: Mani, Neelakantan (Author) / Shah, Jami J. (Thesis advisor) / Davidson, Joseph K. (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
With the substantial development of intelligent robots, human-robot interaction (HRI) has become ubiquitous in applications such as collaborative manufacturing, surgical robotic operations, and autonomous driving. In all these applications, a human behavior model, which can provide predictions of human actions, is a useful reference that helps robots achieve intelligent interaction with humans. This requirement raises the essential problem of how to properly model human behavior, especially when individuals are interacting or cooperating with each other. The major objective of this thesis is to utilize human intention decoding to help robots enhance their performance while interacting with humans. Preliminary work on integrating human intention estimation with an HRI scenario is shown to demonstrate the benefit. In order to achieve this goal, the research is divided into three phases. First, a novel online measure of the human's reliance on the robot, which can be estimated through the intention-decoding process from human actions, is described. An experiment that required human participants to complete an object-moving task with a robot manipulator was conducted under different conditions of distraction. A relationship is discovered between human intention and trust while participants performed a familiar task with no distraction. This finding suggests a relationship between the psychological construct of trust and joint physical coordination, which bridges the human's actions to their mental states. Then, a human collaborative dynamic model based on game theory and bounded rationality is introduced as a novel way to describe human dyadic behavior with these theories. The mutual intention decoding process was also considered to inform this model. Through this model, the connection between the mental states of the individuals and their cooperative actions is indicated. A haptic interface is developed with a virtual environment and experiments are conducted with 30 human subjects. The results suggest the existence of mutual intention decoding during human dyadic cooperative behaviors. Last, the empirical results show that allowing agents to have empathy in inference, which lets the agents understand that others might have a false understanding of their intentions, can help to achieve correct intention inference. It was also verified that knowledge about vehicle dynamics is important for correctly inferring intentions. A new courteous policy is proposed that bounds the courteous motion using the inferred set of equilibrium motions. A simulation, set up to reproduce an intersection-passing scenario between an autonomous car and a human-driven car, is conducted to demonstrate the benefit of the novel courteous control policy.
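A toy sketch of one common way to decode intention from observed actions under bounded rationality (a Bayesian update with a Boltzmann-rational action likelihood); the goals, trajectory, and rationality parameter are invented, and this is not the dissertation's model:

```python
import numpy as np

# Toy illustration (not the dissertation's models): Bayesian intention decoding
# with a Boltzmann-rational action likelihood, a standard way to encode bounded
# rationality. The human is assumed to be moving toward one of two goals; each
# observed step updates the belief over goals.
goals = np.array([[1.0, 0.0], [0.0, 1.0]])      # hypothetical goal locations
belief = np.array([0.5, 0.5])                   # prior over intentions
beta = 4.0                                      # rationality: larger = more rational

def step_likelihood(pos, nxt, goal, beta):
    """Likelihood of moving pos -> nxt if the human intends to reach `goal`."""
    progress = np.linalg.norm(pos - goal) - np.linalg.norm(nxt - goal)
    return np.exp(beta * progress)

trajectory = [np.array([0.0, 0.0]), np.array([0.2, 0.05]), np.array([0.45, 0.1])]
for pos, nxt in zip(trajectory[:-1], trajectory[1:]):
    lik = np.array([step_likelihood(pos, nxt, g, beta) for g in goals])
    belief = belief * lik
    belief /= belief.sum()                      # normalize the posterior
    print("belief over goals:", np.round(belief, 3))
```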
Contributors: Wang, Yiwei (Author) / Zhang, Wenlong (Thesis advisor) / Berman, Spring (Committee member) / Lee, Hyunglae (Committee member) / Ren, Yi (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
In the development of autonomous ground vehicles (AGVs), how to guarantee vehicle lateral stability is one of the most critical aspects. Based on nonlinear vehicle lateral and tire dynamics, new driving requirements of AGVs demand further studies and analyses of vehicle lateral stability control strategies. To achieve comprehensive analyses and stability-guaranteed vehicle lateral driving control, this dissertation presents three main contributions. First, a new method is proposed to estimate and analyze vehicle lateral driving stability regions, which provide a direct and intuitive demonstration for stability control of AGVs. Based on a four-wheel vehicle model and a nonlinear 2D analytical LuGre tire model, a local linearization method is applied to estimate vehicle lateral driving stability regions by analyzing vehicle local stability at each operation point on a phase plane. The obtained stability regions are conservative because both vehicle and tire stability are simultaneously considered. Such a conservative feature is especially important for characterizing the stability properties of AGVs. Second, to analyze vehicle stability, two novel features of the estimated vehicle lateral driving stability regions are studied. First, a shifting vector is formulated to explicitly describe the shifting feature of the lateral stability regions with respect to the vehicle steering angles. Second, dynamic margins of the stability regions are formulated and applied to avoid penetration of the vehicle state trajectory through the region boundaries. With these two features, the shiftable stability regions are feasible for real-time stability analysis. Third, to keep the vehicle states (lateral velocity and yaw rate) always within the shiftable stability regions, different control methods are developed and evaluated. Based on different vehicle control configurations, two dynamic sliding mode controllers (SMC) are designed. To better control vehicle stability without suffering from chattering issues in SMC, a non-overshooting model predictive control is proposed and applied. To further reduce the computational burden for real-time implementation, time-varying control-dependent invariant sets and time-varying control-dependent barrier functions are proposed and adopted in a stability-guaranteed vehicle control problem. Finally, to validate the correctness and effectiveness of the proposed theories, definitions, and control methods, illustrative simulations and experimental results are presented and discussed.
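A rough sketch of estimating a stability region by local linearization on the (lateral velocity, yaw rate) phase plane; a 2-state bicycle model with saturating tires stands in for the dissertation's four-wheel/LuGre model, and all parameters are invented:

```python
import numpy as np

# Rough sketch of a lateral stability region by local linearization on the
# (lateral velocity, yaw rate) phase plane. A 2-state bicycle model with
# saturating tire forces stands in for the dissertation's four-wheel / LuGre
# model; all parameters are invented.
m, Iz, lf, lr, u = 1500.0, 2500.0, 1.2, 1.6, 20.0      # mass, inertia, geometry, speed
Cf, Cr, Fmax = 8.0e4, 9.0e4, 8.0e3                     # cornering stiffness, force cap
delta = 0.0                                            # steering angle (rad)

def f(x):
    v, r = x                                           # lateral velocity, yaw rate
    af = (v + lf * r) / u - delta                      # front/rear slip angles
    ar = (v - lr * r) / u
    Fyf = -Fmax * np.tanh(Cf * af / Fmax)              # saturating lateral forces
    Fyr = -Fmax * np.tanh(Cr * ar / Fmax)
    return np.array([(Fyf + Fyr) / m - u * r, (lf * Fyf - lr * Fyr) / Iz])

def is_locally_stable(x, eps=1e-5):
    J = np.column_stack([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                         for e in np.eye(2)])          # numerical Jacobian
    return np.all(np.linalg.eigvals(J).real < 0.0)

v_grid = np.linspace(-8, 8, 41)
r_grid = np.linspace(-1.5, 1.5, 41)
stable = np.array([[is_locally_stable(np.array([v, r])) for v in v_grid]
                   for r in r_grid])
print(f"locally stable fraction of the sampled phase plane: {stable.mean():.2f}")
```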
Contributors: Huang, Yiwen (Author) / Chen, Yan (Thesis advisor) / Lee, Hyunglae (Committee member) / Ren, Yi (Committee member) / Yong, Sze Zheng (Committee member) / Zhang, Wenlong (Committee member) / Arizona State University (Publisher)
Created: 2021
Description
In convective heat transfer processes, the heat transfer rate generally increases with fluid velocity, which leads to complex flow patterns. However, numerically analyzing the complex transport process and conjugate heat transfer requires extensive time and computing resources. Recently, data-driven approaches have risen as an alternative way to solve physical problems in a computationally efficient manner without necessitating iterative computation of the governing physical equations. However, research on data-driven approaches for convective heat transfer is still at a nascent stage. This study aims to introduce data-driven approaches for modeling heat and mass convection phenomena. As the first step, this research explores a deep learning approach for modeling internal forced convection heat transfer problems. Conditional generative adversarial networks (cGAN) are trained to predict the solution based on a graphical input describing fluid channel geometries and initial flow conditions. A trained cGAN model rapidly approximates the flow temperature, Nusselt number (Nu) and friction factor (f) of a flow in a heated channel over Reynolds numbers (Re) ranging from 100 to 27,750. The optimized cGAN model exhibited an accuracy of up to 97.6% when predicting the local distributions of Nu and f. Next, this research introduces a deep learning based surrogate model for three-dimensional (3D) transient mixed convection in a horizontal channel with a heated bottom surface. Conditional generative adversarial networks (cGAN) are trained to approximate the temperature maps at arbitrary channel locations and time steps. The model is developed for a mixed convection occurring at a Re of 100, a Rayleigh number of 3.9E6, and a Richardson number of 88.8. The cGAN with a PatchGAN-based classifier without strided convolutions infers the temperature map with the best clarity and accuracy. Finally, this study investigates how machine learning can analyze mass transfer in 3D printed fluidic devices. A random forests algorithm is employed to classify flow images taken from semi-transparent 3D printed tubes. In particular, this work focuses on the laminar-turbulent transition process occurring in a 3D wavy tube and a straight tube, visualized by dye injection. The machine learning model automatically classifies the experimentally obtained flow images with an accuracy > 0.95.
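A compact sketch of the random-forest classification step described at the end of the abstract; real dye-injection images are replaced with synthetic arrays, so the data and parameters below are placeholders rather than the study's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Compact sketch of random-forest classification of flow images, as described
# at the end of the abstract. Real dye-injection images are replaced here by
# synthetic arrays; labels 0/1 stand for laminar/turbulent.
rng = np.random.default_rng(0)
n_images, h, w = 400, 32, 32
labels = rng.integers(0, 2, n_images)
# Hypothetical stand-in: "turbulent" images get extra high-frequency noise.
images = rng.normal(0.5, 0.05, (n_images, h, w)) + \
         labels[:, None, None] * rng.normal(0.0, 0.15, (n_images, h, w))

X = images.reshape(n_images, -1)                      # flatten to feature vectors
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```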
Contributors: Kang, Munku (Author) / Kwon, Beomjin (Thesis advisor) / Phelan, Patrick (Committee member) / Ren, Yi (Committee member) / Rykaczewski, Konrad (Committee member) / Sohn, SungMin (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Autonomous systems inevitably must interact with other surrounding systems; thus, algorithms for intention/behavior estimation are of great interest. This dissertation focuses on developing passive and active model discrimination algorithms (PMD and AMD) with applications to set-valued intention identification and fault detection for uncertain/bounded-error dynamical systems. PMD uses the obtained input-output data to invalidate candidate models, while AMD designs an auxiliary input to assist the discrimination process. First, PMD algorithms are proposed for noisy switched nonlinear systems constrained by metric/signal temporal logic specifications, including systems with lossy data modeled by (m,k)-firm constraints. Specifically, optimization-based algorithms are introduced for analyzing the detectability/distinguishability of models and for ruling out models that are inconsistent with observations at run time. On the other hand, two AMD approaches are designed for noisy switched nonlinear models and piecewise affine inclusion models, which involve bilevel optimization with integer variables/constraints in the inner/lower level. The first approach solves the inner problem using mixed-integer parametric optimization, whose solution is included when solving the outer problem/higher level, while the second approach moves the integer variables/constraints to the outer problem in a manner that retains feasibility and recasts the problem as a tractable mixed-integer linear program (MILP). Furthermore, AMD algorithms are proposed for noisy discrete-time affine time-invariant systems constrained by disjunctive and coupled safety constraints. To overcome the issues associated with generalized semi-infinite constraints due to state-dependent input constraints and disjunctive safety constraints, several constraint reformulations are proposed to recast the AMD problems as tractable MILPs. Finally, partition-based AMD approaches are proposed for noisy discrete-time affine time-invariant models with model-independent parameters and output measurements that are revealed at run time. Specifically, algorithms with fixed and adaptive partitions are proposed, where the latter improves on the performance of the former by allowing the partitions to be optimized. By partitioning the operation region, the problem is solved offline, and partition trees are constructed which can be used as a 'look-up table' to determine the optimal input depending on information revealed at run time.
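A toy numerical sketch of the passive model-invalidation idea for bounded-error systems (not the dissertation's optimization-based algorithms); the candidate models, noise bound, and data are invented:

```python
import numpy as np

# Toy sketch of passive model discrimination for bounded-error systems (not the
# dissertation's algorithms): each candidate scalar affine model
# x[k+1] = a*x[k] + b*u[k] + w, |w| <= w_max, is invalidated as soon as the
# observed data cannot be explained by any admissible disturbance.
candidates = {"model_1": (0.90, 0.50), "model_2": (0.60, 1.20)}  # (a, b) guesses
w_max = 0.05

# Hypothetical observed input/state sequence, generated by model_1.
rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 20)
x = np.zeros(21)
for k in range(20):
    x[k + 1] = 0.90 * x[k] + 0.50 * u[k] + rng.uniform(-w_max, w_max)

for name, (a, b) in candidates.items():
    residuals = x[1:] - (a * x[:-1] + b * u)         # implied disturbance sequence
    consistent = np.all(np.abs(residuals) <= w_max + 1e-9)
    print(f"{name}: {'consistent with data' if consistent else 'invalidated'}")
```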
Contributors: Niu, Ruochen (Author) / Yong, Sze Zheng (Thesis advisor) / Berman, Spring (Committee member) / Ren, Yi (Committee member) / Zhang, Wenlong (Committee member) / Zhuang, Houlong (Committee member) / Arizona State University (Publisher)
Created: 2022