Matching Items (503)
Description
Tolerances on line profiles are used to control cross-sectional shapes of parts, such as turbine blades. A full life cycle for many mechanical devices depends (i) on a wise assignment of tolerances during design and (ii) on careful quality control of the manufacturing process to ensure adherence to the specified tolerances. This thesis describes a new method for quality control of a manufacturing process by improving the method used to convert measured points on a part to a geometric entity that can be compared directly with tolerance specifications. The focus of this work is the development of a new computational method for obtaining the least-squares fit of a set of points that have been measured with a coordinate measuring machine along a line profile. The pseudo-inverse of a rectangular matrix is used to convert the measured points to the least-squares fit of the profile. Numerical examples are included for convex and concave line profiles that are formed from line- and circular-arc segments.
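The pseudo-inverse step at the heart of the method can be illustrated with a small sketch. The measurement data below are invented, and the model is a simple straight profile segment rather than a full line profile, but the mechanics (build a rectangular design matrix, apply its Moore-Penrose pseudo-inverse) are the same:

```python
import numpy as np

# Hypothetical points measured by a CMM along a nominally straight
# profile segment (not data from the thesis).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.05, 1.02, 1.98, 3.04, 3.96])

# Rectangular design matrix for the model y = a*x + b.
A = np.column_stack([x, np.ones_like(x)])

# Least-squares parameters via the Moore-Penrose pseudo-inverse: p = pinv(A) @ y.
a, b = np.linalg.pinv(A) @ y

residuals = y - (a * x + b)
print(a, b, residuals)
```

The same pattern extends to profiles built from line- and arc-segments; only the design matrix changes.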
ContributorsSavaliya, Samir (Author) / Davidson, Joseph K. (Thesis advisor) / Shah, Jami J. (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created2013
Description
Shock loading is a complex phenomenon that can lead to failure mechanisms such as strain localization, void nucleation and growth, and eventually spall fracture. Studying incipient stages of spall damage is of paramount importance to accurately determine initiation sites in the material microstructure where damage will nucleate and grow and to formulate continuum models that account for the variability of the damage process due to microstructural heterogeneity. The length scale of damage with respect to that of the surrounding microstructure has proven to be a key aspect in determining sites of failure initiation. Correlations have been found between the damage sites and the surrounding microstructure to determine the preferred sites of spall damage, since it tends to localize at and around the regions of intrinsic defects such as grain boundaries and triple points. However, a considerable amount of work still has to be done in this regard to determine the physics driving the damage at these intrinsic weak sites in the microstructure. The main focus of this research work is to understand the physical mechanisms behind the damage localization at these preferred sites. A crystal plasticity constitutive model is implemented with different damage criteria to study the effects of stress concentration and strain localization at the grain boundaries. A cohesive zone modeling technique is used to include the intrinsic strength of the grain boundaries in the simulations. The constitutive model is verified using single-element tests, calibrated using single-crystal impact experiments and validated using bicrystal and multicrystal impact experiments. The results indicate that strain localization is the predominant driving force for damage initiation and evolution. The microstructural effects on these damage sites are studied to attribute the extent of damage to microstructural features such as grain orientation, misorientation, Taylor factor and the grain boundary planes.
The finite element simulations show good correlation with the experimental results and can be used as the preliminary step in developing accurate probabilistic models for damage nucleation.
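A cohesive-zone model enters such simulations through a traction-separation law at the grain boundaries. As a hedged sketch, the bilinear law below is a common textbook choice; the parameters are invented and the thesis's calibrated law may differ:

```python
def bilinear_traction(delta, delta0=0.01, delta_f=0.1, t_max=100.0):
    """Bilinear cohesive traction-separation law (illustrative parameters):
    linear elastic up to opening delta0, then linear softening until full
    failure at delta_f. Sketches how grain-boundary strength can enter a
    cohesive-zone model; not the calibrated law from the thesis."""
    if delta <= 0.0:
        return 0.0
    if delta < delta0:
        return t_max * delta / delta0          # elastic loading branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0  # fully failed interface

print(bilinear_traction(0.005), bilinear_traction(0.055), bilinear_traction(0.2))
```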
ContributorsKrishnan, Kapil (Author) / Peralta, Pedro (Thesis advisor) / Mignolet, Marc (Committee member) / Sieradzki, Karl (Committee member) / Jiang, Hanqing (Committee member) / Oswald, Jay (Committee member) / Arizona State University (Publisher)
Created2013
Description
The heat transfer enhancements available from expanding the cross-section of a boiling microchannel are explored analytically and experimentally. Evaluation of the literature on critical heat flux in flow boiling and associated pressure drop behavior is presented with predictive critical heat flux (CHF) and pressure drop correlations. An optimum channel configuration allowing maximum CHF while reducing pressure drop is sought. A perturbation of the channel diameter is employed to examine CHF and pressure drop relationships from the literature with the aim of identifying those adequately general and suitable for use in a scenario with an expanding channel. Several CHF criteria are identified which predict an optimizable channel expansion, though many do not. Pressure drop relationships admit improvement with expansion, and no optimum presents itself. The relevant physical phenomena surrounding flow boiling pressure drop are considered, and a balance of dimensionless numbers is presented that may be of qualitative use. The design, fabrication, inspection, and experimental evaluation of four copper microchannel arrays of different channel expansion rates with R-134a refrigerant is presented. Optimum rates of expansion which maximize the critical heat flux are considered at multiple flow rates, and experimental results are presented demonstrating optima. The effect of expansion on the boiling number is considered, and experiments demonstrate that expansion produces a notable increase in the boiling number in the region explored, though no optima are observed. Significant decrease in the pressure drop across the evaporator is observed with the expanding channels, and no optima appear. Discussion of the significance of this finding is presented, along with possible avenues for future work.
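The boiling number discussed above is the dimensionless ratio Bo = q'' / (G * h_fg). A minimal sketch with assumed values; the heat flux, mass flux, and latent heat below are illustrative, not the experimental conditions:

```python
# Boiling number Bo = q'' / (G * h_fg): wall heat flux normalized by the
# mass flux and latent heat, roughly the fraction of the flow vaporized.
q_flux = 2.0e5   # wall heat flux, W/m^2 (assumed)
G = 400.0        # mass flux, kg/(m^2*s) (assumed)
h_fg = 1.6e5     # latent heat of R-134a, J/kg (rough order of magnitude)

Bo = q_flux / (G * h_fg)
print(f"Boiling number: {Bo:.2e}")
```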
ContributorsMiner, Mark (Author) / Phelan, Patrick E (Thesis advisor) / Baer, Steven (Committee member) / Chamberlin, Ralph (Committee member) / Chen, Kangping (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created2013
Description
Modern day gas turbine designers face the problem of hot mainstream gas ingestion into rotor-stator disk cavities. To counter this ingestion, seals are installed on the rotor and stator disk rims and purge air, bled off from the compressor, is injected into the cavities. It is desirable to reduce the supply of purge air as this decreases the net power output as well as efficiency of the gas turbine. Since the purge air influences the disk cavity flow field and effectively the amount of ingestion, the aim of this work was to study the cavity velocity field experimentally using Particle Image Velocimetry (PIV). Experiments were carried out in a model single-stage axial flow turbine set-up that featured blades as well as vanes, with purge air supplied at the hub of the rotor-stator disk cavity. Along with the rotor and stator rim seals, an inner labyrinth seal was provided which split the disk cavity into a rim cavity and an inner cavity. First, static gage pressure distribution was measured to ensure that nominally steady flow conditions had been achieved. The PIV experiments were then performed to map the velocity field on the radial-tangential plane within the rim cavity at four axial locations. Instantaneous velocity maps obtained by PIV were analyzed sector-by-sector to understand the rim cavity flow field. It was observed that the tangential velocity dominated the cavity flow at low purge air flow rate, its dominance decreasing with increase in the purge air flow rate. Radially inboard of the rim cavity, negative radial velocity near the stator surface and positive radial velocity near the rotor surface indicated the presence of a recirculation region in the cavity whose radial extent increased with increase in the purge air flow rate. Qualitative flow streamline patterns are plotted within the rim cavity for different experimental conditions by combining the PIV map information with ingestion measurements within the cavity as reported in Thiagarajan (2013).
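PIV returns in-plane Cartesian velocity components, so mapping the rim-cavity field onto radial and tangential components is a small coordinate rotation at each measurement point. A sketch with invented vectors, taking the rotation axis at the origin:

```python
import numpy as np

# Hypothetical PIV measurement points (x, y) and Cartesian velocities (u, v)
# on the radial-tangential plane, with the rotation axis at the origin.
x = np.array([10.0, 0.0, -5.0])
y = np.array([0.0, 10.0, 5.0])
u = np.array([0.1, -2.0, 1.0])
v = np.array([2.0, 0.1, 1.0])

theta = np.arctan2(y, x)
v_r = u * np.cos(theta) + v * np.sin(theta)    # radial component
v_t = -u * np.sin(theta) + v * np.cos(theta)   # tangential component
print(v_r, v_t)
```

The sign of v_r near the rotor and stator surfaces is what reveals the recirculation region described above.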
ContributorsPathak, Parag (Author) / Roy, Ramendra P (Thesis advisor) / Calhoun, Ronald (Committee member) / Lee, Taewoo (Committee member) / Arizona State University (Publisher)
Created2013
Description
As robotic systems are used in increasingly diverse applications, the interaction of humans and robots has become an important area of research. In many applications of physical human robot interaction (pHRI), the robot and the human can be seen as cooperating to complete a task with some object of interest. Often these applications are in unstructured environments where many paths can accomplish the goal. This creates a need for the ability to communicate a preferred direction of motion between both participants in order to move in a coordinated way. This communication method should be bidirectional in order to fully utilize the capabilities of both the robot and the human. Moreover, often in cooperative tasks between two humans, one human will operate as the leader of the task and the other as the follower. These roles may switch during the task as needed. The need for communication extends into this area of leader-follower switching. Furthermore, there is a need not only to communicate the desire to switch roles but also to control this switching process. Impedance control has been used as a way of dealing with some of the complexities of pHRI. This investigation examined whether impedance control can be utilized as a way of communicating a preferred direction between humans and robots. The first set of experiments tested whether a human could detect a preferred direction of a robot by grasping and moving an object coupled to the robot. The second set tested the reverse case: whether the robot could detect the preferred direction of the human. The ability to detect the preferred direction was shown to be up to 99% effective. Using these results, a control method to allow a human and robot to switch leader and follower roles during a cooperative task was implemented and tested. This method proved successful 84% of the time. This control method was refined using adaptive control, resulting in lower interaction forces and a success rate of 95%.
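One way to see how an impedance controller can encode a preferred direction is to make the rendered damping asymmetric. The sketch below is illustrative only; the 1-D model, the gains, and the switching rule are assumptions, not the controller from these experiments:

```python
def impedance_step(x, v, f_human, preferred, dt=0.001,
                   m=2.0, k=0.0, b_low=5.0, b_high=40.0):
    """One Euler step of a 1-D impedance model m*a + b*v + k*x = f_human.
    Damping is low when the applied force points along the robot's
    preferred direction and high against it, so the human can feel the
    preference through the interaction."""
    b = b_low if f_human * preferred > 0 else b_high
    a = (f_human - b * v - k * x) / m
    v += a * dt
    x += v * dt
    return x, v

def displacement(f_human, preferred, steps=1000):
    """Total displacement under a constant applied force for 1 second."""
    x = v = 0.0
    for _ in range(steps):
        x, v = impedance_step(x, v, f_human, preferred)
    return abs(x)

# Pushing along the preferred direction moves the coupled object much
# farther than pushing equally hard against it.
d_with = displacement(+10.0, preferred=+1)
d_against = displacement(-10.0, preferred=+1)
print(d_with, d_against)
```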
ContributorsWhitsell, Bryan (Author) / Artemiadis, Panagiotis (Thesis advisor) / Santello, Marco (Committee member) / Santos, Veronica (Committee member) / Arizona State University (Publisher)
Created2014
Description
The slider-crank mechanism is popularly used in internal combustion engines to convert the reciprocating motion of the piston into a rotary motion. This research discusses an alternate mechanism proposed by Wiseman Technology Inc. which involves replacing the crankshaft with a hypocycloid gear assembly. The unique hypocycloid gear arrangement allows the piston and the connecting rod to move in a straight line, creating a perfect sinusoidal motion. To analyze the performance advantages of the Wiseman mechanism, engine simulation software was used. The Wiseman engine with the hypocycloid piston motion was modeled in the software and the engine's simulated output results were compared to those of a conventional engine of the same size. The software was also used to analyze the multi-fuel capabilities of the Wiseman engine using a contra piston. The engine's performance was studied while operating on diesel, ethanol and gasoline fuel. Further, a scaling analysis on the future Wiseman engine prototypes was carried out to understand how the performance of the engine is affected by increasing the output power and cylinder displacement. It was found that the existing Wiseman engine produced about 7% less power at peak speeds compared to the slider-crank engine of the same size. It also produced lower torque and was about 6% less fuel efficient than the slider-crank engine. These results were consistent with dynamometer tests performed in the past. The four-stroke diesel variant of the same Wiseman engine performed better than the two-stroke gasoline version as well as the slider-crank engine in all aspects. The Wiseman engine with the contra piston showed poor fuel efficiency while operating on E85 fuel, but produced higher torque and about 1.4% more power than while running on gasoline.
While analyzing the effects of engine size on the Wiseman prototypes, it was found that the engines performed better in terms of power, torque, fuel efficiency and cylinder BMEP as their displacements increased. The 30 horsepower (HP) prototype, while operating on E85, produced the best results in all aspects, and the diesel variant of the same engine proved to be the most fuel efficient.
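BMEP, used above to compare prototypes of different displacement, normalizes brake power by displacement and speed: BMEP = P * n_R / (V_d * N). A sketch with illustrative numbers; the 30 HP figure is from the abstract, while the speed and displacement are assumptions:

```python
import math

P = 22_400.0      # brake power, W (about 30 HP, per the abstract)
N = 3600 / 60     # engine speed, rev/s (assumed 3600 rpm)
V_d = 1.0e-3      # displacement, m^3 (assumed 1.0 L)
n_R = 2           # revolutions per power stroke for a four-stroke engine

bmep = P * n_R / (V_d * N)        # brake mean effective pressure, Pa
torque = P / (2 * math.pi * N)    # brake torque, N*m
print(f"BMEP = {bmep / 1e5:.2f} bar, torque = {torque:.1f} N*m")
```

Because BMEP divides out displacement and speed, it lets prototypes of different sizes be compared on one scale.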
ContributorsRay, Priyesh (Author) / Redkar, Sangram (Thesis advisor) / Mayyas, Abdel Ra'Ouf (Committee member) / Meitz, Robert (Committee member) / Arizona State University (Publisher)
Created2014
Description
Creative design lies at the intersection of novelty and technical feasibility. These objectives can be achieved through cycles of divergence (idea generation) and convergence (idea evaluation) in conceptual design. The focus of this thesis is on the latter aspect. The evaluation may involve any aspect of technical feasibility and may be desired at component, sub-system or full system level. Two issues that are considered in this work are: 1. Information about design ideas is incomplete, informal and sketchy 2. Designers often work at multiple levels; different aspects or subsystems may be at different levels of abstraction Thus, high fidelity analysis and simulation tools are not appropriate for this purpose. This thesis looks at the requirements for a simulation tool and how it could facilitate concept evaluation. The specific tasks reported in this thesis are: 1. The typical types of information available after an ideation session 2. The typical types of technical evaluations done in early stages 3. How to conduct low fidelity design evaluation given a well-defined feasibility question A computational tool for supporting idea evaluation was designed and implemented. It was assumed that the results of the ideation session are represented as a morphological chart and each entry is expressed as some combination of a sketch, text and references to physical effects and machine components. Approximately 110 physical effects were identified and represented in terms of algebraic equations, physical variables and a textual description. A common ontology of physical variables was created so that physical effects could be networked together when variables are shared. This allows users to synthesize complex behaviors from simple ones, without assuming any solution sequence. A library of 16 machine elements was also created and users were given instructions about incorporating them. 
To support quick analysis, differential equations are transformed to algebraic equations by replacing differential terms with steady-state differences; only steady-state behavior is considered, and interval arithmetic is used for modeling. The tool is implemented in MATLAB, and a number of case studies are presented to show how it works.
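Interval arithmetic lets each physical variable carry a range rather than a point value, so networked effects propagate feasibility bounds. A minimal sketch; the Interval type and the two chained effects are illustrative, not the tool's MATLAB implementation:

```python
class Interval:
    """Minimal interval-arithmetic type for low-fidelity feasibility checks."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Chain two physical effects that share the variable I (current):
# V = I * R (Ohm's law), then P = V * I (electrical power).
# Note: naive interval evaluation overestimates the range when a
# variable appears more than once, which is acceptable for a
# conservative feasibility screen.
I = Interval(0.9, 1.1)     # current, A
R = Interval(95.0, 105.0)  # resistance, ohm
V = I * R
P = V * I
print(V, P)
```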
ContributorsKhorshidi, Maryam (Author) / Shah, Jami J. (Thesis advisor) / Wu, Teresa (Committee member) / Gel, Esma (Committee member) / Arizona State University (Publisher)
Created2014
Description
This thesis contains the applications of the ASU mathematical model (Tolerance Maps, T-Maps) to the construction of T-Maps for patterns of line profiles. Previously, Tolerance Maps were developed for patterns of features such as holes, pins, slots and tabs to control their position. The T-Maps that are developed in this thesis are fully compatible with the ASME Y14.5 Standard. A pattern of square profiles, both linear and 2D, is used throughout this thesis to illustrate the idea of constructing the T-Maps for line profiles. The Standard defines two ways of tolerancing a pattern of profiles: Composite Tolerancing and Multiple Single-Segment Tolerancing. Further, in the composite tolerancing scheme, there are two different ways to control the entire pattern: repeating a single datum or two datums in the secondary datum reference frame. T-Maps are constructed for all the different specifications. The Standard also describes a way to control the coplanarity of discontinuous surfaces using a profile tolerance, and T-Maps have been developed for this case as well. Since verification of manufactured parts relative to the tolerance specifications is crucial, a least-squares fit approach, which was developed earlier for line profiles, has been extended to patterns of line profiles. For a pattern, two tolerances are specified, and the manufactured profile needs to lie within the tolerance zones established by both of these tolerances. An i-Map representation of the manufactured variation, located within the T-Map, is also presented in this thesis.
ContributorsRao, Shyam Subramanya (Author) / Davidson, Joseph K. (Thesis advisor) / Arizona State University (Publisher)
Created2014
Description
Conformance of a manufactured feature to the applied geometric tolerances is done by analyzing the point cloud that is measured on the feature. To that end, a geometric feature is fitted to the point cloud and the results are assessed to see whether the fitted feature lies within the specified tolerance limits or not. Coordinate Measuring Machines (CMMs) use feature fitting algorithms that incorporate least-squares estimates as a basis for obtaining minimum, maximum, and zone fits. However, a comprehensive set of algorithms addressing the fitting procedure (all datums, targets) for every tolerance class is not available. Therefore, a library of algorithms is developed to aid the process of feature fitting and tolerance verification. This work addresses linear, planar, circular, and cylindrical features only. The set of algorithms described conforms to the international standards for GD&T. In order to reduce the number of points to be analyzed, and to identify the possible candidate points for linear, circular and planar features, 2D and 3D convex hulls are used. For minimum, maximum, and Chebyshev cylinders, geometric search algorithms are used. Algorithms are divided into three major categories: least-squares, unconstrained, and constrained fits. Primary datums require one-sided unconstrained fits for their verification. Secondary datums require one-sided constrained fits for their verification. For size and other tolerance verifications, both unconstrained and constrained fits are required.
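The convex-hull reduction works because the extreme-fit candidates for linear and planar features must lie on the hull of the point cloud. A hedged 2D sketch using Andrew's monotone-chain algorithm; this is illustrative code, not the library's implementation:

```python
def convex_hull(points):
    """Andrew's monotone-chain 2D convex hull; returns vertices in CCW order.
    Only hull points can define minimum/maximum fits of a line, so the
    rest of the measured cloud can be discarded before fitting."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive for a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Interior points are eliminated; only candidate extreme points remain.
cloud = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2), (3, 1)]
hull = convex_hull(cloud)
print(hull)
```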
ContributorsMohan, Prashant (Author) / Shah, Jami (Thesis advisor) / Davidson, Joseph K. (Committee member) / Farin, Gerald (Committee member) / Arizona State University (Publisher)
Created2014
Description
Increasing computational demands in data centers require facilities to operate at higher ambient temperatures and at higher power densities. Conventionally, data centers are cooled with electrically-driven vapor-compression equipment. This dissertation proposes an alternative data center cooling architecture that is heat-driven, the source being heat produced by the computer equipment. It details experiments investigating the quantity and quality of heat that can be captured from a liquid-cooled microprocessor on a computer server blade from a data center. The experiments involve four liquid-cooling setups and associated heat-extraction methods, including a radical approach using mineral oil. The trials examine the feasibility of using the thermal energy from a CPU to drive a cooling process. Uniquely, the investigation establishes a useful simultaneous relationship among CPU temperature, power, and utilization level. In response to the system data, this project explores the heat, temperature and power effects of adding insulation, varying water flow, CPU loading, and varying the cold plate-to-CPU clamping pressure. The idea is to provide an optimal and steady range of temperatures necessary for a chiller to operate. Results indicate an increasing relationship among CPU temperature, power and utilization. Since the dissipated heat can be captured and removed from the system for reuse elsewhere, the need for electricity-consuming computer fans is eliminated. Thermocouple readings of CPU temperatures as high as 93°C and a calculated CPU thermal energy up to 67 Wth show a sufficiently high temperature and thermal energy to serve as the input temperature and heat medium input to an absorption chiller. This dissertation performs a detailed analysis of the exergy of a processor and determines the maximum amount of energy utilizable for work.
Exergy as a source of realizable work is separated into its two contributing constituents: thermal exergy and informational exergy. The informational exergy is that usable form of work contained within the most fundamental unit of information output by a switching device within a CPU. Exergetic thermal, informational and efficiency values are calculated and plotted for our particular CPU, showing how the datasheet standards compare with experimental values. The dissertation concludes with a discussion of the work's significance.
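The thermal part of the exergy above follows the Carnot factor: Ex = Q * (1 - T0/T). A sketch using the 93°C and 67 Wth figures from the abstract, with an assumed 25°C dead state:

```python
# Thermal exergy (maximum extractable work rate) of the captured CPU heat:
# Ex = Q * (1 - T0/T), i.e. the Carnot factor applied to the heat rate.
T_cpu = 93 + 273.15   # CPU temperature, K (from the abstract)
T0 = 25 + 273.15      # dead-state (ambient) temperature, K (assumed)
Q = 67.0              # captured thermal power, W (from the abstract)

exergy = Q * (1 - T0 / T_cpu)
print(f"Thermal exergy: {exergy:.1f} W")
```

Only a fraction of the 67 Wth is convertible to work because the source temperature is modest relative to ambient; this is exactly why the quality, not just the quantity, of the captured heat matters for driving an absorption chiller.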
ContributorsHaywood, Anna (Author) / Phelan, Patrick E (Thesis advisor) / Herrmann, Marcus (Committee member) / Gupta, Sandeep (Committee member) / Trimble, Steve (Committee member) / Myhajlenko, Stefan (Committee member) / Arizona State University (Publisher)
Created2014