Matching Items (136)
Description
Production from a high-pressure gas well at a high production rate risks operating near the choking condition for compressible flow in porous media. The unbounded gas pressure gradient near the point of choking, which is located near the wellbore, generates an effective tensile stress on the porous rock frame. This tensile stress almost always exceeds the tensile strength of the rock, causing tensile failure and leading to wellbore instability. In a porous rock, not all pores are choked at the same flow rate; once a single pore is choked, the flow through the entire porous medium should be considered choked, because the gas pressure gradient at the point of choking becomes singular. This thesis investigates the choking condition for compressible gas flow in a single microscopic pore. Quasi-one-dimensional analysis and axisymmetric numerical simulations of compressible gas flow in a pore-scale varicose tube with a number of bumps are carried out, and the local Mach number and pressure along the tube are computed for flow near the choking condition. The effects of tube length, inlet-to-outlet pressure ratio, and the number and amplitude of the bumps on the choking condition are obtained. These critical values provide guidance for avoiding the choking condition in practice.
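As a point of reference for the choking behavior described above, the following is a minimal sketch of the classical quasi-one-dimensional isentropic area-Mach relation applied to a varicose (bumpy) tube; the gas properties, geometry, and bump parameters are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.optimize import brentq

gamma = 1.4  # ratio of specific heats (assumed)

def area_ratio(M):
    """Isentropic area-Mach relation A/A* for quasi-one-dimensional flow."""
    return (1.0 / M) * ((2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M**2)) \
           ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def subsonic_mach(a_ratio):
    """Invert the area-Mach relation on the subsonic branch (M = 1 at the throat)."""
    if a_ratio <= 1.0 + 1e-12:
        return 1.0  # sonic throat: the choking condition
    return brentq(lambda M: area_ratio(M) - a_ratio, 1e-6, 1.0 - 1e-9)

# Illustrative pore-scale "varicose" tube: radius with sinusoidal bumps (not the thesis geometry)
x = np.linspace(0.0, 1.0, 201)          # normalized axial position
n_bumps, amplitude = 3, 0.2
radius = 1.0 - amplitude * np.sin(np.pi * n_bumps * x) ** 2
area = np.pi * radius**2

# When the flow is choked, the narrowest throat is sonic; solve A/A_throat for the local Mach number
mach = np.array([subsonic_mach(a / area.min()) for a in area])
print(f"peak Mach number {mach.max():.3f} at x = {x[np.argmax(mach)]:.3f}")
```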
Contributors: Yuan, Jing (Author) / Chen, Kangping (Thesis advisor) / Wang, Liping (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The atomization of a liquid jet by a high speed cross-flowing gas has many applications such as gas turbines and augmentors. The mechanisms by which the liquid jet initially breaks up, however, are not well understood. Experimental studies suggest the dependence of spray properties on operating conditions and nozzle geometry. Detailed numerical simulations can offer better understanding of the underlying physical mechanisms that lead to the breakup of the injected liquid jet. In this work, detailed numerical simulation results of turbulent liquid jets injected into turbulent gaseous cross flows at different density ratios are presented. A finite volume, balanced force fractional step flow solver is employed to solve the Navier-Stokes equations and is coupled to a Refined Level Set Grid method to follow the phase interface. To enable the simulation of atomization of high density ratio fluids, we ensure discrete consistency between the solution of the conservative momentum equation and the level set based continuity equation by employing the Consistent Rescaled Momentum Transport (CRMT) method. The impact of different inflow jet boundary conditions on jet properties, including jet penetration, is analyzed, and results are compared to those obtained experimentally by Brown & McDonell (2006). In addition, instability analysis is performed to find the most dominant instability mechanism that causes the liquid jet to break up. Linear instability analysis is carried out using linear theories for the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and non-linear analysis is performed using our flow solver with different inflow jet boundary conditions.
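As an illustration of the linear-theory ingredient mentioned above, the sketch below evaluates the classical inviscid Kelvin-Helmholtz temporal growth rate with surface tension (gravity neglected) and its most amplified wavenumber; the fluid properties and relative velocity are assumed, and this is not the thesis's analysis or solver.

```python
import numpy as np

# Illustrative fluid properties: liquid jet in a gaseous crossflow (assumed values)
rho_l, rho_g = 1000.0, 1.2      # liquid and gas densities, kg/m^3
sigma = 0.072                   # surface tension, N/m
dU = 100.0                      # relative velocity between the streams, m/s

def kh_growth_rate(k):
    """Temporal growth rate of the inviscid Kelvin-Helmholtz instability with surface tension."""
    inertial = k**2 * rho_l * rho_g * dU**2 / (rho_l + rho_g) ** 2
    capillary = sigma * k**3 / (rho_l + rho_g)
    return np.sqrt(np.maximum(inertial - capillary, 0.0))

# Most amplified wavenumber from d(omega^2)/dk = 0 and the corresponding wavelength
k_star = 2.0 * rho_l * rho_g * dU**2 / (3.0 * sigma * (rho_l + rho_g))
print(f"most amplified wavelength: {2.0 * np.pi / k_star * 1e6:.1f} micron")
print(f"peak growth rate: {kh_growth_rate(k_star):.3e} 1/s")
```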
Contributors: Ghods, Sina (Author) / Herrmann, Marcus (Thesis advisor) / Squires, Kyle (Committee member) / Chen, Kangping (Committee member) / Huang, Huei-Ping (Committee member) / Tang, Wenbo (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This research examines the current challenges of using Lamb wave interrogation methods to localize fatigue crack damage in a complex metallic structural component subjected to unknown temperatures. The goal of this work is to improve damage localization results for a structural component interrogated at an unknown temperature by developing a probabilistic and reference-free framework for estimating Lamb wave velocities and the damage location. The methodology for damage localization at unknown temperatures includes the following key elements: i) a model that can describe the change in Lamb wave velocities with temperature; ii) the extension of an advanced time-frequency based signal processing technique for enhanced time-of-flight feature extraction from a dispersive signal; iii) the development of a Bayesian damage localization framework incorporating data association and sensor fusion. The technique requires no additional transducers to be installed on a structure, and allows for the estimation of both the temperature and the wave velocity in the component. Additionally, the framework of the algorithm allows it to function completely in an unsupervised manner by probabilistically accounting for all measurement origin uncertainty. The novel algorithm was experimentally validated using an aluminum lug joint with a growing fatigue crack. The lug joint was interrogated using piezoelectric transducers at multiple fatigue crack lengths, and at temperatures between 20°C and 80°C. The results showed that the algorithm could accurately predict the temperature and wave speed of the lug joint. The localization results for the fatigue damage were found to correlate well with the true locations at long crack lengths, but loss of accuracy was observed in localizing small cracks due to time-of-flight measurement errors. To validate the algorithm across a wider range of temperatures, the electromechanically coupled LISA/SIM model was used to simulate the effects of temperature. The numerical results showed that this approach would be capable of experimentally estimating the temperature and velocity in the lug joint for temperatures from -60°C to 150°C. The velocity estimation algorithm was found to significantly increase the accuracy of localization at temperatures above 120°C, where error due to incorrect velocity selection begins to outweigh the error due to time-of-flight measurements.
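For orientation, the following is a hedged sketch of a generic least-squares time-of-flight localization that jointly estimates damage coordinates and wave velocity from actuator-sensor path times; the transducer layout, noise level, and numbers are hypothetical, and the sketch omits the Bayesian data-association and sensor-fusion machinery described above.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative transducer layout (m); not the thesis's lug-joint configuration
actuators = np.array([[0.00, 0.00], [0.10, 0.00]])
sensors   = np.array([[0.00, 0.08], [0.10, 0.08]])

def scatter_tof(params, act, sen):
    """Time of flight actuator -> damage -> sensor for a guessed damage location and velocity."""
    x, y, v = params
    d = np.array([x, y])
    return (np.linalg.norm(act - d, axis=1) + np.linalg.norm(sen - d, axis=1)) / v

# Build every actuator-sensor path and a set of synthetic "measured" times of flight
pairs = [(a, s) for a in actuators for s in sensors]
act = np.array([p[0] for p in pairs])
sen = np.array([p[1] for p in pairs])
true = np.array([0.06, 0.05, 5400.0])                       # assumed damage location (m) and velocity (m/s)
tof_meas = scatter_tof(true, act, sen) + np.random.normal(0, 2e-7, len(pairs))

def residuals(params):
    return scatter_tof(params, act, sen) - tof_meas

# Jointly estimate the damage coordinates and the wave velocity from the measured ToFs
fit = least_squares(residuals, x0=[0.05, 0.04, 5000.0],
                    bounds=([0.0, 0.0, 1000.0], [0.10, 0.08, 10000.0]))
print("Estimated x, y, velocity:", fit.x)
```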
Contributors: Hensberry, Kevin (Author) / Chattopadhyay, Aditi (Thesis advisor) / Liu, Yongming (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Tesla turbo-machinery offers a robust, easily manufactured, extremely versatile prime mover with inherent capabilities making it perhaps the best, if not the only, solution for certain niche applications. The goal of this thesis is not to optimize the performance of the Tesla turbine, but to compare its performance with various working fluids. Theoretical and experimental analyses of a turbine-generator assembly utilizing compressed air, saturated steam and water as the working fluids were performed and are presented in this work. A brief background and explanation of the technology is provided along with potential applications. A theoretical thermodynamic analysis is outlined, resulting in turbine and rotor efficiencies, power outputs and Reynolds numbers calculated for the turbine for various combinations of working fluids and inlet nozzles. The results indicate the turbine is capable of achieving a turbine efficiency of 31.17 ± 3.61% and an estimated rotor efficiency of 95 ± 9.32%. These efficiencies are promising considering the numerous losses still present in the current design. Calculation of the Reynolds number provided insight into the flow behavior and how that behavior impacts the performance and efficiency of the Tesla turbine. It was determined that turbulence in the flow is essential to achieving high power outputs and high efficiency. Although the efficiency, after peaking, tapers off slightly as the flow becomes increasingly turbulent, the power output maintains a steady linear increase.
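A minimal sketch of the kind of thermodynamic bookkeeping described above, for the compressed-air case, is given below: isentropic specific work, measured shaft power, turbine efficiency, and one possible gap Reynolds number. All numbers and the Reynolds-number definition are assumptions for illustration, not data or definitions from the thesis.

```python
# Air properties and illustrative operating conditions (assumed, not thesis measurements)
cp, gamma, R = 1005.0, 1.4, 287.0            # J/(kg K), -, J/(kg K)
T_in, P_in, P_out = 300.0, 500e3, 101.325e3  # inlet temperature (K), inlet/outlet pressure (Pa)
m_dot = 0.01                                 # mass flow rate (kg/s)
torque, omega = 0.05, 2000.0                 # measured shaft torque (N m) and speed (rad/s)

# Ideal (isentropic) specific work available from the expansion, and the turbine efficiency
w_isentropic = cp * T_in * (1.0 - (P_out / P_in) ** ((gamma - 1.0) / gamma))
shaft_power = torque * omega
turbine_efficiency = shaft_power / (m_dot * w_isentropic)

# One possible Reynolds number for flow in the inter-disc gap (definition assumed here)
rho = P_in / (R * T_in)                       # inlet gas density, kg/m^3
gap, velocity, mu = 0.5e-3, 100.0, 1.8e-5     # disc spacing (m), gap velocity (m/s), viscosity (Pa s)
Re_gap = rho * velocity * gap / mu

print(f"Turbine efficiency: {turbine_efficiency:.2%}, gap Reynolds number: {Re_gap:.0f}")
```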
Contributors: Peshlakai, Aaron (Author) / Phelan, Patrick (Thesis advisor) / Trimble, Steve (Committee member) / Wang, Liping (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The heat transfer enhancements available from expanding the cross-section of a boiling microchannel are explored analytically and experimentally. An evaluation of the literature on critical heat flux in flow boiling and associated pressure drop behavior is presented, along with predictive critical heat flux (CHF) and pressure drop correlations. An optimum channel configuration allowing maximum CHF while reducing pressure drop is sought. A perturbation of the channel diameter is employed to examine CHF and pressure drop relationships from the literature, with the aim of identifying those adequately general and suitable for use in a scenario with an expanding channel. Several CHF criteria are identified that predict an optimizable channel expansion, though many do not. Pressure drop relationships admit improvement with expansion, and no optimum presents itself. The relevant physical phenomena surrounding flow boiling pressure drop are considered, and a balance of dimensionless numbers is presented that may be of qualitative use. The design, fabrication, inspection, and experimental evaluation of four copper microchannel arrays with different channel expansion rates using R-134a refrigerant is presented. Optimum rates of expansion which maximize the critical heat flux are considered at multiple flow rates, and experimental results are presented demonstrating optima. The effect of expansion on the boiling number is considered, and experiments demonstrate that expansion produces a notable increase in the boiling number in the region explored, though no optima are observed. A significant decrease in the pressure drop across the evaporator is observed with the expanding channels, and no optima appear. A discussion of the significance of this finding is presented, along with possible avenues for future work.
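Since the boiling number figures in the discussion above, the following is a minimal sketch of Bo = q''/(G h_fg) evaluated along an expanding channel, illustrating why expansion raises the boiling number at fixed mass flow; the R-134a property value, heat flux, flow rate, and geometry are rough assumptions, not the experimental conditions of the thesis.

```python
import numpy as np

h_fg = 173e3          # latent heat of vaporization of R-134a near 25 C, J/kg (approximate)
q_flux = 50e3         # applied heat flux, W/m^2 (assumed)
m_dot = 2.0e-4        # mass flow rate through one channel, kg/s (assumed)

# Channel cross-sectional area expanding linearly along the flow direction (illustrative)
z = np.linspace(0.0, 1.0, 5)                    # normalized streamwise position
area = 0.25e-6 * (1.0 + 0.5 * z)                # m^2, 50% area increase over the length

G = m_dot / area                                # local mass flux, kg/(m^2 s), drops as area grows
Bo = q_flux / (G * h_fg)                        # local boiling number (dimensionless), rises downstream

for zi, boi in zip(z, Bo):
    print(f"z = {zi:.2f}  Bo = {boi:.2e}")
```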
Contributors: Miner, Mark (Author) / Phelan, Patrick E (Thesis advisor) / Baer, Steven (Committee member) / Chamberlin, Ralph (Committee member) / Chen, Kangping (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Increasing computational demands in data centers require facilities to operate at higher ambient temperatures and at higher power densities. Conventionally, data centers are cooled with electrically-driven vapor-compression equipment. This dissertation proposes an alternative data center cooling architecture that is heat-driven, with the source being heat produced by the computer equipment itself. It details experiments investigating the quantity and quality of heat that can be captured from a liquid-cooled microprocessor on a computer server blade from a data center. The experiments involve four liquid-cooling setups and associated heat-extraction methods, including a radical approach using mineral oil. The trials examine the feasibility of using the thermal energy from a CPU to drive a cooling process, and establish a useful relationship among CPU temperature, power, and utilization level. Based on the system data, the project explores the effects on heat, temperature and power of adding insulation, varying water flow, varying CPU loading, and varying the cold plate-to-CPU clamping pressure. The aim is to provide an optimal and steady range of temperatures necessary for a chiller to operate. Results indicate an increasing relationship among CPU temperature, power and utilization. Since the dissipated heat can be captured and removed from the system for reuse elsewhere, the need for electricity-consuming computer fans is eliminated. Thermocouple readings of CPU temperatures as high as 93°C and a calculated CPU thermal output up to 67 Wth indicate a temperature and thermal energy sufficient to serve as the heat input to an absorption chiller. The dissertation then performs a detailed analysis of the exergy of a processor and determines the maximum amount of energy utilizable for work. Exergy, as a source of realizable work, is separated into its two contributing constituents: thermal exergy and informational exergy. The informational exergy is the usable form of work contained within the most fundamental unit of information output by a switching device within a CPU. Exergetic thermal, informational and efficiency values are calculated and plotted for the particular CPU studied, showing how the datasheet standards compare with experimental values. The dissertation concludes with a discussion of the work's significance.
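As a quick check of the thermal-exergy component, the sketch below applies the standard Carnot availability factor to the CPU temperature and thermal power quoted above; the ambient (dead-state) temperature is an assumption.

```python
# Thermal exergy via the standard Carnot factor Ex_th = Q * (1 - T0 / T).
# The CPU temperature and captured heat come from the abstract above; the
# ambient (dead-state) temperature is an assumed value.
T_cpu = 93.0 + 273.15     # measured CPU temperature, K
T_0 = 25.0 + 273.15       # assumed ambient / dead-state temperature, K
Q_th = 67.0               # captured thermal power, W

Ex_thermal = Q_th * (1.0 - T_0 / T_cpu)
print(f"Thermal exergy available from the CPU heat: {Ex_thermal:.1f} W")  # ~12 W
```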
Contributors: Haywood, Anna (Author) / Phelan, Patrick E (Thesis advisor) / Herrmann, Marcus (Committee member) / Gupta, Sandeep (Committee member) / Trimble, Steve (Committee member) / Myhajlenko, Stefan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Aluminum alloys and their composites are attractive materials for applications requiring high strength-to-weight ratios and reasonable cost. Many of these applications, such as those in the aerospace industry, undergo fatigue loading. An understanding of the microstructural damage that occurs in these materials is critical in assessing their fatigue resistance. Two distinct experimental studies were performed to further the understanding of fatigue damage mechanisms in aluminum alloys and their composites, specifically fracture and plasticity. Fatigue resistance of metal matrix composites (MMCs) depends on many aspects of composite microstructure. Fatigue crack growth behavior is particularly dependent on the reinforcement characteristics and matrix microstructure. The goal of this work was to obtain a fundamental understanding of fatigue crack growth behavior in SiC particle-reinforced 2080 Al alloy composites. In situ X-ray synchrotron tomography was performed on two samples at low (R=0.1) and at high (R=0.6) R-ratios. The resulting reconstructed images were used to obtain three-dimensional (3D) rendering of the particles and fatigue crack. Behaviors of the particles and crack, as well as their interaction, were analyzed and quantified. Four-dimensional (4D) visual representations were constructed to aid in the overall understanding of damage evolution. During fatigue crack growth in ductile materials, a plastic zone is created in the region surrounding the crack tip. Knowledge of the plastic zone is important for the understanding of fatigue crack formation as well as subsequent growth behavior. The goal of this work was to quantify the 3D size and shape of the plastic zone in 7075 Al alloys. X-ray synchrotron tomography and Laue microdiffraction were used to non-destructively characterize the volume surrounding a fatigue crack tip. The precise 3D crack profile was segmented from the reconstructed tomography data. Depth-resolved Laue patterns were obtained using differential-aperture X-ray structural microscopy (DAXM), from which peak-broadening characteristics were quantified. Plasticity, as determined by the broadening of diffracted peaks, was mapped in 3D. Two-dimensional (2D) maps of plasticity were directly compared to the corresponding tomography slices. A 3D representation of the plastic zone surrounding the fatigue crack was generated by superimposing the mapped plasticity on the 3D crack profile.
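For context on plastic-zone size, the sketch below evaluates the classical Irwin first-order estimate, which is often used as a point of comparison for measured zones; the stress-intensity value and 7075 yield strength are illustrative, and this estimate is not the DAXM-based measurement described above.

```python
# Classical Irwin first-order plastic-zone size estimates (plane stress and plane strain).
# The stress-intensity factor and yield strength below are illustrative values for a
# 7075-type aluminum alloy, not data from the thesis.
import math

K = 10e6          # stress intensity factor, Pa*sqrt(m) (assumed, ~10 MPa*sqrt(m))
sigma_y = 500e6   # yield strength of 7075 Al, Pa (approximate)

r_plane_stress = (1.0 / (2.0 * math.pi)) * (K / sigma_y) ** 2
r_plane_strain = (1.0 / (6.0 * math.pi)) * (K / sigma_y) ** 2
print(f"Irwin plastic zone size: {r_plane_stress*1e6:.1f} um (plane stress), "
      f"{r_plane_strain*1e6:.1f} um (plane strain)")
```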
Contributors: Hruby, Peter (Author) / Chawla, Nikhilesh (Thesis advisor) / Solanki, Kiran (Committee member) / Liu, Yongming (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The objective of this research is to develop methods for generating the Tolerance-Map for a line-profile that is specified by a designer to control the geometric profile shape of a surface, and then to identify a method that can be easily implemented in computer software using existing libraries. Two methods were explored: the parametric modeling method and the decomposed modeling method. The Tolerance-Map (T-Map) is a hypothetical point-space, each point of which represents one geometric variation of a feature in its tolerance-zone. T-Maps have been produced for most of the tolerance classes that are used by designers, but, prior to the work of this project, the method of construction required considerable intuitive input, rather than being based primarily on automated computer tools. Tolerances on line-profiles are used to control cross-sectional shapes of parts, such as every cross-section of a mildly twisted compressor blade. Such tolerances constrain geometric manufacturing variations within a specified two-dimensional tolerance-zone. A single profile tolerance may be used to control position, orientation, and form of the cross-section. Four independent variables capture all of the profile deviations: two independent translations in the plane of the profile, one rotation in that plane, and the size-increment necessary to identify one of the allowable parallel profiles. For the selected method of generation, the line profile is decomposed into three types of segments, a primitive T-Map is produced for each segment, and finally the T-Maps from all the segments are combined to obtain the T-Map for the given profile. The three types of segments are the (straight) line-segment, the circular arc-segment, and the freeform-curve segment. The primitive T-Maps are generated analytically, and, for freeform-curves, they are built approximately with the aid of the computer. A deformation matrix is used to transform the primitive T-Maps to a single coordinate system for the whole profile. The T-Map for the whole line profile is generated by the Boolean intersection of the primitive T-Maps for the individual profile segments. This computer-implemented method can generate T-Maps for open profiles, closed ones, and those containing concave shapes.
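The Boolean-intersection step can be sketched as follows, assuming each primitive T-Map is available as a set of half-space constraints in the four-dimensional variation space; the constraint data here are placeholders for illustration, not actual primitive T-Maps.

```python
# Combining primitive maps by Boolean intersection: if each primitive map is represented
# as a convex region {x : A x <= b} in the 4D variation space (two translations, one
# rotation, one size increment), the intersection is simply the stacked set of constraints.
# The half-space data below are placeholders, not real T-Maps.
import numpy as np

def intersect(primitives):
    """Stack the half-space constraints of all primitive maps into one combined map."""
    A = np.vstack([A_i for A_i, _ in primitives])
    b = np.concatenate([b_i for _, b_i in primitives])
    return A, b

def contains(A, b, x, tol=1e-12):
    """Check whether a candidate profile variation x lies inside the combined map."""
    return bool(np.all(A @ x <= b + tol))

# Two placeholder primitive maps (e.g., for a line segment and an arc segment)
box = (np.vstack([np.eye(4), -np.eye(4)]), np.full(8, 0.05))                     # |each variable| <= 0.05
slab = (np.array([[1.0, 1.0, 0.0, 0.0], [-1.0, -1.0, 0.0, 0.0]]), np.array([0.06, 0.06]))

A, b = intersect([box, slab])
print(contains(A, b, np.array([0.01, 0.02, 0.0, 0.03])))   # True: inside both primitives
print(contains(A, b, np.array([0.04, 0.04, 0.0, 0.0])))    # False: violates the slab constraint
```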
Contributors: He, Yifei (Author) / Davidson, Joseph (Thesis advisor) / Shah, Jami (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Multi-pulse particle tracking velocimetry (multi-pulse PTV) is a recently proposed flow measurement technique aiming to improve the performance of conventional PTV/PIV. In this work, multi-pulse PTV is assessed based on PTV simulations in terms of spatial resolution, velocity measurement accuracy and the capability of acceleration measurement. The errors in particle location, velocity measurement and acceleration measurement are analytically calculated and compared among quadruple-pulse, triple-pulse and dual-pulse PTV. The optimizations of triple-pulse and quadruple-pulse PTV are discussed, and criteria are developed to minimize the combined error in position, velocity and acceleration. Experimentally, the velocity and acceleration fields of a round impinging air jet are measured to test the triple-pulse technique. A high-speed beam-splitting camera and a custom 8-pulse laser system are utilized to achieve good timing flexibility and temporal resolution. A new method to correct the registration error between CCDs is also presented. Consequently, the velocity field shows good consistency between triple-pulse and dual-pulse measurements. The mean acceleration profile along the centerline of the jet is used as the ground truth for verification of the triple-pulse PIV measurements of the acceleration fields. The instantaneous acceleration field of the jet is directly measured by triple-pulse PIV and presented. Accelerations up to 1,000 g's are measured in these experiments.
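As a rough illustration of the error analysis mentioned above, the sketch below propagates an assumed particle-position uncertainty into dual-pulse velocity and triple-pulse acceleration estimates via standard finite-difference error propagation; the numbers are illustrative, not results from the thesis.

```python
import math

sigma_x = 0.1e-6    # rms particle-position error, m (assumed)
dt = 10e-6          # pulse separation, s (assumed)

# Dual-pulse velocity: v = (x2 - x1) / dt, with independent position errors
sigma_v = math.sqrt(2.0) * sigma_x / dt

# Triple-pulse acceleration: a = (x3 - 2*x2 + x1) / dt**2
sigma_a = math.sqrt(6.0) * sigma_x / dt**2

print(f"velocity uncertainty: {sigma_v:.3e} m/s")        # ~1.4e-2 m/s
print(f"acceleration uncertainty: {sigma_a:.3e} m/s^2")  # ~2.4e3 m/s^2

# Increasing dt lowers this random error but raises truncation error from flow curvature,
# which is why an optimal pulse separation exists.
```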
Contributors: Ding, Liuyang (Author) / Adrian, Ronald J. (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The Volume-of-Fluid method is a popular method for interface tracking in multiphase applications within Computational Fluid Dynamics. To date, several algorithms exist for reconstruction of a geometric interface surface. Among these are the Finite Difference algorithm, the Least Squares Volume-of-Fluid Interface Reconstruction Algorithm (LVIRA), and the Efficient Least Squares Volume-of-Fluid Interface Reconstruction Algorithm (ELVIRA). Along with these geometric interface reconstruction algorithms, there exist several volume-of-fluid transport algorithms. This work discusses two operator-splitting advection algorithms and an unsplit advection algorithm. Using these three interface reconstruction algorithms and three advection algorithms, a comparison is drawn to see how different combinations of these algorithms perform with respect to accuracy as well as computational expense.
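As a concrete, simplified example of the reconstruction ingredient shared by these algorithms, the sketch below estimates interface normals from a 2D volume-fraction field by finite differences, the first step of a Finite Difference (Youngs-type) reconstruction; positioning the reconstructed interface within each cell is omitted, and the drop field is illustrative.

```python
# Gradient (finite-difference) step underlying Finite Difference interface reconstruction:
# the interface normal in each mixed cell is estimated as n = -grad(f)/|grad(f)| from the
# volume-fraction field f. Matching the line position to the cell's volume fraction is omitted.
import numpy as np

def interface_normals(f, dx=1.0, dy=1.0):
    """Estimate interface normals from a 2D volume-fraction field using central differences."""
    dfdx = (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2.0 * dx)
    dfdy = (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2.0 * dy)
    mag = np.sqrt(dfdx**2 + dfdy**2)
    mag[mag == 0.0] = 1.0                     # avoid division by zero in single-phase cells
    return -dfdx / mag, -dfdy / mag           # unit normal pointing out of the liquid

# Volume fractions approximating a circular "drop" on a small grid (illustrative)
n = 32
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
f = np.clip(0.5 + (0.25 - np.hypot(x - 0.5, y - 0.5)) / (1.0 / n), 0.0, 1.0)

nx, ny = interface_normals(f, dx=1.0 / n, dy=1.0 / n)
mixed = (f > 0.0) & (f < 1.0)
print("mixed (interface) cells:", int(mixed.sum()))
```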
Contributors: Kedelty, Dominic (Author) / Herrmann, Marcus (Thesis advisor) / Huang, Huei-Ping (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2015