Matching Items (43)

Description

The atomization of a liquid jet by a high-speed cross-flowing gas has many applications such as gas turbines and augmentors. The mechanisms by which the liquid jet initially breaks up, however, are not well understood. Experimental studies suggest the dependence of spray properties on operating conditions and nozzle geometry. Detailed numerical simulations can offer better understanding of the underlying physical mechanisms that lead to the breakup of the injected liquid jet. In this work, detailed numerical simulation results of turbulent liquid jets injected into turbulent gaseous cross flows for different density ratios are presented. A finite-volume, balanced-force, fractional-step flow solver is employed to solve the Navier-Stokes equations and is coupled to a Refined Level Set Grid method to follow the phase interface. To enable the simulation of atomization of high-density-ratio fluids, we ensure discrete consistency between the solution of the conservative momentum equation and the level set based continuity equation by employing the Consistent Rescaled Momentum Transport (CRMT) method. The impact of different inflow jet boundary conditions on different jet properties, including jet penetration, is analyzed and results are compared to those obtained experimentally by Brown & McDonell (2006). In addition, instability analysis is performed to find the most dominant instability mechanism that causes the liquid jet to break up. Linear instability analysis is achieved using linear theories for Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and non-linear analysis is performed using our flow solver with different inflow jet boundary conditions.
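
For reference, the linear theories mentioned above start from the textbook inviscid dispersion relation for a perturbed interface between two uniform streams; the sketch below is that standard result (Chandrasekhar-type analysis), not necessarily the exact formulation used in the thesis.

```latex
% Textbook inviscid dispersion relation for a perturbed interface between two uniform
% streams (fluid 1 below fluid 2, gravity g downward, surface tension \sigma):
\[
\frac{\omega}{k} \;=\; \frac{\rho_1 U_1 + \rho_2 U_2}{\rho_1+\rho_2}
\;\pm\; \sqrt{\;\frac{g}{k}\,\frac{\rho_1-\rho_2}{\rho_1+\rho_2}
\;+\;\frac{\sigma k}{\rho_1+\rho_2}
\;-\;\frac{\rho_1\rho_2\,(U_1-U_2)^2}{(\rho_1+\rho_2)^2}\;}
\]
% Growth (instability) occurs when the radicand is negative. Setting U_1 = U_2 recovers the
% Rayleigh-Taylor criterion (unstable for \rho_2 > \rho_1 at k < \sqrt{g(\rho_2-\rho_1)/\sigma}),
% while g = 0 isolates the Kelvin-Helmholtz mechanism driven by the velocity difference.
```
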
Contributors: Ghods, Sina (Author) / Herrmann, Marcus (Thesis advisor) / Squires, Kyle (Committee member) / Chen, Kangping (Committee member) / Huang, Huei-Ping (Committee member) / Tang, Wenbo (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

The heat transfer enhancements available from expanding the cross-section of a boiling microchannel are explored analytically and experimentally. Evaluation of the literature on critical heat flux in flow boiling and associated pressure drop behavior is presented with predictive critical heat flux (CHF) and pressure drop correlations. An optimum channel configuration allowing maximum CHF while reducing pressure drop is sought. A perturbation of the channel diameter is employed to examine CHF and pressure drop relationships from the literature with the aim of identifying those adequately general and suitable for use in a scenario with an expanding channel. Several CHF criteria are identified which predict an optimizable channel expansion, though many do not. Pressure drop relationships admit improvement with expansion, and no optimum presents itself. The relevant physical phenomena surrounding flow boiling pressure drop are considered, and a balance of dimensionless numbers is presented that may be of qualitative use. The design, fabrication, inspection, and experimental evaluation of four copper microchannel arrays of different channel expansion rates with R-134a refrigerant is presented. Optimum rates of expansion which maximize the critical heat flux are considered at multiple flow rates, and experimental results are presented demonstrating optima. The effect of expansion on the boiling number is considered, and experiments demonstrate that expansion produces a notable increase in the boiling number in the region explored, though no optima are observed. Significant decrease in the pressure drop across the evaporator is observed with the expanding channels, and no optima appear. Discussion of the significance of this finding is presented, along with possible avenues for future work.
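
As a rough, hedged illustration of why channel expansion can reduce the two-phase pressure drop, the Python sketch below integrates a homogeneous-equilibrium pressure-drop estimate along a channel whose diameter grows linearly. The correlation choices (Blasius friction factor, McAdams-style homogeneous viscosity), the assumed linear quality rise, and all property values are illustrative assumptions, not the correlations or data evaluated in the thesis.

```python
import numpy as np

# Homogeneous-equilibrium pressure-drop estimate along a linearly expanding channel.
# All correlations and property values are illustrative (roughly R-134a-like) assumptions,
# not the specific models or data used in the thesis.

rho_f, rho_g = 1200.0, 30.0      # liquid / vapor density, kg/m^3 (assumed)
mu_f, mu_g = 2.0e-4, 1.2e-5      # liquid / vapor viscosity, Pa*s (assumed)

def pressure_drop(D_in, expansion, mdot, x_in=0.05, x_out=0.6, L=0.02, n=400):
    """Integrate friction + acceleration dp for D(z) = D_in * (1 + expansion * z / L)."""
    z = np.linspace(0.0, L, n)
    dz = z[1] - z[0]
    x = np.linspace(x_in, x_out, n)                # assume linear quality rise (uniform heating)
    D = D_in * (1.0 + expansion * z / L)
    A = np.pi * D**2 / 4.0
    G = mdot / A                                   # mass flux drops as the channel expands
    rho_h = 1.0 / (x / rho_g + (1.0 - x) / rho_f)  # homogeneous two-phase density
    mu_h = 1.0 / (x / mu_g + (1.0 - x) / mu_f)     # McAdams-style homogeneous viscosity
    Re = G * D / mu_h
    f = 0.079 * Re**-0.25                          # Blasius (Fanning) friction factor, assumed
    dp_fric = np.sum(2.0 * f * G**2 / (rho_h * D)) * dz
    dp_acc = (G[-1]**2 / rho_h[-1]) - (G[0]**2 / rho_h[0])   # momentum-flux change
    return dp_fric + dp_acc

for exp_rate in (0.0, 0.5, 1.0):                   # 0%, 50%, 100% diameter growth over the length
    dp = pressure_drop(D_in=500e-6, expansion=exp_rate, mdot=2e-4)
    print(f"expansion {exp_rate:.1f}: dp ~ {dp / 1e3:.1f} kPa")
```
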
Contributors: Miner, Mark (Author) / Phelan, Patrick E (Thesis advisor) / Baer, Steven (Committee member) / Chamberlin, Ralph (Committee member) / Chen, Kangping (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Increasing computational demands in data centers require facilities to operate at higher ambient temperatures and at higher power densities. Conventionally, data centers are cooled with electrically-driven vapor-compressor equipment. This paper proposes an alternative data center cooling architecture that is heat-driven. The source is heat produced by the computer equipment. This dissertation details experiments investigating the quantity and quality of heat that can be captured from a liquid-cooled microprocessor on a computer server blade from a data center. The experiments involve four liquid-cooling setups and associated heat-extraction, including a radical approach using mineral oil. The trials examine the feasibility of using the thermal energy from a CPU to drive a cooling process. Uniquely, the investigation establishes an interesting and useful relationship simultaneously among CPU temperatures, power, and utilization levels. In response to the system data, this project explores the heat, temperature and power effects of adding insulation, varying water flow, CPU loading, and varying the cold plate-to-CPU clamping pressure. The idea is to provide an optimal and steady range of temperatures necessary for a chiller to operate. Results indicate an increasing relationship among CPU temperature, power and utilization. Since the dissipated heat can be captured and removed from the system for reuse elsewhere, the need for electricity-consuming computer fans is eliminated. Thermocouple readings of CPU temperatures as high as 93°C and a calculated CPU thermal energy up to 67 Wth show a sufficiently high temperature and thermal energy to serve as the input temperature and heat medium input to an absorption chiller. This dissertation performs a detailed analysis of the exergy of a processor and determines the maximum amount of energy utilizable for work. Exergy as a source of realizable work is separated into its two contributing constituents: thermal exergy and informational exergy. The informational exergy is that usable form of work contained within the most fundamental unit of information output by a switching device within a CPU. Exergetic thermal, informational and efficiency values are calculated and plotted for our particular CPU, showing how the datasheet standards compare with experimental values. The dissertation concludes with a discussion of the work's significance.
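
To make the exergy bookkeeping concrete, here is a hedged back-of-the-envelope sketch using the figures quoted above (93°C CPU temperature, roughly 67 W of captured thermal power); the dead-state temperature, the bit rate, and the use of the Landauer bound for informational exergy are illustrative assumptions rather than the dissertation's detailed analysis.

```python
import math

# Rough exergy estimates using the figures quoted in the abstract (93 C, ~67 W thermal).
# Dead-state temperature and bit rate are illustrative assumptions.

T_cpu = 93.0 + 273.15     # CPU temperature, K
T_0 = 25.0 + 273.15       # assumed dead-state (ambient) temperature, K
Q_dot = 67.0              # captured thermal power, W

# Thermal exergy rate: the Carnot fraction of the heat flow that is convertible to work.
Ex_thermal = Q_dot * (1.0 - T_0 / T_cpu)
print(f"thermal exergy rate ~ {Ex_thermal:.1f} W")    # ~12-13 W for these numbers

# Informational exergy per bit via the Landauer bound, k_B * T * ln 2.
k_B = 1.380649e-23        # Boltzmann constant, J/K
ex_per_bit = k_B * T_0 * math.log(2.0)
bit_rate = 1e18           # assumed switching events per second, purely illustrative
print(f"Landauer bound ~ {ex_per_bit:.2e} J/bit -> {ex_per_bit * bit_rate:.2e} W at 1e18 bit/s")
```
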
Contributors: Haywood, Anna (Author) / Phelan, Patrick E (Thesis advisor) / Herrmann, Marcus (Committee member) / Gupta, Sandeep (Committee member) / Trimble, Steve (Committee member) / Myhajlenko, Stefan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The objective of this research is to develop methods for generating the Tolerance-Map for a line-profile that is specified by a designer to control the geometric profile shape of a surface. After development, the aim is to find one that can be easily implemented in computer software using existing libraries. Two methods were explored: the parametric modeling method and the decomposed modeling method. The Tolerance-Map (T-Map) is a hypothetical point-space, each point of which represents one geometric variation of a feature in its tolerance-zone. T-Maps have been produced for most of the tolerance classes that are used by designers, but, prior to the work of this project, the method of construction required considerable intuitive input, rather than being based primarily on automated computer tools. Tolerances on line-profiles are used to control cross-sectional shapes of parts, such as every cross-section of a mildly twisted compressor blade. Such tolerances constrain geometric manufacturing variations within a specified two-dimensional tolerance-zone. A single profile tolerance may be used to control position, orientation, and form of the cross-section. Four independent variables capture all of the profile deviations: two independent translations in the plane of the profile, one rotation in that plane, and the size-increment necessary to identify one of the allowable parallel profiles. For the selected method of generation, the line profile is decomposed into three types of segments, a primitive T-Map is produced for each segment, and finally the T-Maps from all the segments are combined to obtain the T-Map for the given profile. The types of segments are the (straight) line-segment, circular arc-segment, and the freeform-curve segment. The primitive T-Maps are generated analytically, and, for freeform-curves, they are built approximately with the aid of the computer. A deformation matrix is used to transform the primitive T-Maps to a single coordinate system for the whole profile. The T-Map for the whole line profile is generated by the Boolean intersection of the primitive T-Maps for the individual profile segments. This computer-implemented method can generate T-Maps for open profiles, closed ones, and those containing concave shapes.
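
The combination step can be illustrated with a small, hedged sketch: if each primitive T-Map is approximated by a set of linear inequalities (half-spaces) in the four-dimensional deviation space, the T-Map of the whole profile follows by pooling the constraints and testing candidate deviation points against all of them. The half-space boxes and numbers below are hypothetical placeholders; the thesis constructs the primitive maps analytically and intersects them geometrically.

```python
import numpy as np

# Hedged illustration of combining primitive T-Maps by Boolean intersection.
# A deviation point is d = (du, dv, dtheta, ds): two in-plane translations,
# one in-plane rotation, and the size increment of the offset profile.
# Each primitive T-Map is modeled here as a convex polytope {d : A d <= b};
# the constraint sets below are made-up placeholders, not the thesis's constructions.

def intersect(primitives):
    """Stack the half-space constraints of all primitive T-Maps."""
    A = np.vstack([A_i for A_i, _ in primitives])
    b = np.concatenate([b_i for _, b_i in primitives])
    return A, b

def contains(A, b, d, tol=1e-12):
    """True if deviation d lies inside the combined T-Map."""
    return bool(np.all(A @ d <= b + tol))

def box(t, theta_max):
    """Hypothetical primitive map: a box |du|, |dv|, |ds| <= t/2 with |dtheta| <= theta_max."""
    A = np.vstack([np.eye(4), -np.eye(4)])
    b = np.array([t / 2, t / 2, theta_max, t / 2] * 2)
    return A, b

# Two hypothetical primitives (say, a line-segment and an arc-segment) with different rotation limits.
A, b = intersect([box(t=0.1, theta_max=0.02), box(t=0.1, theta_max=0.01)])
print(contains(A, b, np.array([0.01, -0.02, 0.015, 0.0])))  # False: rotation exceeds the tighter limit
print(contains(A, b, np.array([0.01, -0.02, 0.005, 0.0])))  # True: inside both primitive maps
```
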
Contributors: He, Yifei (Author) / Davidson, Joseph (Thesis advisor) / Shah, Jami (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Multi-pulse particle tracking velocimetry (multi-pulse PTV) is a recently proposed flow measurement technique aiming to improve the performance of conventional PTV/PIV. In this work, multi-pulse PTV is assessed based on PTV simulations in terms of spatial resolution, velocity measurement accuracy and the capability of acceleration measurement. The errors of locating particles, velocity measurement and acceleration measurement are analytically calculated and compared among quadruple-pulse, triple-pulse and dual-pulse PTV. The optimizations of triple-pulse and quadruple-pulse PTV are discussed, and criteria are developed to minimize the combined error in position, velocity and acceleration. Experimentally, the velocity and acceleration fields of a round impinging air jet are measured to test the triple-pulse technique. A high-speed beam-splitting camera and a custom 8-pulsed laser system are utilized to achieve good timing flexibility and temporal resolution. A new method to correct the registration error between CCDs is also presented. Consequently, the velocity field shows good consistency between triple-pulse and dual-pulse measurements. The mean acceleration profile along the centerline of the jet is used as the ground truth for the verification of the triple-pulse PIV measurements of the acceleration fields. The instantaneous acceleration field of the jet is directly measured by triple-pulse PIV and presented. Accelerations up to 1,000 g's are measured in these experiments.
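
For orientation, the hedged relations below show how an uncorrelated particle-position error propagates into the central-difference velocity and second-difference acceleration estimates of a triple-pulse recording; the full analysis in the thesis (quadruple-pulse schemes, optimal pulse-spacing criteria) goes well beyond this sketch.

```latex
% Triple-pulse estimates from particle positions x_1, x_2, x_3 recorded at spacing \Delta t:
\[
u \approx \frac{x_3 - x_1}{2\,\Delta t}, \qquad
a \approx \frac{x_1 - 2x_2 + x_3}{\Delta t^{2}},
\]
% and propagating an uncorrelated particle-location error \sigma_x through each estimate gives
\[
\sigma_u = \frac{\sqrt{2}\,\sigma_x}{2\,\Delta t}, \qquad
\sigma_a = \frac{\sqrt{6}\,\sigma_x}{\Delta t^{2}},
\]
% so acceleration accuracy degrades much faster than velocity accuracy as \Delta t shrinks,
% which is why the pulse spacing must balance truncation error against noise amplification.
```
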
Contributors: Ding, Liuyang (Author) / Adrian, Ronald J. (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

The Volume-of-Fluid method is a popular approach to interface tracking in multiphase applications within computational fluid dynamics. To date, several algorithms exist for reconstructing the geometric interface surface. Among these are the Finite Difference algorithm, the Least Squares Volume-of-Fluid Interface Reconstruction Algorithm (LVIRA), and the Efficient Least Squares Volume-of-Fluid Interface Reconstruction Algorithm (ELVIRA). Along with these geometric interface reconstruction algorithms, there exist several volume-of-fluid transport algorithms. This paper discusses two operator-splitting advection algorithms and an unsplit advection algorithm. Using these three interface reconstruction algorithms and three advection algorithms, a comparison is drawn to see how different combinations of these algorithms perform with respect to accuracy as well as computational expense.
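
As a hedged illustration of the reconstruction step, the sketch below estimates the interface normal in a 2-D cell from central differences of the volume-fraction field, which is the essence of the finite-difference (Youngs-type) algorithm named above; LVIRA and ELVIRA instead select the normal that minimizes a least-squares mismatch of reconstructed volume fractions over the surrounding cells, which is not reproduced here.

```python
import numpy as np

# Hedged sketch of the finite-difference (Youngs-type) interface-normal estimate used as
# the first step of a PLIC reconstruction. alpha is a 2-D array of volume fractions.

def youngs_normal(alpha, i, j, dx=1.0, dy=1.0):
    """Unit normal pointing from the liquid (alpha = 1) into the gas (alpha = 0) in cell (i, j)."""
    # Central differences of the volume fraction approximate the -grad(alpha) direction.
    nx = -(alpha[i + 1, j] - alpha[i - 1, j]) / (2.0 * dx)
    ny = -(alpha[i, j + 1] - alpha[i, j - 1]) / (2.0 * dy)
    mag = np.hypot(nx, ny)
    if mag < 1e-12:
        return np.array([0.0, 0.0])          # no resolvable interface in this neighborhood
    return np.array([nx, ny]) / mag

# Example: a vertical interface with liquid on the low-x side (first index treated as x).
alpha = np.zeros((5, 5))
alpha[:2, :] = 1.0                            # fully liquid cells
alpha[2, :] = 0.5                             # cut cells
print(youngs_normal(alpha, 2, 2))             # ~[1., 0.], pointing from liquid into gas
```
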
Contributors: Kedelty, Dominic (Author) / Herrmann, Marcus (Thesis advisor) / Huang, Huei-Ping (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2015
Description

The flow of liquid PDMS (10:1 v/v base to cross-linker ratio) in open, rectangular silicon microchannels, with and without a hexamethyldisilazane (HMDS) or polytetrafluoroethylene (PTFE) (120 nm) coating, was studied. Photolithographic patterning and etching of silicon wafers was used to create microchannels with a range of widths (5-50 μm) and depths (5-20 μm). The experimental PDMS flow rates were compared to an analytical model based on the work of Lucas and Washburn. The experimental flow rates closely matched the predicted flow rates for channels with an aspect ratio (width to depth), p, between one and two. Flow rates in channels with p less than one were higher than predicted, whereas the opposite was true for channels with p greater than two. The divergence between the experimental and predicted flow rates steadily increased with increasing p. These findings are rationalized in terms of the effect of channel dimensions on the front and top meniscus morphology and the possible deviation from the no-slip condition at the channel walls at high shear rates.
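
For context, the classical Lucas-Washburn result for spontaneous capillary filling, on which the analytical model above builds, is sketched below; it is written for a closed circular tube, so take the geometric factor only as indicative for the open rectangular channels studied here.

```latex
% Lucas-Washburn penetration length for a circular capillary of radius r, surface tension
% \sigma, contact angle \theta, and viscosity \mu:
\[
L(t) = \sqrt{\frac{\sigma\, r \cos\theta}{2\,\mu}\, t}, \qquad
\frac{\mathrm{d}L}{\mathrm{d}t} = \frac{\sigma\, r \cos\theta}{4\,\mu\, L(t)}.
\]
% For the open rectangular channels studied here, r is replaced by an effective length scale
% set by the channel width and depth, which is where the aspect-ratio dependence enters.
```
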

In addition, a preliminary experimental setup for calibration tests on ultrasensitive PDMS cantilever beams is reported. One loading and unloading cycle is completed on a microcantilever PDMS beam (theoretical stiffness 0.5 pN/µm). Beam deflections are actuated by adjusting the buoyancy force on the beam, which is submerged in water, by the addition of heat. The expected loading and unloading curve is produced, albeit with significant noise. The experimental results indicate that the beam stiffness is a factor of six larger than predicted theoretically. One probable explanation is that the beam geometry may change when it is removed from the channel after curing, making assumptions about the beam geometry used in the theoretical analysis inaccurate. This theory is bolstered by experimental data discussed in the report. Other sources of error which could partially contribute to the divergent results are discussed. Improvements to the experimental setup for future work are suggested.
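
A minimal sketch of the two relations behind the calibration, offered as illustration only: the Euler-Bernoulli end-load stiffness that presumably underlies the theoretical estimate, and the buoyancy change produced when heating shifts the water density; the actual geometry and property values used in the experiment may differ.

```latex
% End-loaded rectangular cantilever of width w, thickness t, length L, and modulus E:
\[
k = \frac{3\,E\,I}{L^{3}} = \frac{E\, w\, t^{3}}{4\, L^{3}}, \qquad I = \frac{w\, t^{3}}{12}.
\]
% Buoyancy actuation of the submerged beam: heating the bath by \Delta T lowers the water
% density by \Delta\rho \approx \rho_w \beta \Delta T, changing the net force on a beam of
% displaced volume V_b by
\[
\Delta F_b = V_b\, g\, \Delta\rho \approx \rho_w\, \beta\, V_b\, g\, \Delta T .
\]
% Since k scales as t^3 / L^3, modest changes in the cured beam's thickness or length after
% removal from the channel can account for a several-fold stiffness discrepancy.
```
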
Contributors: Sowers, Timothy Wayne (Author) / Rajagopalan, Jagannathan (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Stereolithography files (STL) are widely used in diverse fields as a means of describing complex geometries through surface triangulations. The resulting stereolithography output is a result of either experimental measurements or computer-aided design. Oftentimes, stereolithography outputs from experimental means are prone to noise, surface irregularities, and holes in an otherwise closed surface.

A general method for denoising and adaptively smoothing these dirty stereolithography files is proposed. Unlike existing approaches, this method smooths the dirty surface representation by utilizing the well-established level set method. The levels of smoothing and denoising can be set on a per-requirement basis by means of input parameters. Once the surface representation has been smoothed as desired, it can be extracted as a standard level set scalar isosurface.

The approach presented in this thesis is also coupled to a fully unstructured Cartesian mesh generation library with built-in localized adaptive mesh refinement (AMR) capabilities, thereby ensuring lower computational cost while also providing sufficient resolution. Future work will focus on implementing tetrahedral cuts to the base hexahedral mesh structure in order to extract a fully unstructured hexahedra-dominant mesh describing the STL geometry, which can be used for fluid flow simulations.
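
The smoothing idea can be illustrated with a hedged 2-D fragment: once the triangulated surface has been converted to a level set field phi, a few explicit steps of curvature-driven flow, phi_t = b*kappa*|grad(phi)|, damp the small-wavelength wrinkles while preserving the bulk shape. The noisy-circle test case and uniform grid below are stand-ins; the production implementation works on adaptive Cartesian meshes initialized from the STL geometry.

```python
import numpy as np

# Hedged 2-D sketch of curvature-driven level-set smoothing, phi_t = b * kappa * |grad phi|.
# The real pipeline initializes phi from an STL triangulation on an adaptive Cartesian mesh;
# here phi is just a noisy circle on a uniform periodic grid.

def curvature_flow_step(phi, dt, b, h):
    """One explicit Euler step of mean-curvature flow on a uniform grid of spacing h."""
    px = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * h)
    py = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * h)
    pxx = (np.roll(phi, -1, 0) - 2 * phi + np.roll(phi, 1, 0)) / h**2
    pyy = (np.roll(phi, -1, 1) - 2 * phi + np.roll(phi, 1, 1)) / h**2
    pxy = (np.roll(np.roll(phi, -1, 0), -1, 1) - np.roll(np.roll(phi, -1, 0), 1, 1)
           - np.roll(np.roll(phi, 1, 0), -1, 1) + np.roll(np.roll(phi, 1, 0), 1, 1)) / (4 * h**2)
    grad2 = px**2 + py**2 + 1e-12
    kappa = (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / grad2**1.5
    return phi + dt * b * kappa * np.sqrt(grad2)

# Noisy circular interface: the zero level set of phi is the "dirty" surface.
n, h = 128, 1.0 / 128
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
phi = np.sqrt((x - 0.5)**2 + (y - 0.5)**2) - 0.3 + 0.01 * np.random.randn(n, n)

for _ in range(50):                       # small dt for explicit stability (dt ~ h^2 / (4 b))
    phi = curvature_flow_step(phi, dt=0.2 * h**2, b=1.0, h=h)
# The smoothed surface is then extracted as the phi = 0 isosurface (e.g., marching squares).
```
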
Contributors: Kannan, Karthik (Author) / Herrmann, Marcus (Thesis advisor) / Peet, Yulia (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Hydraulic fracturing is an effective technique used in well stimulation to increase petroleum well production. A combination of multi-stage hydraulic fracturing and horizontal drilling has led to the recent boom in shale gas production which has changed the energy landscape of North America.

During the fracking process, a highly pressurized mixture of water and proppants (sand and chemicals) is injected into a crack, which fractures the surrounding rock structure, and the proppants help keep the fracture open. Over a longer period, however, these fractures tend to close due to the difference between the compressive stress exerted by the reservoir on the fracture and the fluid pressure inside the fracture. During production, fluid pressure inside the fracture is reduced further, which can accelerate the closure of a fracture.

In this thesis, we study the stress distribution around a hydraulic fracture caused by fluid production. It is shown that fluid flow can induce a very high hoop stress near the fracture tip. As the pressure gradient increases, the stress concentration increases. If a fracture is very thin, the flow-induced stress along the fracture decreases, but the stress concentration at the fracture tip increases and becomes unbounded for an infinitely thin fracture.
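
For orientation, the hedged relation below recalls the classical linear-elastic crack-tip field that underlies statements about tip stress concentration; the flow-induced loading studied in the thesis enters through the pressure distribution along the fracture faces, so the expression should be read as the generic framework rather than the thesis's specific result.

```latex
% Classical mode-I crack-tip field at distance r and angle \vartheta from the tip:
\[
\sigma_{\theta\theta}(r,\vartheta) \;=\; \frac{K_I}{\sqrt{2\pi r}}\,
\cos^{3}\!\frac{\vartheta}{2} \;+\; \text{(bounded terms)},
\]
% so the hoop stress grows without bound as r -> 0 for a mathematically sharp (infinitely
% thin) fracture, while a finite tip radius \rho caps the concentration at a level scaling
% roughly with \sqrt{a/\rho} for a fracture of half-length a.
```
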

The results from the present study can be used for studying the fracture closure problem, and ultimately this can guide the development of better proppants so that prolific well production can be sustained for a long period of time.
Contributors: Pandit, Harshad Rajendra (Author) / Chen, Kang P (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Gallium-based liquid metals are of interest for a variety of applications including flexible electronics, soft robotics, and biomedical devices. Still, nano- to microscale device fabrication with these materials is challenging because of their strong adhesion to a majority of substrates. This unusually high adhesion is attributed to the formation of a thin oxide shell; however, its role in the adhesion process has not yet been established. In the first part of the thesis, we describe a multiscale study aimed at understanding the fundamental mechanisms governing wetting and adhesion of gallium-based liquid metals. In particular, macroscale dynamic contact angle measurements were coupled with Scanning Electron Microscope (SEM) imaging to relate macroscopic drop adhesion to the morphology of the liquid metal-surface interface. In addition, room-temperature liquid-metal microfluidic devices are attractive systems for hyperelastic strain sensing. Currently two types of liquid metal-based strain sensors exist for in-plane measurements: single-microchannel resistive and two-microchannel capacitive devices. However, with a winding serpentine channel geometry, these sensors typically have a footprint of about a square centimeter, limiting the number of sensors that can be embedded. In the second part of the thesis, simulations and an experimental setup consisting of two GaInSn-filled tubes submerged in a dielectric liquid bath are first used to quantify the effects of the cylindrical electrode geometry, including diameter, spacing, and meniscus shape, as well as the dielectric constant of the insulating liquid and the presence of tubing, on the overall system's capacitance. Furthermore, a procedure for fabricating the two-liquid capacitor within a single straight polydimethylsiloxane channel is developed. Lastly, the capacitance and strain response of this compact device, as well as operational issues arising from complex hydrodynamics near liquid-liquid and liquid-elastomer interfaces, are described.
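
As a hedged reference point for the electrode-geometry study, the sketch below evaluates the textbook capacitance per unit length of two parallel cylindrical conductors in a uniform dielectric; the simulations described above additionally account for the meniscus shape, the tubing walls, and the finite liquid bath, none of which appear in this closed-form limit.

```python
import math

# Textbook capacitance per unit length of two parallel cylinders (radius a, center spacing d)
# embedded in a uniform dielectric of relative permittivity eps_r. Values are illustrative;
# the thesis's simulations include tubing walls, meniscus shape, and the finite liquid bath.

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def two_wire_capacitance_per_length(a, d, eps_r):
    """C' = pi * eps / arccosh(d / (2a)), valid for d > 2a."""
    return math.pi * eps_r * EPS0 / math.acosh(d / (2.0 * a))

# Example: 0.5 mm-radius GaInSn columns spaced 3 mm apart (center to center) in a
# dielectric liquid with eps_r ~ 2.5 (assumed values).
c_per_m = two_wire_capacitance_per_length(a=0.5e-3, d=3.0e-3, eps_r=2.5)
print(f"~{c_per_m * 1e12:.1f} pF per metre of electrode length")
```
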
Contributors: Liu, Shanliangzi (Author) / Rykaczewski, Konrad (Thesis advisor) / Alford, Terry (Committee member) / Herrmann, Marcus (Committee member) / Hildreth, Owen (Committee member) / Arizona State University (Publisher)
Created: 2015