Description
The atomization of a liquid jet by a high-speed cross-flowing gas has many applications, such as gas turbines and augmentors. The mechanisms by which the liquid jet initially breaks up, however, are not well understood. Experimental studies suggest the dependence of spray properties on operating conditions and nozzle geometry. Detailed numerical simulations can offer better understanding of the underlying physical mechanisms that lead to the breakup of the injected liquid jet. In this work, detailed numerical simulation results of turbulent liquid jets injected into turbulent gaseous cross flows for different density ratios are presented. A finite-volume, balanced-force, fractional-step flow solver is employed to solve the Navier-Stokes equations and is coupled to a Refined Level Set Grid method to follow the phase interface. To enable the simulation of atomization of high-density-ratio fluids, we ensure discrete consistency between the solution of the conservative momentum equation and the level-set-based continuity equation by employing the Consistent Rescaled Momentum Transport (CRMT) method. The impact of different inflow jet boundary conditions on jet properties, including jet penetration, is analyzed, and results are compared to those obtained experimentally by Brown & McDonell (2006). In addition, instability analysis is performed to find the most dominant instability mechanism that causes the liquid jet to break up. Linear instability analysis is carried out using linear theories for Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and non-linear analysis is performed using our flow solver with different inflow jet boundary conditions.
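As a hedged illustration of the linear theories mentioned above, the classical inviscid growth rates for Kelvin-Helmholtz and Rayleigh-Taylor instabilities can be evaluated directly for a given wavenumber. The fluid properties used below are generic water/air values chosen for illustration, not the conditions of this study.

```python
import math

def kh_growth_rate(k, rho_l, rho_g, delta_u, sigma):
    """Inviscid Kelvin-Helmholtz growth rate (1/s); returns 0 for stable modes."""
    drive = rho_l * rho_g * (k * delta_u) ** 2 / (rho_l + rho_g) ** 2
    restore = sigma * k ** 3 / (rho_l + rho_g)
    return math.sqrt(drive - restore) if drive > restore else 0.0

def rt_growth_rate(k, rho_heavy, rho_light, accel, sigma):
    """Inviscid Rayleigh-Taylor growth rate (1/s); returns 0 for stable modes."""
    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    drive = atwood * accel * k
    restore = sigma * k ** 3 / (rho_heavy + rho_light)
    return math.sqrt(drive - restore) if drive > restore else 0.0
```

Comparing the two rates over a range of wavenumbers indicates which mechanism grows fastest, mirroring the dominant-instability analysis described in the abstract.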
Contributors: Ghods, Sina (Author) / Herrmann, Marcus (Thesis advisor) / Squires, Kyle (Committee member) / Chen, Kangping (Committee member) / Huang, Huei-Ping (Committee member) / Tang, Wenbo (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Although high performance, light-weight composites are increasingly being used in applications ranging from aircraft, rotorcraft, weapon systems and ground vehicles, the assurance of structural reliability remains a critical issue. In composites, damage is absorbed through various fracture processes, including fiber failure, matrix cracking and delamination. An important element in achieving reliable composite systems is a strong capability of assessing and inspecting physical damage of critical structural components. Installation of a robust Structural Health Monitoring (SHM) system would be very valuable in detecting the onset of composite failure. A number of major issues still require serious attention in connection with the research and development aspects of sensor-integrated reliable SHM systems for composite structures. In particular, the sensitivity of currently available sensor systems does not allow detection of micro level damage; this limits the capability of data driven SHM systems. As a fundamental layer in SHM, modeling can provide in-depth information on material and structural behavior for sensing and detection, as well as data for learning algorithms. This dissertation focuses on the development of a multiscale analysis framework, which is used to detect various forms of damage in complex composite structures. A generalized method of cells based micromechanics analysis, as implemented in NASA's MAC/GMC code, is used for the micro-level analysis. First, a baseline study of MAC/GMC is performed to determine the governing failure theories that best capture the damage progression. The deficiencies associated with various layups and loading conditions are addressed. In most micromechanics analyses, a representative unit cell (RUC) with a common fiber packing arrangement is used.
The effect of variation in this arrangement within the RUC has been studied, and results indicate that this variation influences the macro-scale effective material properties and failure stresses. The developed model has been used to simulate impact damage in a composite beam and an airfoil structure. The model data were verified through active interrogation using piezoelectric sensors. The multiscale model was further extended to develop a coupled damage and wave attenuation model, which was used to study different damage states, such as fiber-matrix debonding, in composite structures with surface-bonded piezoelectric sensors.
Contributors: Moncada, Albert (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Yekani Fard, Masoud (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
The heat transfer enhancements available from expanding the cross-section of a boiling microchannel are explored analytically and experimentally. Evaluation of the literature on critical heat flux in flow boiling and associated pressure drop behavior is presented with predictive critical heat flux (CHF) and pressure drop correlations. An optimum channel configuration allowing maximum CHF while reducing pressure drop is sought. A perturbation of the channel diameter is employed to examine CHF and pressure drop relationships from the literature with the aim of identifying those adequately general and suitable for use in a scenario with an expanding channel. Several CHF criteria are identified which predict an optimizable channel expansion, though many do not. Pressure drop relationships admit improvement with expansion, and no optimum presents itself. The relevant physical phenomena surrounding flow boiling pressure drop are considered, and a balance of dimensionless numbers is presented that may be of qualitative use. The design, fabrication, inspection, and experimental evaluation of four copper microchannel arrays of different channel expansion rates with R-134a refrigerant is presented. Optimum rates of expansion which maximize the critical heat flux are considered at multiple flow rates, and experimental results are presented demonstrating optima. The effect of expansion on the boiling number is considered, and experiments demonstrate that expansion produces a notable increase in the boiling number in the region explored, though no optima are observed. Significant decrease in the pressure drop across the evaporator is observed with the expanding channels, and no optima appear. Discussion of the significance of this finding is presented, along with possible avenues for future work.
Contributors: Miner, Mark (Author) / Phelan, Patrick E (Thesis advisor) / Baer, Steven (Committee member) / Chamberlin, Ralph (Committee member) / Chen, Kangping (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Increasing computational demands in data centers require facilities to operate at higher ambient temperatures and at higher power densities. Conventionally, data centers are cooled with electrically-driven vapor-compressor equipment. This dissertation proposes an alternative data center cooling architecture that is heat-driven; the source is heat produced by the computer equipment. This dissertation details experiments investigating the quantity and quality of heat that can be captured from a liquid-cooled microprocessor on a computer server blade from a data center. The experiments involve four liquid-cooling setups and associated heat extraction, including a radical approach using mineral oil. The trials examine the feasibility of using the thermal energy from a CPU to drive a cooling process. Uniquely, the investigation establishes a useful simultaneous relationship among CPU temperature, power, and utilization levels. In response to the system data, this project explores the heat, temperature and power effects of adding insulation, varying water flow, CPU loading, and varying the cold plate-to-CPU clamping pressure. The idea is to provide an optimal and steady range of temperatures necessary for a chiller to operate. Results indicate an increasing relationship among CPU temperature, power and utilization. Since the dissipated heat can be captured and removed from the system for reuse elsewhere, the need for electricity-consuming computer fans is eliminated. Thermocouple readings of CPU temperatures as high as 93°C and a calculated CPU thermal energy up to 67 Wth show a sufficiently high temperature and thermal energy to serve as the input temperature and heat medium input to an absorption chiller. This dissertation performs a detailed analysis of the exergy of a processor and determines the maximum amount of energy utilizable for work.
Exergy as a source of realizable work is separated into its two contributing constituents: thermal exergy and informational exergy. The informational exergy is that usable form of work contained within the most fundamental unit of information output by a switching device within a CPU. Exergetic thermal, informational and efficiency values are calculated and plotted for our particular CPU, showing how the datasheet standards compare with experimental values. The dissertation concludes with a discussion of the work's significance.
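The thermal portion of the exergy analysis described above reduces to the Carnot factor: the maximum work extractable from heat Q at source temperature T against a dead state T0 is Q(1 − T0/T). A minimal sketch, assuming a 25 °C dead state (the dead-state temperature is not given in the abstract):

```python
def thermal_exergy(q_watts, t_source_c, t_dead_c=25.0):
    """Maximum work rate extractable from heat q_watts at t_source_c (Carnot factor)."""
    t_source = t_source_c + 273.15  # convert to kelvin
    t_dead = t_dead_c + 273.15
    return q_watts * (1.0 - t_dead / t_source)
```

With the reported 93 °C CPU temperature and 67 Wth of captured heat, this gives roughly 12 W of thermal exergy; the dissertation's own figures may differ with its chosen dead state.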
Contributors: Haywood, Anna (Author) / Phelan, Patrick E (Thesis advisor) / Herrmann, Marcus (Committee member) / Gupta, Sandeep (Committee member) / Trimble, Steve (Committee member) / Myhajlenko, Stefan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The objective of this research is to develop methods for generating the Tolerance-Map for a line-profile that is specified by a designer to control the geometric profile shape of a surface. After development, the aim is to find one that can be easily implemented in computer software using existing libraries. Two methods were explored: the parametric modeling method and the decomposed modeling method. The Tolerance-Map (T-Map) is a hypothetical point-space, each point of which represents one geometric variation of a feature in its tolerance-zone. T-Maps have been produced for most of the tolerance classes that are used by designers, but, prior to the work of this project, the method of construction required considerable intuitive input, rather than being based primarily on automated computer tools. Tolerances on line-profiles are used to control cross-sectional shapes of parts, such as every cross-section of a mildly twisted compressor blade. Such tolerances constrain geometric manufacturing variations within a specified two-dimensional tolerance-zone. A single profile tolerance may be used to control position, orientation, and form of the cross-section. Four independent variables capture all of the profile deviations: two independent translations in the plane of the profile, one rotation in that plane, and the size-increment necessary to identify one of the allowable parallel profiles. For the selected method of generation, the line profile is decomposed into three types of segments, a primitive T-Map is produced for each segment, and finally the T-Maps from all the segments are combined to obtain the T-Map for the given profile. The types of segments are the (straight) line-segment, circular arc-segment, and the freeform-curve segment. The primitive T-Maps are generated analytically, and, for freeform-curves, they are built approximately with the aid of the computer. 
A deformation matrix is used to transform the primitive T-Maps to a single coordinate system for the whole profile. The T-Map for the whole line profile is generated by the Boolean intersection of the primitive T-Maps for the individual profile segments. This computer-implemented method can generate T-Maps for open profiles, closed ones, and those containing concave shapes.
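The Boolean intersection step can be sketched abstractly: if each primitive T-Map is represented as a convex set of half-space constraints on the deviation coordinates (two translations, one rotation, one size increment), intersecting T-Maps amounts to pooling their constraints. This is a simplified illustration of the set operation, not the thesis's actual implementation; the 1-D usage below is purely hypothetical.

```python
def satisfies(point, halfspaces):
    """True if point obeys every constraint; (n, d) encodes dot(n, point) <= d."""
    return all(sum(ni * pi for ni, pi in zip(n, point)) <= d
               for n, d in halfspaces)

def intersect_tmaps(primitive_tmaps):
    """Boolean intersection of convex sets: pool all half-space constraints."""
    return [h for tmap in primitive_tmaps for h in tmap]
```

For example, two 1-D "slabs" [-1, 1] and [0, 2] intersect to [0, 1]: a point such as 0.5 satisfies the pooled constraints, while -0.5 does not.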
Contributors: He, Yifei (Author) / Davidson, Joseph (Thesis advisor) / Shah, Jami (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Multi-pulse particle tracking velocimetry (multi-pulse PTV) is a recently proposed flow measurement technique aiming to improve the performance of conventional PTV/PIV. In this work, multi-pulse PTV is assessed based on PTV simulations in terms of spatial resolution, velocity measurement accuracy and the capability of acceleration measurement. The errors of locating particles, velocity measurement and acceleration measurement are analytically calculated and compared among quadruple-pulse, triple-pulse and dual-pulse PTV. The optimizations of triple-pulse and quadruple-pulse PTV are discussed, and criteria are developed to minimize the combined error in position, velocity and acceleration. Experimentally, the velocity and acceleration fields of a round impinging air jet are measured to test the triple-pulse technique. A high speed beam-splitting camera and a custom 8-pulse laser system are utilized to achieve good timing flexibility and temporal resolution. A new method to correct the registration error between CCDs is also presented. Consequently, the velocity field shows good consistency between triple-pulse and dual-pulse measurements. The mean acceleration profile along the centerline of the jet is used as the ground truth for the verification of the triple-pulse PIV measurements of the acceleration fields. The instantaneous acceleration field of the jet is directly measured by triple-pulse PIV and presented. Accelerations up to 1,000 g's are measured in these experiments.
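The kinematic core of a triple-pulse measurement can be sketched with standard central differences over three particle positions separated by a pulse interval dt. This is a generic finite-difference estimate for illustration, not the dissertation's optimized error-minimizing scheme.

```python
def triple_pulse_kinematics(x1, x2, x3, dt):
    """Velocity and acceleration at the middle pulse from three positions."""
    v = (x3 - x1) / (2.0 * dt)          # central difference, O(dt^2)
    a = (x1 - 2.0 * x2 + x3) / dt ** 2  # second difference, O(dt^2)
    return v, a
```

For a uniformly accelerated particle, the second difference recovers the acceleration exactly (to round-off), which is why a third pulse suffices for direct acceleration measurement.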
Contributors: Ding, Liuyang (Author) / Adrian, Ronald J. (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Damage detection in heterogeneous material systems is a complex problem and requires an in-depth understanding of the material characteristics and response under varying load and environmental conditions. A significant amount of research has been conducted in this field to enhance the fidelity of damage assessment methodologies, using a wide range of sensors and detection techniques, for both metallic materials and composites. However, detecting damage at the microscale is not possible with commercially available sensors. A probable way to approach this problem is through accurate and efficient multiscale modeling techniques, which are capable of tracking damage initiation at the microscale and propagation across the length scales. The output from these models will provide an improved understanding of damage initiation; the knowledge can be used in conjunction with information from physical sensors to improve the size of detectable damage. In this research, effort has been dedicated to developing multiscale modeling approaches and associated damage criteria for the estimation of damage evolution across the relevant length scales. Important issues such as length and time scales, anisotropy and variability in material properties at the microscale, and response under mechanical and thermal loading are addressed. Two different material systems have been studied: a metallic material and a novel stress-sensitive epoxy polymer.

For the metallic material (Al 2024-T351), the methodology initiates at the microscale where extensive material characterization is conducted to capture the microstructural variability. A statistical volume element (SVE) model is constructed to represent the material properties. Geometric and crystallographic features including grain orientation, misorientation, size, shape, principal axis direction and aspect ratio are captured. This SVE model provides a computationally efficient alternative to traditional techniques using representative volume element (RVE) models while maintaining statistical accuracy. A physics based multiscale damage criterion is developed to simulate the fatigue crack initiation. The crack growth rate and probable directions are estimated simultaneously.

Mechanically sensitive materials that exhibit specific chemical reactions upon external loading are currently being investigated for self-sensing applications. The "smart" polymer modeled in this research consists of epoxy resin, hardener, and a stress-sensitive material called a mechanophore. Mechanophore activation is based on covalent bond-breaking induced by external stimuli; this feature can be used for material-level damage detection. In this work, Tris-(Cinnamoyl oxymethyl)-Ethane (TCE) is used as the cyclobutane-based mechanophore (stress-sensitive) material in the polymer matrix. The TCE-embedded polymers have shown promising results in early damage detection through mechanically induced fluorescence. A spring-bead based network model, which bridges nanoscale information to higher length scales, has been developed to model this material system. The material is partitioned into discrete mass beads which are linked using linear springs at the microscale. A series of MD simulations were performed to define the spring stiffness in the statistical network model. By integrating multiple spring-bead models, a network model has been developed to represent the material properties at the mesoscale. The model captures the statistical distribution of the crosslinking degree of the polymer to represent the heterogeneous material properties at the microscale. The developed multiscale methodology is computationally efficient and provides a possible means to bridge multiple length scales (from 10 nm in MD simulations to 10 mm in FE models) without significant loss of accuracy. Parametric studies have been conducted to investigate the influence of the crosslinking degree on the material behavior. The developed methodology has been used to evaluate damage evolution in the self-sensing polymer.
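A single link of the spring-bead network described above can be sketched as a linear spring between two mass beads. The stiffness and rest length used in the test below are arbitrary illustrative values, not the MD-calibrated ones from the dissertation.

```python
import math

def spring_force(xa, ya, xb, yb, k, rest):
    """Force (fx, fy) on bead a from a linear spring connecting it to bead b."""
    dx, dy = xb - xa, yb - ya
    r = math.hypot(dx, dy)
    tension = k * (r - rest)  # positive when the spring is stretched
    return tension * dx / r, tension * dy / r
```

Assembling many such links, with stiffnesses drawn from the MD-derived statistical distribution, yields the mesoscale network model the abstract describes.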
Contributors: Zhang, Jinjun (Author) / Chattopadhyay, Aditi (Thesis advisor) / Dai, Lenore (Committee member) / Jiang, Hanqing (Committee member) / Papandreou-Suppappola, Antonia (Committee member) / Rajadas, John (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The Volume-of-Fluid method is a popular method for interface tracking in multiphase applications within computational fluid dynamics. To date, several algorithms exist for reconstruction of a geometric interface surface. Among these are the Finite Difference algorithm, the Least Squares Volume-of-Fluid Interface Reconstruction Algorithm (LVIRA), and the Efficient Least Squares Volume-of-Fluid Interface Reconstruction Algorithm (ELVIRA). Along with these geometric interface reconstruction algorithms, there exist several volume-of-fluid transport algorithms. This paper discusses two operator-splitting advection algorithms and an unsplit advection algorithm. Using these three interface reconstruction algorithms and three advection algorithms, a comparison is drawn to see how different combinations of these algorithms perform with respect to accuracy as well as computational expense.
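The finite-difference flavor of interface reconstruction can be sketched in 2-D: estimate the interface normal in a cell from central differences of the liquid volume fraction over neighboring cells, with the convention n = −∇f/|∇f| (pointing out of the liquid). This is a minimal illustration of the idea, not any of the specific algorithms compared in the thesis.

```python
import math

def fd_interface_normal(f, i, j):
    """Unit interface normal in cell (i, j) from central differences of the
    volume fraction f (list of lists, indexed f[i][j]); n = -grad(f)/|grad(f)|."""
    gx = (f[i + 1][j] - f[i - 1][j]) / 2.0
    gy = (f[i][j + 1] - f[i][j - 1]) / 2.0
    mag = math.hypot(gx, gy)
    return -gx / mag, -gy / mag
```

Given the normal, a reconstruction algorithm then positions a line segment in the cell so that it cuts off exactly the cell's volume fraction; LVIRA and ELVIRA differ mainly in how they choose the normal.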
Contributors: Kedelty, Dominic (Author) / Herrmann, Marcus (Thesis advisor) / Huang, Huei-Ping (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The flow of liquid PDMS (10:1 v/v base to cross-linker ratio) in open, rectangular silicon microchannels, with and without a hexa-methyl-di-silazane (HMDS) or poly-tetra-fluoro-ethylene (PTFE) (120 nm) coat, was studied. Photolithographic patterning and etching of silicon wafers was used to create microchannels with a range of widths (5-50 μm) and depths (5-20 μm). The experimental PDMS flow rates were compared to an analytical model based on the work of Lucas and Washburn. The experimental flow rates closely matched the predicted flow rates for channels with an aspect ratio (width to depth), p, between one and two. Flow rates in channels with p less than one were higher than predicted whereas the opposite was true for channels with p greater than two. The divergence between the experimental and predicted flow rates steadily increased with increasing p. These findings are rationalized in terms of the effect of channel dimensions on the front and top meniscus morphology and the possible deviation from the no-slip condition at the channel walls at high shear rates.
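The Lucas-Washburn-type analytical model referenced above predicts an imbibition length growing as the square root of time. A minimal sketch for a cylindrical capillary of radius r follows; the open rectangular channels of the study require a modified geometry factor, and the test values below are generic water-like properties, not the PDMS parameters.

```python
import math

def washburn_length(sigma, theta_deg, radius, mu, t):
    """Imbibition length L(t) = sqrt(sigma * r * cos(theta) * t / (2 * mu))."""
    cos_theta = math.cos(math.radians(theta_deg))
    return math.sqrt(sigma * radius * cos_theta * t / (2.0 * mu))
```

The square-root-of-time scaling (quadrupling the time doubles the wicked length) is the signature against which the experimental flow rates were compared.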

In addition, a preliminary experimental setup for calibration tests on ultrasensitive PDMS cantilever beams is reported. One loading and unloading cycle is completed on a microcantilever PDMS beam (theoretical stiffness 0.5 pN/µm). Beam deflections are actuated by adjusting the buoyancy force on the beam, which is submerged in water, by the addition of heat. The expected loading and unloading curve is produced, albeit with significant noise. The experimental results indicate that the beam stiffness is a factor of six larger than predicted theoretically. One probable explanation is that the beam geometry may change when it is removed from the channel after curing, making assumptions about the beam geometry used in the theoretical analysis inaccurate. This theory is bolstered by experimental data discussed in the report. Other sources of error which could partially contribute to the divergent results are discussed. Improvements to the experimental setup for future work are suggested.
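The quoted theoretical stiffness follows from Euler-Bernoulli beam theory for an end-loaded cantilever, k = 3EI/L^3 with I = w t^3/12. The modulus and dimensions below are hypothetical placeholders, chosen only to land in the sub-µN/m range of the beam described above, not the actual beam's geometry.

```python
def cantilever_stiffness(modulus, width, thickness, length):
    """End-load stiffness k = 3*E*I / L^3 of a rectangular cantilever (N/m)."""
    inertia = width * thickness ** 3 / 12.0  # second moment of area
    return 3.0 * modulus * inertia / length ** 3
```

Because stiffness scales with the cube of both thickness and inverse length (doubling the length drops it eightfold), modest geometry changes after demolding could plausibly account for a factor-of-six discrepancy of the kind reported.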
Contributors: Sowers, Timothy Wayne (Author) / Rajagopalan, Jagannathan (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Stereolithography files (STL) are widely used in diverse fields as a means of describing complex geometries through surface triangulations. The resulting stereolithography output is a result of either experimental measurements or computer-aided design. Oftentimes, stereolithography outputs from experimental means are prone to noise, surface irregularities, and holes in an otherwise closed surface.

A general method for denoising and adaptively smoothing these dirty stereolithography files is proposed. Unlike existing means, this approach aims to smooth the dirty surface representation by utilizing the well-established levelset method. The level of smoothing and denoising can be set on a per-requirement basis by means of input parameters. Once the surface representation is smoothed as desired, it can be extracted as a standard levelset scalar isosurface.
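The smoothing idea can be sketched as a few explicit diffusion steps applied to the levelset scalar field: high-frequency noise is damped while the zero isosurface (the reconstructed surface) moves only slightly. This is a minimal illustration with periodic boundaries and arbitrary parameters, not the thesis's adaptive scheme.

```python
import numpy as np

def smooth_levelset(phi, n_iter=20, dt=0.2):
    """Damp high-frequency noise in a levelset field by explicit diffusion steps."""
    phi = phi.astype(float).copy()
    for _ in range(n_iter):
        # 5-point Laplacian with periodic wrap-around via np.roll
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
               + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
        phi += dt * lap  # dt <= 0.25 keeps the explicit update stable in 2-D
    return phi
```

The number of iterations and the pseudo-time step play the role of the user-set smoothing parameters mentioned above: more iterations mean a smoother extracted isosurface.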

The approach presented in this thesis is also coupled to a fully unstructured Cartesian mesh generation library with built-in localized adaptive mesh refinement (AMR) capabilities, thereby ensuring lower computational cost while also providing sufficient resolution. Future work will focus on implementing tetrahedral cuts to the base hexahedral mesh structure in order to extract a fully unstructured hexahedra-dominant mesh describing the STL geometry, which can be used for fluid flow simulations.
Contributors: Kannan, Karthik (Author) / Herrmann, Marcus (Thesis advisor) / Peet, Yulia (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created: 2014