Matching Items (55)
Description
Locomotion of microorganisms is commonly observed in nature and some aspects of their motion can be replicated by synthetic motors. Synthetic motors rely on a variety of propulsion mechanisms including auto-diffusiophoresis, auto-electrophoresis, and bubble generation. Regardless of the source of the locomotion, the motion of any motor can be characterized by the translational and rotational velocity and effective diffusivity. In a uniform environment the long-time motion of a motor can be fully characterized by the effective diffusivity. In this work it is shown that when motors possess both translational and rotational velocity the motor transitions from a short-time diffusivity to a long-time diffusivity at a time of π/ω. The short-time diffusivities are two to three orders of magnitude larger than the diffusivity of a Brownian sphere of the same size, increase linearly with concentration, and scale as v²/(2ω). The measured long-time diffusivities are five times lower than the short-time diffusivities, scale as v²/{2D_r[1 + (ω/D_r)²]}, and exhibit a maximum as a function of concentration. The dependence of a colloid's velocity and effective diffusivity on its local environment (e.g. fuel concentration) suggests that the motors can accumulate in a bounded system, analogous to biological chemokinesis. Chemokinesis of organisms is the non-uniform equilibrium concentration that arises from a bounded random walk of swimming organisms in a chemical concentration gradient. In non-swimming organisms we term this response diffusiokinesis. We show that particles that migrate only by Brownian thermal motion are capable of achieving a non-uniform pseudo-equilibrium distribution in a diffusivity gradient. The concentration profile is the result of a bounded random-walk process in which, at any given time, a larger percentage of particles is found in regions of low diffusivity than in regions of high diffusivity. Individual particles are not trapped in any given region, but at equilibrium the net flux between regions is zero. For Brownian particles the gradient in diffusivity is achieved by creating a viscosity gradient in a microfluidic device. The distribution of the particles is described by the Fokker-Planck equation for variable diffusivity. The strength of the probe concentration gradient is proportional to the strength of the diffusivity gradient and inversely proportional to the mean probe diffusivity in the channel, in accordance with the no-flux condition at steady state. This suggests that Brownian colloids, natural or synthetic, will concentrate in a bounded system in response to a gradient in diffusivity, and that the magnitude of the response is proportional to the magnitude of the gradient in diffusivity divided by the mean diffusivity in the channel.
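The scalings quoted above lend themselves to a quick numerical check. The sketch below evaluates the short-time diffusivity v²/(2ω), the long-time diffusivity v²/{2D_r[1 + (ω/D_r)²]}, and the crossover time π/ω; the parameter values are illustrative placeholders, not measurements from the thesis.

```python
import numpy as np

def effective_diffusivities(v, omega, D_r):
    """Short- and long-time effective diffusivities for a motor with
    translational speed v (m/s), rotation rate omega (rad/s), and
    rotational diffusivity D_r (1/s), using the abstract's scalings."""
    D_short = v**2 / (2.0 * omega)                           # short-time scaling
    D_long = v**2 / (2.0 * D_r * (1.0 + (omega / D_r)**2))   # long-time scaling
    t_cross = np.pi / omega                                  # transition time
    return D_short, D_long, t_cross

# Illustrative (hypothetical) parameter values, not data from the thesis:
D_s, D_l, t_c = effective_diffusivities(v=10e-6, omega=2.0, D_r=0.5)
print(f"D_short = {D_s:.2e} m^2/s, D_long = {D_l:.2e} m^2/s, t_cross = {t_c:.2f} s")
```

With these placeholder values the long-time diffusivity comes out roughly a factor of four below the short-time value, consistent in spirit with the factor-of-five reduction reported above.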
Contributors: Marine, Nathan Arasmus (Author) / Posner, Jonathan D (Thesis advisor) / Adrian, Ronald J (Committee member) / Frakes, David (Committee member) / Phelan, Patrick E (Committee member) / Santos, Veronica J (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The heat transfer enhancements available from expanding the cross-section of a boiling microchannel are explored analytically and experimentally. Evaluation of the literature on critical heat flux in flow boiling and associated pressure drop behavior is presented with predictive critical heat flux (CHF) and pressure drop correlations. An optimum channel configuration allowing maximum CHF while reducing pressure drop is sought. A perturbation of the channel diameter is employed to examine CHF and pressure drop relationships from the literature with the aim of identifying those adequately general and suitable for use in a scenario with an expanding channel. Several CHF criteria are identified which predict an optimizable channel expansion, though many do not. Pressure drop relationships admit improvement with expansion, and no optimum presents itself. The relevant physical phenomena surrounding flow boiling pressure drop are considered, and a balance of dimensionless numbers is presented that may be of qualitative use. The design, fabrication, inspection, and experimental evaluation of four copper microchannel arrays with different channel expansion rates, using R-134a refrigerant, are presented. Optimum rates of expansion which maximize the critical heat flux are considered at multiple flow rates, and experimental results are presented demonstrating optima. The effect of expansion on the boiling number is considered, and experiments demonstrate that expansion produces a notable increase in the boiling number in the region explored, though no optima are observed. Significant decrease in the pressure drop across the evaporator is observed with the expanding channels, and no optima appear. Discussion of the significance of this finding is presented, along with possible avenues for future work.
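The boiling number referred to above has a standard definition, Bo = q″ / (G · h_fg), the ratio of wall heat flux to mass flux times latent heat. A minimal sketch follows; the numerical values are illustrative, not the thesis's operating conditions, and the thesis may define q″ and G in channel-specific ways.

```python
def boiling_number(q_flux, G, h_fg):
    """Boiling number Bo = q'' / (G * h_fg): heat flux (W/m^2) over
    mass flux (kg/m^2-s) times latent heat of vaporization (J/kg)."""
    return q_flux / (G * h_fg)

# Illustrative values only: q'' = 200 kW/m^2, G = 400 kg/m^2-s, and
# h_fg ~ 217 kJ/kg for R-134a near its normal boiling point.
Bo = boiling_number(q_flux=200e3, G=400.0, h_fg=217e3)
print(f"Bo = {Bo:.2e}")  # ~2.3e-3
```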
Contributors: Miner, Mark (Author) / Phelan, Patrick E (Thesis advisor) / Baer, Steven (Committee member) / Chamberlin, Ralph (Committee member) / Chen, Kangping (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Numerical climate models have provided scientists, policy makers, and the general public with crucial information for climate projections since the mid-20th century. An international effort to compare and validate the simulations of all major climate models is organized by the Coupled Model Intercomparison Project (CMIP), which has gone through several phases since 1995, with CMIP5 being the state of the art. In parallel, an organized effort to consolidate all observational data from the past century has culminated in the creation of several "reanalysis" datasets that are considered the closest representation of the true observations. This study compared the climate variability and trend in climate model simulations and observations on timescales ranging from interannual to centennial. The analysis focused on the dynamic climate quantities of zonal-mean zonal wind and global atmospheric angular momentum (AAM), and incorporated multiple datasets from reanalysis and the most recent CMIP3 and CMIP5 archives. For the observations, validation of AAM against the length-of-day (LOD) record and intercomparison of AAM revealed good agreement among reanalyses on the interannual and the decadal-to-interdecadal timescales, respectively, but the most significant discrepancies among them are in the long-term mean and long-term trend. For the simulations, the CMIP5 models produced a significantly smaller bias and a narrower ensemble spread in the 20th-century climatology and trend of AAM compared to CMIP3, while CMIP3 and CMIP5 simulations consistently produced a positive trend for the 20th and 21st centuries. Both CMIP3 and CMIP5 models produced a wide range of magnitudes of the decadal and interdecadal variability of the wind component of AAM (M_R) compared to observation. The ensemble means of CMIP3 and CMIP5 are not statistically distinguishable for either the 20th- or 21st-century runs. In-house atmospheric general circulation model (AGCM) simulations were carried out, forced by sea surface temperature (SST) taken from the CMIP5 simulations as the lower boundary condition. The zonal wind and M_R of the CMIP5 simulations are well reproduced by the AGCM simulations, confirming SST as an important mediator in regulating global atmospheric changes due to the greenhouse gas (GHG) effect.
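Since the wind term of AAM is central to the comparison above, a minimal sketch of how M_R is commonly computed from a zonal-mean zonal wind field follows. It assumes the standard definition M_R = (2πa³/g) ∫∫ u cos²φ dφ dp; the array layout and the toy wind field are illustrative, not reanalysis data or the thesis's code.

```python
import numpy as np

def relative_aam(u, lat_deg, p_levels, a=6.371e6, g=9.81):
    """Wind (relative) term of global AAM from zonal-mean zonal wind u
    with shape (n_pressure, n_latitude) in m/s, pressure levels in Pa:
    M_R = (2*pi*a^3/g) * integral of u*cos^2(phi) over lat and pressure.
    Returns angular momentum in kg m^2/s."""
    phi = np.deg2rad(lat_deg)
    # integrate u*cos^2(phi) over latitude at each pressure level
    lat_integral = np.trapz(u * np.cos(phi) ** 2, phi, axis=1)
    # then integrate through the depth of the atmosphere
    return 2.0 * np.pi * a**3 / g * np.trapz(lat_integral, p_levels)

# Illustrative use with a toy wind field (placeholder values):
lat = np.linspace(-90, 90, 73)
p = np.linspace(10e2, 1000e2, 17)   # 10 to 1000 hPa, in Pa
u = 20.0 * np.cos(np.deg2rad(lat))[None, :] * np.ones((p.size, 1))
print(f"M_R ~ {relative_aam(u, lat, p):.2e} kg m^2/s")
```

The toy field yields a value of order 10²⁶ kg m²/s, the magnitude typically quoted for the global relative AAM.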
Contributors: Paek, Houk (Author) / Huang, Huei-Ping (Thesis advisor) / Adrian, Ronald (Committee member) / Wang, Zhihua (Committee member) / Anderson, James (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Increasing computational demands in data centers require facilities to operate at higher ambient temperatures and at higher power densities. Conventionally, data centers are cooled with electrically-driven vapor-compression equipment. This dissertation proposes an alternative data center cooling architecture that is heat-driven, the source being heat produced by the computer equipment itself. It details experiments investigating the quantity and quality of heat that can be captured from a liquid-cooled microprocessor on a computer server blade from a data center. The experiments involve four liquid-cooling setups and associated heat-extraction methods, including a radical approach using mineral oil. The trials examine the feasibility of using the thermal energy from a CPU to drive a cooling process. Uniquely, the investigation establishes a useful simultaneous relationship among CPU temperature, power, and utilization level. In response to the system data, this project explores the effects on heat, temperature, and power of adding insulation, varying water flow, varying CPU loading, and varying the cold plate-to-CPU clamping pressure. The idea is to provide an optimal and steady range of temperatures necessary for a chiller to operate. Results indicate an increasing relationship among CPU temperature, power, and utilization. Since the dissipated heat can be captured and removed from the system for reuse elsewhere, the need for electricity-consuming computer fans is eliminated. Thermocouple readings of CPU temperatures as high as 93 °C and a calculated CPU thermal energy up to 67 Wth show a sufficiently high temperature and thermal energy to serve as the input temperature and heat medium input to an absorption chiller. This dissertation performs a detailed analysis of the exergy of a processor and determines the maximum amount of energy utilizable for work. Exergy as a source of realizable work is separated into its two contributing constituents: thermal exergy and informational exergy. The informational exergy is that usable form of work contained within the most fundamental unit of information output by a switching device within a CPU. Exergetic thermal, informational, and efficiency values are calculated and plotted for our particular CPU, showing how the datasheet standards compare with experimental values. The dissertation concludes with a discussion of the work's significance.
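The thermal half of the exergy analysis reduces to the Carnot factor. As a worked check on the numbers reported above, the sketch below applies X = Q(1 − T0/T) to the measured 67 Wth at 93 °C; the 25 °C dead-state (ambient) temperature is an assumption, not a value taken from the dissertation.

```python
def thermal_exergy(Q_th, T_hot_C, T_ambient_C=25.0):
    """Carnot-factor thermal exergy X = Q * (1 - T0/T), temperatures
    converted to kelvin. The 25 C dead-state temperature is an assumed
    ambient, not a value reported in the dissertation."""
    T = T_hot_C + 273.15
    T0 = T_ambient_C + 273.15
    return Q_th * (1.0 - T0 / T)

# Using the abstract's reported values: 67 Wth at a CPU temperature of 93 C
X = thermal_exergy(Q_th=67.0, T_hot_C=93.0)
print(f"Usable work potential ~ {X:.1f} W")  # roughly 12-13 W
```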
Contributors: Haywood, Anna (Author) / Phelan, Patrick E (Thesis advisor) / Herrmann, Marcus (Committee member) / Gupta, Sandeep (Committee member) / Trimble, Steve (Committee member) / Myhajlenko, Stefan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
A cerebral aneurysm is an abnormal ballooning of the blood vessel wall in the brain that occurs in approximately 6% of the general population. When a cerebral aneurysm ruptures, the subsequent damage is lethal in nearly 50% of cases. Over the past decade, endovascular treatment has emerged as an effective treatment option for cerebral aneurysms that is far less invasive than conventional surgical options. Nonetheless, the rate of successful treatment is as low as 50% for certain types of aneurysms. Treatment success has been correlated with favorable post-treatment hemodynamics. However, current understanding of the effects of endovascular treatment parameters on post-treatment hemodynamics is limited. This limitation is due in part to current challenges in in vivo flow measurement techniques. Improved understanding of post-treatment hemodynamics can lead to more effective treatments. However, the effects of treatment on hemodynamics may be patient-specific; thus, accurate tools that can predict hemodynamics on a case-by-case basis are also required for improving outcomes. Accordingly, the main objectives of this work were 1) to develop computational tools for predicting post-treatment hemodynamics and 2) to build a foundation of understanding of the effects of controllable treatment parameters on cerebral aneurysm hemodynamics. Experimental flow measurement techniques, using particle image velocimetry, were first developed for acquiring flow data in cerebral aneurysm models treated with an endovascular device. The experimental data were then used to guide the development of novel computational tools, which consider the physical properties, design specifications, and deployment mechanics of endovascular devices to simulate post-treatment hemodynamics. The effects of different endovascular treatment parameters on cerebral aneurysm hemodynamics were then characterized under controlled conditions. Lastly, application of the computational tools for interventional planning was demonstrated through the evaluation of two patient cases.
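The particle image velocimetry mentioned above rests on one core operation: cross-correlating interrogation windows from successive images and locating the correlation peak. A minimal sketch of that operation follows; it illustrates the general PIV principle (assuming even window dimensions, e.g. 32×32), not the specific processing pipeline used in this work.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate mean particle displacement between two interrogation
    windows via FFT-based circular cross-correlation, the basic
    operation behind PIV. Returns (dx, dy) in pixels; divide by the
    inter-frame time to obtain velocity."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the window back to negative values
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dx, dy
```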
Contributors: Babiker, M. Haithem (Author) / Frakes, David H (Thesis advisor) / Adrian, Ronald (Committee member) / Caplan, Michael (Committee member) / Chong, Brian (Committee member) / Vernon, Brent (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The world is grappling with two serious issues related to energy and climate change. The use of solar energy is receiving much attention due to its potential as one of the solutions. Air conditioning is particularly attractive as a solar energy application because of the near coincidence of peak cooling loads with the available solar power. Recently, researchers have started serious discussions of using adsorptive processes for refrigeration and heat pumps. There has been some success with >100-ton adsorption systems, but none exists in the <10-ton size range required for residential air conditioning. There are myriad reasons for the lack of small-scale systems, such as low coefficient of performance (COP), high capital cost, scalability, and limited performance data. A numerical model to simulate an adsorption system was developed and its performance was compared with similar thermal-powered systems. Results showed that both the adsorption and absorption systems provide equal cooling capacity for a driving temperature range of 70-120 °C, but the adsorption system is the only system to deliver cooling at temperatures below 65 °C. Additionally, the absorption and desiccant systems provide better COP at low temperatures, but the COPs of the three systems converge at higher regeneration temperatures. To further investigate the viability of solar-powered heat pump systems, an hourly building load simulation was developed for a single-family house in the Phoenix metropolitan area. A thermal as well as economic performance comparison was conducted for adsorption, absorption, and solar photovoltaic (PV) powered vapor-compression systems over a range of solar collector areas and storage capacities. The results showed that for a small collector area, solar PV is more cost-effective, whereas adsorption is better than absorption for larger collector areas. The optimum solar collector area and storage size were determined for each type of solar system. As part of this dissertation work, a small-scale proof-of-concept prototype of the adsorption system was assembled using some novel heat transfer enhancement strategies. Activated carbon and butane were chosen as the adsorbent-refrigerant pair. It was found that a COP of 0.12 and a cooling capacity of 89.6 W can be achieved.
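For a heat-driven chiller like the prototype above, the thermal COP is simply the cooling delivered per unit of driving heat. The sketch below back-computes the implied heat input from the reported COP of 0.12 and 89.6 W cooling capacity; that implied value is an inference from the two reported numbers, not a figure stated in the dissertation.

```python
def adsorption_cop(Q_cooling, Q_heat_in):
    """Thermal COP of a heat-driven chiller: cooling output per unit of
    driving heat input (electrical pump work neglected)."""
    return Q_cooling / Q_heat_in

# Reported: COP = 0.12 at 89.6 W cooling; the implied driving-heat
# input (an inference, not a reported number) follows directly:
Q_in = 89.6 / 0.12
print(f"Implied driving heat ~ {Q_in:.0f} W, COP = {adsorption_cop(89.6, Q_in):.2f}")
```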
Contributors: Gupta, Yeshpal (Author) / Phelan, Patrick E (Thesis advisor) / Bryan, Harvey J. (Committee member) / Mikellidas, Pavlos G (Committee member) / Pacheco, Jose R (Committee member) / Trimble, Steven W (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The flow around a golf ball is studied using direct numerical simulation (DNS). An immersed boundary approach is adopted in which the incompressible Navier-Stokes equations are solved using a fractional step method on a structured, staggered grid in cylindrical coordinates. The boundary conditions on the surface are imposed using momentum forcing in the vicinity of the boundary. The flow solver is parallelized using a domain decomposition strategy and the message passing interface (MPI), and exhibits linear scaling on as many as 500 processors. A laminar flow case is presented to verify the formal accuracy of the method. The immersed boundary approach is validated by comparison with computations of the flow over a smooth sphere. Simulations are performed at Reynolds numbers of 2.5 × 10⁴ and 1.1 × 10⁵, based on the diameter of the ball and the freestream speed, using grids comprising more than 1.14 × 10⁹ points. Flow visualizations reveal the location of separation, as well as the delay of complete detachment. Predictions of the aerodynamic forces at both Reynolds numbers are in reasonable agreement with measurements. Energy spectra of the velocity quantify the dominant frequencies of the flow near separation and in the wake. Time-averaged statistics reveal characteristic physical patterns in the flow as well as local trends within dimples. A mechanism of drag reduction due to the dimples is confirmed, and metrics for dimple optimization are proposed.
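The momentum forcing mentioned above is often implemented as "direct forcing": at grid cells flagged as boundary cells, a body force is added that drives the provisional velocity to the desired boundary velocity within one time step. The sketch below illustrates that general idea; it is a schematic of the technique, not the solver developed in this thesis.

```python
import numpy as np

def direct_forcing(u_star, boundary_mask, u_target, dt):
    """Direct (momentum) forcing for an immersed boundary: at cells
    flagged by boundary_mask, compute the force per unit mass that
    brings the provisional velocity u* to the boundary velocity
    u_target over one time step dt. A schematic sketch only."""
    f = np.zeros_like(u_star)
    f[boundary_mask] = (u_target - u_star[boundary_mask]) / dt
    return f  # add f to the momentum-equation RHS before the projection step
```

For a stationary surface such as the golf ball, u_target is simply zero at the forced cells, which enforces the no-slip condition.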
Contributors: Smith, Clinton E (Author) / Squires, Kyle D (Thesis advisor) / Balaras, Elias (Committee member) / Herrmann, Marcus (Committee member) / Adrian, Ronald (Committee member) / Stanzione, Daniel C (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Multiphase flows are an important part of many natural and technological phenomena such as ocean-air coupling (which is important for climate modeling) and the atomization of liquid fuel jets in combustion engines. The unique challenges of multiphase flow often make analytical solutions to the governing equations impossible and experimental investigations very difficult. Thus, high-fidelity numerical simulations can play a pivotal role in understanding these systems. This dissertation describes numerical methods developed for complex multiphase flows and the simulations performed using these methods. First, the issue of multiphase code verification is addressed. Code verification answers the question "Is this code solving the equations correctly?" The method of manufactured solutions (MMS) is a procedure for generating exact benchmark solutions which can test the most general capabilities of a code. The chief obstacle to applying MMS to multiphase flow lies in the discontinuous nature of the material properties at the interface. An extension of the MMS procedure to multiphase flow is presented, using an adaptive marching tetrahedron style algorithm to compute the source terms near the interface. Guidelines for the use of the MMS to help locate coding mistakes are also detailed. Three multiphase systems are then investigated: (1) the thermocapillary motion of three-dimensional and axisymmetric drops in a confined apparatus, (2) the flow of two immiscible fluids completely filling an enclosed cylinder and driven by the rotation of the bottom endwall, and (3) the atomization of a single drop subjected to a high-shear turbulent flow. The systems are simulated numerically by solving the full multiphase Navier-Stokes equations coupled to the various equations of state and a level set interface tracking scheme based on the refined level set grid method. The codes have been parallelized using MPI in order to take advantage of today's very large parallel computational architectures. In the first system, the code's ability to handle surface tension and large temperature gradients is established. In the second system, the code's ability to simulate simple interface geometries with strong shear is demonstrated. In the third system, the ability to handle extremely complex geometries and topology changes with strong shear is shown.
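The MMS workflow described above is easy to demonstrate on a single-phase toy problem: choose a manufactured solution, substitute it into the PDE, and take the residual as a source term. The sketch below does this symbolically for a 1D heat equation; the multiphase extension in the thesis additionally handles discontinuous material properties at the interface, which this sketch does not.

```python
import sympy as sp

# Method of manufactured solutions for the 1D heat equation u_t = alpha*u_xx
# (a single-phase toy problem, not the thesis's multiphase formulation).
x, t, alpha = sp.symbols("x t alpha")
u_exact = sp.sin(x) * sp.exp(-t)              # the manufactured "solution"
pde_residual = sp.diff(u_exact, t) - alpha * sp.diff(u_exact, x, 2)
source = sp.simplify(pde_residual)            # source term to add to the solver
print(source)                                 # (alpha - 1)*exp(-t)*sin(x)
```

Feeding this source term to the solver makes u_exact an exact solution, so the observed order of accuracy can be measured against it on a sequence of refined grids, and a shortfall from the formal order points to a coding mistake.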
Contributors: Brady, Peter, Ph.D (Author) / Herrmann, Marcus (Thesis advisor) / Lopez, Juan (Thesis advisor) / Adrian, Ronald (Committee member) / Calhoun, Ronald (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A method of determining nanoparticle temperature through fluorescence intensity levels is described. Intracellular processes are often tracked through the use of fluorescence tagging, and ideal temperatures for many of these processes are unknown. Through the use of fluorescence-based thermometry, cellular processes such as intracellular enzyme movement can be studied and their respective temperatures established simultaneously. Polystyrene and silica nanoparticles are synthesized with a variety of temperature-sensitive dyes such as BODIPY, rose Bengal, Rhodamine dyes 6G, 700, and 800, and Nile Blue A and Nile Red. Photographs are taken with a QImaging QM1 Questar EXi Retiga camera while particles are heated from 25 to 70 °C and excited at 532 nm with a Coherent DPSS-532 laser. Photographs are converted to intensity images in MATLAB and analyzed for fluorescence intensity, and plots of each dye's intensity versus temperature are generated. Regression curves are created to describe the change in fluorescence intensity over temperature. Dyes are compared as the nanoparticle core material is varied. Large particles are also created to match the camera's optical resolution capabilities, and it is established that intensity values increase proportionally with nanoparticle size. Nile Red yielded the closest-fit model, with R² values greater than 0.99 for a second-order polynomial fit. By contrast, Rhodamine 6G only yielded an R² value of 0.88 for a third-order polynomial fit, making it the least reliable dye for temperature measurements using the polynomial model. Of particular interest in this work is Nile Blue A, whose fluorescence-temperature curve yielded a much different shape from the other dyes. It is recommended that future work describe a broader range of dyes and nanoparticle sizes, and use multiple excitation wavelengths to better quantify each dye's quantum efficiency. Further research into the effects of nanoparticle size on fluorescence intensity levels should be considered, as the particles used here greatly exceed 2 µm. In addition, Nile Blue A should be further investigated as to why its fluorescence-temperature curve did not take on a characteristic shape for a temperature-sensitive dye in these experiments.
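The calibration step described above amounts to fitting a low-order polynomial to intensity-temperature data and judging the fit by R². A minimal Python sketch of that procedure follows (the thesis used MATLAB); the data points are illustrative placeholders, not measurements from these experiments.

```python
import numpy as np

# Fit an intensity-vs-temperature calibration curve of the kind described
# above. The data below are illustrative placeholders only.
T = np.array([25, 35, 45, 55, 65, 70], dtype=float)   # temperature, deg C
I = np.array([1.00, 0.91, 0.80, 0.67, 0.52, 0.44])    # normalized intensity

coeffs = np.polyfit(T, I, deg=2)     # second-order fit, as used for Nile Red
fit = np.polyval(coeffs, T)
ss_res = np.sum((I - fit) ** 2)
ss_tot = np.sum((I - I.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.4f}")      # goodness of fit of the calibration curve
```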
Contributors: Tomforde, Christine (Author) / Phelan, Patrick (Thesis advisor) / Dai, Lenore (Committee member) / Adrian, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Next generation gas turbines will be required to produce low concentrations of pollutants such as oxides of nitrogen (NOx), carbon monoxide (CO), and soot. In order to design gas turbines which produce lower emissions it is essential to have computational tools to help designers. Over the past few decades, computational fluid dynamics (CFD) has played a key role in the design of turbomachinery and will be heavily relied upon for the design of future components. In order to design components with the least amount of experimental rig testing, the ensemble of submodels used in simulations must be known to accurately predict the component's performance. The present work aims to validate a CFD model used for a reverse-flow, rich-burn, quick-quench, lean-burn combustor being developed at Honeywell. Initially, simulations are performed to establish a baseline which will help to assess the impact on combustor performance made by changing CFD models. Rig test data from Honeywell are compared to these baseline simulation results. Reynolds-averaged Navier-Stokes (RANS) and large eddy simulation (LES) turbulence models are both used, with the presumption that the LES turbulence model will better predict combustor performance. One specific model, the fuel spray model, is evaluated next. Experimental data of the fuel spray in an isolated environment are used to evaluate models for the fuel spray, and a new, simpler approach for inputting the spray boundary conditions (BC) in the combustor is developed. The combustor is simulated once more to evaluate changes from the new fuel spray boundary conditions. This CFD model is then used in a predictive simulation of eight other combustor configurations. All computer simulations in this work were performed with the commercial CFD software ANSYS FLUENT. NOx pollutant emissions are predicted reasonably well across the range of configurations tested using the RANS turbulence model. However, in LES, significant underpredictions are seen. Causes of the underprediction in NOx concentrations are investigated. Temperature metrics at the exit of the combustor, however, are seen to be better predicted with LES.
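Spray boundary conditions of the kind discussed above are typically specified as an injected droplet size distribution. The sketch below samples diameters from a Rosin-Rammler distribution, a form commonly used for spray injections in CFD codes such as ANSYS FLUENT; whether this work used that particular distribution, and the parameter values shown, are assumptions for illustration only.

```python
import numpy as np

def sample_rosin_rammler(n, d_mean, spread, rng=None):
    """Sample n droplet diameters from a Rosin-Rammler distribution,
    CDF F(d) = 1 - exp(-(d/d_mean)^spread), by inverting the CDF.
    A generic illustration; the thesis's spray BC approach may differ."""
    rng = rng or np.random.default_rng(0)
    u = rng.random(n)
    return d_mean * (-np.log(1.0 - u)) ** (1.0 / spread)

# Illustrative parameters (not from the thesis): 40-micron mean diameter
diam = sample_rosin_rammler(n=10_000, d_mean=40e-6, spread=3.5)
print(f"median diameter ~ {np.median(diam) * 1e6:.1f} um")
```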
Contributors: Spencer, A. Jeffrey (Author) / Herrmann, Marcus (Thesis advisor) / Chen, Kangping (Committee member) / Adrian, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2012