Matching Items (100)
Description
While much effort in Stirling engine development is placed on making the high-temperature region of the Stirling engine warmer, this research explores methods to lower the temperature of the cold region by improving heat transfer in the cooler. This paper presents heat transfer coefficients obtained for a Stirling engine heat exchanger with oscillatory flow. The effects of oscillating frequency and input heat rate on the heat transfer coefficients are evaluated, and details on the design and development of the heat exchanger test apparatus are also explained. Featured results include the relationship between overall heat transfer coefficients and oscillation frequency, which increase from 21.5 to 46.1 W m⁻² K⁻¹ as the oscillation frequency increases from 6.0 to 19.3 Hz. A correlation for the Nusselt number on the inside of the heat exchange tubes in oscillatory flow is presented in a concise, dimensionless form in terms of the kinetic Reynolds number as a result of a statistical analysis. The test apparatus design proved successful throughout its implementation, as evidenced by the usefulness of the data and the clear trends observed. The author is not aware of any other publicly available research on a Stirling engine cooler to the extent presented in this paper. Therefore, the present results are analyzed on a part-by-part basis and compared to segments of other research; however, strong correlations with data from other studies are not expected. The data presented in this paper are part of a continuing effort to better understand heat transfer properties in Stirling engines as well as other oscillating-flow applications.
Contributors: Eppard, Erin (Author) / Phelan, Patrick (Thesis advisor) / Trimble, Steve (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2011
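
The abstract above reports a Nusselt-number correlation in terms of the kinetic Reynolds number. As a rough illustration of how such a correlation is typically formed, the sketch below computes Re_omega = omega*D^2/nu and fits a power law Nu = C*Re_omega^m by least squares in log space. The tube diameter, gas properties, and interior data points are illustrative assumptions; only the endpoint heat transfer coefficients and the frequency range come from the abstract.

```python
# Hedged sketch: fitting a power-law Nusselt-number correlation Nu = C * Re_omega**m,
# where Re_omega = omega * D**2 / nu is the kinetic Reynolds number.
# Diameter, properties, and interior data points are placeholders, not thesis values.
import numpy as np

D = 0.005              # tube inner diameter, m (assumed)
nu_visc = 1.5e-5       # kinematic viscosity of the working gas, m^2/s (assumed)
k_gas = 0.026          # gas thermal conductivity, W/(m K) (assumed)

freq_hz = np.array([6.0, 9.0, 12.0, 16.0, 19.3])   # oscillation frequencies, Hz
h = np.array([21.5, 27.0, 33.0, 40.0, 46.1])       # overall coefficients, W/(m^2 K); endpoints from abstract, interior illustrative

re_omega = 2.0 * np.pi * freq_hz * D**2 / nu_visc  # kinetic Reynolds number
nusselt = h * D / k_gas                            # Nusselt number

# Least-squares fit in log-log space yields the exponent m and coefficient C.
m, log_c = np.polyfit(np.log(re_omega), np.log(nusselt), 1)
print(f"Nu = {np.exp(log_c):.3f} * Re_omega^{m:.3f}")
```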
Description
A method of determining nanoparticle temperature through fluorescence intensity levels is described. Intracellular processes are often tracked through the use of fluorescence tagging, and ideal temperatures for many of these processes are unknown. Through the use of fluorescence-based thermometry, cellular processes such as intracellular enzyme movement can be studied and their respective temperatures established simultaneously. Polystyrene and silica nanoparticles are synthesized with a variety of temperature-sensitive dyes such as BODIPY, rose Bengal, Rhodamine dyes 6G, 700, and 800, and Nile Blue A and Nile Red. Photographs are taken with a QImaging QM1 Questar EXi Retiga camera while particles are heated from 25 to 70 °C and excited at 532 nm with a Coherent DPSS-532 laser. Photographs are converted to intensity images in MATLAB and analyzed for fluorescence intensity, and plots are generated in MATLAB to describe each dye's intensity vs. temperature. Regression curves are created to describe the change in fluorescence intensity over temperature. Dyes are compared as the nanoparticle core material is varied. Large particles are also created to match the camera's optical resolution capabilities, and it is established that intensity values increase proportionally with nanoparticle size. Nile Red yielded the closest-fit model, with R² values greater than 0.99 for a second-order polynomial fit. By contrast, Rhodamine 6G only yielded an R² value of 0.88 for a third-order polynomial fit, making it the least reliable dye for temperature measurements using the polynomial model. Of particular interest in this work is Nile Blue A, whose fluorescence-temperature curve yielded a much different shape from the other dyes. It is recommended that future work describe a broader range of dyes and nanoparticle sizes, and use multiple excitation wavelengths to better quantify each dye's quantum efficiency. Further research into the effects of nanoparticle size on fluorescence intensity levels should be considered, as the particles used here greatly exceed 2 μm. In addition, Nile Blue A should be further investigated as to why its fluorescence-temperature curve did not take on a characteristic shape for a temperature-sensitive dye in these experiments.
Contributors: Tomforde, Christine (Author) / Phelan, Patrick (Thesis advisor) / Dai, Lenore (Committee member) / Adrian, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2011
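
As a minimal illustration of the regression step described above, the sketch below fits a second-order polynomial to synthetic intensity-versus-temperature data over 25-70 °C and reports R². The data, noise level, and coefficients are placeholders; the thesis performed this analysis in MATLAB on camera images.

```python
# Hedged sketch of an intensity-vs-temperature regression: second-order
# polynomial fit over 25-70 C plus an R^2 goodness-of-fit value.
# Intensity values are synthetic placeholders, not thesis data.
import numpy as np

temperature_c = np.linspace(25, 70, 10)
intensity = 1.0 - 0.008 * (temperature_c - 25) + 0.5e-4 * (temperature_c - 25)**2  # synthetic trend
intensity += np.random.default_rng(0).normal(0, 0.002, temperature_c.size)          # measurement noise

coeffs = np.polyfit(temperature_c, intensity, deg=2)     # second-order fit, as reported for Nile Red
fitted = np.polyval(coeffs, temperature_c)

ss_res = np.sum((intensity - fitted)**2)
ss_tot = np.sum((intensity - intensity.mean())**2)
r_squared = 1.0 - ss_res / ss_tot
print(f"fit coefficients: {coeffs}, R^2 = {r_squared:.4f}")
```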
Description
Next generation gas turbines will be required to produce low concentrations of pollutants such as oxides of nitrogen (NOx), carbon monoxide (CO), and soot. In order to design gas turbines which produce lower emissions, it is essential to have computational tools to help designers. Over the past few decades, computational fluid dynamics (CFD) has played a key role in the design of turbomachinery and will be heavily relied upon for the design of future components. In order to design components with the least amount of experimental rig testing, the ensemble of submodels used in simulations must be known to accurately predict the component's performance. The present work aims to validate a CFD model used for a reverse flow, rich-burn, quick-quench, lean-burn combustor being developed at Honeywell. Initially, simulations are performed to establish a baseline which will help to assess the impact on combustor performance made by changing CFD models. Rig test data from Honeywell are compared to these baseline simulation results. Reynolds-averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES) turbulence models are both used, with the presumption that the LES turbulence model will better predict combustor performance. One specific model, the fuel spray model, is evaluated next. Experimental data of the fuel spray in an isolated environment are used to evaluate models for the fuel spray, and a new, simpler approach for inputting the spray boundary conditions (BC) in the combustor is developed. The combustor is simulated once more to evaluate changes from the new fuel spray boundary conditions. This CFD model is then used in a predictive simulation of eight other combustor configurations. All computer simulations in this work were performed with the commercial CFD software ANSYS FLUENT. NOx pollutant emissions are predicted reasonably well across the range of configurations tested using the RANS turbulence model. However, in LES, significant underpredictions are seen. Causes of the underprediction in NOx concentrations are investigated. Temperature metrics at the exit of the combustor, however, are seen to be better predicted with LES.
Contributors: Spencer, A. Jeffrey (Author) / Herrmann, Marcus (Thesis advisor) / Chen, Kangping (Committee member) / Adrian, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Energy-efficient design and management of data centers has seen considerable interest in recent years owing to its potential to reduce overall energy consumption and thereby the costs associated with it. Therefore, it is of utmost importance that new methods for improved physical design of data centers, resource management schemes for efficient workload distribution, and sustainable operation for improving energy efficiency be developed and tested before implementation in an actual data center. The BlueTool project provides such a state-of-the-art platform, both software and hardware, to design and analyze the energy efficiency of data centers. The software platform, namely GDCSim, uses a cyber-physical approach to study the physical behavior of the data center in response to management decisions by taking into account the heat recirculation patterns in the data center room. Such an approach yields the best possible energy savings owing to the characterization of cyber-physical interactions and the ability of the resource management to make decisions based on the physical behavior of data centers. GDCSim mainly uses two Computational Fluid Dynamics (CFD) based cyber-physical models, namely the Heat Recirculation Matrix (HRM) and the Transient Heat Distribution Model (THDM), for thermal predictions based on different management schemes. They are generated using a model generator, namely BlueSim. To ensure the accuracy of the thermal predictions using GDCSim, the models (HRM and THDM) and the model generator (BlueSim) need to be validated experimentally. For this purpose, the hardware platform of the BlueTool project, namely the BlueCenter, a mini data center, can be used. As part of this thesis, the HRM and THDM were generated using BlueSim and experimentally validated using the BlueCenter. An average error of 4.08% was observed for BlueSim, 5.84% for HRM, and 4.24% for THDM. Further, a high initial error was observed for transient thermal prediction, which is due to the inability of BlueSim to account for the heat retained by server components.
Contributors: Gilbert, Rose Robin (Author) / Gupta, Sandeep K.S. (Thesis advisor) / Artemiadis, Panagiotis (Committee member) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2012
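
For context, a Heat Recirculation Matrix is commonly used to predict server inlet temperatures as a linear function of every server's power draw added to the CRAC supply temperature. The sketch below shows that usage with an illustrative 3-server matrix; the matrix values, supply temperature, and power draws are assumptions, not BlueCenter or GDCSim data.

```python
# Hedged sketch of HRM-style thermal prediction: inlet temperature of each
# server equals the supply air temperature plus a linear contribution of all
# servers' power through the recirculation matrix D. Values are illustrative.
import numpy as np

t_supply = 18.0                                   # CRAC supply air temperature, C (assumed)
D = np.array([[0.010, 0.004, 0.001],              # D[i, j]: temperature rise at inlet i
              [0.003, 0.012, 0.005],              # per watt dissipated by server j, K/W (assumed)
              [0.001, 0.006, 0.011]])
power = np.array([250.0, 180.0, 300.0])           # server power draws, W (assumed)

t_inlet = t_supply + D @ power                    # predicted inlet temperatures, C
print(t_inlet)
```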
Description
Microfluidics is the study of fluid flow at very small scales (micro -- one millionth of a meter) and is prevalent in many areas of science and engineering. Typical applications include lab-on-a-chip devices, microfluidic fuel cells, and DNA separation technologies. Many of these microfluidic devices rely on micron-resolution velocimetry measurements to improve microchannel design and characterize existing devices. Methods such as micro particle image velocimetry (microPIV) and micro particle tracking velocimetry (microPTV) are mature and established methods for characterization of steady 2D flow fields. Increasingly complex microdevices require techniques that measure unsteady and/or three-dimensional velocity fields. This dissertation presents a method for three-dimensional velocimetry of unsteady microflows based on spinning disk confocal microscopy and depth scanning of a microvolume. High-speed 2D unsteady velocity fields are resolved by acquiring images of particle motion using a high-speed CMOS camera and confocal microscope. The confocal microscope spatially filters out-of-focus light using a rotating disk of pinholes placed in the imaging path, improving the system's ability to resolve unsteady microPIV measurements by increasing the image and correlation signal-to-noise ratio. For 3D3C measurements, a piezo-actuated objective positioner quickly scans the depth of the microvolume and collects 2D image slices, which are stacked into 3D images. Super-resolution microPIV interrogates these 3D images using microPIV as a predictor field for tracking individual particles with microPTV. The 3D3C diagnostic is demonstrated by measuring a pressure-driven flow in a three-dimensional expanding microchannel. The experimental velocimetry data acquired at 30 Hz with instantaneous spatial resolution of 4.5 by 4.5 by 4.5 microns agree well with a computational model of the flow field. The technique allows for isosurface visualization of time-resolved 3D3C particle motion and high spatial resolution velocity measurements without requiring a calibration step or reconstruction algorithms. Several applications are investigated, including 3D quantitative fluorescence imaging of isotachophoresis plugs advecting through a microchannel and the dynamics of reaction-induced colloidal crystal deposition.
Contributors: Klein, Steven Adam (Author) / Posner, Jonathan D. (Thesis advisor) / Adrian, Ronald (Committee member) / Chen, Kangping (Committee member) / Devasenathipathy, Shankar (Committee member) / Frakes, David (Committee member) / Arizona State University (Publisher)
Created: 2011
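
The core of any PIV or microPIV interrogation is locating the peak of a cross-correlation between two interrogation windows. The sketch below demonstrates that step on synthetic images; real microPIV processing adds windowing, subpixel peak fitting, and vector validation, none of which are shown here.

```python
# Hedged sketch of the correlation step underlying PIV/microPIV: the bulk
# particle displacement between two interrogation windows is taken from the
# peak of their 2D cross-correlation. Images here are synthetic.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
frame_a = rng.random((64, 64))
true_shift = (3, -2)                                  # rows, cols (synthetic displacement)
frame_b = np.roll(frame_a, true_shift, axis=(0, 1))   # second exposure, shifted particle pattern

# Cross-correlate by convolving frame_b with the flipped, mean-subtracted frame_a.
corr = fftconvolve(frame_b - frame_b.mean(),
                   (frame_a - frame_a.mean())[::-1, ::-1], mode="same")
peak = np.unravel_index(np.argmax(corr), corr.shape)
displacement = (peak[0] - frame_a.shape[0] // 2, peak[1] - frame_a.shape[1] // 2)
print(displacement)   # expected to recover roughly (3, -2)
```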
Description
Concrete columns constitute the fundamental supports of buildings, bridges, and various other infrastructure, and their failure could lead to the collapse of the entire structure. As such, great effort goes into improving the fire resistance of such columns. In a time-sensitive fire situation, a delay in the failure of critical load-bearing structures can lead to an increase in the time allowed for the evacuation of occupants, recovery of property, and access to the fire. Much work has been done in improving the structural performance of concrete, including reducing column sizes and providing a safer structure. As a result, high-strength (HS) concrete has been developed to fulfill the needs of such improvements. HS concrete varies from normal-strength (NS) concrete in that it has a higher stiffness, lower permeability, and greater durability. This, unfortunately, has resulted in poor performance under fire. The lower permeability allows water vapor to build up, causing HS concrete to suffer from explosive spalling under rapid heating. In addition, the coefficient of thermal expansion (CTE) of HS concrete is lower than that of NS concrete. In this study, the effects of introducing a region of crumb rubber concrete into a steel-reinforced concrete column were analyzed. The inclusion of crumb rubber concrete in a column greatly increases the thermal resistivity of the overall column, leading to a reduction in core temperature as well as in the rate at which the column is heated. Different cases were analyzed while varying the position of the crumb-rubber region to characterize the effect of position on the improvement of fire resistance. Computer-simulated finite element analysis was used to calculate the temperature and strain distribution with time across the column's cross-sectional area, with specific interest in the steel-concrete region. Of the several cases which were investigated, it was found that the improvement in time before failure ranged from 32 to 45 minutes.
Contributors: Ziadeh, Bassam Mohammed (Author) / Phelan, Patrick (Thesis advisor) / Kaloush, Kamil (Thesis advisor) / Jiang, Hanqing (Committee member) / Arizona State University (Publisher)
Created: 2011
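
As a rough analogue of the thermal side of the finite element analysis described above, the sketch below marches an explicit 1D finite-difference conduction solution through a concrete cover that contains a lower-diffusivity crumb-rubber layer. The geometry, material properties, and fire-side boundary temperature are illustrative assumptions rather than the thesis inputs.

```python
# Hedged sketch: explicit 1D transient conduction through a two-material cover,
# with a crumb-rubber concrete region of lower thermal diffusivity. All values
# are illustrative assumptions, not the thesis model inputs.
import numpy as np

L = 0.10                      # depth of cover analysed, m (assumed)
n = 101
dx = L / (n - 1)
alpha = np.full(n, 7.0e-7)    # thermal diffusivity of normal concrete, m^2/s (assumed)
alpha[40:60] = 2.5e-7         # crumb-rubber concrete region: lower diffusivity (assumed)

dt = 0.4 * dx**2 / alpha.max()          # satisfies the explicit stability limit
T = np.full(n, 20.0)                    # initial temperature, C
T_fire, T_core = 800.0, 20.0            # fire-exposed face and core-side temperatures, C (assumed)

t_end = 1800.0                          # simulate 30 minutes of heating
for _ in range(int(t_end / dt)):
    T[0], T[-1] = T_fire, T_core        # Dirichlet boundary conditions
    T[1:-1] = T[1:-1] + alpha[1:-1] * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"temperature at mid-depth after 30 min: {T[n // 2]:.1f} C")
```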
Description
The heat recovery steam generator (HRSG) is a key component of Combined Cycle Power Plants (CCPP). The exhaust (flue gas) from the CCPP gas turbine flows through the HRSG; this gas typically contains a high concentration of NO and cannot be discharged directly to the atmosphere because of environmental restrictions. In the HRSG, one method of reducing the flue gas NO concentration is to inject ammonia into the gas at a plane upstream of the Selective Catalytic Reduction (SCR) unit through an ammonia injection grid (AIG); the SCR is where the NO is reduced to N2 and H2O. The amount and spatial distribution of the injected ammonia are key considerations for NO reduction while using the minimum possible amount of ammonia. This work had three objectives. First, a flow network model of the Ammonia Flow Control Unit (AFCU) was to be developed to calculate the quantity of ammonia released into the flue gas from each AIG perforation. Second, CFD simulation of the flue gas flow was to be performed to obtain the velocity, temperature, and species concentration fields in the gas upstream and downstream of the SCR. Finally, performance characteristics of the ammonia injection system were to be evaluated. All three objectives were reached. The AFCU was modeled in Java, with a graphical user interface provided for the user. The commercial software Fluent was used for CFD simulation. To evaluate the efficacy of the ammonia injection system in reducing the flue gas NO concentration, the twelve butterfly valves in the AFCU ammonia delivery piping (risers) were throttled by various degrees in the model and the NO concentration distribution computed for each operational scenario. Keeping the valves fully open was found to lead to a more uniform reduction in NO concentration than throttling the valves such that the riser flows were equal. Additionally, the SCR catalyst was consumed somewhat more uniformly, and ammonia slip (ammonia not consumed in reaction) was found to be lower. Ammonia use could be decreased by 10 percent while maintaining the NO concentration limit in the flue gas exhausting into the atmosphere.
Contributors: Adulkar, Sajesh (Author) / Roy, Ramendra (Thesis advisor) / Lee, Taewoo (Thesis advisor) / Phelan, Patrick (Committee member) / Arizona State University (Publisher)
Created: 2011
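
A flow network model of parallel risers fed from a common header reduces, in its simplest form, to equal pressure drops across the branches. The sketch below shows how throttling one valve redistributes flow under a quadratic loss-coefficient model; the loss coefficients and total flow are assumptions, not values from the AFCU model.

```python
# Hedged sketch of the parallel-branch idea behind a riser flow network:
# every branch from a common header sees the same pressure drop, so with a
# quadratic loss model dP = K * q**2 each riser carries q proportional to K**-0.5.
# Loss coefficients and total flow are illustrative, not AFCU values.
import numpy as np

K = np.array([3.0, 3.0, 8.0, 3.0])       # valve + riser loss coefficients (assumed); third valve throttled
total_flow = 1.0                          # total ammonia flow, normalized

weights = 1.0 / np.sqrt(K)                # q_i proportional to K_i**-0.5
q = total_flow * weights / weights.sum()

print("riser flow fractions:", np.round(q / total_flow, 3))
print("relative dP check   :", np.round(K * q**2 / (K * q**2).max(), 3))  # should all be ~1
```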
Description
Thermal interface materials (TIMs) are extensively used in thermal management applications, especially in the microelectronics industry. With advances in microprocessor design and speed, thermal management is becoming more complex. These advances in microelectronics have been accompanied by parallel advances in thermal interface materials. Given the vast number of available TIM types, selection of the material for each specific application is crucial. Most of the metrologies currently available on the market are designed to qualify TIMs between two perfectly flat surfaces, mimicking an ideal scenario. In realistic applications, however, the surfaces may not be parallel. In this study, a unique characterization method is proposed to address the need for TIM characterization between non-parallel surfaces. Two different metrologies are custom-designed and built to measure the impact of tilt angle on the performance of TIMs. The first metrology, the Angular TIM Tester, is based on the ASTM D5470 standard with the flexibility to characterize the sample under an induced tilt angle of the rods. The second metrology, the Bare Die Tilting Metrology, is designed to validate the performance of a TIM under an induced tilt angle between the bare die and the cooling solution in an "in-situ" package testing format. Several types of off-the-shelf thermal interface materials were tested and the results are outlined in the study. Data were collected using both metrologies for all selected materials. It was found that small tilt angles, up to 0.6°, have an impact on the thermal resistance of all materials, especially for in-situ testing. In addition, the resistance change between 0° and the selected tilt angle was found to be in close agreement between the two metrologies for the paste-based materials and the phase-change material. However, a clear difference in the thermal performance of the tested materials was observed between the two metrologies for the gap filler materials.
Contributors: Harris, Enisa (Author) / Phelan, Patrick (Thesis advisor) / Calhoun, Ronald (Committee member) / Devasenathipathy, Shankar (Committee member) / Arizona State University (Publisher)
Created: 2011
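
For reference, ASTM D5470-style testing infers the TIM thermal resistance by extrapolating the temperature gradients in the metered rods to the two contact surfaces and evaluating R = (T_hot - T_cold) * A / Q. The sketch below carries out that reduction on illustrative thermocouple readings; all numbers are assumptions, not measurements from the metrologies described above.

```python
# Hedged sketch of the basic reduction used in ASTM D5470-style TIM testing:
# linear temperature gradients along each metered rod are extrapolated to the
# contact surfaces, then R = dT * A / Q. All numbers are illustrative.
import numpy as np

area = 25.4e-3**2                                 # 1 in x 1 in contact area, m^2 (assumed)
q = 45.0                                          # heat flow through the stack, W (assumed)

# Thermocouple positions (m from the TIM surface) and readings (C) along each rod.
z_hot, t_hot = np.array([0.030, 0.020, 0.010]), np.array([71.2, 67.8, 64.4])
z_cold, t_cold = np.array([0.010, 0.020, 0.030]), np.array([48.1, 44.9, 41.7])

# Linear extrapolation of each gradient to the TIM surfaces (z = 0).
t_hot_surface = np.polyval(np.polyfit(z_hot, t_hot, 1), 0.0)
t_cold_surface = np.polyval(np.polyfit(z_cold, t_cold, 1), 0.0)

resistance = (t_hot_surface - t_cold_surface) * area / q   # K m^2/W
print(f"thermal resistance: {resistance * 1e4:.2f} K cm^2/W")  # converted to K cm^2/W
```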
Description
Evacuated tube solar thermal collector arrays have a wide range of applications. While most of these applications are limited in performance due to relatively low maximum operating temperatures, these collectors can still be useful in low grade thermal systems. An array of fifteen Apricus AP-30 evacuated tube collectors was designed, assembled, and tested on the Arizona State University campus in Tempe, AZ. An existing system model was reprogrammed and updated for increased flexibility and ease of use. The model predicts the outlet temperature of the collector array based on the specified environmental conditions. The model was verified through a comparative analysis to the data collected during a three-month test period. The accuracy of this model was then compared against data calculated from the Solar Rating and Certification Corporation (SRCC) efficiency curve to determine the relative performance. It was found that both the original and updated models were able to generate reasonable predictions of the performance of the collector array with overall average percentage errors of 1.0% and 1.8%, respectively.
Contributors: Stonebraker, Matthew (Author) / Phelan, Patrick (Thesis advisor) / Reddy, Agami (Committee member) / Bryan, Harvey (Committee member) / Arizona State University (Publisher)
Created: 2011
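
A quasi-steady collector model of the kind validated above can be sketched from a standard efficiency curve plus an energy balance on the flow. The coefficients, array area, and operating conditions below are illustrative assumptions, not the Apricus AP-30 rating or the ASU test data.

```python
# Hedged sketch of a quasi-steady collector array model: useful gain from an
# efficiency curve of the form eta = eta0 - a1*dT/G - a2*dT^2/G, then outlet
# temperature from an energy balance. All values are illustrative assumptions.
eta0, a1, a2 = 0.46, 1.5, 0.008   # optical efficiency, W/(m^2 K), W/(m^2 K^2) (assumed)
area = 15 * 2.8                    # gross area of a 15-collector array, m^2 (assumed)
G = 900.0                          # irradiance on the collector plane, W/m^2 (assumed)
T_in, T_amb = 45.0, 30.0           # inlet fluid and ambient temperatures, C (assumed)
m_dot, cp = 0.35, 4186.0           # mass flow rate, kg/s, and water specific heat, J/(kg K)

dT = T_in - T_amb                  # inlet temperature used as the reference (simplification)
eta = eta0 - a1 * dT / G - a2 * dT**2 / G
q_useful = max(eta, 0.0) * area * G
T_out = T_in + q_useful / (m_dot * cp)
print(f"efficiency {eta:.3f}, predicted outlet temperature {T_out:.1f} C")
```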
Description
In a properly controlled environment, such as industrial metrology, it is possible to overcome many of the practical constraints on image-based position measurement using the techniques of image registration and achieve repeatable feature measurements on the order of 0.3% of a pixel, about an order of magnitude better than conventional real-world performance. These measurements are then used as inputs for model-optimal, model-agnostic smoothing applied to the calibration of a laser scribe and to online tracking of a velocimeter from video input. Using appropriate smooth interpolation to increase effective sample density can reduce uncertainty and improve estimates. Applying the proper negative offset to the template function produces a convolution with higher local curvature than either the template or the target function, which allows improved center-finding. Using the Akaike Information Criterion with a smoothing spline, it is possible to perform a model-optimal smooth on scalar measurements without knowing the underlying model and to determine the function describing the uncertainty in that optimal smooth. An example of empirically deriving the parameters of a rudimentary Kalman filter from this analysis is then provided and tested. Using the techniques of Exploratory Data Analysis and the "Formulize" genetic algorithm tool to convert the spline models into more accessible analytic forms resulted in a stable, properly generalized Kalman filter whose performance and simplicity exceed "textbook" implementations. Validation of the measurement shows that, in the analytic case, it led to arbitrary precision in locating the feature; in a reasonable test case using the proposed methods, a consistent maximum error of around 0.3% of the length of a pixel was achieved, and in practice, using pixels that were 700 nm in size, feature position was located to within ±2 nm. Robust applicability is demonstrated by measuring the indicator position of a King model 2-32-G-042 rotameter.
Contributors: Munroe, Michael R. (Author) / Phelan, Patrick (Thesis advisor) / Kostelich, Eric (Committee member) / Mahalov, Alex (Committee member) / Arizona State University (Publisher)
Created: 2012
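
As a minimal illustration of the rudimentary Kalman filter mentioned above, the sketch below smooths noisy scalar position measurements with a constant-position model. The process and measurement variances are chosen by hand here, whereas the thesis derives them empirically from the spline-smoothing analysis.

```python
# Hedged sketch of a scalar Kalman filter smoothing noisy position
# measurements of a slowly moving feature. Variances are assumed, not the
# empirically derived values from the thesis.
import numpy as np

rng = np.random.default_rng(2)
true_position = np.cumsum(rng.normal(0.0, 0.02, 200))        # slow random walk (synthetic)
measured = true_position + rng.normal(0.0, 0.3, 200)          # noisy pixel-scale measurements

q_var, r_var = 0.02**2, 0.3**2      # process and measurement variances (assumed known)
x_est, p_est = measured[0], 1.0     # initial state and covariance
estimates = []

for z in measured:
    # Predict: constant-position model, so only the covariance grows.
    p_pred = p_est + q_var
    # Update with the new measurement.
    k_gain = p_pred / (p_pred + r_var)
    x_est = x_est + k_gain * (z - x_est)
    p_est = (1.0 - k_gain) * p_pred
    estimates.append(x_est)

rms_raw = np.sqrt(np.mean((measured - true_position)**2))
rms_kf = np.sqrt(np.mean((np.array(estimates) - true_position)**2))
print(f"RMS error: raw {rms_raw:.3f}, filtered {rms_kf:.3f}")
```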