Description
Modern-day gas turbine designers face the problem of hot mainstream gas ingestion into rotor-stator disk cavities. To counter this ingestion, seals are installed on the rotor and stator disk rims, and purge air, bled off from the compressor, is injected into the cavities. It is desirable to reduce the supply of purge air, as this bleed decreases both the net power output and the efficiency of the gas turbine. Since the purge air influences the disk cavity flow field, and thereby the amount of ingestion, the aim of this work was to study the cavity velocity field experimentally using Particle Image Velocimetry (PIV). Experiments were carried out in a model single-stage axial flow turbine set-up that featured blades as well as vanes, with purge air supplied at the hub of the rotor-stator disk cavity. Along with the rotor and stator rim seals, an inner labyrinth seal was provided which split the disk cavity into a rim cavity and an inner cavity. First, the static gage pressure distribution was measured to ensure that nominally steady flow conditions had been achieved. The PIV experiments were then performed to map the velocity field on the radial-tangential plane within the rim cavity at four axial locations. Instantaneous velocity maps obtained by PIV were analyzed sector by sector to understand the rim cavity flow field. It was observed that the tangential velocity dominated the cavity flow at low purge air flow rate, its dominance decreasing as the purge air flow rate increased. Radially inboard in the rim cavity, negative radial velocity near the stator surface and positive radial velocity near the rotor surface indicated the presence of a recirculation region whose radial extent increased with the purge air flow rate. Qualitative flow streamline patterns within the rim cavity are plotted for different experimental conditions by combining the PIV map information with ingestion measurements within the cavity as reported in Thiagarajan (2013).
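As a complement to the sector-by-sector analysis described above, the following is a minimal sketch (not taken from the thesis) of projecting in-plane PIV vectors onto radial and tangential components about the disk axis and binning them by azimuthal sector; the grid, the solid-body-like synthetic field, and the sector count are assumptions for illustration.

```python
import numpy as np

# Synthetic PIV vector map on the radial-tangential plane (hypothetical;
# the actual cavity maps come from the experiments described above).
x, y = np.meshgrid(np.linspace(-0.1, 0.1, 64), np.linspace(-0.1, 0.1, 64))
u, v = -50.0 * y, 50.0 * x          # solid-body-like swirl, m/s

r = np.hypot(x, y)
theta = np.arctan2(y, x)

# Project Cartesian components onto radial and tangential unit vectors.
v_r = u * np.cos(theta) + v * np.sin(theta)
v_t = -u * np.sin(theta) + v * np.cos(theta)

# Sector-by-sector statistics: bin vectors by azimuthal angle.
n_sectors = 12
sector = ((theta + np.pi) / (2.0 * np.pi) * n_sectors).astype(int) % n_sectors
for s in range(n_sectors):
    m = sector == s
    print(f"sector {s:2d}: mean v_t = {v_t[m].mean():6.2f} m/s, "
          f"mean v_r = {v_r[m].mean():6.2f} m/s")
```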
Contributors: Pathak, Parag (Author) / Roy, Ramendra P (Thesis advisor) / Calhoun, Ronald (Committee member) / Lee, Taewoo (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Numerical climate models have provided scientists, policy makers, and the general public with crucial information for climate projections since the mid-20th century. An international effort to compare and validate the simulations of all major climate models is organized by the Coupled Model Intercomparison Project (CMIP), which has gone through several phases since 1995, with CMIP5 being the state of the art. In parallel, an organized effort to consolidate all observational data of the past century has culminated in the creation of several "reanalysis" datasets that are considered the closest representation of the true observations. This study compared the climate variability and trends in the climate model simulations and observations on timescales ranging from interannual to centennial. The analysis focused on the dynamic climate quantities of zonal-mean zonal wind and global atmospheric angular momentum (AAM), and incorporated multiple datasets from reanalyses and the most recent CMIP3 and CMIP5 archives. For the observations, the validation of AAM against the length-of-day (LOD) and the intercomparison of AAM revealed good agreement among reanalyses on the interannual and the decadal-to-interdecadal timescales, respectively; the most significant discrepancies among them are in the long-term mean and long-term trend. For the simulations, the CMIP5 models produced a significantly smaller bias and a narrower ensemble spread of the 20th-century climatology and trend of AAM compared to CMIP3, while the CMIP3 and CMIP5 simulations consistently produced a positive trend for the 20th and 21st centuries. Both CMIP3 and CMIP5 models produced a wide range of magnitudes of the decadal and interdecadal variability of the wind component of AAM (MR) compared to observation. The ensemble means of CMIP3 and CMIP5 are not statistically distinguishable for either the 20th- or the 21st-century runs. In-house atmospheric general circulation model (AGCM) simulations, forced by sea surface temperature (SST) fields taken from the CMIP5 simulations as lower boundary conditions, were also carried out. The zonal wind and MR of the CMIP5 simulations were reproduced well by the AGCM simulations. This confirmed SST as an important mediator in regulating the global atmospheric changes due to the greenhouse gas (GHG) effect.
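For reference, the wind term of global AAM is conventionally MR = (2*pi*a^3/g) times the double integral of [u] cos^2(phi) dp dphi. Below is a minimal numpy sketch under assumed grid conventions (uniform latitude grid, zonal-mean wind already formed); the actual processing of the reanalysis and CMIP archives in the thesis may differ.

```python
import numpy as np

A, G = 6.371e6, 9.81   # Earth radius (m), gravity (m/s^2)

def wind_aam(u_zm, lat_deg, p_pa):
    """Wind term of global atmospheric angular momentum,
    MR = (2*pi*a^3/g) * integral of [u] * cos^2(phi) dp dphi,
    from zonal-mean zonal wind u_zm with shape (n_lev, n_lat)."""
    phi = np.deg2rad(lat_deg)
    integrand = u_zm * np.cos(phi) ** 2
    over_p = np.trapz(integrand, p_pa, axis=0)   # pressure integral (Pa)
    return 2.0 * np.pi * A ** 3 / G * np.trapz(over_p, phi)

# Sanity check: a uniform 10 m/s westerly column between 10 and 1000 hPa
# gives MR on the order of 1e26 kg m^2/s.
lat = np.linspace(-90.0, 90.0, 73)
p = np.linspace(10e2, 1000e2, 17)        # increasing pressure (Pa)
u = np.full((p.size, lat.size), 10.0)
print(f"MR = {wind_aam(u, lat, p):.2e} kg m^2/s")   # ~2.6e26
```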
Contributors: Paek, Houk (Author) / Huang, Huei-Ping (Thesis advisor) / Adrian, Ronald (Committee member) / Wang, Zhihua (Committee member) / Anderson, James (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Ten pairings of regional climate models (RCMs) with atmosphere-ocean general circulation models from the North America Regional Climate Change Assessment Program were used to estimate the shift of extreme precipitation due to climate change using present-day and future-day climate scenarios. RCMs emulate winter storms and one-day-duration events at the sub-regional level. Annual maximum series were derived for each model pairing and each modeling period, and for the annual and winter seasons. The reliability ensemble average (REA) method was used to qualify each RCM annual maximum series by its ability to reproduce historical records and to approximate the ensemble-average predictions, because no future records exist. These series determined (a) shifts in extreme precipitation frequencies and magnitudes, and (b) shifts in distribution parameters between modeling periods. The REA method demonstrated that the winter season had lower REA factors than the annual season. For the winter season, the pairing of the Hadley Regional Model 3 with the Geophysical Fluid Dynamics Laboratory atmosphere-land model had the lowest REA factors; however, in replicating present-day climate, the pairing of the Abdus Salam International Center for Theoretical Physics' Regional Climate Model Version 3 with the Geophysical Fluid Dynamics Laboratory atmosphere-land model was superior. Shifts of extreme precipitation in the 24-hour event were measured using the precipitation magnitude for each frequency in the annual maximum series, and the difference frequency curve in the generalized extreme-value (GEV) function parameters. The average trend of all RCM pairings implied no significant shift in the winter annual maximum series; however, the REA-selected models showed an increase in annual-season precipitation extremes of 0.37 inches for the 100-year return period, and suggested an increase of approximately 0.57 inches for the same return period in the winter season. Shifts of extreme precipitation were estimated using predictions 70 years into the future based on the RCMs. Although these models do not provide climate information for the intervening 70-year period, they do provide an indication of the behavior of the future climate. The shift in extreme precipitation may be significant in the frequency distribution function, and will vary depending on each model-pairing condition. The proposed methodology addresses the many uncertainties associated with current methodologies for dealing with extreme precipitation.
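As an illustration of the frequency-curve step, the sketch below fits a generalized extreme-value distribution to an annual maximum series and reads off the 100-year 24-hour depth; the series here are synthetic stand-ins, not NARCCAP output, and the REA weighting itself is not reproduced.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic annual maximum series (inches, 24-hour depths); in the study
# these come from each RCM pairing and each modeling period.
ams_present = genextreme.rvs(-0.1, loc=1.8, scale=0.5, size=30,
                             random_state=rng)
ams_future = ams_present + rng.normal(0.3, 0.1, size=30)  # toy shifted series

for label, ams in [("present", ams_present), ("future", ams_future)]:
    shape, loc, scale = genextreme.fit(ams)
    x100 = genextreme.ppf(1.0 - 1.0 / 100.0, shape, loc=loc, scale=scale)
    print(f"{label}: GEV shape = {shape:+.2f}, 100-yr depth = {x100:.2f} in")
```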
Contributors: Riaño, Alejandro (Author) / Mays, Larry W. (Thesis advisor) / Vivoni, Enrique (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Climate change has been one of the major issues of global economic and social concern in the past decade. To quantitatively predict global climate change, the Intergovernmental Panel on Climate Change (IPCC) of the United Nations has organized a multi-national effort to use global atmosphere-ocean models to project anthropogenically induced climate changes in the 21st century. The computer simulations performed with those models and archived by the Coupled Model Intercomparison Project - Phase 5 (CMIP5) form the most comprehensive quantitative basis for predicting global environmental changes on decadal-to-centennial time scales. While the CMIP5 archives have been widely used for policy making, the inherent biases in the models have not been systematically examined. The main objective of this study is to validate the CMIP5 simulations of the 20th-century climate against observations in order to quantify the biases and uncertainties in state-of-the-art climate models. Specifically, this work focuses on three major features of the atmosphere: the jet streams over the North Pacific and Atlantic Oceans, the low-level jet (LLJ) stream over central North America which affects weather in the United States, and the near-surface wind field over North America which is relevant to energy applications. The errors in the model simulations of these features are systematically quantified, and the uncertainties in future predictions are assessed for stakeholders to use in climate applications. Additional atmospheric model simulations are performed to determine the sources of the errors in the climate models. The results reject the popular idea that errors in the sea surface temperature, due to an inaccurate ocean circulation, contribute to the errors in the major atmospheric jet streams.
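A minimal sketch of the kind of validation metric such a study rests on: area-weighted bias and RMSE of a simulated zonal-wind climatology against a reanalysis reference. The grid, the toy "reanalysis" field, and the injected model error are assumptions for illustration.

```python
import numpy as np

def bias_and_rmse(model, ref, weights):
    """Area-weighted mean bias and RMSE of a model climatology against a
    reanalysis reference on the same grid."""
    diff = model - ref
    return (np.average(diff, weights=weights),
            np.sqrt(np.average(diff ** 2, weights=weights)))

# Hypothetical 200-hPa zonal-wind climatologies on a lat-lon grid.
lat = np.linspace(-90.0, 90.0, 73)
lon = np.linspace(0.0, 357.5, 144)
LAT, _ = np.meshgrid(lat, lon, indexing="ij")
rng = np.random.default_rng(1)
reanalysis = 25.0 * np.cos(np.deg2rad(LAT)) ** 2
model = reanalysis + rng.normal(1.0, 2.0, LAT.shape)  # toy bias plus noise

w = np.cos(np.deg2rad(LAT))               # cos(latitude) area weights
b, r = bias_and_rmse(model, reanalysis, w)
print(f"bias = {b:.2f} m/s, rmse = {r:.2f} m/s")
```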
Contributors: Kulkarni, Sujay (Author) / Huang, Huei-Ping (Thesis advisor) / Calhoun, Ronald (Committee member) / Peet, Yulia (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Multi-pulse particle tracking velocimetry (multi-pulse PTV) is a recently proposed flow measurement technique aiming to improve the performance of conventional PTV/PIV. In this work, multi-pulse PTV is assessed based on PTV simulations in terms of spatial resolution, velocity measurement accuracy, and the capability of acceleration measurement. The errors in particle localization, velocity measurement, and acceleration measurement are analytically calculated and compared among quadruple-pulse, triple-pulse, and dual-pulse PTV. The optimization of triple-pulse and quadruple-pulse PTV is discussed, and criteria are developed to minimize the combined error in position, velocity, and acceleration. Experimentally, the velocity and acceleration fields of a round impinging air jet are measured to test the triple-pulse technique. A high-speed beam-splitting camera and a custom 8-pulse laser system are utilized to achieve good timing flexibility and temporal resolution. A new method to correct the registration error between CCDs is also presented. The velocity field shows good consistency between triple-pulse and dual-pulse measurements. The mean acceleration profile along the centerline of the jet is used as the ground truth for verifying the triple-pulse PIV measurements of the acceleration fields. The instantaneous acceleration field of the jet is directly measured by triple-pulse PIV and presented. Accelerations up to 1,000 g are measured in these experiments.
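For orientation, triple-pulse PTV recovers velocity and acceleration from three positions via central differences, and position noise propagates through the stencils as sigma_v = sigma_x/(sqrt(2)*dt) and sigma_a = sqrt(6)*sigma_x/dt^2. A minimal sketch, with the pulse interval and localization error chosen arbitrarily:

```python
import numpy as np

def triple_pulse(x1, x2, x3, dt):
    """Velocity and acceleration at the middle pulse from three particle
    positions separated by a uniform interval dt (central differences)."""
    v = (x3 - x1) / (2.0 * dt)
    a = (x1 - 2.0 * x2 + x3) / dt ** 2
    return v, a

# Check against constant-acceleration motion x(t) = x0 + v0*t + 0.5*a0*t^2.
dt, v0, a0 = 1.0e-4, 2.0, 150.0
t = np.array([0.0, dt, 2.0 * dt])
x = 0.01 + v0 * t + 0.5 * a0 * t ** 2
v, a = triple_pulse(x[0], x[1], x[2], dt)
print(f"v = {v:.4f} m/s (exact {v0 + a0 * dt:.4f}), a = {a:.1f} m/s^2")

# Propagating an assumed localization error sigma_x through the stencils.
sigma_x = 0.05e-6   # 0.05 um (hypothetical)
print(f"sigma_v ~ {sigma_x / (np.sqrt(2.0) * dt):.2e} m/s, "
      f"sigma_a ~ {np.sqrt(6.0) * sigma_x / dt ** 2:.2e} m/s^2")
```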
Contributors: Ding, Liuyang (Author) / Adrian, Ronald J. (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Multi-touch tablets and smart phones are now widely used in both workplace and consumer settings. Interacting with these devices requires hand and arm movements that are potentially complex and poorly understood. Experimental studies have revealed differences in performance that could potentially be associated with injury risk. However, the underlying causes of performance differences are often difficult to identify; for example, many patterns of muscle activity can potentially result in similar behavioral output. Muscle activity is one factor contributing to forces in tissues that could contribute to injury, but experimental measurement of muscle activity and force in humans is extremely challenging. Models of the musculoskeletal system can be used to make specific estimates of neuromuscular coordination and musculoskeletal forces; however, existing models cannot easily describe complex, multi-finger gestures such as those used in multi-touch human-computer interaction (HCI) tasks. We therefore seek to develop a dynamic musculoskeletal simulation capable of estimating internal musculoskeletal loading during multi-touch tasks involving multiple digits of the hand, and to use the simulation to better understand complex multi-touch and gestural movements and potentially guide the design of technologies that reduce injury risk. To accomplish this, we focused on three specific tasks. First, we determined the optimal index-finger muscle attachment points within the context of the established, validated OpenSim arm model, using measured moment-arm data taken from the literature. Second, we derived moment-arm values from experimentally measured muscle attachments and used these values to determine muscle-tendon paths for both the extrinsic and intrinsic muscles of the middle, ring, and little fingers. Finally, we explored differences in hand muscle activation patterns during zooming and rotating tasks on a tablet computer in twelve subjects. Toward this end, our musculoskeletal hand model will help better address neuromuscular coordination, safe gesture performance, and internal loading for multi-touch applications.
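A common way to relate muscle paths to joint torques, and plausibly relevant to the moment-arm tasks above, is the tendon-excursion relation r = -dL/dtheta. The sketch below applies it to a toy path-length model of an index-finger flexor; the linear length function and its coefficients are invented for illustration and are not OpenSim output.

```python
import numpy as np

def moment_arm(path_length, theta, dtheta=1.0e-4):
    """Tendon-excursion estimate of a moment arm, r = -dL/dtheta, where L is
    the musculotendon path length and theta the joint angle in radians."""
    dL = path_length(theta + dtheta) - path_length(theta - dtheta)
    return -dL / (2.0 * dtheta)

def fds_length(theta):
    """Toy path-length model of an index-finger flexor: the path shortens
    with MCP flexion at a roughly constant 8 mm moment arm (invented)."""
    return 0.240 - 0.008 * theta   # meters

for deg in (0, 30, 60, 90):
    r = moment_arm(fds_length, np.deg2rad(deg))
    print(f"MCP flexion {deg:2d} deg: moment arm = {r * 1000.0:.2f} mm")
```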
Contributors: Yi, Chong-hwan (Author) / Jindrich, Devin L. (Thesis advisor) / Artemiadis, Panagiotis K. (Thesis advisor) / Phelan, Patrick (Committee member) / Santos, Veronica J. (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The Volume-of-Fluid method is a popular method for interface tracking in multiphase applications within Computational Fluid Dynamics. To date, there exist several algorithms for reconstructing a geometric interface surface. Among these are the Finite Difference algorithm, the Least Squares Volume-of-Fluid Interface Reconstruction Algorithm (LVIRA), and the Efficient Least Squares Volume-of-Fluid Interface Reconstruction Algorithm (ELVIRA). Alongside these geometric interface reconstruction algorithms, there exist several volume-of-fluid transport algorithms. This paper discusses two operator-splitting advection algorithms and an unsplit advection algorithm. Using these three interface reconstruction algorithms and three advection algorithms, a comparison is drawn to see how different combinations of the algorithms perform with respect to accuracy as well as computational expense.
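As background, geometric reconstruction typically starts from an estimate of the interface normal in each mixed cell. Below is a minimal 2D sketch of a Youngs-type finite-difference normal from a 3x3 block of volume fractions; the 1-2-1 weighting is one common variant, not necessarily the exact stencil compared in this work.

```python
import numpy as np

def youngs_normal(f):
    """Youngs-type finite-difference interface normal from a 3x3 block of
    volume fractions f[row, col]; returns the outward (liquid-to-gas)
    unit normal. The volume-fraction gradient points into the liquid."""
    k = np.array([1.0, 2.0, 1.0])
    dfdx = ((f[:, 2] - f[:, 0]) @ k) / 8.0
    dfdy = ((f[2, :] - f[0, :]) @ k) / 8.0
    n = -np.array([dfdx, dfdy])
    return n / np.linalg.norm(n)

# Vertical interface: liquid fills the left column and half the middle one.
f = np.array([[1.0, 0.5, 0.0],
              [1.0, 0.5, 0.0],
              [1.0, 0.5, 0.0]])
print(youngs_normal(f))   # -> [1. 0.], pointing in +x toward the gas
```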
Contributors: Kedelty, Dominic (Author) / Herrmann, Marcus (Thesis advisor) / Huang, Huei-Ping (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The flow of liquid PDMS (10:1 v/v base to cross-linker ratio) in open, rectangular silicon microchannels, with and without a hexamethyldisilazane (HMDS) or polytetrafluoroethylene (PTFE) (120 nm) coating, was studied. Photolithographic patterning and etching of silicon wafers was used to create microchannels with a range of widths (5-50 μm) and depths (5-20 μm). The experimental PDMS flow rates were compared to an analytical model based on the work of Lucas and Washburn. The experimental flow rates closely matched the predicted flow rates for channels with an aspect ratio (width to depth), p, between one and two. Flow rates in channels with p less than one were higher than predicted, whereas the opposite was true for channels with p greater than two. The divergence between the experimental and predicted flow rates steadily increased with increasing p. These findings are rationalized in terms of the effect of channel dimensions on the front and top meniscus morphology and the possible deviation from the no-slip condition at the channel walls at high shear rates.
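For context, the Lucas-Washburn model predicts a penetration length growing as sqrt(t). Below is a minimal sketch of the classical closed-tube form, with PDMS-like property values assumed for illustration; the analytical model for open rectangular channels used in this work modifies the geometric prefactor, so only the sqrt(t) scaling carries over.

```python
import numpy as np

def washburn_length(t, gamma, theta_c, mu, d):
    """Classical Lucas-Washburn penetration length for a closed tube of
    diameter d: l(t) = sqrt(gamma * d * cos(theta_c) * t / (4 * mu))."""
    return np.sqrt(gamma * d * np.cos(theta_c) * t / (4.0 * mu))

# PDMS-like property values (assumed): high viscosity makes filling slow.
gamma, mu = 0.020, 3.5                    # N/m, Pa*s
theta_c, d = np.deg2rad(30.0), 10.0e-6    # contact angle, channel scale (m)
for t in (1.0, 10.0, 100.0):
    l = washburn_length(t, gamma, theta_c, mu, d)
    print(f"t = {t:6.1f} s: l = {l * 1e6:7.1f} um")
```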

In addition, a preliminary experimental setup for calibration tests on ultrasensitive PDMS cantilever beams is reported. One loading and unloading cycle was completed on a PDMS microcantilever beam (theoretical stiffness 0.5 pN/µm). Beam deflections were actuated by adjusting the buoyancy force on the beam, which was submerged in water, through the addition of heat. The expected loading and unloading curve was produced, albeit with significant noise. The experimental results indicate that the beam stiffness is a factor of six larger than predicted theoretically. One probable explanation is that the beam geometry changes when it is removed from the channel after curing, making the geometric assumptions used in the theoretical analysis inaccurate. This explanation is supported by experimental data discussed in the report. Other sources of error that could partially contribute to the divergent results are discussed, and improvements to the experimental setup for future work are suggested.
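A minimal sanity check of the quoted stiffness and of the geometry-change explanation, using the standard end-loaded cantilever formula k = 3EI/L^3 with I = wt^3/12; all dimensions and the PDMS modulus below are assumed for illustration, not taken from the report.

```python
def cantilever_stiffness(E, w, t, L):
    """End-load stiffness of a rectangular cantilever, k = 3*E*I/L^3 with
    I = w*t^3/12; note the strong k ~ t^3 sensitivity to thickness."""
    return 3.0 * E * (w * t ** 3 / 12.0) / L ** 3

# Assumed PDMS modulus and dimensions chosen to land near 0.5 pN/um.
E = 1.0e6                          # Young's modulus (Pa), ~1 MPa for PDMS
w, t, L = 20e-6, 4.6e-6, 1.0e-3    # width, thickness, length (m)
k = cantilever_stiffness(E, w, t, L)
print(f"k = {k * 1e6:.2f} pN/um")  # 1 N/m = 1e6 pN/um

# A beam ~1.8x thicker (t scaled by 6^(1/3)) would alone be six times
# stiffer, consistent with a post-demolding change in geometry.
k6 = cantilever_stiffness(E, w, t * 6 ** (1.0 / 3.0), L)
print(f"k(1.8t) = {k6 * 1e6:.2f} pN/um")
```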
Contributors: Sowers, Timothy Wayne (Author) / Rajagopalan, Jagannathan (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Hydraulic fracturing is an effective technique used in well stimulation to increase petroleum well production. A combination of multi-stage hydraulic fracturing and horizontal drilling has led to the recent boom in shale gas production, which has changed the energy landscape of North America.

During the fracking process, a highly pressurized mixture of water and proppants (sand and chemicals) is injected into a crack; the fluid fractures the surrounding rock structure, and the proppants help keep the fracture open. Over a longer period, however, these fractures tend to close due to the difference between the compressive stress exerted by the reservoir on the fracture and the fluid pressure inside the fracture. During production, the fluid pressure inside the fracture is reduced further, which can accelerate the closure of the fracture.

In this thesis, we study the stress distribution around a hydraulic fracture caused by fluid production. It is shown that fluid flow can induce a very high hoop stress near the fracture tip; as the pressure gradient increases, the stress concentration increases. If a fracture is very thin, the flow-induced stress along the fracture decreases, but the stress concentration at the fracture tip increases, becoming unbounded for an infinitely thin fracture.
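As an illustration of tip stress concentration, the sketch below evaluates the classical mode-I near-tip hoop stress, which grows like 1/sqrt(r) approaching the tip; this is the standard elastic crack-tip field, not the flow-induced solution derived in the thesis, and the stress intensity factor is a hypothetical magnitude.

```python
import numpy as np

def hoop_stress(K_I, r, theta):
    """Classical mode-I near-tip hoop stress,
    sigma_tt = K_I / sqrt(2*pi*r) * cos(theta/2)**3."""
    return K_I / np.sqrt(2.0 * np.pi * r) * np.cos(theta / 2.0) ** 3

K_I = 1.0e6   # stress intensity factor, Pa*sqrt(m) (hypothetical)
for r in (1.0, 0.1, 0.01, 0.001):          # distance ahead of the tip (m)
    print(f"r = {r:6.3f} m: hoop stress = {hoop_stress(K_I, r, 0.0):.3e} Pa")
```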

The results of the present study can be used to investigate the fracture closure problem, which in turn can lead to the development of better proppants so that prolific well production can be sustained over a long period of time.
Contributors: Pandit, Harshad Rajendra (Author) / Chen, Kang P (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
With the ever-increasing demand for high-end services, technology companies have been forced to operate high-performance servers. In addition to customer services, companies' internal need to store and manage huge amounts of data has also increased their need to invest in High Density Data Centers. As a result, the performance-to-size ratio of data centers has increased tremendously. Most of the power consumed by the servers is emitted as heat. In a High Density Data Center, the power per unit floor area is higher than in a regular data center; hence the thermal management of this type of data center is relatively complicated.

Because of the very high power emission within a small containment, improper maintenance can result in failure of the data center operation within a short period. Hence the response time of the cooler to a temperature rise of the servers is very critical: any delay in response leads to a steadily increasing temperature and hence to server failure.

In this paper, the significance of this delay time is investigated by performing CFD simulations of different variants of High Density Modules using ANSYS Fluent. It was found that the delay became longer as the size of the data center increased, but the overload temperature, i.e., the temperature rise beyond the set point, became lower with increasing data center size. The results were common to both the single-row and the double-row models. The causes of the increased delay are identified and explained in detail in this paper.
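The delay-versus-overshoot trade-off described above can be caricatured with a lumped first-order thermal model in which the cooler reaches full conductance only after a response delay; all parameters below are illustrative and are not taken from the CFD models.

```python
# Lumped first-order sketch: containment air heats under the IT load, and
# the cooler ramps up only after a response delay (all values assumed).
C = 5.0e5                    # thermal capacitance of air + racks (J/K)
P_it = 50.0e3                # IT heat load (W)
UA_full, UA_idle = 1.0e4, 3.0e3  # cooling conductance after/before response (W/K)
T_set, T_sup = 25.0, 15.0    # set point and supply temperature (C)
delay = 60.0                 # cooler response delay (s)

dt, T, overshoot = 1.0, T_set, 0.0
for step in range(600):      # 10 minutes of forward-Euler integration
    UA = UA_full if step * dt >= delay else UA_idle
    T += dt / C * (P_it - UA * (T - T_sup))
    overshoot = max(overshoot, T - T_set)
print(f"overload temperature (peak rise beyond set point): {overshoot:.2f} K")
```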
Contributors: Ramaraj, Dinesh Balaji (Author) / Gupta, Sandeep (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2015