Matching Items (87)

Description
Ten regional climate models (RCMs) and atmosphere-ocean generalized model pairings from the North America Regional Climate Change Assessment Program were used to estimate the shift of extreme precipitation due to climate change using present-day and future-day climate scenarios. RCMs emulate winter storms and one-day duration events at the sub-regional level. Annual maximum series were derived for each model pairing and modeling period, and for the annual and winter seasons. The reliability ensemble average (REA) method was used to qualify each RCM annual maximum series by its ability to reproduce historical records and, because there are no future records, to approximate the average prediction. These series determined (a) shifts in extreme precipitation frequencies and magnitudes, and (b) shifts in parameters during modeling periods. The REA method demonstrated that the winter season had lower REA factors than the annual season. For the winter season, the RCM pairing of the Hadley Regional Model 3 and the Geophysical Fluid Dynamics Laboratory atmospheric-land generalized model had the lowest REA factors. However, in replicating present-day climate, the pairing of the Abdus Salam International Center for Theoretical Physics' Regional Climate Model Version 3 with the Geophysical Fluid Dynamics Laboratory atmospheric-land generalized model was superior. Shifts of extreme precipitation in the 24-hour event were measured using the precipitation magnitude for each frequency in the annual maximum series, and the difference frequency curve in the generalized extreme-value function parameters. The average trend of all RCM pairings implied no significant shift in the winter annual maximum series; however, the REA-selected models showed an increase in annual-season precipitation extremes of 0.37 inches for the 100-year return period, and for the winter season approximately 0.57 inches for the same return period.
Shifts of extreme precipitation were estimated using predictions 70 years into the future based on RCMs. Although these models do not provide climate information for the intervening 70-year period, they do provide an indication of the behavior of future climate. The shift in extreme precipitation may be significant in the frequency distribution function, and will vary depending on each model-pairing condition. The proposed methodology addresses many of the uncertainties associated with current methodologies for dealing with extreme precipitation.
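The frequency analysis described above, fitting an annual maximum series with a generalized extreme-value distribution and reading off return-period magnitudes, can be sketched as follows. This is an illustrative reconstruction using `scipy.stats.genextreme`, not the thesis code; the data are synthetic and the distribution parameters are assumptions.

```python
# Sketch: fit a GEV distribution to an annual maximum precipitation series
# and estimate the 100-year return level, as in the frequency analysis above.
# Synthetic data; parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Synthetic annual maximum series (inches), one value per year
annual_max = genextreme.rvs(c=-0.1, loc=2.0, scale=0.5, size=60, random_state=rng)

# Fit GEV parameters (shape, location, scale) by maximum likelihood
shape, loc, scale = genextreme.fit(annual_max)

# The T-year return level is the (1 - 1/T) quantile of the fitted GEV
return_period = 100
level_100yr = genextreme.ppf(1 - 1 / return_period, shape, loc=loc, scale=scale)
print(f"100-year return level: {level_100yr:.2f} in")
```

Comparing the return level fitted for the present-day period against that of the future period would give a shift estimate of the kind reported above.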
Contributors: Riaño, Alejandro (Author) / Mays, Larry W. (Thesis advisor) / Vivoni, Enrique (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Climate change has been one of the major issues of global economic and social concern in the past decade. To quantitatively predict global climate change, the Intergovernmental Panel on Climate Change (IPCC) of the United Nations has organized a multi-national effort to use global atmosphere-ocean models to project anthropogenically induced climate changes in the 21st century. The computer simulations performed with those models and archived by the Coupled Model Intercomparison Project - Phase 5 (CMIP5) form the most comprehensive quantitative basis for predicting global environmental changes on decadal-to-centennial time scales. While the CMIP5 archives have been widely used for policy making, the inherent biases in the models have not been systematically examined. The main objective of this study is to validate the CMIP5 simulations of the 20th century climate against observations to quantify the biases and uncertainties in state-of-the-art climate models. Specifically, this work focuses on three major features in the atmosphere: the jet streams over the North Pacific and Atlantic Oceans, the low-level jet (LLJ) stream over central North America which affects the weather in the United States, and the near-surface wind field over North America which is relevant to energy applications. The errors in the model simulations of those features are systematically quantified, and the uncertainties in future predictions are assessed for stakeholders to use in climate applications. Additional atmospheric model simulations are performed to determine the sources of the errors in the climate models. The results reject the popular idea that errors in the sea surface temperature due to an inaccurate ocean circulation contribute to the errors in the major atmospheric jet streams.
Contributors: Kulkarni, Sujay (Author) / Huang, Huei-Ping (Thesis advisor) / Calhoun, Ronald (Committee member) / Peet, Yulia (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Multi-pulse particle tracking velocimetry (multi-pulse PTV) is a recently proposed flow measurement technique aiming to improve the performance of conventional PTV/PIV. In this work, multi-pulse PTV is assessed based on PTV simulations in terms of spatial resolution, velocity measurement accuracy and the capability of acceleration measurement. The errors in particle location, velocity measurement and acceleration measurement are analytically calculated and compared among quadruple-pulse, triple-pulse and dual-pulse PTV. The optimizations of triple-pulse and quadruple-pulse PTV are discussed, and criteria are developed to minimize the combined error in position, velocity and acceleration. Experimentally, the velocity and acceleration fields of a round impinging air jet are measured to test the triple-pulse technique. A high-speed beam-splitting camera and a custom 8-pulse laser system are utilized to achieve good timing flexibility and temporal resolution. A new method to correct the registration error between CCDs is also presented. Consequently, the velocity field shows good consistency between triple-pulse and dual-pulse measurements. The mean acceleration profile along the centerline of the jet is used as the ground truth for the verification of the triple-pulse PIV measurements of the acceleration fields. The instantaneous acceleration field of the jet is directly measured by triple-pulse PIV and presented. Accelerations up to 1,000 g are measured in these experiments.
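The acceleration measurement that triple-pulse PTV enables rests on second-order central differences of three particle positions recorded at equal time separations. A minimal sketch, assuming a 1-D trajectory and illustrative values for the pulse separation:

```python
import numpy as np

def triple_pulse_acceleration(x1, x2, x3, dt):
    """Estimate velocity and acceleration at the middle time level from
    three particle positions separated by equal intervals dt, using
    second-order central differences."""
    velocity = (x3 - x1) / (2 * dt)
    acceleration = (x1 - 2 * x2 + x3) / dt**2
    return velocity, acceleration

# Synthetic check: uniformly accelerated motion x(t) = 0.5*a*t^2 + v0*t
a_true, v0, dt = 9810.0, 2.0, 1e-4   # mm/s^2, mm/s, s (illustrative values)
t = np.array([0.0, dt, 2 * dt])
x = 0.5 * a_true * t**2 + v0 * t
v_est, a_est = triple_pulse_acceleration(x[0], x[1], x[2], dt)
print(v_est, a_est)
```

For a quadratic trajectory both central differences are exact; for real data, position noise is amplified by the 1/dt^2 factor, which is why the error analysis and pulse-timing optimization described above matter.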
Contributors: Ding, Liuyang (Author) / Adrian, Ronald J. (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Multi-touch tablets and smart phones are now widely used in both workplace and consumer settings. Interacting with these devices requires hand and arm movements that are potentially complex and poorly understood. Experimental studies have revealed differences in performance that could potentially be associated with injury risk. However, underlying causes for performance differences are often difficult to identify. For example, many patterns of muscle activity can potentially result in similar behavioral output. Muscle activity is one factor contributing to forces in tissues that could contribute to injury. However, experimental measurements of muscle activity and force for humans are extremely challenging. Models of the musculoskeletal system can be used to make specific estimates of neuromuscular coordination and musculoskeletal forces. However, existing models cannot easily be used to describe complex, multi-finger gestures such as those used for multi-touch human computer interaction (HCI) tasks. We therefore seek to develop a dynamic musculoskeletal simulation capable of estimating internal musculoskeletal loading during multi-touch tasks involving multiple digits of the hand, and to use the simulation to better understand complex multi-touch and gestural movements and potentially guide the design of technologies that reduce injury risk. To accomplish this, we focused on three specific tasks. First, we aimed to determine the optimal index finger muscle attachment points within the context of the established, validated OpenSim arm model, using measured moment arm data taken from the literature. Second, we aimed to derive moment arm values from experimentally measured muscle attachments and to use these values to determine muscle-tendon paths for both extrinsic and intrinsic muscles of the middle, ring and little fingers.
Finally, we aimed to explore differences in hand muscle activation patterns during zooming and rotating tasks on a tablet computer in twelve subjects. Toward this end, our musculoskeletal hand model will help better address neuromuscular coordination, safe gesture performance and internal loading for multi-touch applications.
Contributors: Yi, Chong-hwan (Author) / Jindrich, Devin L. (Thesis advisor) / Artemiadis, Panagiotis K. (Thesis advisor) / Phelan, Patrick (Committee member) / Santos, Veronica J. (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
The Volume-of-Fluid method is a popular method for interface tracking in multiphase applications within Computational Fluid Dynamics. To date, several algorithms exist for reconstruction of a geometric interface surface. Among these are the Finite Difference algorithm, the Least Squares Volume-of-Fluid Interface Reconstruction Algorithm (LVIRA), and the Efficient Least Squares Volume-of-Fluid Interface Reconstruction Algorithm (ELVIRA). Along with these geometric interface reconstruction algorithms, there exist several volume-of-fluid transport algorithms. This paper will discuss two operator-splitting advection algorithms and an unsplit advection algorithm. Using these three interface reconstruction algorithms and three advection algorithms, a comparison will be drawn to see how different combinations of these algorithms perform with respect to accuracy as well as computational expense.
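As background to the reconstruction algorithms named above, a step common to many of them is estimating the interface normal in a cell from finite differences of the volume-fraction field (Young's method). A minimal 2-D sketch, assuming a uniform grid; this is illustrative, not the paper's implementation:

```python
import numpy as np

def interface_normal(frac, i, j, h=1.0):
    """Estimate the interface normal in cell (i, j) from central differences
    of the volume-fraction field on a uniform grid of spacing h (the
    finite-difference step used by Young's-method VOF reconstruction)."""
    gx = (frac[i + 1, j] - frac[i - 1, j]) / (2 * h)
    gy = (frac[i, j + 1] - frac[i, j - 1]) / (2 * h)
    g = np.array([gx, gy])
    norm = np.linalg.norm(g)
    # The gradient points toward the liquid; the interface normal is taken
    # to point out of the liquid, hence the sign flip.
    return -g / norm if norm > 0 else g

# Volume fractions for a flat interface: liquid at j=0, gas at j=2
frac = np.zeros((3, 3))
frac[:, 0] = 1.0   # full (liquid) cells
frac[:, 1] = 0.5   # interface cells
n = interface_normal(frac, 1, 1)
print(n)
```

LVIRA and ELVIRA refine this kind of estimate by minimizing a least-squares mismatch of volume fractions over the neighboring cells, at higher computational cost, which is the accuracy/expense trade-off the comparison above examines.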
Contributors: Kedelty, Dominic (Author) / Herrmann, Marcus (Thesis advisor) / Huang, Huei-Ping (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Hydraulic fracturing is an effective technique used in well stimulation to increase petroleum well production. A combination of multi-stage hydraulic fracturing and horizontal drilling has led to the recent boom in shale gas production which has changed the energy landscape of North America.

During the fracking process, a highly pressurized mixture of water and proppants (sand and chemicals) is injected into a crack, which fractures the surrounding rock structure while the proppants help keep the fracture open. Over a longer period, however, these fractures tend to close due to the difference between the compressive stress exerted by the reservoir on the fracture and the fluid pressure inside the fracture. During production, the fluid pressure inside the fracture is reduced further, which can accelerate the closure of the fracture.

In this thesis, we study the stress distribution around a hydraulic fracture caused by fluid production. It is shown that fluid flow can induce a very high hoop stress near the fracture tip. As the pressure gradient increases, the stress concentration increases. If a fracture is very thin, the flow-induced stress along the fracture decreases, but the stress concentration at the fracture tip increases and becomes unbounded for an infinitely thin fracture.

The results from the present study can be used to study the fracture closure problem; ultimately, this can lead to the development of better proppants so that prolific well production can be sustained over a long period of time.
Contributors: Pandit, Harshad Rajendra (Author) / Chen, Kang P (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
With the ever-increasing demand for high-end services, technology companies have been forced to operate high performance servers. In addition to customer services, the companies' internal need to store and manage huge amounts of data has also increased their need to invest in High Density Data Centers. As a result, the performance-to-size ratio of the data center has increased tremendously. Most of the power consumed by the servers is emitted as heat. In a High Density Data Center, the power per floor space area is higher compared to a regular data center. Hence the thermal management of this type of data center is relatively complicated.

Because of the very high power emission in a smaller containment, improper maintenance can result in failure of the data center operation within a shorter period. Hence the response time of the cooler to the temperature rise of the servers is very critical. Any delay in response will steadily increase the temperature and can ultimately lead to server failure.

In this paper, the significance of this delay time is investigated by performing CFD simulation on different variants of High Density Modules using ANSYS Fluent. It was found that the delay grows longer as the size of the data center increases. However, the overload temperature, i.e., the temperature rise beyond the set point, became lower with the increase in data center size. The results were common to both the single-row and the double-row model. The causes of the increased delay are identified and explained in detail in this paper.
Contributors: Ramaraj, Dinesh Balaji (Author) / Gupta, Sandeep (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
Due to decreasing fossil fuel reserves, the world is shifting focus towards renewable sources of energy. With an annual average growth rate of 25%, wind is one of the foremost sources for harnessing cleaner energy for the production of electricity. Wind turbines have been developed to tap power from wind. As a single wind turbine is insufficient, multiple turbines are installed, forming a wind farm. Generally, wind farms can have hundreds to thousands of turbines concentrated in a small region. There have been multiple studies centering on the influence of weather on such wind farms, but no substantial research has focused on how wind farms affect the local climate. Technological advances have allowed the development of commercial wind turbines with a power output greater than 7.58 MW. This has led to a reduction in the required number of turbines and has optimized land usage. Hence, the current research considers a higher power density compared to previous works that relied on wind farm densities of 2 to 4 W/m^2. Simulations were performed using the Weather Research and Forecasting software provided by NCAR. The region of simulation is southern Oregon, with domains including both onshore and offshore wind farms. Unlike most previous works, where wind farms were considered to be on flat ground, the effects of topography have also been considered here. Study of seasonal effects over wind farms has provided better insight into changes in local wind direction. Analysis of the mean velocity difference across wind farms at heights of 10 m and 150 m gives an understanding of the wind velocity profiles. Results presented in this research tend to contradict the earlier belief that velocity reduces throughout the farm. Large-scale simulations have shown that sometimes more than 50% of the farm can have an increased wind velocity of up to 1 m/s at an altitude of 10 m.
Contributors: Kadiyala, Yogesh Rao (Author) / Huang, Huei-Ping (Thesis advisor) / Rajagopalan, Jagannathan (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2015
Description
The role of environmental factors that influence atmospheric propagation of sound originating from freeway noise sources is studied with a combination of field experiments and numerical simulations. Acoustic propagation models are developed and adapted for a refractive index that depends upon meteorological conditions. A high-resolution, multi-nested environmental forecasting model forced by coarse global analysis is applied to predict real meteorological profiles at fine scales. These profiles are then used as input for the acoustic models. Numerical methods for producing higher-resolution acoustic refractive index fields are proposed. These include spatially and temporally nested meteorological simulations with vertical grid refinement. It is shown that vertical nesting can improve the prediction of finer structures in near-ground temperature and velocity profiles, such as morning temperature inversions and low-level jet-like features. Accurate representation of these features is shown to be important for modeling sound refraction phenomena and for enabling accurate noise assessment. Comparisons are made using the acoustic model for predictions with profiles derived from meteorological simulations and from field experiment observations in Phoenix, Arizona. The challenges faced in simulating accurate meteorological profiles at high resolution for sound propagation applications are highlighted, and areas for possible improvement are discussed.



A detailed evaluation of the environmental forecast is conducted by comparing the Surface Energy Balance (SEB) obtained from observations made with an eddy-covariance flux tower against the SEB from simulations using several physical parameterizations of urban effects and planetary boundary layer schemes. Diurnal variation in the SEB constituent fluxes is examined in relation to surface layer stability and modeled diagnostic variables. Improvement is found when adapting parameterizations for Phoenix, with reduced errors in the SEB components. Finer model resolution (down to 333 m) is seen to have an insignificant (<1σ) influence on the mean absolute percent difference of 30-minute diurnal mean SEB terms. A new method of representing inhomogeneous urban development density, derived from observations of impervious surfaces with sub-grid scale resolution, is then proposed for mesoscale applications. This method was implemented and evaluated within the environmental modeling framework. Finally, a new semi-implicit scheme based on leapfrog time stepping and a fourth-order implicit time filter is developed.
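For context on the time-stepping development mentioned at the end, the baseline that such schemes refine is leapfrog integration stabilized by a time filter. The sketch below uses the classic Robert-Asselin filter on the oscillation equation; the fourth-order implicit filter is specific to the thesis and is not reproduced here.

```python
import numpy as np

def leapfrog_ra(f, y0, dt, nsteps, alpha=0.1):
    """Leapfrog time stepping with the classic Robert-Asselin filter.
    The filter (coefficient alpha) damps the spurious computational mode
    that plain leapfrog admits, at the cost of weak damping of the
    physical mode."""
    y_prev = y0
    y_curr = y0 + dt * f(y0)                       # start-up: forward Euler
    for _ in range(nsteps - 1):
        y_next = y_prev + 2 * dt * f(y_curr)       # leapfrog step
        # Robert-Asselin filter applied to the middle time level
        y_filt = y_curr + alpha * (y_prev - 2 * y_curr + y_next)
        y_prev, y_curr = y_filt, y_next
    return y_curr

# Test problem: oscillation equation dz/dt = i*omega*z, exact |z(t)| = 1
omega = 1.0
z = leapfrog_ra(lambda z: 1j * omega * z, 1.0 + 0j, dt=0.01, nsteps=1000)
print(abs(z))  # close to 1; slightly below due to filter damping
```

Higher-order implicit filters aim to keep the computational-mode damping while reducing the damping and order-reduction the Robert-Asselin filter inflicts on the physical mode.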
Contributors: Shaffer, Stephen R. (Author) / Moustaoui, Mohamed (Thesis advisor) / Mahalov, Alex (Committee member) / Fernando, Harindra J.S. (Committee member) / Ovenden, Nicholas C. (Committee member) / Huang, Huei-Ping (Committee member) / Calhoun, Ronald (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Increasing concentrations of carbon dioxide in the atmosphere will inevitably lead to long-term changes in climate that can have serious consequences. Controlling anthropogenic emission of carbon dioxide into the atmosphere, however, represents a significant technological challenge. Various chemical approaches have been suggested; perhaps the most promising of these is based on electrochemical trapping of carbon dioxide using pyridine and derivatives. Optimization of this process requires a detailed understanding of the mechanisms of the reactions of reduced pyridines with carbon dioxide, which are not currently well known. This thesis describes a detailed mechanistic study of the nucleophilic and Brønsted basic properties of the radical anion of bipyridine as a model pyridine derivative, formed by one-electron reduction, with particular emphasis on the reactions with carbon dioxide. A time-resolved spectroscopic method was used to characterize the key intermediates and determine the kinetics of the reactions of the radical anion and its protonated radical form. Using a pulsed nanosecond laser, the bipyridine radical anion could be generated in situ in less than 100 ns, which allows fast reactions to be monitored in real time. The bipyridine radical anion was found to be a very powerful one-electron donor, Brønsted base and nucleophile. It reacts by addition to the C=O bonds of ketones with a bimolecular rate constant around 1 × 10⁷ M⁻¹ s⁻¹. These are among the fastest nucleophilic additions that have been reported in the literature. Temperature dependence studies demonstrate very low activation energies and large Arrhenius pre-exponential parameters, consistent with very high reactivity. The kinetics of E2 elimination, where the radical anion acts as a base, and SN2 substitution, where the radical anion acts as a nucleophile, are also characterized by large bimolecular rate constants in the range ca. 10⁶–10⁷ M⁻¹ s⁻¹.
The pKa of the bipyridine radical anion was measured using a kinetic method and analysis of the data using a Marcus theory model for proton transfer. The bipyridine radical anion is found to have a pKa of 40±5 in DMSO. The reorganization energy for the proton transfer reaction was found to be 70±5 kJ/mol. The bipyridine radical anion was found to react very rapidly with carbon dioxide, with a bimolecular rate constant of 1 × 10⁸ M⁻¹ s⁻¹ and a small activation energy, whereas the protonated radical reacted with carbon dioxide with a rate constant that was too small to measure. The kinetic and thermodynamic data obtained in this work can be used to understand the mechanisms of the reactions of pyridines with carbon dioxide under reducing conditions.
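The Marcus-theory analysis mentioned above relates the activation free energy of a transfer step to the driving force ΔG° and the reorganization energy λ via ΔG‡ = (λ/4)(1 + ΔG°/λ)². A worked sketch using the reported λ ≈ 70 kJ/mol; the driving-force value and the Eyring-style rate estimate are illustrative assumptions, not results from the thesis.

```python
import math

def marcus_barrier(dG0, lam):
    """Marcus-theory activation free energy (kJ/mol) for a transfer step
    with driving force dG0 and reorganization energy lam (both kJ/mol)."""
    return (lam / 4.0) * (1.0 + dG0 / lam) ** 2

lam = 70.0   # reorganization energy reported above (kJ/mol)
dG0 = 0.0    # thermoneutral case, chosen for illustration
barrier = marcus_barrier(dG0, lam)
print(barrier)   # lam/4 = 17.5 kJ/mol when dG0 = 0

# Illustrative transition-state-theory rate estimate at 298 K
R, T, kBT_h = 8.314e-3, 298.0, 6.21e12   # kJ/(mol K), K, kB*T/h in s^-1
k_rate = kBT_h * math.exp(-barrier / (R * T))
```

In the kinetic method described above, the measured rate constants constrain ΔG° (and hence the pKa) once λ is fixed by fitting this quadratic free-energy relationship.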
Contributors: Ranjan, Rajeev (Author) / Gould, Ian R (Thesis advisor) / Buttry, Daniel A (Thesis advisor) / Yarger, Jeff (Committee member) / Seo, Dong-Kyun (Committee member) / Arizona State University (Publisher)
Created: 2015