Matching Items (93)
Description
Production from a high-pressure gas well at a high production rate encounters the risk of operating near the choking condition for a compressible flow in porous media. The unbounded gas pressure gradient near the point of choking, which is located near the wellbore, generates an effective tensile stress on the porous rock frame. This tensile stress almost always exceeds the tensile strength of the rock, causing tensile failure of the rock and leading to wellbore instability. In a porous rock, not all pores are choked at the same flow rate; when even one pore is choked, the flow through the entire porous medium should be considered choked, as the gas pressure gradient at the point of choking becomes singular. This thesis investigates the choking condition for compressible gas flow in a single microscopic pore. Quasi-one-dimensional analysis and axisymmetric numerical simulations of compressible gas flow in a pore-scale varicose tube with a number of bumps are carried out, and the local Mach number and pressure along the tube are computed for flow near the choking condition. The effects of tube length, inlet-to-outlet pressure ratio, the number of bumps, and the amplitude of the bumps on the choking condition are obtained. These critical values provide guidance for avoiding the choking condition in practice.
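
As an illustrative aside (standard gas dynamics, not taken from the thesis), quasi-one-dimensional analyses of this kind are conventionally built on the isentropic area-Mach relations, in which the flow chokes (M = 1) at the narrowest cross-section and the pressure gradient steepens without bound as that state is approached; the thesis's formulation for a bumpy pore-scale tube may differ in detail:

```latex
\frac{dA}{A} = \left(M^{2} - 1\right)\frac{du}{u},
\qquad
\frac{A}{A^{*}} = \frac{1}{M}\left[\frac{2}{\gamma + 1}
\left(1 + \frac{\gamma - 1}{2}M^{2}\right)\right]^{\frac{\gamma + 1}{2(\gamma - 1)}}
```

Here A* is the sonic (throat) area and gamma the ratio of specific heats; once the flow chokes, lowering the outlet pressure further cannot increase the mass flux through the constriction.
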
Contributors: Yuan, Jing (Author) / Chen, Kangping (Thesis advisor) / Wang, Liping (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
For CFD validation, hypersonic flow fields are simulated and compared with experimental data specifically designed to recreate conditions encountered by hypersonic vehicles. Simulated flow fields on a cone-ogive with flare at Mach 7.2 are compared with experimental data from the NASA Ames Research Center 3.5" hypersonic wind tunnel. A parametric study of turbulence models is presented and concludes that the k-kl-omega transition and SST transition turbulence models have the best correlation. Downstream of the flare's shockwave, good correlation is found for all boundary layer profiles, with some slight discrepancies in the static temperature near the surface. Simulated flow fields on a blunt cone with flare above Mach 10 are compared with experimental data from the CUBRC LENS hypervelocity shock tunnel. The lack of vibrational non-equilibrium calculations causes discrepancies in heat flux near the leading edge. Temperature profiles, where non-equilibrium effects are dominant, are compared with the dissociation of molecules to show the effects of dissociation on static temperature. Following the validation studies is a parametric analysis of a hypersonic inlet from Mach 6 to 20. Compressor performance is investigated for numerous cowl leading-edge locations up to speeds of Mach 10. The variable-cowl study showed positive trends in compressor performance parameters for a range of Mach numbers, arising from maximizing the intake of compressed flow. An interesting phenomenon, due to the change in shock-wave formation at different Mach numbers, developed inside the cowl and had a negative influence on the total pressure recovery. Investigation of the hypersonic inlet at different altitudes is performed to study the effects of Reynolds number and, consequently, turbulent viscous effects on compressor performance. Turbulent boundary-layer separation was identified as the cause of the change in compressor performance parameters with Reynolds number; this effect would not be noticeable if laminar flow were assumed. Mach numbers up to 20 are investigated to study the effects of vibrational and chemical non-equilibrium on compressor performance. Dissociation was found to have a direct impact on the trends in kinetic energy efficiency and compressor efficiency.
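
For context, the two inlet performance parameters named above are commonly defined as follows (standard textbook definitions, not results from this work), with station infinity the freestream and station 2 the inlet/compressor entrance:

```latex
\pi_{d} = \frac{p_{t,2}}{p_{t,\infty}} \quad \text{(total pressure recovery)},
\qquad
\eta_{KE} = \left(\frac{V'}{V_{\infty}}\right)^{2} \quad \text{(kinetic energy efficiency)}
```

where V' is the velocity the captured stream would attain if expanded isentropically from the inlet-exit state back to the freestream static pressure.
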
Contributors: Oliden, Daniel (Author) / Lee, Tae-Woo (Thesis advisor) / Peet, Yulia (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
A new theoretical model was developed utilizing energy conservation methods to determine the fully-atomized cross-sectional Sauter mean diameters of pressure-swirl atomizers. A detailed boundary-layer assessment led to the development of a new viscous dissipation model for droplets in the spray. Integral momentum methods were also used to determine the complete velocity history of the droplets and entrained gas in the spray. The model was extensively validated through comparison with experiment, and it was found that the model could predict the correct droplet size with high accuracy for a wide range of operating conditions. Detailed analysis showed that the energy model has a tendency to overestimate the droplet diameters for very low injection velocities, Weber numbers, and cone angles. A full parametric study was also performed in order to unveil some underlying behavior of pressure-swirl atomizers. It was found that at high injection velocities the kinetic energy in the spray is significantly larger than the surface tension energy; therefore, efforts to improve atomization quality by changing the liquid's surface tension may not be the most productive. The parametric studies also showed how the Sauter mean diameter and entrained velocities vary with increasing ambient gas density. Overall, the present energy model has the potential to provide quick and reasonably accurate solutions for a wide range of operating conditions, enabling the user to determine how different injection parameters affect spray quality.
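
For reference, the Sauter mean diameter mentioned above is the standard measure of spray fineness: the diameter of a droplet whose volume-to-surface-area ratio equals that of the entire spray. For a discrete distribution with n_i droplets of diameter d_i,

```latex
D_{32} = \frac{\sum_{i} n_{i} d_{i}^{3}}{\sum_{i} n_{i} d_{i}^{2}}
```
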
Contributors: Moradi, Ali (Author) / Lee, Taewoo (Thesis advisor) / Herrmann, Marcus (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Currently, to interact with computer-based systems one needs to learn the specific interface language of that system. In most cases, interaction would be much easier if it could be done in natural language. For that, we need a module which understands natural language and automatically translates it to the interface language of the system. The NL2KR (Natural Language to Knowledge Representation) v.1 system is a prototype of such a system. It is a learning-based system that learns new meanings of words in terms of lambda-calculus formulas, given an initial lexicon of some words and their meanings and a training corpus of sentences with their translations. As a part of this thesis, we take the prototype NL2KR v.1 system and enhance various components of it to make it usable for somewhat substantial and useful interface languages. We revamped the lexicon-learning components, the Inverse-lambda and Generalization modules, and redesigned the lexicon-learning algorithm which uses these components to learn new meanings of words. Similarly, we re-developed the system's built-in parser in Answer Set Programming (ASP) and also integrated an external parser with the system. Apart from this, we added some new rich features, such as various system configurations and a memory cache, in the learning component of the NL2KR system. These enhancements helped in learning more meanings of words, boosted the performance of the system by reducing the computation time by a factor of 8, and improved its usability. We evaluated the NL2KR system on the iRODS domain. iRODS is a rule-oriented data system, which helps in managing large sets of computer files using policies. This system provides a rule-oriented interface language whose syntactic structure is like that of any procedural programming language (e.g., C). However, direct translation of natural language (NL) to this interface language is difficult. So, for automatic translation of NL to this language, we define a simple intermediate Policy Declarative Language (IPDL) to represent the knowledge in the policies, which can then be directly translated to iRODS rules. We develop a corpus of 100 policy statements and manually translate them to the IPDL language. This corpus is then used for the evaluation of the NL2KR system. We performed 10-fold cross validation on the system. Furthermore, using this corpus, we illustrate how different components of our NL2KR system work.
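
As a toy illustration of the Inverse-lambda idea (a hypothetical example, not drawn from the thesis corpus): if the translation of the sentence "John walks" is known to be walks(john) and the lexicon already gives "John" the meaning john, then Inverse-lambda recovers the missing word meaning

```latex
\text{``walks''} \mapsto \lambda x.\, \mathit{walks}(x),
\qquad \text{since} \qquad
(\lambda x.\, \mathit{walks}(x))\,@\,\mathit{john} =_{\beta} \mathit{walks}(\mathit{john})
```

Generalization can then hypothesize analogous lambda-terms for syntactically similar words that never appeared in the training corpus.
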
Contributors: Kumbhare, Kanchan Ravishankar (Author) / Baral, Chitta (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
We solve the problem of activity verification in the context of sustainability. Activity verification is the process of proving the user assertions pertaining to a certain activity performed by the user. Our motivation lies in incentivizing the user for engaging in sustainable activities like taking public transport or recycling. Such incentivization schemes require the system to verify the claim made by the user. The system verifies these claims by analyzing the supporting evidence captured by the user while performing the activity. The proliferation of portable smart-phones in the past few years has provided us with a ubiquitous and relatively cheap platform, having multiple sensors like accelerometer, gyroscope, and microphone, to capture this evidence data in-situ. In this research, we investigate supervised and semi-supervised learning techniques for activity verification. Both techniques make use of the data set constructed from the evidence submitted by the user. Supervised learning uses annotated evidence data to build a function to predict the class labels of the unlabeled data points. The evidence data captured can be either unimodal or multimodal in nature. We use accelerometer data as evidence for transportation mode verification and image data as evidence for recycling verification. After training the system, we achieve a maximum accuracy of 94% when classifying the transport mode and 81% when detecting recycle activity. In the case of recycle verification, we could improve the classification accuracy by asking the user for more evidence. We present some techniques to ask the user for the next best piece of evidence that maximizes the probability of classification. Using these techniques for detecting recycle activity, the accuracy increases to 93%. The major disadvantage of supervised models is that they require extensive annotated training data, which is expensive to collect. Due to the limited training data, we look at graph-based inductive semi-supervised learning methods to propagate labels among the unlabeled samples. In the semi-supervised approach, we represent each instance in the data set as a node in a graph. Since it is a complete graph, edges interconnect these nodes, with each edge having a weight representing the similarity between the points. We propagate the labels in this graph based on the proximity of the data points to the labeled nodes, as sketched below. We estimate the performance of these algorithms by measuring how close the probability distribution of the data after label propagation is to the probability distribution of the ground truth data. Since labeling has a cost associated with it, in this thesis we propose two algorithms that help in selecting the minimum number of labeled points needed to propagate the labels accurately. Our proposed algorithm achieves a maximum 73% increase in performance compared to the baseline algorithm.
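
A minimal sketch of the graph-based label propagation described above, assuming an RBF similarity kernel over the complete graph; the function and parameter names are hypothetical, and the thesis's exact algorithm (and its label-selection strategies) may differ:

```python
import numpy as np

def propagate_labels(X, y, n_classes, sigma=1.0, n_iter=100):
    """X: (n, d) feature matrix; y: length-n labels, -1 for unlabeled points."""
    # Complete graph: RBF similarity between every pair of instances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    T = W / W.sum(axis=1, keepdims=True)   # row-normalized transition matrix

    F = np.zeros((X.shape[0], n_classes))  # per-node label distributions
    labeled = y >= 0
    F[labeled, y[labeled]] = 1.0
    for _ in range(n_iter):
        F = T @ F                          # diffuse label mass along edges
        F[labeled] = 0.0                   # re-clamp the labeled nodes
        F[labeled, y[labeled]] = 1.0
    return F.argmax(axis=1)                # predicted class per instance
```
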
Contributors: Desai, Vaishnav (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Gas turbines have become widely used in the generation of power for cities. They are used all over the world and must operate under a wide variety of ambient conditions. Every turbine has a temperature at which it operates at peak capacity. In order to attain this temperature in the hotter months, various cooling methods are used, such as refrigeration inlet cooling systems, evaporative methods, and thermal energy storage systems. One of the more widely used is the evaporative system, because it is among the safest and easiest methods to utilize. However, the behavior of water droplets within the inlet to the turbine has not been extensively studied or documented. It is important to understand how the droplets behave within the inlet so that water droplets above a critical diameter do not enter the compressor and damage the compressor blades. To this end, a FLUENT simulation was constructed to determine the behavior of the water droplets and whether any droplets remain at the exit of the inlet, along with their sizes. Several engineering drawings were obtained from SRP and studied in order to obtain the correct dimensions. The simulation was then set up using data obtained from SRP and Parker-Hannifin, the maker of the spray nozzles. Several sets of simulations were run in order to see how the water droplets behaved under various conditions. These results were then analyzed and quantified so that they could be easily understood. The results showed that the possible damage to the compressor increased with increasing temperature at a constant relative humidity. This is due in part to the fact that, to keep a constant relative humidity at varying temperatures, the mass fraction of water vapor in the air must change: as temperature increases, the water vapor mass fraction must increase to maintain a constant relative humidity. This in turn slightly increases the evaporation time of the water droplets, leading to more droplets, at larger diameters, exiting the inlet.
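
To make the constant-relative-humidity argument concrete, the sketch below (illustrative, not part of the thesis) computes the water-vapor mass fraction of moist air as temperature rises at fixed relative humidity, using the common Magnus approximation for the saturation pressure:

```python
import math

def vapor_mass_fraction(T_c, rh, p=101325.0):
    """T_c: air temperature in deg C; rh: relative humidity in [0, 1]; p: total pressure in Pa."""
    p_sat = 610.94 * math.exp(17.625 * T_c / (T_c + 243.04))  # Magnus approximation, Pa
    p_v = rh * p_sat                        # partial pressure of water vapor
    w = 0.622 * p_v / (p - p_v)             # humidity ratio, kg vapor per kg dry air
    return w / (1.0 + w)                    # mass fraction of vapor in the moist air

# At 40% relative humidity the vapor mass fraction nearly doubles
# for every ~10 degC of warming, as the printout shows.
for T in (20, 30, 40):
    print(T, round(vapor_mass_fraction(T, 0.4), 4))
```
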
Contributors: Hargrave, Kevin (Author) / Lee, Taewoo (Thesis advisor) / Huang, Huei-Ping (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The partitioning of available solar energy into different fluxes at the Earth's surface is important in determining different physical processes, such as turbulent transport, subsurface hydrology, and land-atmosphere interactions. Direct measurements of these turbulent fluxes are carried out using eddy-covariance (EC) towers. However, the distribution of EC towers is sparse due to their relatively high cost and practical difficulties in logistics and deployment. As a result, the data are temporally and spatially limited and inadequate for research at large scales, such as regional and global climate modeling. Besides field measurements, an alternative is to estimate turbulent fluxes based on the intrinsic relations between surface energy budget components, largely through thermodynamic equilibrium. These relations, referred to as relative efficiencies, have been included in several models to estimate the magnitude of turbulent fluxes in surface energy budgets, such as latent heat and sensible heat. In this study, three theoretical models, based respectively on a lumped heat transfer model, linear stability analysis, and the maximum entropy principle, were investigated. Model predictions of relative efficiencies were compared with turbulent flux data over different land covers, viz. lake, grassland, and suburban surfaces. Similar results were observed over the lake and suburban surfaces, but significant deviation was found over the vegetated surface. The relative efficiency of outgoing longwave radiation was found to deviate from theoretical predictions by orders of magnitude. Meanwhile, the results show that the energy partitioning process is influenced to a great extent by surface water availability. The study provides insight into which properties determine the energy partitioning process over different land covers and gives suggestions for future models.
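
For orientation, the flux comparisons above sit on top of the standard surface energy balance and simple partitioning measures such as the evaporative fraction and Bowen ratio (generic definitions; the specific relative-efficiency formulations of the three models are developed in the thesis):

```latex
R_{n} = H + LE + G,
\qquad
EF = \frac{LE}{H + LE},
\qquad
\beta = \frac{H}{LE}
```

where R_n is the net radiation, H the sensible heat flux, LE the latent heat flux, and G the ground (storage) heat flux.
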
Contributors: Yang, Jiachuan (Author) / Wang, Zhihua (Thesis advisor) / Huang, Huei-Ping (Committee member) / Vivoni, Enrique (Committee member) / Mays, Larry (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Ten regional climate models (RCMs) and atmosphere-ocean general circulation model pairings from the North America Regional Climate Change Assessment Program were used to estimate the shift of extreme precipitation due to climate change, using present-day and future-day climate scenarios. RCMs emulate winter storms and one-day duration events at the sub-regional level. Annual maximum series were derived for each model pairing and each modeling period, for both the annual and winter seasons. The reliability ensemble average (REA) method was used to qualify each RCM annual maximum series by its ability to reproduce historical records and to approximate the average predictions, since there are no future records. These series determined (a) shifts in extreme precipitation frequencies and magnitudes, and (b) shifts in parameters between the modeling periods. The REA method demonstrated that the winter season had lower REA factors than the annual season. For the winter season, the RCM pairing of the Hadley Regional Model 3 and the Geophysical Fluid Dynamics Laboratory atmosphere-land model had the lowest REA factors. However, in replicating present-day climate, the pairing of the Abdus Salam International Center for Theoretical Physics' Regional Climate Model Version 3 with the Geophysical Fluid Dynamics Laboratory atmosphere-land model was superior. Shifts of extreme precipitation in the 24-hour event were measured using the precipitation magnitude for each frequency in the annual maximum series, and the difference frequency curve in the generalized extreme-value function parameters. The average trend of all RCM pairings implied no significant shift in the winter annual maximum series; however, the REA-selected models showed an increase in annual-season precipitation extremes of 0.37 inches for the 100-year return period, and suggested an increase of approximately 0.57 inches for the same return period in the winter season. Shifts of extreme precipitation were estimated using predictions 70 years into the future based on the RCMs. Although these models do not provide climate information for the intervening 70-year period, they do provide an assertion about the behavior of the future climate. The shift in extreme precipitation may be significant in the frequency distribution function and will vary depending on each model-pairing condition. The proposed methodology addresses the many uncertainties associated with current methodologies for dealing with extreme precipitation.
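
For reference, annual maximum series of this kind are conventionally fit with the generalized extreme-value distribution with location, scale, and shape parameters (mu, sigma, xi), from which the T-year return level is the (1 - 1/T) quantile; this standard form is shown below, though the thesis's parameterization may differ:

```latex
F(x;\mu,\sigma,\xi) = \exp\left\{-\left[1 + \xi\,\frac{x - \mu}{\sigma}\right]^{-1/\xi}\right\},
\qquad 1 + \xi\,\frac{x - \mu}{\sigma} > 0
```
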
Contributors: Riaño, Alejandro (Author) / Mays, Larry W. (Thesis advisor) / Vivoni, Enrique (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Climate change has been one of the major issues of global economic and social concern in the past decade. To quantitatively predict global climate change, the Intergovernmental Panel on Climate Change (IPCC) of the United Nations has organized a multi-national effort to use global atmosphere-ocean models to project anthropogenically induced climate changes in the 21st century. The computer simulations performed with those models and archived by the Coupled Model Intercomparison Project - Phase 5 (CMIP5) form the most comprehensive quantitative basis for predicting global environmental changes on decadal-to-centennial time scales. While the CMIP5 archives have been widely used for policy making, the inherent biases in the models have not been systematically examined. The main objective of this study is to validate the CMIP5 simulations of the 20th century climate against observations to quantify the biases and uncertainties in state-of-the-art climate models. Specifically, this work focuses on three major features in the atmosphere: the jet streams over the North Pacific and Atlantic Oceans, the low-level jet (LLJ) stream over central North America, which affects the weather in the United States, and the near-surface wind field over North America, which is relevant to energy applications. The errors in the model simulations of those features are systematically quantified, and the uncertainties in future predictions are assessed for stakeholders to use in climate applications. Additional atmospheric model simulations are performed to determine the sources of the errors in climate models. The results reject a popular idea that errors in the sea surface temperature, due to an inaccurate ocean circulation, contribute to the errors in the major atmospheric jet streams.
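
A minimal sketch of the kind of bias bookkeeping such a validation involves (illustrative only: the file names, the variable name ua, and the averaging window are hypothetical):

```python
import numpy as np
import xarray as xr

# Hypothetical inputs: one CMIP5 model's zonal wind and an observational counterpart.
model = xr.open_dataset("cmip5_model_ua.nc")["ua"]
obs = xr.open_dataset("reanalysis_ua.nc")["ua"]

# Climatologies over a common historical window.
model_clim = model.sel(time=slice("1979", "2005")).mean("time")
obs_clim = obs.sel(time=slice("1979", "2005")).mean("time")

bias = model_clim - obs_clim                                  # map of systematic error
rmse = float(np.sqrt(((model_clim - obs_clim) ** 2).mean()))  # single error score
print(rmse)
```
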
Contributors: Kulkarni, Sujay (Author) / Huang, Huei-Ping (Thesis advisor) / Calhoun, Ronald (Committee member) / Peet, Yulia (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Software has a great impact on the energy efficiency of any computing system--it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores how software can influence the trade-off between energy consumption and system accuracy. In general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without greatly reducing accuracy. We introduce the Log-likelihood Ratio Test as a method to detect transitions, and explore how choices of sensor, feature calculations, and parameters concerning time segmentation affect the accuracy of this method. We discovered that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that does activity recognition. We discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can result in great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the "Great Compromise." We found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform. We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as FFT or DCT does not compress the signal and would use more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature. For scalar features, energy consumption is inversely proportional to grouping size, so it is reduced as grouping size goes up. For features whose size depends on the grouping size, such as FFT, energy increases with the logarithm of grouping size, so energy consumption increases slowly as grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption, and that the energy consumed by the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
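
A minimal sketch of a sliding-window log-likelihood ratio test for transition detection, under the simplifying assumption of Gaussian-distributed feature values; the window size and thresholding policy here are hypothetical, not the thesis's exact formulation:

```python
import numpy as np

def gauss_loglik(x):
    """Log-likelihood of samples x under their own fitted Gaussian."""
    mu, var = x.mean(), x.var() + 1e-12
    return -0.5 * len(x) * np.log(2 * np.pi * var) - ((x - mu) ** 2).sum() / (2 * var)

def transition_scores(signal, w=50):
    """Score each time step by how much better two separate Gaussians
    explain the adjacent windows than a single shared Gaussian."""
    scores = []
    for t in range(w, len(signal) - w):
        left, right = signal[t - w:t], signal[t:t + w]
        llr = gauss_loglik(left) + gauss_loglik(right) - gauss_loglik(signal[t - w:t + w])
        scores.append(llr)
    return np.array(scores)

# Time steps whose score exceeds a chosen threshold are candidate activity
# transitions; the full classifier then only needs to run on those segments.
```
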
Contributors: Boyd, Jeffrey Michael (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Shrivastava, Aviral (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2014