Matching Items (94)
Description

A numerical study of incremental spin-up and spin-up from rest of a thermally stratified fluid enclosed within a right circular cylinder with a rigid bottom, rigid side walls, and a stress-free upper surface is presented. Thermally stratified spin-up is a typical example of baroclinity: it is initiated by a sudden increase in rotation rate, and the resulting tilting of the isotherms gives rise to a baroclinic source of vorticity. Research by Smirnov et al. [2010a] showed the differences in the evolution of instabilities when Dirichlet and Neumann thermal boundary conditions were applied at the top and bottom walls. The study of parametric variations carried out in this dissertation confirmed the instability patterns they observed for the given aspect ratio and for Rossby numbers greater than 0.5. The results also reveal that the flow maintained axisymmetry and stability in short-aspect-ratio containers, independent of the amount of rotational increment imparted. Investigation of the vorticity components provides a framework for the baroclinic vorticity feedback mechanism, which plays an important role in the delayed rise of instabilities when Dirichlet thermal boundary conditions are applied.
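For context, the baroclinic source of vorticity referred to in this abstract is conventionally the baroclinic torque term of the vorticity equation, given here in its standard textbook form (added for reference, not quoted from the thesis): \( \left.\frac{D\boldsymbol{\omega}}{Dt}\right|_{\mathrm{baroclinic}} = \frac{1}{\rho^{2}}\,\nabla\rho \times \nabla p \). The term is nonzero only when constant-density surfaces (set here by the tilted isotherms) are misaligned with constant-pressure surfaces.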
Contributors: Kher, Aditya Deepak (Author) / Chen, Kangping (Thesis advisor) / Huang, Huei-Ping (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Image processing in canals, rivers, and other bodies of water has been a very important concern. This research used image processing to obtain photographic evidence from a site, which helps in monitoring the conditions of the water body and its surroundings. Images are captured using a digital camera and stored on a datalogger; the images are then retrieved using a cellular or satellite modem. A MATLAB program was designed to obtain the water level simply by entering the file name into the program, and a curve-fit model was created to determine the contrast parameters. The contrast parameters were obtained from the grayscale image data, mainly the mean and variance of the intensity values. The enhanced images are used to determine the water level by taking pixel-intensity plots along the region of interest. The water level obtained is accurate to within 2% of the actual level observed in the image. High-speed imaging in microchannels has various applications in industrial, medical, and other fields; in the medical field it is tested using blood samples. The proposed experimental procedure determines the flow duration and the defects observed in these channels using fluids introduced into the microchannel, namely a water-based dye and whole milk. The viscosity of the fluid reveals different flow patterns and defects in the microchannel. The defects observed range from a small disturbance of the flow pattern to an extreme defect in the channel, such as obstruction of flow or deformation of the channel. The sample needs to be further analyzed by SEM to gain better insight into the defects.
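A minimal sketch of the water-level step described above, written in Python rather than the thesis's MATLAB. The file name, column index, region of interest, and the gauge_offset / cm_per_pixel conversion in the usage comment are illustrative assumptions, not values from the study:

```python
# Hypothetical sketch, not the thesis's MATLAB code: read a grayscale frame,
# apply a simple mean/variance-based contrast stretch, then locate the water
# line as the sharpest intensity change along a column in the region of interest.
import numpy as np
from PIL import Image

def water_level_pixels(image_path, column, row_range):
    """Return the row index of the water line along one pixel column."""
    gray = np.asarray(Image.open(image_path).convert("L"), dtype=float)

    # Stand-in for the curve-fit contrast model described in the abstract:
    # stretch intensities using the image's own mean and standard deviation.
    mean, std = gray.mean(), gray.std() + 1e-9
    enhanced = np.clip((gray - mean) / (3 * std) * 127.5 + 127.5, 0, 255)

    # Intensity profile along the chosen column, restricted to the ROI.
    top, bottom = row_range
    profile = enhanced[top:bottom, column]

    # The water surface shows up as the largest jump in the profile.
    water_row = top + int(np.argmax(np.abs(np.diff(profile))))
    return water_row

# Illustrative usage (gauge_offset and cm_per_pixel are hypothetical calibration values):
# level = gauge_offset - water_level_pixels("site.jpg", 320, (100, 900)) * cm_per_pixel
```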
Contributors: Shasedhara, Abhijeet Bangalore (Author) / Lee, Taewoo (Thesis advisor) / Huang, Huei-Ping (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

With the advent of technologies such as web services, service-oriented architecture, and cloud computing, modern organizations have to deal with policies such as firewall policies for securing networks and XACML (eXtensible Access Control Markup Language) policies for controlling access to critical information and resources. Management of these policies is an extremely important task in order to avoid unintended security leakage via illegal accesses while maintaining proper access to services for legitimate users. Managing and maintaining access control policies manually over a long period of time is an error-prone task because of their inherently complex nature. Existing tools and mechanisms for policy management use different approaches for different types of policies. This thesis presents a generic framework that provides a unified approach to the analysis and management of different types of policies. The generic approach captures the common semantics and structure of different access control policies with the notion of a policy ontology. The policy ontology representation is then utilized to effectively analyze and manage the policies. This thesis also discusses a proof-of-concept implementation of the proposed generic framework and demonstrates how efficiently this unified approach can be used for the analysis and management of different types of access control policies.
Contributors: Kulkarni, Ketan (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Sparse learning is a technique in machine learning for feature selection and dimensionality reduction, used to find a sparse set of the most relevant features. In any machine learning problem there is a considerable amount of irrelevant information, and separating relevant information from the irrelevant has been a topic of focus. In supervised learning such as regression, the data consist of many features, and only a subset of the features may be responsible for the result. The features might also have to satisfy special structural requirements, which introduces additional complexity for feature selection. The sparse learning package provides a set of algorithms for learning a sparse set of the most relevant features for both regression and classification problems. Structural dependencies among features, which introduce additional requirements, are also supported by the package: the features may be grouped together, there may exist hierarchies and overlapping groups among them, and there may be requirements for selecting the most relevant groups. Despite yielding sparse solutions, the solutions are not guaranteed to be robust; for the selection to be robust, certain techniques provide theoretical justification of why particular features are selected. Stability selection is one such feature selection method, which allows the use of existing sparse learning methods to select the stable set of features for a given training sample. This is done by assigning a probability to each feature: the training data are sub-sampled, a specific sparse learning technique is used to learn the relevant features, this is repeated a large number of times, and the probability is taken as the fraction of runs in which a feature is selected. Cross-validation further allows the best parameter value to be selected over a range of values, by choosing the parameter value that gives the maximum accuracy score. With such a combination of algorithms, with good convergence guarantees, stable feature selection properties, and the inclusion of various structural dependencies among features, the sparse learning package is a powerful tool for machine learning research. Its modular structure, C implementation, and ATLAS integration for fast linear algebraic subroutines make it one of the best tools for large sparse settings. The varied collection of algorithms, support for group sparsity, and batch algorithms are a few of the notable features of the SLEP package, and they can be used in a variety of fields to infer relevant elements. Alzheimer's disease (AD) is a neurodegenerative disease which gradually leads to dementia. The SLEP package is used for feature selection to obtain the most relevant biomarkers from the available AD dataset, and the results show that, indeed, only a subset of the features is required to gain valuable insights.
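A hedged sketch of the stability-selection procedure summarized above. It substitutes scikit-learn's Lasso for the SLEP solvers the thesis actually uses, and the function name, parameter values, and 0.6 threshold mentioned in the comments are illustrative assumptions:

```python
# Stability selection, sketched: repeatedly subsample the training data, run a
# sparse learner, and record how often each feature receives a nonzero weight.
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_subsamples=100, frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=int(frac * n), replace=False)  # subsample rows
        model = Lasso(alpha=alpha).fit(X[idx], y[idx])           # sparse fit on the subsample
        counts += model.coef_ != 0                               # tally selected features
    return counts / n_subsamples  # empirical selection probability per feature

# Features whose probability exceeds a chosen threshold (e.g. 0.6) form the stable set;
# cross-validation over alpha would pick the regularization value, as the abstract describes.
```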
Contributors: Thulasiram, Ramesh (Author) / Ye, Jieping (Thesis advisor) / Xue, Guoliang (Committee member) / Sen, Arunabha (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The evolution of single hairpin vortices and of multiple interacting hairpin vortices is studied in direct numerical simulations of channel flow at Re-tau = 395. The purpose of this study is to observe the effects of increased Reynolds number and varying initial conditions on the growth of hairpins and on the conditions under which single hairpins autogenerate hairpin packets. Hairpin vortices are believed to provide a unified picture of wall turbulence and play an important role in the production of Reynolds shear stress, which is directly related to turbulent drag. The structures of the initial three-dimensional vortices are extracted from the two-point spatial correlation of the fully turbulent direct numerical simulation of the velocity field by linear stochastic estimation and embedded in a mean flow having the profile of the fully turbulent flow. The Reynolds number of the present simulation is more than twice that of the Re-tau = 180 flow from earlier literature, and the conditional events used to define the stochastically estimated single-vortex initial conditions include a number of new types of events, such as quasi-streamwise vorticity and Q4 events. The effects of parameters such as strength, asymmetry, and position are evaluated and compared with existing results in the literature. This study then attempts to answer questions concerning how vortex mergers produce larger-scale structures, a process that may contribute to the growth of length scale with increasing distance from the wall in turbulent wall flows. Multiple vortex interactions are studied in detail.
Contributors: Parthasarathy, Praveen Kumar (Author) / Adrian, Ronald (Thesis advisor) / Huang, Huei-Ping (Committee member) / Herrmann, Marcus (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

Action language C+ is a formalism for describing properties of actions, which is based on nonmonotonic causal logic. The definite fragment of C+ is implemented in the Causal Calculator (CCalc), which is based on the reduction of nonmonotonic causal logic to propositional logic. This thesis describes the language of CCalc in terms of answer set programming (ASP), based on the translation of nonmonotonic causal logic to formulas under the stable model semantics. I designed a standard library which describes the constructs of the input language of CCalc in terms of ASP, allowing a simple modular method to represent CCalc input programs in the language of ASP. Using the combination of system F2LP and answer set solvers, this method achieves functionality close to that of CCalc while taking advantage of answer set solvers to yield efficient computation that is orders of magnitude faster than CCalc for many benchmark examples. In support of this, I created an automated translation system Cplus2ASP that implements the translation and encoding method and automatically invokes the necessary software to solve the translated input programs.
Contributors: Casolary, Michael (Author) / Lee, Joohyung (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

In order to catch the smartest criminals in the world, digital forensics examiners need a means of collaborating and sharing information with each other and outside experts that is not prohibitively difficult. However, standard operating procedures and the rules of evidence generally disallow the use of the collaboration software and techniques that are currently available because they do not fully adhere to the dictated procedures for the handling, analysis, and disclosure of items relating to cases. The aim of this work is to conceive and design a framework that provides a completely new architecture that 1) can perform fundamental functions that are common and necessary to forensic analyses, and 2) is structured such that it is possible to include collaboration-facilitating components without changing the way users interact with the system sans collaboration. This framework is called the Collaborative Forensic Framework (CUFF). CUFF is constructed from four main components: Cuff Link, Storage, Web Interface, and Analysis Block. With the Cuff Link acting as a mediator between components, CUFF is flexible in both the method of deployment and the technologies used in implementation. The details of a realization of CUFF are given, which uses a combination of Java, the Google Web Toolkit, Django with Apache for a RESTful web service, and an Ubuntu Enterprise Cloud using Eucalyptus. The functionality of CUFF's components is demonstrated by the integration of an acquisition script designed for Android OS-based mobile devices that use the YAFFS2 file system. While this work has obvious application to examination labs which work under the mandate of judicial or investigative bodies, security officers at any organization would benefit from the improved ability to cooperate in electronic discovery efforts and internal investigations.
Contributors: Mabey, Michael Kent (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2011
Description

The digital forensics community has neglected email forensics as a process, despite the fact that email remains an important tool in the commission of crime. Current forensic practices focus mostly on disk forensics, while email forensics is left as an analysis task stemming from that practice. Because there is no well-defined process for email forensics, the comprehensiveness, extensibility of tools, uniformity of evidence, usefulness in collaborative/distributed environments, and consistency of investigations are hindered. At present there exists little support for discovering, acquiring, and representing web-based email, despite its widespread use. To remedy this, a systematic process for discovering, acquiring, and representing web-based email for email forensics is presented, one that is integrated into the normal forensic analysis workflow and accommodates the distinct characteristics of email evidence. This process focuses on detecting the presence of non-obvious artifacts related to email accounts, retrieving the data from the service provider, and representing email in a well-structured format based on existing standards. As a result, developers and organizations can collaboratively create and use analysis tools that analyze email evidence from any source in the same fashion, and examiners can access additional data relevant to their forensic cases. Finally, an extensible framework implementing this novel process-driven approach is presented in an attempt to address the problems of comprehensiveness, extensibility, uniformity, collaboration/distribution, and consistency within forensic investigations involving email evidence.
Contributors: Paglierani, Justin W (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Santanam, Raghu T (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Ten regional climate model (RCM) and atmosphere-ocean generalized model pairings from the North American Regional Climate Change Assessment Program were used to estimate the shift of extreme precipitation due to climate change under present-day and future-day climate scenarios. RCMs emulate winter storms and one-day-duration events at the sub-regional level. Annual maximum series were derived for each model pairing and each modeling period, and for the annual and winter seasons. The reliability ensemble average (REA) method was used to qualify each RCM annual maximum series by its ability to reproduce historical records and approximate the average predictions, because there are no future records. These series determined (a) shifts in extreme precipitation frequencies and magnitudes, and (b) shifts in distribution parameters between modeling periods. The REA method demonstrated that the winter season had lower REA factors than the annual season. For the winter season, the pairing of the Hadley Regional Model 3 with the Geophysical Fluid Dynamics Laboratory atmospheric-land generalized model had the lowest REA factors; however, in replicating present-day climate, the pairing of the Abdus Salam International Center for Theoretical Physics' Regional Climate Model Version 3 with the Geophysical Fluid Dynamics Laboratory atmospheric-land generalized model was superior. Shifts of extreme precipitation in the 24-hour event were measured using the precipitation magnitude for each frequency in the annual maximum series and the difference frequency curve in the generalized extreme value distribution parameters. The average trend across all RCM pairings implied no significant shift in the winter annual maximum series; however, the REA-selected models showed an increase in annual-season precipitation extremes of about 0.37 inches for the 100-year return period, and for the winter season suggested approximately 0.57 inches for the same return period. Shifts of extreme precipitation were estimated using predictions 70 years into the future based on the RCMs. Although these models do not provide climate information for the intervening 70-year period, they provide an assertion about the behavior of future climate. The shift in extreme precipitation may be significant in the frequency distribution function and will vary depending on each model-pairing condition. The proposed methodology addresses the many uncertainties associated with current methodologies for dealing with extreme precipitation.
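An illustrative sketch (not the study's code) of the two quantities the abstract compares: fitting a generalized extreme value (GEV) distribution to an annual maximum series and reading off the 100-year return level for the present-day and future periods. The function and variable names are assumptions:

```python
# Fit a GEV distribution to an annual maximum series and compute a return level.
import numpy as np
from scipy.stats import genextreme

def return_level(annual_maxima, return_period=100):
    """GEV quantile for the given return period in years."""
    shape, loc, scale = genextreme.fit(np.asarray(annual_maxima, dtype=float))
    return genextreme.ppf(1.0 - 1.0 / return_period, shape, loc=loc, scale=scale)

# Hypothetical present-day and future series of 24-hour precipitation maxima (inches):
# shift_100yr = return_level(future_ams) - return_level(present_ams)
```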
Contributors: Riaño, Alejandro (Author) / Mays, Larry W. (Thesis advisor) / Vivoni, Enrique (Committee member) / Huang, Huei-Ping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Gas turbines have become widely used in the generation of power for cities. They are used all over the world and must operate under a wide variety of ambient conditions. Every turbine has a temperature at which it operates at peak capacity. In order to attain this temperature in the hotter months, various cooling methods are used, such as refrigeration inlet cooling systems, evaporative methods, and thermal energy storage systems. One of the most widely used is the evaporative system, because it is one of the safest and easiest methods to utilize. However, the behavior of water droplets within the inlet to the turbine has not been extensively studied or documented. It is important to understand how the droplets behave within the inlet so that water droplets above a critical diameter do not enter the compressor and damage the compressor blades. To this end, a FLUENT simulation was constructed to determine the behavior of the water droplets and whether any droplets remain at the exit of the inlet, along with their size. Several engineering drawings were obtained from SRP and studied in order to obtain the correct dimensions. The simulation was then set up using data obtained from SRP and from Parker-Hannifin, the maker of the spray nozzles, and several sets of simulations were run to see how the water droplets behaved under various conditions. These results were then analyzed and quantified so that they could be easily understood. The results showed that the possible damage to the compressor increased with increasing temperature at constant relative humidity. This is due in part to the fact that, in order to keep a constant relative humidity at varying temperatures, the mass fraction of water vapor in the air must change: as temperature increases, the water vapor mass fraction must increase to maintain a constant relative humidity. This in turn slightly increases the evaporation time of the water droplets, which leads to more droplets exiting the inlet, and at larger diameters.
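A back-of-the-envelope check (not taken from the thesis) of the statement above: at fixed relative humidity, the water-vapor mass fraction of moist air rises with temperature, because the saturation vapor pressure rises with temperature. The Magnus approximation and the sample temperatures below are assumptions used only for illustration:

```python
# Water-vapor mass fraction of moist air at fixed relative humidity.
import math

def vapor_mass_fraction(temp_c, rel_humidity, pressure_pa=101325.0):
    # Magnus approximation for saturation vapor pressure over water (Pa).
    p_sat = 610.94 * math.exp(17.625 * temp_c / (temp_c + 243.04))
    p_v = rel_humidity * p_sat                 # partial pressure of water vapor
    w = 0.622 * p_v / (pressure_pa - p_v)      # humidity ratio, kg vapor per kg dry air
    return w / (1.0 + w)                       # mass fraction of vapor in the moist air

# Illustrative check at 40% relative humidity: the mass fraction grows with temperature.
for t in (25.0, 35.0, 45.0):
    print(t, round(vapor_mass_fraction(t, 0.4), 4))
```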
Contributors: Hargrave, Kevin (Author) / Lee, Taewoo (Thesis advisor) / Huang, Huei-Ping (Committee member) / Chen, Kangping (Committee member) / Arizona State University (Publisher)
Created: 2013