This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.


Description
Currently, Java is making its way into embedded systems and mobile devices such as Android phones. Programs written in Java are compiled into machine-independent binary class files (bytecode), which a Java Virtual Machine (JVM) executes. The Java platform additionally specifies the Java Native Interface (JNI). JNI allows Java code that runs within a JVM to interoperate with applications or libraries that are written in other languages and compiled to the host CPU ISA. JNI plays an important role in embedded systems, as it provides a mechanism for interacting with libraries specific to the platform. This thesis addresses the overhead incurred in JNI due to reflection and serialization when objects are accessed on Android-based mobile devices, and it provides techniques to reduce this overhead. It also provides an API to access an object through its reference by pinning its memory location. The Android emulator was used to evaluate the performance of these techniques, and a 5-10% performance gain was observed with the new Java Native Interface.
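For orientation, the sketch below shows the conventional JNI boundary that such work optimizes; the class, library name, and native method are hypothetical examples, not the thesis's actual API.

```java
// Minimal sketch of a conventional JNI boundary (hypothetical names).
// The native implementation would typically resolve the field on every
// call via GetFieldID/GetIntField, which is the kind of per-access
// overhead that reflection-free, pinned access aims to avoid.
public class PinningDemo {
    static { System.loadLibrary("pinningdemo"); } // expects libpinningdemo.so

    private int counter = 42; // field the native side reads back

    // Implemented in C/C++ against the JNI headers.
    public native int readCounterNative();

    public static void main(String[] args) {
        System.out.println(new PinningDemo().readCounterNative());
    }
}
```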
Contributors: Chandrian, Preetham (Author) / Lee, Yann-Hang (Thesis advisor) / Davulcu, Hasan (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
As pointed out in the keynote speech by H. V. Jagadish at SIGMOD'07, and as commonly agreed in the database community, the usability of structured data by casual users is as important as the functionality of data management systems. A major difficulty in using structured data is easily retrieving information from it given a user's information needs. Learning and using a structured query language (e.g., SQL or XQuery) is overwhelmingly burdensome for most users: not only are these languages sophisticated, but users also need to know the data schema. Keyword search provides a convenient way to access structured data and can significantly enhance its usability. However, processing keyword search on structured data is challenging due to various types of ambiguity, such as structural ambiguity (keyword queries have no structure), keyword ambiguity (the keywords may not be accurate), and user preference ambiguity (the user may have implicit preferences that are not indicated in the query), as well as efficiency challenges due to the large search space. This dissertation performs an expansive study of keyword search processing techniques as a gateway for users to access structured data and retrieve desired information. The key issues addressed include: (1) resolving structural ambiguities in keyword queries by generating meaningful query results, which involves identifying relevant keyword matches, identifying return information, and composing query results based on relevant matches and return information; (2) resolving structural, keyword, and user preference ambiguities through result analysis, including snippet generation, result differentiation, result clustering, result summarization/query expansion, etc.; (3) resolving the efficiency challenge in processing keyword search on structured data by utilizing and efficiently maintaining materialized views. These works deliver significant technical contributions towards building a full-fledged search engine for structured data.
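As a toy illustration of the problem setting only (not the dissertation's algorithms), the Java sketch below answers a keyword query over structured rows by returning rows whose fields collectively cover all query keywords; the data is hypothetical.

```java
import java.util.*;

// Toy keyword search over structured rows: a row is a "relevant match"
// if its field values cover every query keyword.
public class KeywordSearchToy {
    public static void main(String[] args) {
        List<Map<String, String>> rows = List.of(
            Map.of("title", "Keyword search on XML data", "author", "Liu"),
            Map.of("title", "Query processing surveys", "author", "Chen"));
        Set<String> query = Set.of("keyword", "liu");
        for (Map<String, String> row : rows) {
            String text = String.join(" ", row.values()).toLowerCase();
            if (query.stream().allMatch(text::contains)) {
                System.out.println(row); // candidate result to compose/rank
            }
        }
    }
}
```

A real system must additionally decide which attributes to return and how to rank among many such matches, which is precisely where the ambiguities above arise.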
Contributors: Liu, Ziyang (Author) / Chen, Yi (Thesis advisor) / Candan, Kasim S. (Committee member) / Davulcu, Hasan (Committee member) / Jagadish, H. V. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
The trend towards using recycled materials on new construction projects is growing as the costs of construction materials increase and awareness of our responsibility to be good stewards of the environment is heightened. While recycled asphalt is sometimes used in pavements, its use as structural fill has been hindered by concern that it is susceptible to large long-term deformations (creep), which prevents its use in a great many geotechnical applications. While asphalt/soil blends are often proposed as an alternative to 100% recycled asphalt fill, little data is available characterizing the geotechnical properties of recycled asphalt/soil blends. In this dissertation, the geotechnical properties of five different recycled asphalt/soil blends are characterized. The data include the particle size distribution, plasticity index, creep, and shear strength of each blend. Blends with 0%, 25%, 50%, 75%, and 100% recycled asphalt were tested. As the recycled asphalt material used for testing had particle sizes up to 1.5 inches, a large 18-inch-diameter direct shear apparatus was used to determine the shear strength and creep characteristics of the material. The results of the testing program confirm that the creep potential of recycled asphalt is a geotechnical concern when the material is subjected to loads greater than 1,500 pounds per square foot (psf). In addition, the test results demonstrate that the amount of soil blended with the recycled asphalt can greatly influence the creep and shear strength behavior of the composite material. Furthermore, there appears to be an optimal blend ratio at which the composite material has better shear strength properties than either the recycled asphalt or the virgin soil alone.
Contributors: Schaper, Jeffery M. (Author) / Kavazanjian, Edward (Thesis advisor) / Houston, Sandra L. (Committee member) / Zapata, Claudia E. (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
In geotechnical engineering, measuring the unsaturated hydraulic conductivity of fine-grained soils can be time consuming and tedious. The applications that require knowledge of the unsaturated hydraulic conductivity function are numerous; in geotechnical engineering, they range from modeling seepage through landfill covers to determining the infiltration of water under a building slab. The unsaturated hydraulic conductivity function can be measured using various direct and indirect techniques. The instantaneous profile method has been found to be the most promising unsteady-state method for measuring the unsaturated hydraulic conductivity function of fine-grained soils over a wide range of suction values. The instantaneous profile method can be modified by using different techniques to measure suction and water content, and also through the way water is introduced to or removed from the soil profile. In this study, the instantaneous profile method was modified by creating duplicate soil samples compacted into cylindrical tubes at two different water contents. The techniques used in this duplicate method to measure water content and matric suction included volumetric moisture probes, manual water content measurements, and filter paper tests. The experimental testing conducted in this study provided insight into determining the unsaturated hydraulic conductivity of a sandy clay soil using the instantaneous profile method, and recommendations are provided for further evaluation. Overall, this study demonstrated that the presence of cracks has no significant impact on the hydraulic behavior of the soil in high suction ranges. The results of this study do not examine the behavior of the unsaturated hydraulic conductivity of cracked soil at low suction or at moisture contents near saturation.
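For reference, the instantaneous profile method in its textbook form (stated generically here, not as the specific variant developed in this study) recovers the conductivity from transient profiles: the flux at a given depth is obtained from the rate of change of water stored beyond that depth, and is divided by the measured total head gradient.

```latex
% Generic statement of the instantaneous profile method:
% theta(z,t) = volumetric water content, H = total hydraulic head,
% L = far (sealed) end of the soil column.
q(z,t) = \frac{\partial}{\partial t}\int_{z}^{L} \theta(z',t)\,\mathrm{d}z',
\qquad
k(\psi) = \frac{q(z,t)}{\left.\partial H/\partial z\right|_{z,t}}
```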
Contributors: Jacquemin, Sean Christopher (Author) / Zapata, Claudia (Thesis advisor) / Houston, Sandra (Committee member) / Kavazanjian, Edward (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis for determining the significance of other candidate genes, which will then be ranked based on the association they exhibit with respect to the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen when it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data.
Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcoming this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method by employing synthetic patterns under various conditions showed that the method is very robust and that the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results.
To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive.
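To give a flavor of association-based scoring in general (hypothetical genes and weights; this is not the dissertation's model, which additionally integrates source relevance and reliability), a candidate can be scored by its weighted ties to the known disease genes:

```java
import java.util.*;

// Toy association-based scoring: each candidate gene's score is the sum
// of its edge weights to seed (known disease) genes.
public class GenePrioritizationToy {
    public static void main(String[] args) {
        Map<String, Map<String, Double>> network = Map.of(
            "CANDIDATE_A", Map.of("SEED_1", 0.8, "SEED_2", 0.1),
            "CANDIDATE_B", Map.of("SEED_1", 0.2, "OTHER", 0.9));
        Set<String> seeds = Set.of("SEED_1", "SEED_2");
        network.forEach((gene, neighbors) -> {
            double score = neighbors.entrySet().stream()
                .filter(e -> seeds.contains(e.getKey()))
                .mapToDouble(Map.Entry::getValue)
                .sum();
            System.out.println(gene + " -> " + score); // rank by this score
        });
    }
}
```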
Contributors: Lee, Jang (Author) / Gonzalez, Graciela (Thesis advisor) / Ye, Jieping (Committee member) / Davulcu, Hasan (Committee member) / Gallitano-Mendel, Amelia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Most existing approaches to complex event processing over streaming data rely on the assumption that matches to the queries are rare and that the goal of the system is to identify these few matches within the incoming deluge of data. In many applications, such as stock market analysis and monitoring of users' credit card purchase patterns, however, matches to the user queries are in fact plentiful, and the system has to efficiently sift through these many matches to locate only the few most preferable ones. In this work, we propose a complex pattern ranking (CPR) framework for specifying top-k pattern queries over streaming data, present new algorithms to support top-k pattern queries in data streaming environments, and verify the effectiveness and efficiency of the proposed algorithms. The developed algorithms identify top-k matching results that satisfy both the patterns and additional criteria. To support real-time processing of the data streams, instead of computing top-k results from scratch for each time window, we maintain the top-k results dynamically as new events arrive and old ones expire. We also develop new top-k join execution strategies that are able to adapt to changing situations (e.g., sorted and random access costs, join rates) without having to assume the a priori presence of data statistics. Experiments show significant improvements over existing approaches.
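The incremental-maintenance idea can be pictured with a small sketch (illustrative only, not the CPR algorithms; events and scores are hypothetical): keep the live matches ordered by score, drop the ones whose timestamps fall out of the window, and read the current top-k off the front.

```java
import java.util.*;

// Toy top-k maintenance over a sliding time window.
public class TopKWindowToy {
    record Match(long ts, double score) {}

    public static void main(String[] args) {
        final int k = 2;
        final long window = 10; // window length in time units
        TreeSet<Match> live = new TreeSet<>(
            Comparator.comparingDouble(Match::score).reversed()
                      .thenComparingLong(Match::ts));
        long[][] events = {{1, 5}, {3, 9}, {4, 2}, {12, 7}}; // (time, score)
        for (long[] e : events) {
            long now = e[0];
            live.add(new Match(now, e[1]));
            live.removeIf(m -> m.ts() <= now - window); // expire old matches
            System.out.println("t=" + now + " top-" + k + ": "
                + live.stream().limit(k).toList());
        }
    }
}
```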
Contributors: Wang, Xinxin (Author) / Candan, K. Selcuk (Thesis advisor) / Chen, Yi (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
It is estimated that wind-induced soil erosion transports more than 500 × 10⁶ metric tons of fugitive dust annually. Soil erosion has negative effects on human health, the productivity of farms, and the quality of surface waters. A variety of polymer stabilizers are available on the market for fugitive dust control. Most of these stabilizers are expensive synthetic polymer products, and their adverse effects and expense usually limit their use. Biopolymers provide a potential alternative to synthetic polymers. They can provide dust abatement by encapsulating soil particles and creating a binding network throughout the treated area. This research into the effectiveness of biopolymers for fugitive dust control involved three phases: Phase I included proof-of-concept tests, Phase II consisted of wind tunnel tests, and Phase III consisted of field experiments. The proof-of-concept tests showed that biopolymers have the potential to reduce soil erosion and fugitive dust transport. Wind tunnel tests on two candidate biopolymers, xanthan gum and chitosan, showed a proportional relationship between biopolymer application rate and threshold wind velocity. The wind tunnel tests also suggested that xanthan gum would be more successful in the field than chitosan. The field tests showed that xanthan gum was effective at controlling soil erosion; however, the chitosan field data were inconsistent with both the xanthan data and the field data on bare soil.
Contributors: Alsanad, Abdullah (Author) / Kavazanjian, Edward (Thesis advisor) / Edwards, David (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Data-driven applications are becoming increasingly complex, with support for processing events and data streams in a loosely coupled distributed environment and integrated access to heterogeneous data sources such as relational databases and XML documents. This dissertation explores the use of materialized views over structured heterogeneous data sources to support multiple query optimization in a distributed event stream processing framework that supports such applications, involving various query expressions for detecting events, monitoring conditions, handling data streams, and querying data. Materialized views store the results of a computed view so that subsequent access retrieves the materialized results, avoiding the cost of recomputing the entire view from the base data sources. Using a service-based metadata repository that provides metadata-level access to the various language components in the system, a heuristics-based algorithm detects common subexpressions from the queries, represented in a mixed multigraph model over relational and structured XML data sources. These common subexpressions can be relational, XML, or hybrid joins over the heterogeneous data sources. This research examines the challenges in the definition and materialization of views when the heterogeneous data sources are retained in their native format instead of being converted to a common model. LINQ serves as the materialized view definition language for creating the view definitions, and an algorithm is introduced that uses LINQ to create a data structure for the persistence of these hybrid views. Any changes to the base data sources used to materialize views are captured and mapped to a delta structure; the deltas are then streamed within the framework for use in the incremental update of the materialized view. Algorithms are presented that use the magic sets query optimization approach both to efficiently materialize the views and to propagate the relevant changes to them for incremental maintenance. Using representative scenarios over structured heterogeneous data sources, an evaluation of the framework demonstrates an improvement in performance. Thus, defining LINQ-based materialized views over heterogeneous structured data sources using the detected common subexpressions, and incrementally maintaining the views using magic sets, enhances the efficiency of the distributed event stream processing environment.
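The delta idea itself can be shown in a few lines (a generic Java sketch rather than the dissertation's LINQ machinery; data and keys are hypothetical): when a tuple is inserted into one base source, only that delta is joined against the other source, and the result is appended to the stored view.

```java
import java.util.*;

// Toy incremental maintenance of a materialized join view V = R join S on key.
public class IncrementalViewToy {
    public static void main(String[] args) {
        Map<Integer, String> r = new HashMap<>(Map.of(1, "r1"));
        Map<Integer, String> s = new HashMap<>(Map.of(1, "s1", 2, "s2"));
        List<String> view = new ArrayList<>(List.of("r1|s1")); // materialized V

        // Delta on R: insert (2, "r2"). Join only the delta against S.
        int key = 2;
        String value = "r2";
        r.put(key, value);
        if (s.containsKey(key)) {
            view.add(value + "|" + s.get(key)); // propagate delta into V
        }
        System.out.println(view); // [r1|s1, r2|s2] without recomputing V
    }
}
```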
Contributors: Chaudhari, Mahesh Balkrishna (Author) / Dietrich, Suzanne W. (Thesis advisor) / Urban, Susan D. (Committee member) / Davulcu, Hasan (Committee member) / Chen, Yi (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
A method for evaluating the integrity of the geosynthetic elements of a waste containment system subject to seismic loading is developed using a large-strain finite difference numerical computer program. The method accounts for the effect of interaction between the geosynthetic elements and the overlying waste on seismic response and allows for explicit calculation of forces and strains in the geosynthetic elements. Based upon comparison of numerical results to experimental data, an elastic-perfectly plastic interface model is demonstrated to adequately reproduce the cyclic behavior of typical geomembrane-geotextile and geomembrane-geomembrane interfaces, provided the appropriate interface properties are used. New constitutive models are developed for the in-plane cyclic shear behavior of textured geomembrane/geosynthetic clay liner (GMX/GCL) interfaces and of GCLs. The GMX/GCL model is empirical, and the GCL model is a kinematic-hardening, isotropic-softening, multi-yield-surface plasticity model. Both new models allow for degradation in the cyclic shear resistance from a peak to a large-displacement shear strength. The ability of the finite difference model to predict forces and strains in a geosynthetic element, modeled as a beam element with zero moment of inertia sandwiched between two interface elements, is demonstrated using hypothetical models of a heap leach pad and two typical landfill configurations. The numerical model is then used to conduct back-analyses of the performance of two lined municipal solid waste (MSW) landfills subjected to strong ground motions in the Northridge earthquake. The modulus reduction "backbone curve" employed with the Masing criterion and 2% Rayleigh damping to model the cyclic behavior of MSW was established by back-analysis of the response of the Operating Industries Inc. landfill to five different earthquakes: three small-magnitude nearby events and two larger-magnitude distant events. The numerical back-analysis was able to predict the tears observed in the Chiquita Canyon Landfill liner system after the earthquake when strain concentrations due to seams and scratches in the geomembrane were taken into account. The apparently good performance of the Lopez Canyon landfill geomembrane, and the observed tension in the overlying geotextile after the Northridge event, were also successfully predicted using the numerical model.
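For readers unfamiliar with the interface idealization, an elastic-perfectly plastic interface law in its generic form is sketched below (standard symbols, not the calibrated parameters from this work): shear stress grows with relative displacement at a constant shear stiffness until it is capped by the interface strength.

```latex
% Generic elastic-perfectly plastic interface law:
% tau = interface shear stress, delta = relative shear displacement,
% K_s = shear stiffness, sigma_n = normal stress,
% c_a = adhesion, phi_i = interface friction angle.
\tau =
\begin{cases}
  K_s\,\delta, & |K_s\,\delta| < \tau_f,\\[2pt]
  \tau_f \,\operatorname{sgn}(\delta), & \text{otherwise,}
\end{cases}
\qquad
\tau_f = c_a + \sigma_n \tan\phi_i
```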
Contributors: Arab, Mohamed G. (Author) / Kavazanjian, Edward (Thesis advisor) / Zapata, Claudia (Committee member) / Houston, Sandra (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
This thesis addresses the problem of online schema updates, where the goal is to update relational database schemas without reducing the database system's availability. Unlike some other work in this area, this thesis presents an approach that is completely client-driven and does not require a specialized database management system (DBMS). Also, unlike other client-driven work, this approach supports a richer set of schema updates, including vertical split (normalization), horizontal split, vertical and horizontal merge (union), difference, and intersection. The update process automatically generates a runtime update client from a mapping between the old and the new schemas. The solution has been validated by testing it on a relatively small database of around 300,000 records per table and less than 1 GB in total, but with a limited memory buffer size of 24 MB. This thesis presents a study of the overhead of the update process as a function of the transaction rate and of the batch size used to copy data from the old to the new schema. It shows that the overhead introduced is minimal for medium-size applications and that the update can be achieved with no more than one minute of downtime.
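The batching trade-off can be made concrete with a sketch of the copy loop (hypothetical table and column names, plain JDBC for illustration; the actual update client is generated from the schema mapping): each call moves one bounded batch, so regular transactions can interleave between batches.

```java
import java.sql.*;

// Sketch of a client-driven batched copy from the old schema to the new one.
// Table/column names are hypothetical; LIMIT syntax varies by DBMS.
public class BatchedCopy {
    // Copies up to batchSize rows with id > afterId; returns the new resume point.
    static long copyBatch(Connection c, long afterId, int batchSize) throws SQLException {
        long lastId = afterId;
        try (PreparedStatement sel = c.prepareStatement(
                 "SELECT id, name FROM old_table WHERE id > ? ORDER BY id LIMIT ?");
             PreparedStatement ins = c.prepareStatement(
                 "INSERT INTO new_table (id, name) VALUES (?, ?)")) {
            sel.setLong(1, afterId);
            sel.setInt(2, batchSize);
            try (ResultSet rs = sel.executeQuery()) {
                while (rs.next()) {
                    lastId = rs.getLong("id");
                    ins.setLong(1, lastId);
                    ins.setString(2, rs.getString("name"));
                    ins.addBatch();
                }
            }
            ins.executeBatch(); // one small transaction per batch keeps interference low
        }
        return lastId;
    }
}
```

A smaller batch size lowers interference with concurrent transactions at the cost of a longer total copy, which is exactly the overhead curve the thesis measures.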
Contributors: Tyagi, Preetika (Author) / Bazzi, Rida (Thesis advisor) / Candan, Kasim S. (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2011