This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description
The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limitations on the size of tweets, it is hard to extract measures for ranking from the tweet's content alone. I propose a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem, which consists of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score is used to power two novel methods of ranking tweets by propagating the reputation over an agreement graph based on tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method, RAProp, on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches I propose, as well as an external evaluation in comparison to the current state-of-the-art method.
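A minimal sketch of the core ranking idea (propagating per-tweet reputation scores over a content-agreement graph) is shown below. This is not the RAProp implementation; the seed scores, agreement weights, and damping factor are hypothetical illustrations of the kind of propagation described.

```python
import numpy as np

def propagate_reputation(seed_scores, agreement, damping=0.85, iters=50):
    """Propagate seed reputation scores over a tweet agreement graph.

    seed_scores : (n,) initial reputation per tweet (e.g., from user/web-page signals)
    agreement   : (n, n) nonnegative content-similarity weights between tweets
    """
    # Row-normalize the agreement graph so each tweet distributes its score.
    row_sums = agreement.sum(axis=1, keepdims=True)
    transition = np.divide(agreement, row_sums,
                           out=np.zeros_like(agreement), where=row_sums > 0)

    scores = seed_scores.astype(float)
    for _ in range(iters):
        # Blend seed evidence with reputation received from agreeing tweets;
        # isolated (spam-like) tweets receive no support from the graph.
        scores = (1 - damping) * seed_scores + damping * (transition.T @ scores)
    return scores

# Toy example: tweet 2 agrees with no other tweet and keeps only its low seed score.
seed = np.array([0.6, 0.5, 0.1])
agree = np.array([[0.0, 0.9, 0.0],
                  [0.9, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
print(propagate_reputation(seed, agree))
```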
Contributors: Ravikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In a laboratory setting, the soil volume change behavior is best represented by using various testing standards on undisturbed or remolded samples. Whenever possible, it is most precise to use undisturbed samples to assess the volume change behavior, but in the absence of undisturbed specimens, remolded samples can be used. In that case, the soil is compacted to the in-situ density and water content (or matric suction) that best represent the expansive profile in question. It is standard practice to subject the specimen to a wetting process at a particular net normal stress. Even though currently accepted laboratory testing standard procedures provide insight on how the profile conditions change with time, these procedures do not assess the long-term effects on the soil due to climatic changes. In this experimental study, an assessment and quantification of the effect of multiple wetting/drying cycles on the volume change behavior of two different naturally occurring soils was performed. The swings between the wetting and drying cycles were extreme when compared in terms of matric suction. During the drying cycle, the expansive soil was subjected to extreme conditions, which decreased the moisture content to below the shrinkage limit. Both soils were remolded at five different compacted conditions and loaded to five different net normal stresses, and each sample was subjected to six wetting and drying cycles. It was evident from the results that the swell/collapse strain is highly non-linear at low stress levels. The strain-net normal stress relationship cannot be defined by one single function without transforming the data; therefore, the dataset needs to be fitted to a bi-modal logarithmic function, or to a logarithmic transformation of the net normal stress in order to use a third-order polynomial fit. It was also determined that the moisture content changes with time are best fit by non-linear functions. For the drying cycle, the radial strain was determined to have a constant rate of change with respect to the axial strain. However, for the wetting cycle, there was not enough radial strain data to develop correlations, and therefore an assumption was made for the wetting cycles based on 55 different test measurements/observations. In general, it was observed that after each subsequent cycle, higher swelling was exhibited at net normal stresses below a threshold value, while higher collapse potential was observed at net normal stresses above that threshold. Furthermore, the swelling pressure underwent a reduction in all cases. In particular, the Anthem soil exhibited a reduction in swelling pressure of at least 20 percent after the first wetting/drying cycle, while the Colorado soil exhibited a reduction of 50 percent. After about the fourth cycle, the swelling pressure seemed to stabilize at an equilibrium value, at which a reduction of 46 percent was observed for the Anthem soil and a 68 percent reduction for the Colorado soil. The impact of the initial compacted conditions on heave characteristics was also studied. Results indicated that materials compacted at higher densities exhibited greater swell potential. When comparing specimens compacted at the same density but at different moisture contents (matric suctions), it was observed that specimens compacted at higher suction exhibited higher swelling potential when subjected to the same net normal stress. The least amount of swelling strain was observed in specimens compacted at the lowest dry density and the lowest matric suction (highest water content). The results from the laboratory testing were used to develop ultimate heave profiles for both soils. This analysis showed that even though the swell pressure for each soil decreased with cycles, the amount of heave would increase or decrease depending upon the initial compaction condition. When the specimens were compacted at 110% of optimum moisture content and 90% of maximum dry density, the result was an ultimate heave reduction of 92 percent for the Anthem soil and 685 percent for the Colorado soil. On the other hand, when the soils were compacted at 90% of optimum moisture content and 100% of maximum dry density, Anthem specimens heaved 78% more and the heave of Colorado specimens was reduced by 69%. Based on the results obtained, it is evident that current methods to estimate heave and swelling pressure do not consider the effect of wetting/drying cycles and seem to fail to capture the free swell potential of the soil. Recommendations for improving current methods of practice are provided.
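The fitting step mentioned above (a third-order polynomial fit after a logarithmic transformation of net normal stress) can be sketched as follows. The strain and stress values below are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Hypothetical swell/collapse strains (%) at several net normal stresses (kPa);
# positive values indicate swell, negative values indicate collapse.
net_normal_stress = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 400.0])
strain = np.array([4.8, 3.1, 1.6, 0.2, -0.9, -1.8])

# Log-transform the stress axis, then fit a third-order polynomial,
# mirroring the transformation-based fit described in the abstract.
log_stress = np.log10(net_normal_stress)
fit = np.poly1d(np.polyfit(log_stress, strain, deg=3))

# The stress at which the fitted strain crosses zero approximates the
# swelling pressure for this (hypothetical) cycle.
fine_grid = np.linspace(log_stress.min(), log_stress.max(), 1000)
zero_cross = fine_grid[np.argmin(np.abs(fit(fine_grid)))]
print("approx. swelling pressure: %.0f kPa" % 10 ** zero_cross)
print("predicted strain at 150 kPa: %.2f %%" % fit(np.log10(150.0)))
```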
Contributors: Rosenbalm, Daniel Curtis (Author) / Zapata, Claudia E (Thesis advisor) / Houston, Sandra L. (Committee member) / Kavazanjian, Edward (Committee member) / Witczak, Mathew W (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Automating aspects of biocuration through biomedical information extraction could significantly impact biomedical research by enabling greater biocuration throughput and improving the feasibility of a wider scope. An important step in biomedical information extraction systems is named entity recognition (NER), where mentions of entities such as proteins and diseases are located within natural-language text and their semantic type is determined. This step is critical for later tasks in an information extraction pipeline, including normalization and relationship extraction. BANNER is a benchmark biomedical NER system using linear-chain conditional random fields and the rich feature set approach. A case study with BANNER locating genes and proteins in biomedical literature is described. The first corpus for disease NER adequate for use as training data is introduced, and employed in a case study of disease NER. The first corpus locating adverse drug reactions (ADRs) in user posts to a health-related social website is also described, and a system to locate and identify ADRs in social media text is created and evaluated. The rich feature set approach to creating NER feature sets is argued to be subject to diminishing returns, implying that additional improvements may require more sophisticated methods for creating the feature set. This motivates the first application of multivariate feature selection with filters and false discovery rate analysis to biomedical NER, resulting in a feature set at least 3 orders of magnitude smaller than the set created by the rich feature set approach. Finally, two novel approaches to NER by modeling the semantics of token sequences are introduced. The first method focuses on the sequence content by using language models to determine whether a sequence resembles entries in a lexicon of entity names or text from an unlabeled corpus more closely. The second method models the distributional semantics of token sequences, determining the similarity between a potential mention and the token sequences from the training data by analyzing the contexts where each sequence appears in a large unlabeled corpus. The second method is shown to improve the performance of BANNER on multiple data sets.
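As a rough illustration of the rich feature set approach used by CRF-based NER systems such as BANNER, the sketch below generates a small dictionary of surface features for one token. The specific features and their names are illustrative only and are far fewer than the actual feature templates (lexicon matches, part-of-speech tags, character n-grams, and so on).

```python
import re

def token_features(tokens, i):
    """Generate a small 'rich feature set'-style dictionary for token i."""
    tok = tokens[i]
    return {
        "word.lower": tok.lower(),
        "word.isupper": tok.isupper(),
        "word.istitle": tok.istitle(),
        "word.isdigit": tok.isdigit(),
        "prefix3": tok[:3],
        "suffix3": tok[-3:],
        # Word shape: map digits/lowercase/uppercase to 0/x/X ("BRCA1" -> "XXXX0").
        "shape": re.sub(r"[A-Z]", "X", re.sub(r"[a-z]", "x", re.sub(r"\d", "0", tok))),
        # Context features from the neighboring tokens.
        "prev.lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

sentence = ["Mutations", "in", "BRCA1", "increase", "breast", "cancer", "risk", "."]
print(token_features(sentence, 2))
```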
Contributors: Leaman, James Robert (Author) / Gonzalez, Graciela (Thesis advisor) / Baral, Chitta (Thesis advisor) / Cohen, Kevin B (Committee member) / Liu, Huan (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Unsaturated soil mechanics is becoming a part of geotechnical engineering practice, particularly in applications to moisture-sensitive soils such as expansive and collapsible soils and in geoenvironmental applications. The soil water characteristic curve, which describes the amount of water in a soil versus soil suction, is perhaps the most important soil property function for application of unsaturated soil mechanics. The soil water characteristic curve has been used extensively for estimating unsaturated soil properties, and a number of fitting equations for development of soil water characteristic curves from laboratory data have been proposed by researchers. Although not always mentioned, the underlying assumption of soil water characteristic curve fitting equations is that the soil is sufficiently stiff that there is no change in total volume of the soil while measuring the soil water characteristic curve in the laboratory, and researchers rarely take volume change of soils into account when generating or using the soil water characteristic curve. Further, there has been little attention to the applied net normal stress during laboratory soil water characteristic curve measurement, and often zero or only a token net normal stress is applied. The applied net normal stress also affects the volume change of the specimen during soil suction change. When a soil changes volume in response to suction change, failure to consider the volume change of the soil leads to errors in the estimated air-entry value and the slope of the soil water characteristic curve between the air-entry value and the residual moisture state. Inaccuracies in the soil water characteristic curve may lead to inaccuracies in estimated soil property functions such as unsaturated hydraulic conductivity. A number of researchers have recently recognized the importance of considering soil volume change in soil water characteristic curves. The study of correct methods of soil water characteristic curve measurement and determination considering soil volume change, and the impacts on the unsaturated hydraulic conductivity function, was the primary focus of this study. Emphasis was placed on the effect of volume change consideration on soil water characteristic curves for expansive clays and other high volume change soils. The research involved an extensive literature review and laboratory soil water characteristic curve testing on expansive soils. The effect of the initial state of the specimen (i.e., slurry versus compacted) on soil water characteristic curves with regard to volume change effects, and the effect of net normal stress on volume change during determination of these curves, were studied for expansive clays. Hysteresis effects were included in laboratory measurements of soil water characteristic curves, as both wetting and drying paths were used. Impacts of soil water characteristic curve volume change considerations on fluid flow computations and associated suction-change-induced soil deformations were studied through numerical simulations. The study includes both coupled and uncoupled flow and stress-deformation analyses, demonstrating that the impact of volume change consideration on the soil water characteristic curve and the estimated unsaturated hydraulic conductivity function can be quite substantial for high volume change soils.
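As a sketch of what SWCC fitting looks like in practice, the code below fits the van Genuchten equation (one commonly used fitting form; the thesis is not tied to this particular equation) to hypothetical suction and water content data with scipy. For a high volume change soil, the water contents fed to such a fit would themselves be in error unless specimen volume change is tracked, which is the concern raised above.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Van Genuchten SWCC: volumetric water content vs. matric suction (kPa)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

# Hypothetical drying-path data: matric suction (kPa) and volumetric water content.
suction = np.array([1.0, 10.0, 50.0, 100.0, 500.0, 1000.0, 10000.0])
theta = np.array([0.48, 0.46, 0.40, 0.34, 0.22, 0.17, 0.08])

popt, _ = curve_fit(van_genuchten, suction, theta,
                    p0=[0.05, 0.48, 0.01, 1.5],
                    bounds=([0.0, 0.2, 1e-4, 1.01], [0.3, 0.6, 1.0, 4.0]))
theta_r, theta_s, alpha, n = popt
print("theta_r=%.3f theta_s=%.3f alpha=%.4f 1/kPa n=%.2f" % (theta_r, theta_s, alpha, n))
# 1/alpha gives a rough indication of the air-entry suction for this fit.
print("air-entry value roughly near 1/alpha = %.0f kPa" % (1.0 / alpha))
```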
Contributors: Bani Hashem, Elham (Author) / Houston, Sandra L. (Thesis advisor) / Kavazanjian, Edward (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Data mining is increasing in importance in solving a variety of industry problems. Our initiative involves the estimation of resource requirements by skill set for future projects by mining and analyzing actual resource consumption data from past projects in the semiconductor industry. To achieve this goal we face difficulties such as relevant consumption data stored in different formats and insufficient data about project attributes with which to interpret the consumption data. Our first goal is to clean the historical data and organize it into meaningful structures for analysis. Once the preprocessing of the data is completed, data mining techniques such as clustering are applied to find projects that involve resources of similar skill sets and that involve similar complexities and size. This results in "resource utilization templates" for groups of related projects from a resource consumption perspective. Then the project characteristics that generate this diversity in headcounts and skill sets are identified. These characteristics are not currently contained in the database and are elicited from the managers of the historical projects. This represents an opportunity to improve the usefulness of the data collection system in the future. The ultimate goal is to match the product technical features with the resource requirements of past projects as a model to forecast resource requirements by skill set for future projects. The forecasting model is developed using linear regression with cross-validation of the training data, as the past project executions are relatively few in number. Acceptable levels of forecast accuracy are achieved relative to human experts' results, and the tool is applied to forecast the resource demand of some future projects.
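A compact sketch of this pipeline (cluster historical projects into resource-utilization templates, then forecast resource demand from project characteristics with cross-validated linear regression) is given below, with synthetic data standing in for the proprietary project records.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for cleaned historical data: rows are past projects,
# columns are person-months consumed per skill set.
consumption = rng.gamma(shape=2.0, scale=5.0, size=(40, 4))

# Step 1: group projects with similar resource-utilization "templates".
templates = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(consumption)

# Step 2: relate project characteristics elicited from managers (here, two
# hypothetical features: a complexity score and a count of technical features)
# to total resource demand, using cross-validation because past projects are few.
complexity = rng.uniform(1.0, 10.0, 40)
n_features = rng.integers(5, 50, 40)
X = np.column_stack([complexity, n_features])
demand = 3.0 * complexity + 0.6 * n_features + rng.normal(0.0, 4.0, 40)  # synthetic relation

scores = cross_val_score(LinearRegression(), X, demand, cv=5, scoring="r2")
print("template sizes:", np.bincount(templates))
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```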
Contributors: Bhattacharya, Indrani (Author) / Sen, Arunabha (Thesis advisor) / Kempf, Karl G. (Thesis advisor) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Contemporary online social platforms present individuals with social signals in the form of a news feed of their peers' activities. On networks such as Facebook and Quora, the network operator decides how that information is shown to an individual. The user, with her own interests and resource constraints, then selectively acts on a subset of the items presented to her. The network operator in turn shows that activity to a selection of peers, thus creating a behavioral loop. That mechanism of interaction and information flow raises some very interesting questions, such as: can the network operator design social signals to promote a particular activity, like sustainability or public health care awareness, or to promote a specific product? The focus of my thesis is to answer that question. In this thesis, I develop a framework to personalize social signals for users to guide their activities on an online platform. As a result, we gradually nudge the activity distribution on the platform from the initial distribution p to the target distribution q. My work is particularly applicable to guiding collaborations, guiding collective actions, and online advertising. In particular, I first propose a probabilistic model of how users behave and how information flows on the platform. The main part of the thesis then discusses the Influence Individuals through Social Signals (IISS) framework. IISS consists of four main components: (1) Learner: it learns users' interests and characteristics from their historical activities using a Bayesian model; (2) Calculator: it uses a gradient descent method to compute the intermediate activity distributions; (3) Selector: it selects users who can be influenced to adopt or drop specific activities; (4) Designer: it personalizes social signals for each user. I evaluate the performance of the IISS framework by simulation on several network topologies such as preferential attachment, small world, and random graphs. I show that the framework gradually nudges users' activities toward the target distribution. I use both simulation and mathematical methods to analyse convergence properties, such as how fast and how closely we can approach the target distribution. When the number of activities is 3, I show that for about 45% of target distributions we can achieve a KL-divergence as low as 0.05, but for some other distributions the KL-divergence can be as large as 0.5.
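A minimal sketch of the distribution-nudging idea (measure the KL-divergence to the target q and compute an intermediate distribution that moves p toward it) is below. The step rule is a generic mixing step standing in for the gradient-descent "Calculator" component; the distributions and step size are hypothetical.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete activity distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def step_toward_target(p, q, step=0.1):
    """One intermediate distribution on the path from p toward q.

    A simple mixing/projection step standing in for the gradient-descent
    'Calculator' component described in the abstract.
    """
    p_next = (1 - step) * np.asarray(p, float) + step * np.asarray(q, float)
    return p_next / p_next.sum()

p = np.array([0.70, 0.20, 0.10])   # current activity distribution (3 activities)
q = np.array([0.40, 0.35, 0.25])   # target distribution

for t in range(10):
    print("t=%d  KL(p||q)=%.4f" % (t, kl_divergence(p, q)))
    p = step_toward_target(p, q)
```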
Contributors: Le, Tien D (Author) / Sundaram, Hari (Thesis advisor) / Davulcu, Hasan (Thesis advisor) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
This thesis presents a probabilistic evaluation of multiple laterally loaded drilled pier foundation design approaches using extensive data from a geotechnical investigation for a high voltage electric transmission line. A series of Monte Carlo simulations provides insight into the computed level of reliability considering site standard penetration test blow count variability alone (i.e., assuming all other aspects of the design problem do not contribute error or bias). Evaluated methods include the Eurocode 7 Geotechnical Design procedures, the Federal Highway Administration drilled shaft LRFD design method, the Electric Power Research Institute transmission foundation design procedure, and an approach based on site-specific variability previously suggested by the author of this thesis and others. The analysis method is defined by three phases: a) evaluate the spatial variability of an existing subsurface database; b) derive theoretical foundation designs from the database in accordance with the various design methods identified; c) conduct Monte Carlo simulations to compute the reliability of the theoretical foundation designs. Over several decades, reliability-based foundation design (RBD) methods have been developed and implemented to varying degrees for buildings, bridges, electric systems and other structures. In recent years, an effort has been made by researchers, professional societies and other standard-developing organizations to publish design guidelines, manuals and standards concerning RBD for foundations. Most of these approaches rely on statistical methods for quantifying load and resistance probability distribution functions with defined reliability levels. However, each varies with regard to the influence of site-specific variability on resistance. An examination of the influence of site-specific variability is required to provide direction for incorporating the concept into practical RBD design methods. Recent surveys of transmission line engineers by the Electric Power Research Institute (EPRI) demonstrate that RBD methods for the design of transmission line foundations have not been widely adopted. In the absence of a unifying design document with established reliability goals, transmission line foundations have historically performed very well, with relatively few failures. However, such a track record with no set reliability goals suggests that, at least in some cases, a financial premium has likely been paid.
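A stripped-down version of phase (c) is sketched below: sample SPT blow counts from a lognormal distribution fit to site statistics, map them to a lateral resistance through a placeholder design relation, and count how often the resistance falls short of the design load. The distribution parameters and the resistance relation are hypothetical illustrations, not the design methods evaluated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical site statistics for the SPT blow count (N-value).
mean_n, cov_n = 20.0, 0.40                    # mean and coefficient of variation
sigma_ln = np.sqrt(np.log(1.0 + cov_n ** 2))  # equivalent lognormal parameters
mu_ln = np.log(mean_n) - 0.5 * sigma_ln ** 2

n_sims = 100_000
blow_counts = rng.lognormal(mean=mu_ln, sigma=sigma_ln, size=n_sims)

# Placeholder design relation: lateral resistance (kN) scales with the N-value.
resistance = 15.0 * blow_counts
design_load = 250.0                           # hypothetical factored load, kN

prob_failure = np.mean(resistance < design_load)
print("P(resistance < design load) = %.4f" % prob_failure)
print("computed reliability        = %.4f" % (1.0 - prob_failure))
```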
Contributors: Heim, Zackary (Author) / Houston, Sandra (Thesis advisor) / Witczak, Matthew (Committee member) / Kavazanjian, Edward (Committee member) / Zapata, Claudia (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Hydrocarbon spill site cleanup is challenging when contaminants are present in lower permeability layers. These are difficult to remediate and may result in long-term groundwater impacts. The research goal is to investigate strategies for long-term reduction of contaminant emissions from sources in low permeability layers through partial source treatment at higher/lower permeability interfaces. Conceptually, this provides a clean/reduced-concentration zone near the interface, and consequently a reduced concentration gradient and flux from the lower permeability layer. Treatment by in-situ chemical oxidation (ISCO) was evaluated using hydrogen peroxide (H2O2) and sodium persulfate (Na2S2O8). The H2O2 studies included lab- and field-scale distribution studies and lab emission reduction experiments. The reaction rate of H2O2 in soils was so fast that it did not travel far (<1 m) from delivery points under typical flow conditions. Oxygen gas generated and partially trapped in soil pores served as a dissolved oxygen (DO) source for >60 days in field and lab studies. During that period, the laboratory studies showed reduced hydrocarbon impacts, presumably from aerobic biodegradation, which rebounded once the O2 source was depleted; therefore, field monitoring should extend beyond the post-treatment period of elevated DO. Na2S2O8 use was studied in two-dimensional tanks (122-cm tall, 122-cm wide, and 5-cm thick) containing two layers of contrasting permeability (three orders of magnitude difference). The lower permeability layer initially contained either a dissolved-sorbed contaminant source throughout the layer, or a 10-cm thick non-aqueous phase liquid (NAPL)-impacted zone below the higher/lower permeability interface. The dissolved-sorbed source tank was actively treated for 14 d. Two hundred days after treatment, the emission reductions of benzene, toluene, ethylbenzene, and p-xylene (BTEX) were 95-99%, and that of methyl tert-butyl ether (MTBE) was 63%. The LNAPL-source tank received three Na2S2O8 and two sodium hydroxide (NaOH) applications for S2O82- base activation. The resulting emission reductions for BTEX, n-propylbenzene, and 1,3,5-trimethylbenzene were 55-73%. While less effective at reducing emissions from LNAPL sources, the 14-d treatment delivered sufficient S2O82- through diffusion to remediate BTEX from the 60 cm dissolved-sorbed source. The overall S2O82- utilization in the dissolved source experiment was calculated by mass balance to be 108-125 g S2O82- per g hydrocarbon treated.
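The oxidant utilization reported at the end is a straightforward mass balance; a sketch of that arithmetic with hypothetical masses (not the experimental values) is shown below.

```python
# Hypothetical mass-balance inputs for a dissolved-sorbed source treatment test.
persulfate_delivered_g = 5000.0   # total S2O8^2- delivered to the tank
persulfate_residual_g = 1200.0    # S2O8^2- remaining after the treatment period
hydrocarbon_removed_g = 32.0      # BTEX + MTBE mass removed from the source zone

persulfate_consumed_g = persulfate_delivered_g - persulfate_residual_g
utilization = persulfate_consumed_g / hydrocarbon_removed_g
print("oxidant utilization: %.0f g S2O8^2- per g hydrocarbon treated" % utilization)
```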
Contributors: Cavanagh, Bridget (Author) / Johnson, Paul C (Thesis advisor) / Westerhoff, Paul (Committee member) / Kavazanjian, Edward (Committee member) / Bruce, Cristin (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Most data cleaning systems aim to go from a given deterministic dirty database to another deterministic but clean database. Such an enterprise presupposes that it is in fact possible for the cleaning process to uniquely recover the clean version of each dirty data tuple. This is not possible in many cases, where the most a cleaning system can do is to generate a (hopefully small) set of clean candidates for each dirty tuple. When the cleaning system is required to output a deterministic database, it is forced to pick one clean candidate (say, the "most likely" candidate) per tuple. Such an approach can lead to loss of information. For example, consider a situation where there are three equally likely clean candidates for a dirty tuple: keeping only one of them discards the other two equally plausible repairs. An appealing alternative that avoids such an information loss is to abandon the requirement that the output database be deterministic. In other words, even though the input (dirty) database is deterministic, I allow the reconstructed database to be probabilistic. Although such an approach does avoid the information loss, it also brings forth several challenges. First, how many alternatives should be kept per tuple in the reconstructed database? Maintaining too many alternatives increases the size of the reconstructed database, and hence the query processing time. Second, while processing queries on the probabilistic database may well increase recall, how does it affect the precision of the query results? In this thesis, I investigate these questions. My investigation is done in the context of a data cleaning system called BayesWipe that has the capability of producing multiple clean candidates for each dirty tuple, along with the probability that each is the correct cleaned version. I represent these alternatives as tuples in a tuple-disjoint probabilistic database, and use the Mystiq system to process queries on it. This probabilistic reconstruction (called BayesWipe-PDB) is compared to a deterministic reconstruction (called BayesWipe-DET), in which the most likely clean candidate for each tuple is chosen and the rest of the alternatives are discarded.
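A tiny sketch of the tuple-disjoint representation and the precision/recall trade-off follows: each dirty tuple becomes a set of mutually exclusive clean alternatives with probabilities, and a selection query returns each base tuple with the summed probability of its qualifying alternatives. The schema and probabilities are invented; this is not the BayesWipe or Mystiq implementation.

```python
from collections import defaultdict

# Tuple-disjoint probabilistic relation: one entry per original dirty tuple,
# holding mutually exclusive clean alternatives as (value, probability) pairs.
probabilistic_db = {
    "t1": [({"city": "Phoenix", "state": "AZ"}, 0.6),
           ({"city": "Phenix City", "state": "AL"}, 0.4)],
    "t2": [({"city": "Tempe", "state": "AZ"}, 0.9),
           ({"city": "Tempe", "state": "TX"}, 0.1)],
}

def select_prob(db, predicate):
    """Probability that each base tuple satisfies the selection predicate."""
    answers = defaultdict(float)
    for tid, alternatives in db.items():
        for value, prob in alternatives:
            if predicate(value):
                answers[tid] += prob   # alternatives of one tuple are disjoint
    return dict(answers)

# Query: tuples located in Arizona. Returning low-probability answers raises
# recall; thresholding on probability trades recall against precision.
result = select_prob(probabilistic_db, lambda r: r["state"] == "AZ")
print(result)                          # {'t1': 0.6, 't2': 0.9}
print({t: p for t, p in result.items() if p >= 0.5})
```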
Contributors: Rihan, Preet Inder Singh (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
As the size and scope of valuable datasets have exploded across many industries and fields of research in recent years, an increasingly diverse audience has sought out effective tools for their large-scale data analytics needs. Over this period, machine learning researchers have also been very prolific in designing improved algorithms capable of finding the hidden structure within these datasets. As consumers of popular Big Data frameworks have sought to apply and benefit from these improved learning algorithms, the problems encountered with the frameworks have motivated a new generation of Big Data tools to address the shortcomings of the previous generation. One important example of this is the newer tools' improved performance on the large class of machine learning algorithms that are highly iterative in nature. In this thesis project, I set out to implement a low-rank matrix completion algorithm (as an example of a highly iterative algorithm) within a popular Big Data framework, and to evaluate its performance processing the Netflix Prize dataset. I begin by describing several approaches that I attempted but that did not perform adequately. These include an implementation of the Singular Value Thresholding (SVT) algorithm within the Apache Mahout framework, which runs on top of the Apache Hadoop MapReduce engine. I then describe an approach that uses the Divide-Factor-Combine (DFC) algorithmic framework to parallelize the state-of-the-art low-rank completion algorithm Orthogonal Rank-One Matrix Pursuit (OR1MP) within the Apache Spark engine. I describe the results of a series of tests running this implementation with the Netflix dataset on clusters of various sizes, with various degrees of parallelism. For these experiments, I utilized the Amazon Elastic Compute Cloud (EC2) web service. In the final analysis, I conclude that the Spark DFC + OR1MP implementation does indeed produce competitive results, in both accuracy and performance. In particular, the Spark implementation performs nearly as well as the MATLAB implementation of OR1MP without any parallelism, and improves performance to a significant degree as the parallelism increases. In addition, the experience demonstrates how Spark's flexible programming model makes it straightforward to implement this parallel and iterative machine learning algorithm.
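For context, the highly iterative character of such algorithms can be seen in a soft-impute style completion loop (singular-value shrinkage with the observed entries re-imposed each pass), which is in the same family as SVT. The dense NumPy sketch below is illustrative only, on a toy matrix, and is not the Mahout or Spark implementation evaluated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-rank matrix with roughly half of its entries observed (Netflix-style setting).
U, V = rng.normal(size=(30, 3)), rng.normal(size=(3, 30))
M = U @ V
mask = rng.random(M.shape) < 0.5

def soft_impute(M, mask, tau=2.0, iters=200):
    """Soft-impute style completion: shrink singular values, keep observed entries."""
    X = np.zeros_like(M)
    for _ in range(iters):
        # Fill the unobserved entries with the current estimate.
        filled = np.where(mask, M, X)
        # Singular-value soft-thresholding: the step repeated every iteration.
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        X = (u * np.maximum(s - tau, 0.0)) @ vt
    return X

X_hat = soft_impute(M, mask)
rel_err = np.linalg.norm((X_hat - M)[~mask]) / np.linalg.norm(M[~mask])
print("relative error on unobserved entries: %.3f" % rel_err)
```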
Contributors: Krouse, Brian (Author) / Ye, Jieping (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2014