This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 91
Description

The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limitations on the size of tweets, it is hard to extract measures for ranking from the tweet's content alone. I propose a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem that consists of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score is used to power two novel methods of ranking tweets by propagating the reputation over an agreement graph based on tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches that I propose, as well as an external evaluation in comparison to the current state-of-the-art method.
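The following is a minimal Python sketch of the general idea of propagating reputation scores over a content-similarity agreement graph. It is not the thesis's RAProp implementation; the similarity threshold, damping factor, iteration count, and example tweets are illustrative assumptions.

```python
# Illustrative sketch: propagate per-tweet reputation scores over an
# agreement graph built from content similarity. Parameter values
# (similarity threshold, damping, iteration count) are assumptions,
# not taken from the thesis.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def propagate_reputation(tweets, initial_scores, sim_threshold=0.3,
                         damping=0.5, iterations=20):
    # Agreement graph: edge weight = TF-IDF cosine similarity above a threshold.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(tweets)
    sim = cosine_similarity(tfidf)
    np.fill_diagonal(sim, 0.0)
    sim[sim < sim_threshold] = 0.0

    # Row-normalize so each tweet averages the reputation of its neighbors.
    row_sums = sim.sum(axis=1, keepdims=True)
    weights = np.divide(sim, row_sums, out=np.zeros_like(sim), where=row_sums > 0)

    scores = np.asarray(initial_scores, dtype=float)
    for _ in range(iterations):
        scores = (1 - damping) * np.asarray(initial_scores) + damping * (weights @ scores)
    return scores

ranked = propagate_reputation(
    ["earthquake hits city", "major earthquake reported in city", "buy cheap watches"],
    initial_scores=[0.6, 0.7, 0.1],
)
print(ranked)  # tweets with no agreement from others keep lower scores
```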
Contributors: Ravikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Automating aspects of biocuration through biomedical information extraction could significantly impact biomedical research by enabling greater biocuration throughput and improving the feasibility of a wider scope. An important step in biomedical information extraction systems is named entity recognition (NER), where mentions of entities such as proteins and diseases are located within natural-language text and their semantic type is determined. This step is critical for later tasks in an information extraction pipeline, including normalization and relationship extraction. BANNER is a benchmark biomedical NER system using linear-chain conditional random fields and the rich feature set approach. A case study with BANNER locating genes and proteins in biomedical literature is described. The first corpus for disease NER adequate for use as training data is introduced, and employed in a case study of disease NER. The first corpus locating adverse drug reactions (ADRs) in user posts to a health-related social website is also described, and a system to locate and identify ADRs in social media text is created and evaluated. The rich feature set approach to creating NER feature sets is argued to be subject to diminishing returns, implying that additional improvements may require more sophisticated methods for creating the feature set. This motivates the first application of multivariate feature selection with filters and false discovery rate analysis to biomedical NER, resulting in a feature set at least 3 orders of magnitude smaller than the set created by the rich feature set approach. Finally, two novel approaches to NER by modeling the semantics of token sequences are introduced. The first method focuses on the sequence content by using language models to determine whether a sequence resembles entries in a lexicon of entity names or text from an unlabeled corpus more closely. The second method models the distributional semantics of token sequences, determining the similarity between a potential mention and the token sequences from the training data by analyzing the contexts where each sequence appears in a large unlabeled corpus. The second method is shown to improve the performance of BANNER on multiple data sets.
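As a rough illustration of the rich-feature-set, linear-chain CRF approach to biomedical NER, the sketch below trains a tiny CRF tagger with hand-crafted token features. It uses the sklearn-crfsuite library rather than BANNER's own implementation, and the toy sentence, labels, and feature choices are assumptions for illustration only.

```python
# Minimal sketch of a linear-chain CRF tagger with hand-crafted token
# features, in the spirit of the "rich feature set" approach; this is
# not BANNER, and the tiny training set is purely illustrative.
import sklearn_crfsuite

def token_features(tokens, i):
    w = tokens[i]
    return {
        "lower": w.lower(),
        "is_upper": w.isupper(),
        "is_title": w.istitle(),
        "has_digit": any(c.isdigit() for c in w),
        "prefix3": w[:3],
        "suffix3": w[-3:],
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# Hypothetical sentence with BIO labels for gene and disease mentions.
sentences = [["BRCA1", "mutations", "cause", "breast", "cancer"]]
labels = [["B-GENE", "O", "O", "B-DISEASE", "I-DISEASE"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```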
Contributors: Leaman, James Robert (Author) / Gonzalez, Graciela (Thesis advisor) / Baral, Chitta (Thesis advisor) / Cohen, Kevin B (Committee member) / Liu, Huan (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Spinal muscular atrophy (SMA) is a neurodegenerative disease that results in the loss of lower body muscle function. SMA is the second leading genetic cause of death in infants and arises from the loss of the Survival of Motor Neuron (SMN) protein. SMN is produced by two genes, smn1 and smn2, that are identical with the exception of a C to T conversion in exon 7 of the smn2 gene. SMA patients lacking the smn1 gene rely on smn2 for production of SMN. Due to an alternative splicing event, smn2 primarily encodes a non-functional SMN lacking exon 7 (SMN D7) as well as a low amount of functional full-length SMN (SMN WT). SMN WT is ubiquitously expressed in all cell types, and it remains unclear how low levels of SMN WT in motor neurons lead to motor neuron degradation and SMA. SMN and its associated proteins, Gemin2-8 and Unrip, make up a large dynamic complex that functions to assemble ribonucleoproteins. The aim of this project was to characterize the interactions of the core SMN-Gemin2 complex, and to identify differences between SMN WT and SMN D7. SMN and Gemin2 proteins were expressed, purified, and characterized via size exclusion chromatography. A stable N-terminally deleted Gemin2 protein (N45-G2) was characterized. The SMN WT expression system was optimized, resulting in a 10-fold increase in protein expression. Lastly, the oligomeric states of SMN and of SMN bound to Gemin2 were determined. SMN WT formed a mixture of oligomeric states, while SMN D7 did not. Both SMN WT and D7 bound to Gemin2 with a one-to-one ratio, forming a heterodimer and several higher-order oligomeric states. The SMN WT-Gemin2 complex favored high molecular weight oligomers, whereas the SMN D7-Gemin2 complex formed low molecular weight oligomers. These results indicate that the SMA mutant protein, SMN D7, was still able to associate with Gemin2, but was not able to form higher-order oligomeric complexes. The observed multiple oligomerization states of SMN and of SMN bound to Gemin2 may play a crucial role in regulating one or several functions of the SMN protein. The inability of SMN D7 to form higher-order oligomers may inhibit or alter those functions, leading to the SMA disease phenotype.
Contributors: Niday, Tracy (Author) / Allen, James P. (Thesis advisor) / Wachter, Rebekka (Committee member) / Ghirlanda, Giovanna (Committee member) / Arizona State University (Publisher)
Created: 2012
Description

Data mining is increasing in importance in solving a variety of industry problems. Our initiative involves the estimation of resource requirements by skill set for future projects by mining and analyzing actual resource consumption data from past projects in the semiconductor industry. To achieve this goal we face difficulties such as data that contain relevant consumption information but are stored in different formats, and insufficient data about project attributes needed to interpret the consumption data. Our first goal is to clean the historical data and organize it into meaningful structures for analysis. Once the preprocessing of the data is completed, data mining techniques such as clustering are applied to find projects that involve resources of similar skill sets and similar complexities and sizes. This results in "resource utilization templates" for groups of related projects from a resource consumption perspective. Then project characteristics are identified which generate this diversity in headcounts and skill sets. These characteristics are not currently contained in the database and are elicited from the managers of historical projects. This represents an opportunity to improve the usefulness of the data collection system for the future. The ultimate goal is to match the product technical features with the resource requirements of projects in the past as a model to forecast resource requirements by skill set for future projects. The forecasting model is developed using linear regression with cross-validation of the training data, as past project executions are relatively few in number. Acceptable levels of forecast accuracy are achieved relative to human experts' results, and the tool is applied to forecast some future projects' resource demand.
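A minimal sketch of the two analysis steps described above, assuming scikit-learn: k-means clustering of past projects' resource consumption into "resource utilization templates", followed by a cross-validated linear regression from project characteristics to headcount. All data, feature choices, and parameter values are synthetic placeholders, not the thesis's actual pipeline.

```python
# Illustrative sketch: cluster past projects by resource consumption to
# obtain "resource utilization templates", then fit a cross-validated
# linear regression from project characteristics to total headcount.
# The data below are randomly generated placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Rows = past projects, columns = person-weeks consumed per skill set.
consumption = rng.poisson(lam=[20, 5, 12, 8], size=(30, 4)).astype(float)
templates = KMeans(n_clusters=3, n_init=10, random_state=0).fit(consumption)
print("template assigned to each project:", templates.labels_)

# Project characteristics elicited from managers (synthetic placeholders).
characteristics = rng.normal(size=(30, 2))
total_headcount = consumption.sum(axis=1)

model = LinearRegression()
scores = cross_val_score(model, characteristics, total_headcount,
                         cv=5, scoring="neg_mean_absolute_error")
print("cross-validated MAE:", -scores.mean())
```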
Contributors: Bhattacharya, Indrani (Author) / Sen, Arunabha (Thesis advisor) / Kempf, Karl G. (Thesis advisor) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Contemporary online social platforms present individuals with social signals in the form of news feeds on their peers' activities. On networks such as Facebook and Quora, the network operator decides how that information is shown to an individual. Then the user, with her own interests and resource constraints, selectively acts on a subset of the items presented to her. The network operator, in turn, shows that activity to a selection of peers, thus creating a behavioral loop. That mechanism of interaction and information flow raises some very interesting questions, such as: can the network operator design social signals to promote a particular activity, like sustainability or public health care awareness, or to promote a specific product? The focus of my thesis is to answer that question. In this thesis, I develop a framework to personalize social signals for users to guide their activities on an online platform. As a result, we gradually nudge the activity distribution on the platform from the initial distribution p to the target distribution q. My work is particularly applicable to guiding collaborations, guiding collective actions, and online advertising. In particular, I first propose a probabilistic model of how users behave and how information flows on the platform. The main part of this thesis after that discusses the Influence Individuals through Social Signals (IISS) framework. IISS consists of four main components: (1) Learner: it learns users' interests and characteristics from their historical activities using a Bayesian model; (2) Calculator: it uses a gradient descent method to compute the intermediate activity distributions; (3) Selector: it selects users who can be influenced to adopt or drop specific activities; (4) Designer: it personalizes social signals for each user. I evaluate the performance of the IISS framework by simulation on several network topologies such as preferential attachment, small world, and random. I show that the framework gradually nudges users' activities to approach the target distribution. I use both simulation and mathematical methods to analyze convergence properties such as how fast and how closely we can approach the target distribution. When the number of activities is 3, I show that for about 45% of target distributions we can achieve a KL-divergence as low as 0.05, but for some other distributions the KL-divergence can be as large as 0.5.
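The sketch below illustrates the Calculator idea in isolation: computing a sequence of intermediate activity distributions that nudge an initial distribution p toward a target q by gradient descent on the KL-divergence. The step size, iteration count, and example distributions are assumptions; the full IISS framework (Learner, Selector, Designer) is not reproduced here.

```python
# Minimal sketch of nudging an activity distribution p toward a target q
# by gradient descent on KL(current || q). Step size and iteration count
# are illustrative assumptions.
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def intermediate_distributions(p, q, steps=10, lr=0.5):
    # Optimize unnormalized logits so the softmax stays a valid distribution.
    logits = np.log(np.asarray(p, dtype=float))
    path = []
    for _ in range(steps):
        current = np.exp(logits) / np.exp(logits).sum()
        # Gradient of KL(current || q) with respect to the logits.
        grad = current * (np.log(current / q) - kl(current, q))
        logits = logits - lr * grad
        path.append(np.exp(logits) / np.exp(logits).sum())
    return path

p = np.array([0.7, 0.2, 0.1])   # initial activity distribution (3 activities)
q = np.array([0.3, 0.4, 0.3])   # target distribution
for dist in intermediate_distributions(p, q):
    print(np.round(dist, 3), "KL to target:", round(kl(dist, q), 4))
```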
Contributors: Le, Tien D (Author) / Sundaram, Hari (Thesis advisor) / Davulcu, Hasan (Thesis advisor) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

This thesis explores a wide array of topics related to the protein folding problem, ranging from the folding mechanism, ab initio structure prediction, and protein design to the mechanism of protein functional evolution, using multi-scale approaches. To investigate the role of native topology in the folding mechanism, the native topology is dissected into non-local and local contacts. The number of non-local contacts and the non-local contact order are both negatively correlated with folding rates, suggesting that the non-local contacts dominate the barrier-crossing process. However, local contact orders show positive correlation with folding rates, indicating the role of a diffusive search in the denatured basin. Additionally, the folding rate distribution of the E. coli and yeast proteomes is predicted from native topology. The distribution is fitted well by a diffusion-drift population model and is also directly compared with experimentally measured half-lives. The results indicate that proteome folding kinetics is limited by protein half-life. The crucial role of local contacts in protein folding is further explored by simulations of WW domains using the Zipping and Assembly Method. The correct formation of the N-terminal β-turn turns out to be important for the folding of WW domains. A classification model based on the contact probabilities of five critical local contacts is constructed to predict the foldability of WW domains with 81% accuracy. By introducing mutations to stabilize those critical local contacts, a new protein design approach is developed to re-design the unfoldable WW domains and make them foldable. After folding, proteins exhibit inherent conformational dynamics in order to be functional. Using molecular dynamics simulations in conjunction with Perturbation Response Scanning, it is demonstrated that the divergence of functions can occur through the modification of conformational dynamics within an existing fold for β-lactamases and GFP-like proteins: i) the modern TEM-1 lactamase shows a comparatively rigid active-site region, likely reflecting adaptation for efficient degradation of a specific substrate, while the resurrected ancient lactamases indicate enhanced active-site flexibility, which likely allows for the binding and subsequent degradation of different antibiotic molecules; ii) the chromophore and attached peptides of the photoconversion-competent GFP-like protein exhibit higher flexibility than those of the photoconversion-incompetent one, consistent with the evolution of photoconversion capacity.
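As a small illustration of the topology measures discussed above, the sketch below splits a residue contact list into local and non-local contacts by sequence separation and computes a contact order for each class. The separation cutoff and the contact list are hypothetical choices for illustration, not the definitions used in the thesis.

```python
# Illustrative sketch: split native contacts into local and non-local by
# sequence separation, and compute a contact order (average sequence
# separation normalized by chain length) for each class. The 12-residue
# cutoff and the contact list are assumptions for illustration.
def contact_order(contacts, chain_length):
    if not contacts:
        return 0.0
    mean_separation = sum(abs(i - j) for i, j in contacts) / len(contacts)
    return mean_separation / chain_length

def split_contacts(contacts, cutoff=12):
    local = [(i, j) for i, j in contacts if abs(i - j) < cutoff]
    non_local = [(i, j) for i, j in contacts if abs(i - j) >= cutoff]
    return local, non_local

# Hypothetical contact list (residue index pairs) for a 60-residue protein.
contacts = [(3, 7), (10, 14), (5, 40), (12, 55), (20, 48), (30, 36)]
local, non_local = split_contacts(contacts)
print("local contact order:", contact_order(local, 60))
print("non-local contact order:", contact_order(non_local, 60))
print("number of non-local contacts:", len(non_local))
```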
Contributors: Zou, Taisong (Author) / Ozkan, Sefika B (Thesis advisor) / Thorpe, Michael F (Committee member) / Woodbury, Neal W (Committee member) / Vaiana, Sara M (Committee member) / Ghirlanda, Giovanna (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Most data cleaning systems aim to go from a given deterministic dirty database to another deterministic but clean database. Such an enterprise presupposes that it is in fact possible for the cleaning process to uniquely recover the clean version of each dirty data tuple. This is not possible in many cases, where the most a cleaning system can do is to generate a (hopefully small) set of clean candidates for each dirty tuple. When the cleaning system is required to output a deterministic database, it is forced to pick one clean candidate (say the "most likely" candidate) per tuple. Such an approach can lead to loss of information. For example, consider a situation where there are three equally likely clean candidates for a dirty tuple: picking just one of them discards two equally plausible alternatives. An appealing alternative that avoids such an information loss is to abandon the requirement that the output database be deterministic. In other words, even though the input (dirty) database is deterministic, I allow the reconstructed database to be probabilistic. Although such an approach does avoid the information loss, it also brings forth several challenges. For example, how many alternatives should be kept per tuple in the reconstructed database? Maintaining too many alternatives increases the size of the reconstructed database, and hence the query processing time. Second, while processing queries on the probabilistic database may well increase recall, how would it affect the precision of query processing? In this thesis, I investigate these questions. My investigation is done in the context of a data cleaning system called BayesWipe that has the capability of producing multiple clean candidates for each dirty tuple, along with the probability that each is the correct cleaned version. I represent these alternatives as tuples in a tuple-disjoint probabilistic database, and use the Mystiq system to process queries on it. This probabilistic reconstruction (called BayesWipe-PDB) is compared to a deterministic reconstruction (called BayesWipe-DET), where the most likely clean candidate for each tuple is chosen and the rest of the alternatives are discarded.
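The sketch below contrasts the two reconstructions in miniature: keeping every clean candidate with its probability (a tuple-disjoint probabilistic table) versus keeping only the most likely candidate per dirty tuple. The example tuples and probabilities are invented; BayesWipe and Mystiq themselves are not reproduced.

```python
# Minimal sketch of a probabilistic vs. deterministic reconstruction of
# cleaned tuples. The candidate rows and probabilities are hypothetical.
candidates = {
    "t1": [({"make": "Honda", "model": "Civic"}, 0.50),
           ({"make": "Honda", "model": "Civik"}, 0.25),
           ({"make": "Hyundai", "model": "Civic"}, 0.25)],
    "t2": [({"make": "Toyota", "model": "Camry"}, 0.90),
           ({"make": "Toyota", "model": "Corolla"}, 0.10)],
}

# Probabilistic reconstruction: every alternative survives, tagged with
# its tuple id so alternatives of the same dirty tuple stay disjoint.
pdb = [(tid, row, p) for tid, alts in candidates.items() for row, p in alts]

# Deterministic reconstruction: keep only the most likely alternative.
det = {tid: max(alts, key=lambda rp: rp[1])[0] for tid, alts in candidates.items()}

# A query over the probabilistic table sums probabilities per tuple id,
# so less likely but correct answers are not lost outright.
def prob_of_match(pdb, predicate):
    totals = {}
    for tid, row, p in pdb:
        if predicate(row):
            totals[tid] = totals.get(tid, 0.0) + p
    return totals

print(prob_of_match(pdb, lambda r: r["model"] == "Civic"))          # {'t1': 0.75}
print([tid for tid, row in det.items() if row["model"] == "Civic"])  # ['t1']
```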
Contributors: Rihan, Preet Inder Singh (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

As the size and scope of valuable datasets have exploded across many industries and fields of research in recent years, an increasingly diverse audience has sought out effective tools for their large-scale data analytics needs. Over this period, machine learning researchers have also been very prolific in designing improved algorithms which are capable of finding the hidden structure within these datasets. As consumers of popular Big Data frameworks have sought to apply and benefit from these improved learning algorithms, the problems encountered with the frameworks have motivated a new generation of Big Data tools to address the shortcomings of the previous generation. One important example of this is the newer tools' improved performance on the large class of machine learning algorithms which are highly iterative in nature. In this thesis project, I set out to implement a low-rank matrix completion algorithm (as an example of a highly iterative algorithm) within a popular Big Data framework, and to evaluate its performance processing the Netflix Prize dataset. I begin by describing several approaches which I attempted, but which did not perform adequately. These include an implementation of the Singular Value Thresholding (SVT) algorithm within the Apache Mahout framework, which runs on top of the Apache Hadoop MapReduce engine. I then describe an approach which uses the Divide-Factor-Combine (DFC) algorithmic framework to parallelize the state-of-the-art low-rank completion algorithm Orthogonal Rank-One Matrix Pursuit (OR1MP) within the Apache Spark engine. I describe the results of a series of tests running this implementation with the Netflix dataset on clusters of various sizes, with various degrees of parallelism. For these experiments, I utilized the Amazon Elastic Compute Cloud (EC2) web service. In the final analysis, I conclude that the Spark DFC + OR1MP implementation does indeed produce competitive results, in both accuracy and performance. In particular, the Spark implementation performs nearly as well as the MATLAB implementation of OR1MP without any parallelism, and improves performance to a significant degree as the parallelism increases. In addition, the experience demonstrates how Spark's flexible programming model makes it straightforward to implement this parallel and iterative machine learning algorithm.
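For orientation, here is a simplified NumPy sketch of greedy rank-one matrix pursuit for matrix completion, in the spirit of OR1MP but without the Spark/DFC parallelization or the Netflix data; the synthetic matrix, the target rank, and the plain least-squares weight refit are illustrative simplifications.

```python
# Simplified sketch of greedy rank-one matrix pursuit for matrix
# completion. Data and parameters are synthetic and illustrative.
import numpy as np

def rank_one_pursuit(M, mask, rank=5):
    residual = np.where(mask, M, 0.0)
    bases, estimate = [], np.zeros_like(M)
    for _ in range(rank):
        # Top singular pair of the masked residual gives the next rank-one basis.
        u, s, vt = np.linalg.svd(residual, full_matrices=False)
        bases.append(np.outer(u[:, 0], vt[0, :]))
        # Refit all basis weights by least squares on the observed entries.
        A = np.stack([b[mask] for b in bases], axis=1)
        weights, *_ = np.linalg.lstsq(A, M[mask], rcond=None)
        estimate = sum(w * b for w, b in zip(weights, bases))
        residual = np.where(mask, M - estimate, 0.0)
    return estimate

rng = np.random.default_rng(0)
truth = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))   # rank-3 matrix
mask = rng.random(truth.shape) < 0.4                          # ~40% observed
completed = rank_one_pursuit(truth, mask, rank=3)
print("RMSE on held-out entries:",
      np.sqrt(np.mean((completed[~mask] - truth[~mask]) ** 2)))
```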
Contributors: Krouse, Brian (Author) / Ye, Jieping (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description

Human islet amyloid polypeptide (hIAPP), also known as amylin, is a 37-residue intrinsically disordered hormone involved in glucose regulation and gastric emptying. The aggregation of hIAPP into amyloid fibrils is believed to play a causal role in type 2 diabetes. To date, not much is known about the monomeric state of hIAPP or how it undergoes an irreversible transformation from disordered peptide to insoluble aggregate. IAPP contains a highly conserved disulfide bond that restricts hIAPP(1-8) into a short ring-like structure: N_loop. Removal or chemical reduction of N_loop not only prevents cell response upon binding to the CGRP receptor, but also alters the mass per length distribution of hIAPP fibers and the kinetics of fibril formation. The mechanism by which N_loop affects hIAPP aggregation is not yet understood, but is important for rationalizing kinetics and developing potential inhibitors. By measuring end-to-end contact formation rates, Vaiana et al. showed that N_loop induces collapsed states in IAPP monomers, implying attractive interactions between N_loop and other regions of the disordered polypeptide chain. We show that in addition to being involved in intra-protein interactions, the N_loop is involved in inter-protein interactions, which lead to the formation of extremely long and stable β-turn fibers. These non-amyloid fibers are present in the 10 μM concentration range, under the same solution conditions in which hIAPP forms amyloid fibers. We discuss the effect of peptide cyclization on both intra- and inter-protein interactions, and its possible implications for aggregation. Our findings indicate a potential role of N_loop-N_loop interactions in hIAPP aggregation, which has not previously been explored. Though our findings suggest that N_loop plays an important role in the pathway of amyloid formation, other naturally occurring IAPP variants that contain this structural feature are incapable of forming amyloids. For example, hIAPP readily forms amyloid fibrils in vitro, whereas the rat variant (rIAPP), differing by six amino acids, does not. In addition to being highly soluble, rIAPP is an effective inhibitor of hIAPP fibril formation. Both of these properties have been attributed to rIAPP's three proline residues: A25P, S28P and S29P. Single proline mutants of hIAPP have also been shown to kinetically inhibit hIAPP fibril formation. Because of their intrinsic dihedral angle preferences, prolines are expected to affect conformational ensembles of intrinsically disordered proteins. The specific effect of proline substitutions on IAPP structure and dynamics has not yet been explored, as the detection of such properties is experimentally challenging due to the low molecular weight, fast reconfiguration times, and very low solubility of IAPP peptides. High-resolution techniques able to measure tertiary contact formations are needed to address this issue. We employ a nanosecond laser spectroscopy technique to measure end-to-end contact formation rates in IAPP mutants. We explore the proline substitutions in IAPP and quantify their effects in terms of intrinsic chain stiffness. We find that the three proline mutations found in rIAPP increase chain stiffness. Interestingly, we also find that residue R18 plays an important role in rIAPP's unique chain stiffness and, together with the proline residues, is a determinant for its non-amyloidogenic properties. We discuss the implications of our findings on the role of prolines in IDPs.
Contributors: Cope, Stephanie M (Author) / Vaiana, Sara M (Thesis advisor) / Ghirlanda, Giovanna (Committee member) / Ros, Robert (Committee member) / Lindsay, Stuart M (Committee member) / Ozkan, Sefika B (Committee member) / Arizona State University (Publisher)
Created: 2013
Description

Telomerase is a unique reverse transcriptase that has evolved specifically to extend the single-stranded DNA at the 3' ends of chromosomes. To achieve this, telomerase uses a small section of its integral RNA subunit (TR) to reiteratively copy a short, canonically 6-nt, sequence in a processive manner, using a complex and currently poorly understood mechanism of template translocation to stop nucleotide addition, regenerate its template, and then synthesize a new repeat. In this study, several novel interactions between the telomerase protein and RNA components, along with the DNA substrate, are identified and characterized which come together to allow active telomerase repeat addition. First, this study shows that the sequence of the RNA/DNA duplex holds a unique, single-nucleotide signal which pauses DNA synthesis at the end of the canonical template sequence. Further characterization of this sequence-dependent pause signal reveals that the template sequence alone can produce telomerase products with the characteristic 6-nt pattern, but also works cooperatively with another RNA structural element for proper template boundary definition. Finally, mutational analysis is used on several regions of the protein and RNA components of telomerase to identify crucial determinants of telomerase assembly and processive repeat synthesis. Together, these results shed new light on how telomerase coordinates its complex catalytic cycle.
Contributors: Brown, Andrew F (Author) / Chen, Julian J. L. (Thesis advisor) / Jones, Anne (Committee member) / Ghirlanda, Giovanna (Committee member) / Arizona State University (Publisher)
Created: 2014