This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about each dissertation or thesis includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 85
Description
The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limited size of tweets, it is hard to extract ranking measures from a tweet's content alone. I propose a method, RAProp, that ranks tweets by generating a reputation score for each tweet based not just on content, but also on additional information from the Twitter ecosystem of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score powers two novel ranking methods that propagate reputation over an agreement graph built from tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation on 16 million tweets from the TREC 2011 Microblog Dataset shows that RAProp doubles the precision of baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches I propose, as well as an external evaluation against the current state of the art.
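As a rough illustration of the reputation-propagation idea described in this abstract, the following sketch builds an agreement graph from content similarity and propagates initial reputation scores over it. The similarity measure, threshold, and mixing weight are illustrative assumptions, not the actual RAProp formulation.

```python
# Minimal sketch of reputation propagation over an agreement graph, assuming
# TF-IDF cosine similarity as the agreement measure and a single one-hop
# propagation step. Thresholds and weights are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tweets = [
    "earthquake reported near the coast this morning",
    "major earthquake hits the coast, buildings damaged",
    "buy cheap watches now, click this link",
]
# Initial reputation scores, e.g. derived from user and web-page features.
reputation = np.array([0.6, 0.7, 0.1])

# Agreement graph: edge weight = content similarity above a threshold.
sim = cosine_similarity(TfidfVectorizer().fit_transform(tweets))
np.fill_diagonal(sim, 0.0)
agreement = np.where(sim > 0.2, sim, 0.0)

# Propagate reputation: each tweet gains support from agreeing tweets.
row_sums = agreement.sum(axis=1, keepdims=True)
norm = np.divide(agreement, row_sums, out=np.zeros_like(agreement), where=row_sums > 0)
propagated = 0.5 * reputation + 0.5 * norm @ reputation

print("Ranked tweet indices:", np.argsort(-propagated))
```

The isolated spam tweet receives no support from the agreement graph and drops in the ranking, which is the intuition behind using agreement to counter tweet spam.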
ContributorsRavikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created2013
Description
Automating aspects of biocuration through biomedical information extraction could significantly impact biomedical research by enabling greater biocuration throughput and improving the feasibility of a wider scope. An important step in biomedical information extraction systems is named entity recognition (NER), where mentions of entities such as proteins and diseases are located within natural-language text and their semantic type is determined. This step is critical for later tasks in an information extraction pipeline, including normalization and relationship extraction. BANNER is a benchmark biomedical NER system using linear-chain conditional random fields and the rich feature set approach. A case study with BANNER locating genes and proteins in biomedical literature is described. The first corpus for disease NER adequate for use as training data is introduced, and employed in a case study of disease NER. The first corpus locating adverse drug reactions (ADRs) in user posts to a health-related social website is also described, and a system to locate and identify ADRs in social media text is created and evaluated. The rich feature set approach to creating NER feature sets is argued to be subject to diminishing returns, implying that additional improvements may require more sophisticated methods for creating the feature set. This motivates the first application of multivariate feature selection with filters and false discovery rate analysis to biomedical NER, resulting in a feature set at least 3 orders of magnitude smaller than the set created by the rich feature set approach. Finally, two novel approaches to NER by modeling the semantics of token sequences are introduced. The first method focuses on the sequence content by using language models to determine whether a sequence resembles entries in a lexicon of entity names or text from an unlabeled corpus more closely. The second method models the distributional semantics of token sequences, determining the similarity between a potential mention and the token sequences from the training data by analyzing the contexts where each sequence appears in a large unlabeled corpus. The second method is shown to improve the performance of BANNER on multiple data sets.
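For readers unfamiliar with the linear-chain CRF, rich-feature-set approach that BANNER takes, the sketch below shows the general pattern using the sklearn-crfsuite package (an assumption chosen for illustration; BANNER itself is a Java system), with toy features and data.

```python
# Minimal linear-chain CRF sketch for biomedical NER in the spirit of the
# rich-feature-set approach. Uses sklearn-crfsuite, not the CRF library BANNER
# is built on; features and training data here are toy examples.
import sklearn_crfsuite

def token_features(sent, i):
    word = sent[i]
    feats = {
        "lower": word.lower(),
        "is_upper": word.isupper(),
        "is_title": word.istitle(),
        "has_digit": any(c.isdigit() for c in word),
        "prefix3": word[:3],
        "suffix3": word[-3:],
    }
    # Context features from neighboring tokens (part of the "rich" feature set).
    if i > 0:
        feats["prev_lower"] = sent[i - 1].lower()
    if i < len(sent) - 1:
        feats["next_lower"] = sent[i + 1].lower()
    return feats

sentences = [["BRCA1", "mutations", "increase", "breast", "cancer", "risk"]]
labels = [["B-GENE", "O", "O", "B-DISEASE", "I-DISEASE", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```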
ContributorsLeaman, James Robert (Author) / Gonzalez, Graciela (Thesis advisor) / Baral, Chitta (Thesis advisor) / Cohen, Kevin B (Committee member) / Liu, Huan (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created2013
Description
Integrated photonics requires high-gain optical materials in the telecom wavelength range for optical amplifiers and coherent light sources. Erbium (Er) containing materials are ideal candidates due to the 1.5 μm emission from Er³⁺ ions. However, the Er density in typical Er-doped materials is less than 1 × 10²⁰ cm⁻³, limiting the maximum optical gain to a few dB/cm, too small to be useful for integrated photonics applications. Er compounds could potentially solve this problem since they contain a much higher Er density. So far, existing Er compounds have suffered from short lifetimes and strong upconversion effects, mainly due to the poor quality of crystals produced by various methods of thin-film growth and deposition. This dissertation explores a new Er compound, erbium chloride silicate (ECS, Er₃(SiO₄)₂Cl), in nanowire form, which facilitates the growth of high-quality single crystals. Growth methods for such single-crystal ECS nanowires have been established, and various structural and optical characterizations have been carried out. The high crystal quality of the ECS material leads to a long lifetime of the first excited state of Er³⁺ ions, up to 1 ms, at an Er density higher than 10²² cm⁻³. This Er lifetime-density product was found to be the largest among all Er-containing materials. A unique integrating-sphere method was developed to measure the absorption cross section of ECS nanowires from 440 to 1580 nm. Pump-probe experiments demonstrated a 644 dB/cm signal enhancement from a single ECS wire. It was estimated that such a large signal enhancement can overcome the absorption to yield a net material gain, but it is not sufficient to compensate for waveguide propagation loss. In order to suppress the upconversion process in ECS, ytterbium (Yb) and yttrium (Y) ions were introduced as substituent ions for Er in the ECS crystal structure to reduce the Er density. While the addition of Yb ions was only partially successful, erbium yttrium chloride silicate (EYCS) with controllable Er density was synthesized successfully. EYCS with 30 at. % Er was found to be the best: it shows the strongest PL emission at 1.5 μm and can thus potentially be used as a high-gain material.
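A quick back-of-the-envelope check of the "few dB/cm" limit quoted above: the maximum material gain can be estimated as g = σN, with σ the Er³⁺ emission cross section and N the Er density. The cross-section value in the sketch is a typical literature order of magnitude, not a number from this dissertation.

```python
# Rough estimate of maximum material gain g = sigma * N for Er-containing
# materials, converted to dB/cm. The emission cross section is an assumed
# typical order of magnitude for Er3+ near 1.5 um.
import math

sigma_em = 5e-21                       # cm^2, assumed Er3+ emission cross section
dB_per_inverse_cm = 10.0 / math.log(10.0)   # ~4.34 dB per cm^-1 of gain

for N in (1e20, 1e22):                 # Er density: typical doped host vs. ECS
    g = sigma_em * N                   # cm^-1
    print(f"N = {N:.0e} cm^-3  ->  g ~ {g * dB_per_inverse_cm:.1f} dB/cm")
```

With N around 10²⁰ cm⁻³ this gives roughly 2 dB/cm, consistent with the "few dB/cm" ceiling for conventional Er-doped hosts, while the two-orders-of-magnitude higher Er density in ECS raises the ceiling proportionally.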
ContributorsYin, Leijun (Author) / Ning, Cun-Zheng (Thesis advisor) / Chamberlin, Ralph (Committee member) / Yu, Hongbin (Committee member) / Menéndez, Jose (Committee member) / Ponce, Fernando (Committee member) / Arizona State University (Publisher)
Created2013
Description
Data mining is increasing in importance for solving a variety of industry problems. Our initiative involves the estimation of resource requirements by skill set for future projects, by mining and analyzing actual resource consumption data from past projects in the semiconductor industry. To achieve this goal we face difficulties such as data with relevant consumption information stored in different formats, and insufficient data about project attributes with which to interpret the consumption data. Our first goal is to clean the historical data and organize it into meaningful structures for analysis. Once this preprocessing is completed, data mining techniques such as clustering are applied to find projects that involve resources of similar skill sets and that involve similar complexities and sizes. This results in "resource utilization templates" for groups of related projects from a resource consumption perspective. Project characteristics that generate this diversity in headcounts and skill sets are then identified. These characteristics are not currently contained in the database and were elicited from the managers of historical projects; this represents an opportunity to improve the usefulness of the data collection system in the future. The ultimate goal is to match product technical features with the resource requirements of past projects as a model to forecast resource requirements by skill set for future projects. The forecasting model is developed using linear regression with cross-validation of the training data, because past project executions are relatively few in number. Acceptable levels of forecast accuracy are achieved relative to human experts' results, and the tool is applied to forecast the resource demand of some future projects.
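A minimal sketch of the two modeling steps described above, clustering projects into resource-utilization templates and forecasting with cross-validated linear regression, is shown below; the feature names and synthetic data are illustrative assumptions, not the project's actual attributes.

```python
# Sketch: cluster past projects into "resource utilization templates", then
# forecast headcount with cross-validated linear regression on few samples.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_projects = 40
# Hypothetical project attributes: complexity score, product size, feature count.
X = rng.uniform(0, 10, size=(n_projects, 3))
# Hypothetical total headcount consumed, loosely tied to the attributes.
y = 5 + 2.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1.5, n_projects)

# Step 1: group projects with similar complexity/size profiles into templates.
templates = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Template sizes:", np.bincount(templates))

# Step 2: cross-validated linear regression, since past projects are few.
model = LinearRegression()
print("Cross-validated R^2:", cross_val_score(model, X, y, cv=5).round(2))

# Forecast resource demand for a hypothetical future project.
model.fit(X, y)
print("Forecast headcount:", model.predict([[7.0, 4.0, 2.0]]).round(1))
```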
ContributorsBhattacharya, Indrani (Author) / Sen, Arunabha (Thesis advisor) / Kempf, Karl G. (Thesis advisor) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created2013
Description
In this dissertation, remote plasma interactions with the surfaces of low-k interlayer dielectric (ILD), Cu, and Cu adhesion layers are investigated. The first part of the study focuses on the simultaneous plasma treatment of ILD and chemical mechanical polishing (CMP) Cu surfaces using N2/H2 plasma processes. H atoms and radicals in the plasma react with carbon groups, leading to carbon removal from the ILD films. Results indicate that an N2 plasma forms an amide-like layer on the surface, which apparently leads to reduced carbon abstraction during an H2 plasma process. In addition, FTIR spectra indicate the formation of hydroxyl (Si-OH) groups following plasma exposure. Processing at increased temperature (380 °C) reduces hydroxyl group formation compared to ambient-temperature processes, resulting in smaller changes of the dielectric constant. For CMP Cu surfaces, carbonate contamination was removed by an H2 plasma process at elevated temperature, while C-C and C-H contamination was removed by an N2 plasma process at elevated temperature. The second part of this study examined oxide stability and cleaning of Ru surfaces, as well as the thermal stability of Cu films on the Ru layers. The ~2 monolayer native Ru oxide was reduced after H-plasma processing. The thermal stability, or islanding, of the Cu film on the Ru substrate was characterized by in-situ XPS. After plasma cleaning of the Ru adhesion layer, the deposited Cu exhibited full coverage. In contrast, for Cu deposition on the native Ru oxide substrate, Cu islanding was detected and was described in terms of grain-boundary grooving and surface and interface energies. The third part investigated the thermal stability of 7 nm Ti, Pt, and Ru interfacial adhesion layers between a Cu film (10 nm) and a Ta barrier layer (4 nm). The barrier properties and interfacial stability were evaluated by Rutherford backscattering spectrometry (RBS). Atomic force microscopy (AFM) was used to measure the surfaces before and after annealing; all surfaces were relatively smooth, excluding islanding or de-wetting phenomena as a cause of the instability. RBS showed no discernible diffusion across the adhesion layer/Ta and Ta/Si interfaces, which provides a stable underlying layer. For a Ti interfacial layer, RBS indicates that during 400 °C annealing Ti interdiffuses through the Cu film and accumulates at the surface. For the Pt/Cu system, Pt interdiffusion is detected, though it is less pronounced than for Ti. Among the three adhesion layer candidates, Ru shows negligible diffusion into the Cu film, indicating thermal stability at 400 °C.
ContributorsLiu, Xin (Author) / Nemanich, Robert (Thesis advisor) / Chamberlin, Ralph (Committee member) / Chen, Tingyong (Committee member) / Smith, David (Committee member) / Ponce, Fernando (Committee member) / Arizona State University (Publisher)
Created2012
Description
Contemporary online social platforms present individuals with social signals in the form of news feeds about their peers' activities. On networks such as Facebook or Quora, the network operator decides how that information is shown to an individual. The user, with her own interests and resource constraints, then selectively acts on a subset of the items presented to her. The network operator in turn shows that activity to a selection of her peers, creating a behavioral loop. This mechanism of interaction and information flow raises some very interesting questions, such as: can the network operator design social signals to promote a particular activity, like sustainability or public health care awareness, or to promote a specific product? The focus of my thesis is to answer that question. In this thesis, I develop a framework to personalize social signals for users in order to guide their activities on an online platform. As a result, we gradually nudge the activity distribution on the platform from the initial distribution p toward the target distribution q. My work is particularly applicable to guiding collaborations, guiding collective actions, and online advertising. In particular, I first propose a probabilistic model of how users behave and how information flows on the platform. The main part of this thesis then discusses the Influence Individuals through Social Signals (IISS) framework. IISS consists of four main components: (1) Learner: it learns users' interests and characteristics from their historical activities using a Bayesian model; (2) Calculator: it uses a gradient descent method to compute the intermediate activity distributions; (3) Selector: it selects users who can be influenced to adopt or drop specific activities; (4) Designer: it personalizes social signals for each user. I evaluate the performance of the IISS framework by simulation on several network topologies, such as preferential attachment, small world, and random graphs. I show that the framework gradually nudges users' activities toward the target distribution. I use both simulation and mathematical analysis to study convergence properties, such as how fast and how closely we can approach the target distribution. When the number of activities is 3, I show that for about 45% of target distributions we can achieve a KL-divergence as low as 0.05, but for some other distributions the KL-divergence can be as large as 0.5.
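The sketch below illustrates the general idea of nudging an activity distribution p toward a target q while tracking KL-divergence. The simple mixing update is an illustrative stand-in, not the gradient descent procedure developed in the thesis.

```python
# Nudging an activity distribution toward a target and tracking KL-divergence.
# The mixing-rate update is an assumed stand-in for the IISS Calculator step.
import numpy as np

def kl_divergence(q, p, eps=1e-12):
    """KL(q || p) between two discrete distributions."""
    q, p = np.asarray(q, float) + eps, np.asarray(p, float) + eps
    return float(np.sum(q * np.log(q / p)))

p = np.array([0.70, 0.20, 0.10])   # current platform activity distribution
q = np.array([0.30, 0.40, 0.30])   # target distribution (3 activities)

step = 0.2   # assumed strength of the behavioral shift per round
for round_ in range(1, 11):
    p = (1 - step) * p + step * q   # intermediate activity distribution
    p /= p.sum()
    print(f"round {round_:2d}  KL(q||p) = {kl_divergence(q, p):.4f}")
```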
ContributorsLe, Tien D (Author) / Sundaram, Hari (Thesis advisor) / Davulcu, Hasan (Thesis advisor) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created2014
Description
In this dissertation, the interface chemistry and electronic structure of plasma-enhanced atomic layer deposited (PEALD) dielectrics on GaN are investigated with x-ray and ultraviolet photoemission spectroscopy (XPS and UPS). Three interrelated issues are discussed in this study: (1) PEALD dielectric growth process optimization, (2) the interface electronic structure of comparative PEALD dielectrics on GaN, and (3) the interface electronic structure of PEALD dielectrics on Ga- and N-face GaN. The first study was an in-depth case study of PEALD Al2O3 growth using dimethylaluminum isopropoxide, with a special focus on oxygen plasma effects. Saturated and self-limiting growth of Al2O3 films was obtained, with an enhanced growth rate within the PEALD temperature window (25-220 °C). The properties of Al2O3 deposited at various temperatures were characterized to better understand the relation between growth parameters and film properties. In the second study, the interface electronic structures of PEALD dielectrics on Ga-face GaN films were measured. Five promising dielectrics (Al2O3, HfO2, SiO2, La2O3, and ZnO) with a range of band gap energies were chosen. Prior to dielectric growth, a combined wet chemical and in-situ H2/N2 plasma clean was employed to remove carbon contamination and prepare the surface for dielectric deposition. The surface band bending and band offsets were measured by XPS and UPS for the dielectrics on GaN. The trends in the experimental band offsets on GaN were related to the dielectric band gap energies, and the experimental band offsets were near the values calculated from the charge neutrality level model. The third study focused on the effect of the polarization bound charge of Ga- and N-face GaN on the interface electronic structure. A surface pretreatment consisting of an NH4OH wet chemical clean and an in-situ NH3 plasma treatment was applied to remove carbon contamination, retain monolayer oxygen coverage, and potentially passivate N-vacancy-related defects. The surface band bending and polarization charge compensation of Ga- and N-face GaN were investigated, and the surface band bending and band offsets were determined for Al2O3, HfO2, and SiO2 on Ga- and N-face GaN. Different dielectric thicknesses and post-deposition processing were investigated to understand process-related defect formation and/or reduction.
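For context, the standard bookkeeping that connects a measured valence band offset to the conduction band offset is ΔEc = Eg(dielectric) − Eg(GaN) − ΔEv; the sketch below applies it with placeholder numbers that are assumptions for illustration, not values reported in this dissertation.

```python
# Relating a measured valence band offset (dEv) to the conduction band offset:
# dEc = Eg(dielectric) - Eg(GaN) - dEv.
# All numbers below are illustrative placeholders, not results from this work.
E_G_GAN = 3.4  # eV, room-temperature GaN band gap

dielectrics = {
    # name: (assumed band gap in eV, assumed measured valence band offset in eV)
    "Al2O3": (6.8, 1.8),
    "SiO2": (8.9, 2.5),
    "HfO2": (5.7, 1.0),
}

for name, (e_g, dEv) in dielectrics.items():
    dEc = e_g - E_G_GAN - dEv
    print(f"{name}: dEv = {dEv:.1f} eV, dEc = {dEc:.1f} eV")
```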
ContributorsYang, Jialing (Author) / Nemanich, Robert J (Thesis advisor) / Chen, Tingyong (Committee member) / Peng, Xihong (Committee member) / Ponce, Fernando (Committee member) / Smith, David (Committee member) / Arizona State University (Publisher)
Created2014
Description
Most data cleaning systems aim to go from a given deterministic dirty database to another deterministic but clean database. Such an enterprise presupposes that it is in fact possible for the cleaning process to uniquely recover the clean version of each dirty tuple. This is not possible in many cases, where the most a cleaning system can do is to generate a (hopefully small) set of clean candidates for each dirty tuple. When the cleaning system is required to output a deterministic database, it is forced to pick one clean candidate (say, the "most likely" candidate) per tuple. Such an approach can lead to loss of information. For example, consider a situation where there are three equally likely clean candidates for a dirty tuple: picking a single one discards two equally plausible alternatives. An appealing alternative that avoids such information loss is to abandon the requirement that the output database be deterministic. In other words, even though the input (dirty) database is deterministic, I allow the reconstructed database to be probabilistic. Although such an approach avoids the information loss, it also brings forth several challenges. For example, how many alternatives should be kept per tuple in the reconstructed database? Maintaining too many alternatives increases the size of the reconstructed database, and hence the query processing time. Second, while processing queries on the probabilistic database may well increase recall, how does it affect the precision of query processing? In this thesis, I investigate these questions. My investigation is done in the context of a data cleaning system called BayesWipe, which can produce multiple clean candidates for each dirty tuple, along with the probability that each is the correct cleaned version. I represent these alternatives as tuples in a tuple-disjoint probabilistic database and use the Mystiq system to process queries on it. This probabilistic reconstruction (called BayesWipe-PDB) is compared to a deterministic reconstruction (called BayesWipe-DET), in which the most likely clean candidate for each tuple is chosen and the rest of the alternatives are discarded.
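The sketch below shows, with toy data, what a tuple-disjoint probabilistic table looks like and how a selection query returns answer probabilities by summing over a tuple's mutually exclusive candidates; the actual thesis uses BayesWipe-produced candidates and the Mystiq query processor.

```python
# Toy tuple-disjoint probabilistic table: each dirty tuple is replaced by a set
# of mutually exclusive clean candidates with probabilities, and a selection
# query sums the probabilities of matching candidates per tuple.
from collections import defaultdict

# key = dirty tuple id, value = list of (clean_candidate, probability)
prob_table = {
    "t1": [({"city": "Tempe", "state": "AZ"}, 0.6),
           ({"city": "Tempe", "state": "TX"}, 0.4)],
    "t2": [({"city": "Phoenix", "state": "AZ"}, 1.0)],
}

def query_probability(table, predicate):
    """P(predicate holds) per dirty tuple, under tuple-disjoint semantics."""
    result = defaultdict(float)
    for tid, candidates in table.items():
        for candidate, p in candidates:
            if predicate(candidate):
                result[tid] += p
    return dict(result)

# Probability that each cleaned tuple has state = 'AZ'.
print(query_probability(prob_table, lambda t: t["state"] == "AZ"))
# {'t1': 0.6, 't2': 1.0}
```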
ContributorsRihan, Preet Inder Singh (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2013
Description
As the size and scope of valuable datasets have exploded across many industries and fields of research in recent years, an increasingly diverse audience has sought out effective tools for their large-scale data analytics needs. Over this period, machine learning researchers have also been very prolific in designing improved algorithms capable of finding the hidden structure within these datasets. As consumers of popular Big Data frameworks have sought to apply and benefit from these improved learning algorithms, the problems encountered with the frameworks have motivated a new generation of Big Data tools that address the shortcomings of the previous generation. One important example of this is the newer tools' improved performance on the large class of machine learning algorithms that are highly iterative in nature. In this thesis project, I set out to implement a low-rank matrix completion algorithm (as an example of a highly iterative algorithm) within a popular Big Data framework, and to evaluate its performance processing the Netflix Prize dataset. I begin by describing several approaches that I attempted but that did not perform adequately. These include an implementation of the Singular Value Thresholding (SVT) algorithm within the Apache Mahout framework, which runs on top of the Apache Hadoop MapReduce engine. I then describe an approach that uses the Divide-Factor-Combine (DFC) algorithmic framework to parallelize the state-of-the-art low-rank completion algorithm Orthogonal Rank-One Matrix Pursuit (OR1MP) within the Apache Spark engine. I describe the results of a series of tests running this implementation with the Netflix dataset on clusters of various sizes and with various degrees of parallelism. For these experiments, I utilized the Amazon Elastic Compute Cloud (EC2) web service. In the final analysis, I conclude that the Spark DFC + OR1MP implementation does indeed produce competitive results, in both accuracy and performance. In particular, the Spark implementation performs nearly as well as the MATLAB implementation of OR1MP without any parallelism, and its performance improves to a significant degree as the parallelism increases. In addition, the experience demonstrates how Spark's flexible programming model makes it straightforward to implement this parallel and iterative machine learning algorithm.
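To illustrate the Divide-Factor-Combine structure, the sketch below splits the columns of a synthetic matrix into blocks, completes each block independently (the step that Spark parallelizes in the thesis), and stitches the results back together. The per-block solver is a naive stand-in rather than OR1MP, and the combine step is simplified to plain concatenation.

```python
# Simplified sketch of the Divide-Factor-Combine (DFC) structure for low-rank
# matrix completion. The per-block solver is a naive fill-and-truncate stand-in,
# NOT OR1MP, and the combine step omits DFC's projection; in the thesis each
# block would be completed in parallel on Spark workers.
import numpy as np

def complete_block(block, mask, rank=2):
    """Stand-in base solver: mean-fill missing entries, then truncate to `rank`."""
    col_means = np.nanmean(np.where(mask, block, np.nan), axis=0)
    filled = np.where(mask, block, col_means)
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
# Synthetic low-rank "ratings" matrix with ~60% of entries observed.
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(M.shape) < 0.6

# Divide: split columns into blocks. Factor: complete each block independently.
# Combine: stitch the completed blocks back together.
blocks = np.array_split(np.arange(M.shape[1]), 4)
completed = np.hstack([complete_block(M[:, b], mask[:, b]) for b in blocks])

rel_err = np.linalg.norm(completed - M) / np.linalg.norm(M)
print(f"relative reconstruction error: {rel_err:.3f}")
```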
ContributorsKrouse, Brian (Author) / Ye, Jieping (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created2014
Description
In this dissertation, combined photo-induced and thermionic electron emission from low work function diamond films is studied through low-energy electron spectroscopy and other associated techniques. Nitrogen-doped, hydrogen-terminated diamond films prepared by microwave plasma chemical vapor deposition were the primary focus. The research is organized around four interrelated issues. (1) An in-depth study describes combined photo-induced and thermionic emission from nitrogen-doped diamond films on molybdenum substrates; the films were illuminated with visible-light photons, and the electron emission spectra were recorded as a function of temperature. The diamond films displayed significant emissivity with a low work function of ~1.5 eV. The results indicate that these diamond emitters could be applied in combined solar and thermal energy conversion. (2) The nitrogen-doped diamond was further investigated to understand the physical mechanism and material-related properties that enable the combined electron emission. Through analysis of the spectroscopy, optical absorbance, and photoelectron microscopy results from sample sets prepared in different configurations, it was deduced that the photo-induced electron generation involves both the ultra-nanocrystalline diamond and the interface between the diamond film and the metal substrate. (3) Based on results from the first two studies, possible photon-enhanced thermionic emission was examined from nitrogen-doped diamond films deposited on silicon substrates, which could provide the basis for a novel approach to concentrated solar energy conversion. A significant increase of emission intensity was observed at elevated temperatures, which was analyzed using computer-based modeling and a combination of different emission mechanisms. (4) In addition, the electronic structure of vanadium-oxide-terminated diamond surfaces was studied through in-situ photoemission spectroscopy. Thin layers of vanadium were deposited on oxygen-terminated diamond surfaces, which led to oxide formation. After thermal annealing, a negative electron affinity was found on boron-doped diamond, while a positive electron affinity was found on nitrogen-doped diamond. A model based on the barrier at the diamond-oxide interface was employed to analyze the results. Based on the results of this dissertation, applications of diamond-based devices for combined solar and thermal energy conversion are proposed.
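As an order-of-magnitude illustration of why a ~1.5 eV work function matters for thermal emission, the sketch below evaluates the Richardson-Dushman law with that work function; the ideal Richardson constant is an assumption, and the resulting numbers are not values from the dissertation.

```python
# Back-of-the-envelope thermionic emission estimate via the Richardson-Dushman
# law, J = A * T^2 * exp(-phi / (k*T)), using the ~1.5 eV work function reported
# above. The ideal free-electron Richardson constant is assumed; real diamond
# emitters deviate from it, so this is only an order-of-magnitude illustration.
import math

A = 120.0          # A/(cm^2 K^2), ideal Richardson constant (assumed)
K_B = 8.617e-5     # eV/K, Boltzmann constant
PHI = 1.5          # eV, work function reported for the N-doped diamond films

for T in (300, 600, 900):   # temperatures in kelvin
    J = A * T**2 * math.exp(-PHI / (K_B * T))
    print(f"T = {T:4d} K  ->  J ~ {J:.2e} A/cm^2")
```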
ContributorsSun, Tianyin (Author) / Nemanich, Robert (Thesis advisor) / Ponce, Fernando (Committee member) / Peng, Xihong (Committee member) / Spence, John (Committee member) / Treacy, Michael (Committee member) / Arizona State University (Publisher)
Created2013