This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations and theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description
The increasing popularity of Twitter renders improved trustworthiness and relevance assessment of tweets much more important for search. However, given the limitations on the size of tweets, it is hard to extract measures for ranking from the tweet's content alone. I propose RAProp, a method of ranking tweets by generating a reputation score for each tweet that is based not just on content, but also on additional information from the Twitter ecosystem that consists of users, tweets, and the web pages that tweets link to. This information is obtained by modeling the Twitter ecosystem as a three-layer graph. The reputation score is used to power two novel methods of ranking tweets by propagating the reputation over an agreement graph based on tweets' content similarity. Additionally, I show how the agreement graph helps counter tweet spam. An evaluation of my method on 16 million tweets from the TREC 2011 Microblog Dataset shows that it doubles the precision over baseline Twitter Search and achieves higher precision than the current state-of-the-art method. I present a detailed internal empirical evaluation of RAProp in comparison to several alternative approaches I propose, as well as an external evaluation in comparison to the current state-of-the-art method.
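A minimal sketch of the propagation step, assuming a precomputed agreement matrix and seed reputation scores (the function name, damping factor, and normalization are illustrative assumptions, not the thesis's actual RAProp implementation):

```python
import numpy as np

def propagate_reputation(agreement, seed_scores, damping=0.85, iters=50):
    """Propagate seed reputation scores over a tweet-tweet agreement graph.

    agreement: (n, n) nonnegative content-similarity weights between tweets.
    seed_scores: (n,) initial reputation from the user/tweet/web-page layers.
    Tweets that high-reputation tweets agree with accumulate reputation.
    """
    # Row-normalize so each tweet distributes its reputation to agreeing tweets.
    row_sums = agreement.sum(axis=1, keepdims=True)
    P = np.divide(agreement, row_sums,
                  out=np.zeros_like(agreement), where=row_sums > 0)
    base = seed_scores / seed_scores.sum()
    r = base.copy()
    for _ in range(iters):
        r = (1 - damping) * base + damping * (P.T @ r)
    return r  # rank tweets by descending score

# Toy example: tweet 2 agrees strongly with high-reputation tweet 0.
agreement = np.array([[0.0, 0.2, 0.9],
                      [0.2, 0.0, 0.1],
                      [0.9, 0.1, 0.0]])
seed = np.array([0.8, 0.1, 0.1])
print(propagate_reputation(agreement, seed))
```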
Contributors: Ravikumar, Srijith (Author) / Kambhampati, Subbarao (Thesis advisor) / Davulcu, Hasan (Committee member) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Automating aspects of biocuration through biomedical information extraction could significantly impact biomedical research by enabling greater biocuration throughput and improving the feasibility of a wider scope. An important step in biomedical information extraction systems is named entity recognition (NER), where mentions of entities such as proteins and diseases are located within natural-language text and their semantic type is determined. This step is critical for later tasks in an information extraction pipeline, including normalization and relationship extraction. BANNER is a benchmark biomedical NER system using linear-chain conditional random fields and the rich feature set approach. A case study with BANNER locating genes and proteins in biomedical literature is described. The first corpus for disease NER adequate for use as training data is introduced, and employed in a case study of disease NER. The first corpus locating adverse drug reactions (ADRs) in user posts to a health-related social website is also described, and a system to locate and identify ADRs in social media text is created and evaluated. The rich feature set approach to creating NER feature sets is argued to be subject to diminishing returns, implying that additional improvements may require more sophisticated methods for creating the feature set. This motivates the first application of multivariate feature selection with filters and false discovery rate analysis to biomedical NER, resulting in a feature set at least 3 orders of magnitude smaller than the set created by the rich feature set approach. Finally, two novel approaches to NER by modeling the semantics of token sequences are introduced. The first method focuses on the sequence content by using language models to determine whether a sequence resembles entries in a lexicon of entity names or text from an unlabeled corpus more closely. The second method models the distributional semantics of token sequences, determining the similarity between a potential mention and the token sequences from the training data by analyzing the contexts where each sequence appears in a large unlabeled corpus. The second method is shown to improve the performance of BANNER on multiple data sets.
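As a hedged illustration of the rich-feature-set approach to linear-chain CRF NER (the feature names, the tiny example sentence, and the `sklearn-crfsuite` training call are assumptions for this sketch; BANNER itself is a separate Java system):

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(tokens, i):
    """Rich feature set for one token: surface form, shape, affixes, and context."""
    w = tokens[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),
        "has_digit": any(c.isdigit() for c in w),
        "prefix3": w[:3],
        "suffix3": w[-3:],
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# Tiny synthetic corpus; labels use the BIO scheme to mark entity mentions.
sentences = [["BRCA1", "mutations", "increase", "breast", "cancer", "risk", "."]]
labels = [["B-GENE", "O", "O", "B-DISEASE", "I-DISEASE", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, labels)     # trains a linear-chain CRF on the feature sequences
print(crf.predict(X))  # predicted BIO tags per token
```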
Contributors: Leaman, James Robert (Author) / Gonzalez, Graciela (Thesis advisor) / Baral, Chitta (Thesis advisor) / Cohen, Kevin B (Committee member) / Liu, Huan (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Data mining is increasing in importance in solving a variety of industry problems. Our initiative involves the estimation of resource requirements by skill set for future projects by mining and analyzing actual resource consumption data from past projects in the semiconductor industry. To achieve this goal we face difficulties such as consumption data stored in inconsistent formats and insufficient data about project attributes with which to interpret the consumption data. Our first goal is to clean the historical data and organize it into meaningful structures for analysis. Once the preprocessing is completed, data mining techniques such as clustering are applied to find projects that involve resources of similar skill sets and that have similar complexity and size. This results in "resource utilization templates" for groups of related projects from a resource consumption perspective. Project characteristics that generate this diversity in headcounts and skill sets are then identified. These characteristics are not currently contained in the database and are elicited from the managers of historical projects. This represents an opportunity to improve the usefulness of the data collection system in the future. The ultimate goal is to match product technical features with the resource requirements of past projects as a model to forecast resource requirements by skill set for future projects. The forecasting model is developed using linear regression with cross-validation on the training data, as past project executions are relatively few in number. Acceptable levels of forecast accuracy are achieved relative to human experts' results, and the tool is applied to forecast resource demand for several future projects.
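A small sketch of the two-step pipeline the abstract describes: clustering past projects into resource-utilization templates, then regressing consumption on project features with leave-one-out cross-validation (all data and parameter values below are synthetic assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic stand-in data: rows are past projects. X_feat holds elicited product
# technical features; Y_hours holds hours consumed per skill set.
rng = np.random.default_rng(0)
X_feat = rng.random((30, 4))
Y_hours = X_feat @ rng.random((4, 3)) * 1000 + rng.normal(0, 50, (30, 3))

# Step 1: cluster past projects into "resource utilization templates".
templates = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Y_hours)
print("projects per template:", np.bincount(templates))

# Step 2: linear regression with leave-one-out cross-validation, appropriate
# when past project executions are few in number.
scores = cross_val_score(LinearRegression(), X_feat, Y_hours[:, 0],
                         cv=LeaveOneOut(), scoring="neg_mean_absolute_error")
print(f"LOO mean absolute error, skill set 0: {-scores.mean():.1f} hours")

# Forecast a future project's demand by skill set from its technical features.
model = LinearRegression().fit(X_feat, Y_hours)
print(model.predict(rng.random((1, 4))))
```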
Contributors: Bhattacharya, Indrani (Author) / Sen, Arunabha (Thesis advisor) / Kempf, Karl G. (Thesis advisor) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
This work seeks to develop a practical solution for short-range ultrasonic communications and produce an integrated array of acoustic transmitters on a flexible substrate. This is done using flexible thin-film transistors (TFTs) and microelectromechanical systems (MEMS). The goal is to develop a flexible system capable of communicating in the ultrasonic frequency range at a distance of 10 - 100 meters. This requires a great deal of innovation on the part of the FDC team developing the TFT driving circuitry and the MEMS team adapting the technology for fabrication on a flexible substrate. The technologies required for this research are independently developed. TFT development is driven primarily by research into flexible displays. MEMS development is driven by research in biosensors and microactuators. This project involves the integration of TFT flexible circuit capabilities with MEMS microactuators in the novel area of flexible acoustic transmitter arrays. This thesis focuses on the design, testing, and analysis of the circuit components required for this project.
Contributors: Daugherty, Robin (Author) / Allee, David R. (Thesis advisor) / Chae, Junseok (Thesis advisor) / Aberle, James T (Committee member) / Vasileska, Dragica (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Contemporary online social platforms present individuals with social signals in the form of news feeds of their peers' activities. On networks such as Facebook and Quora, the network operator decides how that information is shown to an individual. The user, with her own interests and resource constraints, then selectively acts on a subset of the items presented to her. The network operator in turn shows that activity to a selection of peers, thus creating a behavioral loop. This mechanism of interaction and information flow raises an interesting question: can the network operator design social signals to promote a particular activity, such as sustainability or public health care awareness, or to promote a specific product? The focus of my thesis is to answer that question. In this thesis, I develop a framework to personalize social signals for users to guide their activities on an online platform. As a result, we gradually nudge the activity distribution on the platform from the initial distribution p to the target distribution q. My work is particularly applicable to guiding collaborations, guiding collective actions, and online advertising. In particular, I first propose a probabilistic model of how users behave and how information flows on the platform. The main part of this thesis then discusses the Influence Individuals through Social Signals (IISS) framework. IISS consists of four main components: (1) Learner: it learns users' interests and characteristics from their historical activities using a Bayesian model; (2) Calculator: it uses gradient descent to compute the intermediate activity distributions; (3) Selector: it selects users who can be influenced to adopt or drop specific activities; (4) Designer: it personalizes social signals for each user. I evaluate the performance of the IISS framework by simulation on several network topologies, such as preferential attachment, small world, and random. I show that the framework gradually nudges users' activities toward the target distribution. I use both simulation and mathematical analysis to study convergence properties, such as how fast and how closely we can approach the target distribution. When the number of activities is 3, I show that for about 45% of target distributions we can achieve a KL-divergence as low as 0.05, but for some other distributions the KL-divergence can be as large as 0.5.
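A toy sketch of the nudging objective, assuming discrete activity distributions (the function names and the simple mixture update stand in for the Calculator's gradient-descent step; they are not the IISS implementation):

```python
import numpy as np

def kl(p, q):
    """KL-divergence D(p || q) between discrete activity distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def nudge(p, q, step=0.05, iters=50):
    """Move the activity distribution p gradually toward the target q.

    The mixture update below is a toy stand-in for the Calculator component,
    which computes intermediate distributions by gradient descent.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    for _ in range(iters):
        p = (1 - step) * p + step * q
        p /= p.sum()  # keep p a valid distribution
    return p

p0, q = [0.7, 0.2, 0.1], [0.2, 0.3, 0.5]   # three activities, as in the evaluation
print(kl(p0, q), kl(nudge(p0, q), q))      # divergence shrinks as we nudge
```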
Contributors: Le, Tien D (Author) / Sundaram, Hari (Thesis advisor) / Davulcu, Hasan (Thesis advisor) / Liu, Huan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Most data cleaning systems aim to go from a given deterministic dirty database to another deterministic but clean database. Such an enterprise presupposes that it is in fact possible for the cleaning process to uniquely recover the clean version of each dirty data tuple. This is not possible in many cases, where the most a cleaning system can do is to generate a (hopefully small) set of clean candidates for each dirty tuple. When the cleaning system is required to output a deterministic database, it is forced to pick one clean candidate (say the "most likely" candidate) per tuple. Such an approach can lead to loss of information. For example, consider a situation where there are three equally likely clean candidates for a dirty tuple: picking any one of them discards the two equally plausible alternatives. An appealing alternative that avoids such an information loss is to abandon the requirement that the output database be deterministic. In other words, even though the input (dirty) database is deterministic, I allow the reconstructed database to be probabilistic. Although such an approach does avoid the information loss, it also brings forth several challenges. For example, how many alternatives should be kept per tuple in the reconstructed database? Maintaining too many alternatives increases the size of the reconstructed database, and hence the query processing time. Second, while processing queries on the probabilistic database may well increase recall, how does it affect the precision of query processing? In this thesis, I investigate these questions. My investigation is done in the context of a data cleaning system called BayesWipe that has the capability of producing multiple clean candidates for each dirty tuple, along with the probability that each is the correct cleaned version. I represent these alternatives as tuples in a tuple-disjoint probabilistic database, and use the Mystiq system to process queries on it. This probabilistic reconstruction (called BayesWipe–PDB) is compared to a deterministic reconstruction (called BayesWipe–DET), where the most likely clean candidate for each tuple is chosen and the rest of the alternatives are discarded.
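A minimal sketch of querying a tuple-disjoint probabilistic relation, where each dirty tuple is replaced by mutually exclusive clean alternatives with probabilities (the toy relation and helper below are illustrative assumptions, not BayesWipe's or Mystiq's actual interfaces):

```python
# Toy tuple-disjoint probabilistic relation: each dirty tuple maps to mutually
# exclusive clean alternatives with probabilities.
relation = {
    "t1": [({"city": "Tempe", "state": "AZ"}, 0.6),
           ({"city": "Tampa", "state": "FL"}, 0.4)],
    "t2": [({"city": "Tempe", "state": "AZ"}, 0.9),
           ({"city": "Temple", "state": "TX"}, 0.1)],
}

def answer_probability(relation, predicate):
    """P(tuple satisfies predicate), per dirty tuple: sum the probabilities of
    its matching alternatives, since exactly one alternative is the true clean
    version (tuple-disjointness)."""
    return {tid: sum(p for alt, p in alts if predicate(alt))
            for tid, alts in relation.items()}

print(answer_probability(relation, lambda t: t["state"] == "AZ"))
# {'t1': 0.6, 't2': 0.9} -- probabilistic answers instead of hard picks
```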
Contributors: Rihan, Preet Inder Singh (Author) / Kambhampati, Subbarao (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
New technologies enable the exploration of space, high-fidelity defense systems, lightning-fast intercontinental communication systems, and medical technologies that extend and improve patients' lives. The basis for these technologies is high-reliability electronics devised to meet stringent design goals and to operate consistently for many years deployed in the field. An ongoing concern for engineers is the consequences of ionizing radiation exposure, specifically total dose effects. For many of these applications, there is a likelihood of exposure to radiation, which can result in device degradation and potentially failure. While total dose effects and the resulting degradation are a well-studied field, and methodologies to help mitigate degradation have been developed, there is still a need for simulation techniques to help designers understand total dose effects within their designs. To that end, the work presented here details simulation techniques to analyze as well as predict the total dose response of a circuit. In this dissertation the total dose effects are broken into two sub-categories: intra-device and inter-device effects in CMOS technology. Intra-device effects degrade the performance of both n-channel and p-channel transistors, while inter-device effects result in loss of device isolation. In this work, multiple case studies are presented for which total dose degradation is of concern. Through the simulation techniques, the individual device and circuit responses are modeled post-irradiation. The use of these simulation techniques by circuit designers allows predictive simulation of total dose effects, enabling focused design changes to increase the radiation tolerance of high-reliability electronics.
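For intuition about the intra-device effect, here is a first-order textbook estimate of the threshold-voltage shift caused by radiation-induced oxide-trapped charge; the dissertation relies on full device and circuit simulation, not this closed form, and all parameter values below are assumptions:

```python
# First-order, textbook estimate of the threshold-voltage shift caused by
# oxide-trapped charge: dVth = -q * Not / Cox, with Cox = eps_ox / t_ox.
# All parameter values are illustrative assumptions.
Q_E = 1.602e-19      # elementary charge, C
EPS_OX = 3.45e-13    # SiO2 permittivity, F/cm

def delta_vth(n_ot_cm2: float, t_ox_cm: float) -> float:
    """Threshold shift (V) for areal trapped-charge density n_ot at the interface."""
    c_ox = EPS_OX / t_ox_cm              # oxide capacitance per unit area, F/cm^2
    return -Q_E * n_ot_cm2 / c_ox

# Example: 1e11 trapped holes/cm^2 in a 10 nm (1e-6 cm) gate oxide.
print(f"dVth = {delta_vth(1e11, 1e-6) * 1e3:.1f} mV")   # about -46 mV
```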
Contributors: Schlenvogt, Garrett (Author) / Barnaby, Hugh (Thesis advisor) / Goodnick, Stephen (Committee member) / Vasileska, Dragica (Committee member) / Holbert, Keith E. (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Silicon solar cells with heterojunction carrier collectors based on the a-Si/c-Si heterojunction (SHJ) have the potential to overcome the limitations of conventional diffused-junction solar cells and become the next industry-standard manufacturing technology for solar cells. A hallmark of SHJ technology is ultrapassivated surfaces, with 750 mV open-circuit voltages (Voc) and 24.7% efficiency already demonstrated on a large-area solar cell. Despite very good results achieved in research and development, large-volume manufacturing of high-efficiency SHJ cells remains a fundamental challenge. The main objectives of this work were to develop an SHJ solar cell fabrication flow using industry-compatible tools and processes in a pilot production environment; to study the interactions between the fabrication steps used; to identify the minimum set of optimization parameters and characterization techniques needed to achieve 20% baseline efficiency; and to analyze the power losses in fabricated SHJ cells by numerical and analytical modeling. This manuscript presents a detailed description of an SHJ solar cell fabrication flow developed at the ASU Solar Power Laboratory (SPL) which allows large-area solar cells with >750 mV Voc. SHJ cells with 19.5% efficiency were fabricated on 135 μm thick wafers of 153 cm² area. The passivation quality of the (i)a-Si:H film, the bulk conductivity of the doped a-Si films, the bulk conductivity of the ITO, the transmission of the ITO, and the thickness of all films were identified as the minimum set of optimization parameters necessary to set up a baseline high-efficiency SHJ fabrication flow. The preparation of randomly textured wafers to minimize the concentration of surface impurities and to avoid epitaxial growth of a-Si films was found to be a key challenge in achieving repeatable and uniform passivation. This work resolved the issue by using a multi-step cleaning process based on sequential oxidation in nitric/acetic acids, Piranha, and RCA-B solutions. The developed process allowed state-of-the-art surface passivation with perfect repeatability and negligible reflectance losses. Two additional studies demonstrated 750 mV local Voc on a 50 μm thick SHJ solar cell and <1 cm/s effective surface recombination velocity on n-type wafers passivated by an a-Si/SiO2/SiNx stack.
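As a hedged aside, an effective surface recombination velocity like the one quoted above is conventionally extracted from lifetime measurements on a symmetrically passivated wafer via 1/τ_eff = 1/τ_bulk + 2S/W; the sketch below uses illustrative values, not the thesis's measurements:

```python
# Standard lifetime relation for a symmetrically passivated wafer:
#   1 / tau_eff = 1 / tau_bulk + 2 * S / W
# solved for the effective surface recombination velocity S. Values are
# illustrative assumptions, not measurements from the thesis.

def srv_from_lifetimes(tau_eff_s: float, tau_bulk_s: float, thickness_cm: float) -> float:
    """Effective surface recombination velocity S in cm/s."""
    return (1.0 / tau_eff_s - 1.0 / tau_bulk_s) * thickness_cm / 2.0

# Example: 135 um (0.0135 cm) wafer, 8 ms effective and 10 ms bulk lifetime.
print(f"S = {srv_from_lifetimes(8e-3, 10e-3, 0.0135):.2f} cm/s")  # ~0.17 cm/s
```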
Contributors: Herasimenka, Stanislau Yur'yevich (Author) / Honsberg, C. (Christiana B.) (Thesis advisor) / Bowden, Stuart G (Thesis advisor) / Tracy, Clarence (Committee member) / Vasileska, Dragica (Committee member) / Holman, Zachary (Committee member) / Sinton, Ron (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Zinc oxide (ZnO), a naturally n-type semiconductor, has been identified as a promising candidate to replace indium tin oxide (ITO) as the transparent electrode in solar cells because of its wide bandgap (3.37 eV), abundant source materials, and suitable refractive index (2.0 at 600 nm). Spray deposition is a convenient and low-cost technique for large-area, uniform deposition of semiconductor thin films. In particular, it provides an easy way to dope the film by simply adding the dopant precursor to the starting solution. To reduce the resistivity of undoped ZnO, much work has been done on doping ZnO with either group IIIA or group VIIA elements using spray pyrolysis. However, the resistivity is still too high to meet the requirement for a transparent conducting oxide (TCO). In the present work, a novel co-spray deposition technique is developed to bypass a fundamental limitation of the conventional spray deposition technique, i.e., the deposition of metal oxides from incompatible precursors in the starting solution. With this technique, ZnO films codoped with one cationic dopant (Al, Cr, or Fe) and an anionic dopant (F) have been successfully synthesized, even though F is incompatible with all three of these cationic dopants. Two starting solutions were prepared and co-sprayed through two separate spray heads. One solution contained only the F precursor, NH4F. The second solution contained the Zn precursor, Zn(O2CCH3)2, and one cationic dopant precursor: AlCl3, CrCl3, or FeCl3. The deposition was carried out at 500 °C on soda-lime glass in air. Compared to singly doped ZnO thin films, codoped ZnO samples showed better electrical properties. In addition, a minimum sheet resistance of 55.4 Ω/sq was obtained for Al and F codoped ZnO films after vacuum annealing at 400 °C, lower than that of ZnO singly doped with either Al or F. The transmittance of the Al and F codoped ZnO samples was above 90% in the visible range. This co-spray deposition technique provides a simple and cost-effective way to synthesize metal oxides with improved properties from incompatible precursors.
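As a hedged aside, a common way to compare TCO films is Haacke's figure of merit, T^10 / R_sheet, evaluated here with the reported 90% transmittance and 55.4 Ω/sq (the comparison metric is an editorial illustration; the thesis itself does not report this figure):

```python
# Haacke's figure of merit for a transparent conducting oxide: T^10 / R_sheet.
# Evaluated with the reported values (90% transmittance, 55.4 ohm/sq).

def haacke_fom(transmittance: float, r_sheet_ohm_sq: float) -> float:
    """Figure of merit in 1/ohm; higher means a better transparency/conductivity trade-off."""
    return transmittance ** 10 / r_sheet_ohm_sq

print(f"FOM = {haacke_fom(0.90, 55.4):.2e} / ohm")  # about 6.3e-3 per ohm
```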
Contributors: Zhou, Bin (Author) / Tao, Meng (Thesis advisor) / Goryll, Michael (Committee member) / Vasileska, Dragica (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
As the size and scope of valuable datasets have exploded across many industries and fields of research in recent years, an increasingly diverse audience has sought out effective tools for their large-scale data analytics needs. Over this period, machine learning researchers have also been very prolific in designing improved algorithms capable of finding the hidden structure within these datasets. As consumers of popular Big Data frameworks have sought to apply and benefit from these improved learning algorithms, the problems encountered with the frameworks have motivated a new generation of Big Data tools that address the shortcomings of the previous generation. One important example of this is the newer tools' improved performance on the large class of machine learning algorithms that are highly iterative in nature. In this thesis project, I set out to implement a low-rank matrix completion algorithm (as an example of a highly iterative algorithm) within a popular Big Data framework, and to evaluate its performance processing the Netflix Prize dataset. I begin by describing several approaches which I attempted but which did not perform adequately. These include an implementation of the Singular Value Thresholding (SVT) algorithm within the Apache Mahout framework, which runs on top of the Apache Hadoop MapReduce engine. I then describe an approach which uses the Divide-Factor-Combine (DFC) algorithmic framework to parallelize the state-of-the-art low-rank completion algorithm Orthogonal Rank-One Matrix Pursuit (OR1MP) within the Apache Spark engine. I describe the results of a series of tests running this implementation with the Netflix dataset on clusters of various sizes, with various degrees of parallelism. For these experiments, I utilized the Amazon Elastic Compute Cloud (EC2) web service. In the final analysis, I conclude that the Spark DFC + OR1MP implementation does indeed produce competitive results, in both accuracy and performance. In particular, the Spark implementation performs nearly as well as the MATLAB implementation of OR1MP without any parallelism, and improves performance to a significant degree as the parallelism increases. In addition, the experience demonstrates how Spark's flexible programming model makes it straightforward to implement this parallel and iterative machine learning algorithm.
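A simplified single-machine sketch of the greedy rank-one pursuit idea behind OR1MP (an illustrative numpy reconstruction under stated assumptions, not the thesis's Spark DFC implementation):

```python
import numpy as np

def or1mp(Y, mask, rank):
    """Greedy rank-one matrix pursuit: repeatedly add the top singular pair of
    the observed-entry residual as a basis, then refit all basis weights by
    least squares over the observed entries.

    Y: ratings matrix with zeros where unobserved; mask: boolean observed map.
    """
    residual = np.where(mask, Y, 0.0)
    bases, X = [], np.zeros_like(Y)
    for _ in range(rank):
        u, s, vt = np.linalg.svd(residual, full_matrices=False)
        bases.append(np.outer(u[:, 0], vt[0]))
        A = np.stack([b[mask] for b in bases], axis=1)
        w, *_ = np.linalg.lstsq(A, Y[mask], rcond=None)
        X = sum(wi * b for wi, b in zip(w, bases))
        residual = np.where(mask, Y - X, 0.0)
    return X  # completed low-rank estimate

rng = np.random.default_rng(0)
true = rng.random((20, 3)) @ rng.random((3, 15))   # rank-3 ground truth
mask = rng.random(true.shape) < 0.5                # ~50% of entries observed
X = or1mp(np.where(mask, true, 0.0), mask, rank=3)
print(np.abs(X[~mask] - true[~mask]).mean())       # error on held-out entries
```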
Contributors: Krouse, Brian (Author) / Ye, Jieping (Thesis advisor) / Liu, Huan (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2014