This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations and theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses in the ASU Digital Repository, ASU Theses and Dissertations can also be found in the ASU Library Catalog.

Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.


Description
The overall goal of this research project was to assess the feasibility of investigating the effects of microgravity on mineralization systems in unit gravity environments. If these studies can be performed in unit gravity environments, such as on earth, such systems can offer markedly less costly and more concerted research efforts to study these vitally important systems. Expected outcomes from easily accessible test environments and more tractable studies include the development of more advanced and adaptive material systems, including biological systems, particularly as humans ponder exploration of deep space. The specific focus of the research was the design and development of a prototypical experimental test system that could preliminarily meet the challenging design specifications required of such test systems. Guided by a more unified theoretical foundation and building upon concept design and development heuristics, the feasibility of two experimental test systems was explored. Test System I was a rotating wall reactor experimental system that closely followed the specifications of a similar test system, Synthecon, designed by NASA contractors, and thus closely mimicked microgravity conditions of the space shuttle and station. The latter include terminal velocity conditions experienced by innate material systems as well as biological systems, including living tissue and humans, and can be extended to material test systems associated with mineralization processes. Test System II comprised a unique vertical column design that offered more easily controlled fluid mechanical test conditions over the much wider flow regime necessary to achieve terminal velocities under conditions free of natural convection, which are important in mineralization processes. Preliminary results indicate that Test System II offers distinct advantages in studying microgravity effects in test systems operating in unit gravity environments, particularly when investigating mineralization and related processes. Verification of Test System II was performed by validating microgravity effects on calcite mineralization processes reported earlier by others. Those studies were conducted on calcite mineralization in fixed-wing reduced-gravity aircraft, known as the 'vomit comet', where reduced-gravity conditions are available only for very short (~20 second) periods. Preliminary results indicate that test systems such as Test System II can be devised to assess microgravity conditions in unit gravity environments, such as earth. Furthermore, the preliminary data obtained on calcite formation suggest that strictly physicochemical mechanisms may be the dominant factors that control adaptation in materials processes, a theory first proposed by Liu et al. Thus the results of this study may also shed light on the problem of early osteoporosis in astronauts, a matter of long-term interest for deep space exploration.
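The "terminal velocity conditions" above refer to particles settling at constant speed once viscous drag balances buoyant weight. As a rough illustration only (not a calculation from the thesis), the sketch below evaluates the standard Stokes-law terminal velocity for a small calcite-like particle in water; the particle radius, densities, and viscosity are assumed values chosen for illustration.

```python
# Hedged sketch: Stokes-law terminal velocity for a small particle in a fluid.
# All numerical values are illustrative assumptions, not data from the thesis.

def stokes_terminal_velocity(radius_m, rho_particle, rho_fluid, mu_fluid, g=9.81):
    """Terminal settling velocity v_t = 2 r^2 (rho_p - rho_f) g / (9 mu)."""
    return 2.0 * radius_m**2 * (rho_particle - rho_fluid) * g / (9.0 * mu_fluid)

# Calcite-like particle (~2710 kg/m^3), 10 um radius, in room-temperature water.
v_t = stokes_terminal_velocity(radius_m=10e-6, rho_particle=2710.0,
                               rho_fluid=1000.0, mu_fluid=1.0e-3)
print(f"terminal velocity ~ {v_t*1e6:.1f} um/s")  # ~3.7e-4 m/s for these values
```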
Contributors: Seyedmadani, Kimia (Author) / Pizziconi, Vincent (Thesis advisor) / Towe, Bruce (Committee member) / Alford, Terry (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
As crystalline silicon solar cells continue to get thinner, the recombination of carriers at the surfaces of the cell plays an ever-important role in controlling the cell efficiency. One tool to minimize surface recombination is field effect passivation from the charges present in the thin films applied on the cell surfaces. The focus of this work is to understand the properties of charges present in SiNₓ films and then to develop a mechanism to manipulate the polarity of the charges to either negative or positive based on the end application. Specific silicon-nitrogen dangling bonds (·Si-N), known as K center defects, are the primary charge-trapping defects present in SiNₓ films. A custom-built corona charging tool was used to externally inject positive or negative charges into the SiNₓ film. Detailed capacitance-voltage (C-V) measurements taken on corona-charged SiNₓ samples confirmed the presence of a net positive or negative charge density, as high as +/- 8 × 10¹² cm⁻², present in the SiNₓ film. High-energy (~4.9 eV) UV radiation was used to control and neutralize the charges in the SiNₓ films. The electron spin resonance (ESR) technique was used to detect and quantify the density of neutral K⁰ defects, which are paramagnetically active. The density of the neutral K⁰ defects increased after UV treatment and decreased after high-temperature annealing and charging treatments. Etch-back C-V measurements on SiNₓ films showed that the K centers are spread throughout the bulk of the SiNₓ film and not just near the SiNₓ-Si interface. It was also shown that the negative injected charges in the SiNₓ film were stable and present even after 1 year under indoor room-temperature conditions. Lastly, a stack of SiO₂/SiNₓ dielectric layers applicable to standard commercial solar cells was developed using a low-temperature (< 400 °C) PECVD process. Excellent surface passivation on FZ and CZ Si substrates for both n- and p-type samples was achieved by manipulating and controlling the charge in the SiNₓ films.
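As a back-of-the-envelope illustration of how a net film charge of order 10¹² cm⁻² is inferred from C-V data, the sketch below applies the standard MOS relation N = C_ins·ΔV_FB/q; the film thickness, relative permittivity, and flatband-voltage shift are hypothetical values, not measurements from the thesis.

```python
# Hedged sketch: effective areal charge density from a flatband-voltage shift
# in a C-V measurement, via the standard MOS relation N = C_ins * dV_FB / q.
# All numbers below are hypothetical, not values from the thesis.

Q_E = 1.602e-19    # elementary charge, C
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def charge_density_cm2(delta_vfb_volts, t_ins_m, eps_r):
    """Areal charge density (cm^-2) from flatband shift and insulator C/area."""
    c_ins = eps_r * EPS0 / t_ins_m              # F/m^2
    n_per_m2 = c_ins * abs(delta_vfb_volts) / Q_E
    return n_per_m2 * 1e-4                      # m^-2 -> cm^-2

# Example: 20 nm nitride film (eps_r ~ 7) with a 5 V flatband shift.
print(f"{charge_density_cm2(5.0, 20e-9, 7.0):.2e} cm^-2")  # ~9.7e12 cm^-2
```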
Contributors: Sharma, Vivek (Author) / Bowden, Stuart (Thesis advisor) / Schroder, Dieter (Committee member) / Honsberg, Christiana (Committee member) / Roedel, Ronald (Committee member) / Alford, Terry (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Automating aspects of biocuration through biomedical information extraction could significantly impact biomedical research by enabling greater biocuration throughput and improving the feasibility of a wider scope. An important step in biomedical information extraction systems is named entity recognition (NER), where mentions of entities such as proteins and diseases are located within natural-language text and their semantic type is determined. This step is critical for later tasks in an information extraction pipeline, including normalization and relationship extraction. BANNER is a benchmark biomedical NER system using linear-chain conditional random fields and the rich feature set approach. A case study with BANNER locating genes and proteins in biomedical literature is described. The first corpus for disease NER adequate for use as training data is introduced, and employed in a case study of disease NER. The first corpus locating adverse drug reactions (ADRs) in user posts to a health-related social website is also described, and a system to locate and identify ADRs in social media text is created and evaluated. The rich feature set approach to creating NER feature sets is argued to be subject to diminishing returns, implying that additional improvements may require more sophisticated methods for creating the feature set. This motivates the first application of multivariate feature selection with filters and false discovery rate analysis to biomedical NER, resulting in a feature set at least 3 orders of magnitude smaller than the set created by the rich feature set approach. Finally, two novel approaches to NER by modeling the semantics of token sequences are introduced. The first method focuses on the sequence content by using language models to determine whether a sequence resembles entries in a lexicon of entity names or text from an unlabeled corpus more closely. The second method models the distributional semantics of token sequences, determining the similarity between a potential mention and the token sequences from the training data by analyzing the contexts where each sequence appears in a large unlabeled corpus. The second method is shown to improve the performance of BANNER on multiple data sets.
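To make the "linear-chain CRF with a rich feature set" approach concrete, here is a minimal sketch using the third-party sklearn-crfsuite package; it mirrors the idea only and is not BANNER's actual implementation (BANNER is a separate open-source system). The toy sentence, BIO labels, and feature choices are illustrative assumptions.

```python
# Hedged sketch: token-level features + linear-chain CRF for biomedical NER,
# in the spirit of the rich-feature-set approach; NOT BANNER's actual code.
# Requires: pip install sklearn-crfsuite
import sklearn_crfsuite

def token_features(tokens, i):
    """A tiny 'rich' feature set: surface form, shape, affixes, and context."""
    t = tokens[i]
    return {
        "lower": t.lower(),
        "is_upper": t.isupper(),
        "has_digit": any(c.isdigit() for c in t),
        "prefix3": t[:3],
        "suffix3": t[-3:],
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# Toy training data with BIO labels (illustrative, not a real corpus).
sentences = [["BRCA1", "mutations", "cause", "breast", "cancer", "."]]
labels = [["B-GENE", "O", "O", "B-DISEASE", "I-DISEASE", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X)[0])
```

A real system would train on thousands of annotated sentences and, as the abstract notes, could then prune this feature space aggressively with multivariate feature selection.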
Contributors: Leaman, James Robert (Author) / Gonzalez, Graciela (Thesis advisor) / Baral, Chitta (Thesis advisor) / Cohen, Kevin B (Committee member) / Liu, Huan (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Biological systems are complex in many dimensions as endless transportation and communication networks all function simultaneously. Our ability to intervene within both healthy and diseased systems is tied directly to our ability to understand and model core functionality. The progress in increasingly accurate and thorough high-throughput measurement technologies has provided a deluge of data from which we may attempt to infer a representation of the true genetic regulatory system. A gene regulatory network model, if accurate enough, may allow us to perform hypothesis testing in the form of computational experiments. Of great importance to modeling accuracy is the acknowledgment of biological contexts within the models -- i.e. recognizing the heterogeneous nature of the true biological system and the data it generates. This marriage of engineering, mathematics and computer science with systems biology creates a cycle of progress between computer simulation and lab experimentation, rapidly translating interventions and treatments for patients from the bench to the bedside. This dissertation will first discuss the landscape for modeling the biological system, explore the identification of targets for intervention in Boolean network models of biological interactions, and explore context specificity both in new graphical depictions of models embodying context-specific genomic regulation and in novel analysis approaches designed to reveal embedded contextual information. Overall, the dissertation will explore a spectrum of biological modeling with a goal towards therapeutic intervention, with both formal and informal notions of biological context, in such a way that will enable future work to have an even greater impact in terms of direct patient benefit on an individualized level.
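A minimal sketch of the kind of Boolean network model referred to above: genes are binary variables updated synchronously by logic rules, and iterating the update reveals the attractor the system settles into. The three-gene rules here are invented for illustration, not taken from the dissertation.

```python
# Hedged sketch: a synchronous Boolean network with invented update rules.
# Each gene is 0/1; step() maps a state tuple to the next state.

def step(state):
    a, b, c = state
    return (
        int(not c),        # A is repressed by C
        int(a and not c),  # B needs A and absence of C
        int(b),            # C follows B
    )

def attractor(state, max_steps=64):
    """Iterate until a state repeats; return the cycle (fixed point or limit cycle)."""
    seen, trajectory = {}, []
    for t in range(max_steps):
        if state in seen:
            return trajectory[seen[state]:]   # the repeating cycle
        seen[state] = t
        trajectory.append(state)
        state = step(state)
    return None

print(attractor((1, 0, 0)))  # a limit cycle of length 5 for these toy rules
```

Identifying intervention targets then amounts to asking which rule or state perturbations steer the network away from an undesirable attractor.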
Contributors: Verdicchio, Michael (Author) / Kim, Seungchan (Thesis advisor) / Baral, Chitta (Committee member) / Stolovitzky, Gustavo (Committee member) / Collofello, James (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Microwave dielectrics are widely used to make resonators and filters in telecommunication systems. The production of thin films with high dielectric constant and low loss could potentially enable a marked reduction in the size of devices and systems. However, studies of these materials in thin film form are very sparse. In this research, experiments were carried out on practical high-performance dielectrics, including ZrTiO₄-ZnNb₂O₆ (ZTZN) and Ba(Co,Zn)₁/₃Nb₂/₃O₃ (BCZN), with high dielectric constant and low loss tangent. Thin films were deposited by laser ablation on various substrates, with a systematic study of growth conditions such as substrate temperature, oxygen pressure, and annealing to optimize the film quality, and the compositional, microstructural, optical, and electric properties were characterized. The deposited ZTZN films were randomly oriented polycrystalline on Si substrates and textured on MgO substrates, with a tetragonal lattice change at elevated temperature. The BCZN films deposited on MgO substrates showed superior film quality relative to those on other substrates, growing epitaxially with an orientation of (001) // MgO (001) and (100) // MgO (100) when the substrate temperature was above 500 °C. In-situ annealing at the growth temperature in 200 mTorr oxygen pressure was found to enhance the quality of the films, reducing the peak width of the X-ray diffraction (XRD) rocking curve to 0.53° and the χmin of channeling Rutherford backscattering spectrometry (RBS) to 8.8% when grown at 800 °C. Atomic force microscopy (AFM) was used to study the topography and found a monotonic decrease in the surface roughness as the growth temperature increased. Optical absorption and transmission measurements were used to determine the energy bandgap and the refractive index, respectively. A low-frequency dielectric constant of 34 was measured using a planar interdigital measurement structure. The resistivity of the film is ~3×10¹⁰ Ω·cm at room temperature, with an activation energy for thermally activated current of 0.66 eV.
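To illustrate how an activation energy such as the 0.66 eV quoted above is typically extracted, the sketch below fits ln(ρ) against 1/kT assuming thermally activated conduction, ρ(T) = ρ₀·exp(Ea/kT). The temperature points and resistivities are synthetic, generated from an assumed Ea, not measured data from the thesis.

```python
# Hedged sketch: extracting an activation energy from resistivity-vs-temperature
# data, assuming thermally activated conduction rho(T) = rho0 * exp(Ea / kT).
# The "data" below are synthetic, generated from an assumed Ea = 0.66 eV.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

T = np.array([300.0, 325.0, 350.0, 375.0, 400.0])   # K
rho = 3e10 * np.exp(0.66 / (K_B * T)) / np.exp(0.66 / (K_B * 300.0))

# Linear fit of ln(rho) vs 1/(kT): the slope is the activation energy in eV.
slope, intercept = np.polyfit(1.0 / (K_B * T), np.log(rho), 1)
print(f"Ea ~ {slope:.2f} eV")  # recovers the assumed 0.66 eV
```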
Contributors: Li, You (Author) / Newman, Nathan (Thesis advisor) / Alford, Terry (Committee member) / Singh, Rakesh (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Currently, to interact with computer-based systems one needs to learn the specific interface language of that system. In most cases, interaction would be much easier if it could be done in natural language. For that, we will need a module which understands natural language and automatically translates it to the interface language of the system. The NL2KR (Natural Language to Knowledge Representation) v.1 system is a prototype of such a system. It is a learning-based system that learns new meanings of words in terms of lambda-calculus formulas, given an initial lexicon of some words and their meanings and a training corpus of sentences with their translations. As a part of this thesis, we take the prototype NL2KR v.1 system and enhance various components of it to make it usable for somewhat substantial and useful interface languages. We revamped the lexicon learning components, the Inverse-lambda and Generalization modules, and redesigned the lexicon learning algorithm which uses these components to learn new meanings of words. Similarly, we re-developed the system's inbuilt parser in Answer Set Programming (ASP) and also integrated an external parser with the system. Apart from this, we added new features such as various system configurations and a memory cache in the learning component of the NL2KR system. These enhancements helped in learning more meanings of words, boosted the performance of the system by reducing the computation time by a factor of 8, and improved the usability of the system. We evaluated the NL2KR system on the iRODS domain. iRODS is a rule-oriented data system, which helps in managing large sets of computer files using policies. This system provides a rule-oriented interface language whose syntactic structure is like any procedural programming language (e.g., C). However, direct translation of natural language (NL) to this interface language is difficult. So, for automatic translation of NL to this language, we define a simple intermediate Policy Declarative Language (IPDL) to represent the knowledge in the policies, which can then be directly translated to iRODS rules. We developed a corpus of 100 policy statements and manually translated them to the IPDL language. This corpus was then used for the evaluation of the NL2KR system, on which we performed 10-fold cross validation. Furthermore, using this corpus, we illustrate how different components of our NL2KR system work.
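As a toy illustration of the lambda-calculus meaning composition that such a lexicon supports, the sketch below composes invented word meanings by function application (beta reduction); the example words and logical forms are made up for illustration and do not come from NL2KR's actual lexicon.

```python
# Hedged sketch: word meanings as lambda-calculus terms, composed by function
# application (beta reduction), in the spirit of NL2KR's learned lexicon.
# The example words and logical forms are invented for illustration.

lexicon = {
    "texas": "texas",
    "john": "john",
    # "in" = \y. \x. in(x, y)
    "in": lambda y: lambda x: f"in({x}, {y})",
    # "is" = \p. \x. p(x)   (treated as identity on predicates here)
    "is": lambda p: lambda x: p(x),
}

# Compose "john is in texas" as ((is (in texas)) john).
in_texas = lexicon["in"](lexicon["texas"])          # \x. in(x, texas)
formula = lexicon["is"](in_texas)(lexicon["john"])
print(formula)  # -> in(john, texas)
```

Inverse lambda runs this process in the other direction: given the sentence meaning and the meaning of one part, it solves for the lambda term the other part must denote.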
Contributors: Kumbhare, Kanchan Ravishankar (Author) / Baral, Chitta (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Linear Temporal Logic (LTL) is gaining increasing popularity as a high-level specification language for robot motion planning due to its expressive power and the scalability of LTL control synthesis algorithms. This formalism, however, requires expert knowledge, which makes it inaccessible to non-expert users. This thesis introduces a graphical specification environment to create high-level motion plans to control robots in the field by converting a visual representation of the motion/task plan into an LTL specification. The visual interface is built on the Android tablet platform and provides functionality to create task plans through a set of well-defined gestures and on-screen controls. It uses the notion of waypoints to quickly and efficiently describe the motion plan and enables a variety of complex LTL specifications to be described succinctly and intuitively by the user without the need for knowledge and understanding of LTL. Thus, it opens avenues for its use by personnel in the military, warehouse management, and search and rescue missions. This thesis describes the construction of LTL specifications for various robot navigation scenarios using the visual interface developed, and leverages existing LTL-based motion planners to have a robot carry out the task plan.
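A toy illustration of the waypoint-to-LTL idea described above (not the thesis's actual encoding): visiting waypoints in order maps to nested "eventually" (F) operators, and avoiding a region maps to a global "always" (G) safety constraint. Waypoint and region names are hypothetical.

```python
# Hedged sketch: turning an ordered list of waypoints into an LTL formula
# string. Nested F ("eventually") enforces ordered visits; G ("always")
# enforces avoidance. This mirrors the idea, not the thesis's exact encoding.

def sequential_visit(waypoints):
    """F(w1 & F(w2 & ... F(wn))) -- visit the waypoints in the given order."""
    formula = waypoints[-1]
    for w in reversed(waypoints[:-1]):
        formula = f"{w} & F({formula})"
    return f"F({formula})"

def avoid(region):
    return f"G(!{region})"

spec = f"{sequential_visit(['w1', 'w2', 'w3'])} & {avoid('obstacle')}"
print(spec)  # F(w1 & F(w2 & F(w3))) & G(!obstacle)
```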
Contributors: Srinivas, Shashank (Author) / Fainekos, Georgios (Thesis advisor) / Baral, Chitta (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods originating from building propositional satisfiability solvers. The wide adoption of ASP has motivated several extensions to its modeling language in order to enhance expressivity, such as incorporating aggregates and interfaces with ontologies. Also, in order to overcome the grounding bottleneck of computation in ASP, there is increasing interest in integrating ASP with other computing paradigms, such as Constraint Programming (CP) and Satisfiability Modulo Theories (SMT). Due to the non-monotonic nature of the ASP semantics, such enhancements turned out to be non-trivial, and the existing extensions are not fully satisfactory. We observe that one main reason for the difficulties is rooted in the propositional semantics of ASP, which is limited in handling first-order constructs (such as aggregates and ontologies) and functions (such as constraint variables in CP and SMT) in natural ways. This dissertation presents a unifying view on these extensions by viewing them as instances of formulas with generalized quantifiers and intensional functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow generalized quantifiers, which cover aggregates, DL-atoms, constraints, and SMT theory atoms as special cases. Using this unifying framework, we study and relate different extensions of ASP. We also present a tight integration of ASP with SMT, based on which we enhance the action language C+ to handle reasoning about continuous changes. Our framework yields a systematic approach to studying and extending non-monotonic languages.
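To make the propositional stable model semantics mentioned above concrete, the sketch below computes the stable models of a tiny ground normal program by brute force using the Gelfond-Lifschitz reduct; the example program is invented, and a real ASP system would use a dedicated solver rather than this enumeration.

```python
# Hedged sketch: brute-force stable models of a ground normal logic program
# via the Gelfond-Lifschitz reduct; a toy example, not a practical ASP solver.
from itertools import chain, combinations

# Rules as (head, positive_body, negative_body). Invented program:
#   p :- not q.     q :- not p.     r :- p.
program = [
    ("p", [], ["q"]),
    ("q", [], ["p"]),
    ("r", ["p"], []),
]
atoms = {a for h, pos, neg in program for a in [h, *pos, *neg]}

def minimal_model(positive_rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in positive_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    # Reduct: drop rules whose negative body intersects the candidate,
    # then delete the negative literals from the remaining rules.
    reduct = [(h, pos, []) for h, pos, neg in program
              if not (set(neg) & candidate)]
    return minimal_model(reduct) == candidate

subsets = chain.from_iterable(combinations(sorted(atoms), r)
                              for r in range(len(atoms) + 1))
print([set(s) for s in subsets if is_stable(set(s))])  # [{'q'}, {'p', 'r'}]
```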
Contributors: Meng, Yunsong (Author) / Lee, Joohyung (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Fainekos, Georgios (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Carrier lifetime is one of the few parameters which can give information about the low defect densities in today's semiconductors. In principle there is no lower limit to the defect density determined by lifetime measurements. No other technique can easily detect defect densities as low as 10⁹-10¹⁰ cm⁻³ in a simple, contactless room temperature measurement. However, in practice, recombination lifetime (τr) measurements such as photoconductance decay (PCD) and surface photovoltage (SPV), which are widely used for characterization of bulk wafers, face serious limitations when applied to thin epitaxial layers, where the layer thickness is smaller than the minority carrier diffusion length Ln. Other methods such as microwave photoconductance decay (µ-PCD), photoluminescence (PL), and frequency-dependent SPV, where the generated excess carriers are confined to the epitaxial layer width by using short excitation wavelengths, require complicated configurations and extensive surface passivation processes that make them time-consuming and unsuitable for process screening purposes. Generation lifetime (τg) measurement, typically performed with pulsed MOS capacitors (MOS-C) as test structures, has been shown to be an eminently suitable technique for characterization of thin epitaxial layers. It is for these reasons that the IC community, largely concerned with unipolar MOS devices, uses lifetime measurements as a "process cleanliness monitor." However, when dealing with ultraclean epitaxial wafers, the classic MOS-C technique measures an effective generation lifetime τg,eff which is dominated by surface generation and hence cannot be used for screening impurity densities. I have developed a modified pulsed MOS technique for measuring generation lifetime in ultraclean thin p/p+ epitaxial layers which can be used to detect metallic impurities with densities as low as 10¹⁰ cm⁻³. The widely used classic version has been shown to be unable to effectively detect such low impurity densities due to the domination of surface generation, whereas the modified version can suitably be used as a metallic impurity density monitoring tool in such cases.
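For context on how a generation lifetime is extracted from a pulsed MOS-C transient, the sketch below applies the textbook Zerbst analysis, in which the slope of -d(Cox/C)²/dt plotted against (CF/C - 1) yields τg. The relation used, slope = 2·ni·Cox/(τg·NA·CF), is the standard textbook form, not the thesis's modified technique, and all capacitances, doping, and the synthetic transient below are hypothetical.

```python
# Hedged sketch: classic Zerbst analysis of a pulsed MOS-C capacitance
# transient, assuming the textbook relation
#   -d(Cox/C)^2/dt = [2 ni Cox / (tau_g NA CF)] * (CF/C - 1) + const.
# All inputs are hypothetical; this is NOT the thesis's modified technique.
import numpy as np

ni, NA = 9.65e9, 1e15            # cm^-3 (Si ni at 300 K; hypothetical doping)
Cox, CF = 100e-12, 30e-12        # oxide / final inversion capacitance, F
tau_g_true = 1e-3                # s, used only to synthesize the transient
m = 2 * ni * Cox / (tau_g_true * NA * CF)   # the Zerbst slope for tau_g_true

# Synthesize C(t): integrate d(Cox/C)^2/dt = -m*(CF/C - 1) from deep depletion.
dt, steps = 1e-3, 20000
u = np.empty(steps)              # u = Cox/C
u[0] = Cox / 10e-12              # start well below CF (deep depletion)
for k in range(steps - 1):
    u[k + 1] = u[k] - dt * m * (CF / Cox * u[k] - 1.0) / (2.0 * u[k])
t = np.arange(steps) * dt
C = Cox / u

# Zerbst analysis: slope of -d(Cox/C)^2/dt vs (CF/C - 1) gives tau_g back.
y = -np.gradient((Cox / C) ** 2, t)
x = CF / C - 1.0
slope, _ = np.polyfit(x, y, 1)
print(f"tau_g ~ {2 * ni * Cox / (NA * CF * slope) * 1e3:.2f} ms")  # ~1.00 ms
```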
Contributors: Elhami Khorasani, Arash (Author) / Alford, Terry (Thesis advisor) / Goryll, Michael (Committee member) / Bertoni, Mariana (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In this dissertation I develop a deep theory of temporal planning well-suited to analyzing, understanding, and improving the state-of-the-art implementations (as of 2012). At face value the work is strictly theoretical; nonetheless its impact is entirely real and practical. The easiest portion of that impact to highlight concerns the notable improvements to the format of the temporal fragment of the International Planning Competitions (IPCs). In particular, the theory I expound upon here is the primary cause of, and justification for, the altered (i) selection of benchmark problems, and (ii) notion of "winning temporal planner". For higher-level motivation: robotics, web service composition, industrial manufacturing, business process management, cybersecurity, space exploration, deep ocean exploration, and logistics all benefit from applying domain-independent automated planning techniques. Naturally, actually carrying out such case studies has much to offer. For example, we may extract the lesson that reasoning carefully about deadlines is rather crucial to planning in practice. More generally, effectively automating specifically temporal planning is well-motivated by applications. Entirely abstractly, the aim is to improve the theory of automated temporal planning by distilling from its practice. My thesis is that the key feature of computational interest is concurrency. In support, I demonstrate by way of compilation methods, worst-case counting arguments, and analysis of algorithmic properties such as completeness that the more immediately pressing computational obstacles (facing would-be temporal generalizations of classical planning systems) can be dealt with in a theoretically efficient manner. So, more accurately, the technical contribution here is to demonstrate that the computationally significant obstacle remaining for automated temporal planning is just concurrency.
Contributors: Cushing, William Albemarle (Author) / Kambhampati, Subbarao (Thesis advisor) / Weld, Daniel S. (Committee member) / Smith, David E. (Committee member) / Baral, Chitta (Committee member) / Davulcu, Hasan (Committee member) / Arizona State University (Publisher)
Created: 2012