This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses in the ASU Digital Repository, ASU Theses and Dissertations can also be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.


Description
Continuous monitoring at adequate temporal and spatial scales is necessary for a better understanding of environmental variations, but field deployment of molecular biological analysis platforms at that scale is currently hindered by issues with power, throughput, and automation. Currently, such analysis is performed by collecting large sample volumes over a wide area and transporting them to laboratory testing facilities, which fails to provide any real-time information. This dissertation evaluates the systems currently utilized for in-situ field analyses and the issues hampering the successful deployment of such bioanalytical instruments for environmental applications. The design and development of high throughput, low power, and autonomous Polymerase Chain Reaction (PCR) instruments, amenable to portable field operation and capable of providing quantitative results, is presented as part of this dissertation. A number of novel innovations are reported in microfluidic design, PCR thermocycler design, optical design, and systems integration. Emulsion microfluidics in conjunction with fluorinated oils and Teflon tubing is used for the fluidic module, which reduces cross-contamination and eliminates the need for disposable components or constant cleaning. A cylindrical heater has been designed with the tubing wrapped around fixed temperature zones, enabling continuous operation. Fluorescence excitation and detection are achieved by using a light emitting diode (LED) as the excitation source and a photomultiplier tube (PMT) as the detector. Real-time quantitative PCR results were obtained by using multi-channel fluorescence excitation and detection with LEDs, optical fibers, and a 64-channel multi-anode PMT for measuring continuous real-time fluorescence. The instrument was evaluated by comparing its results with those obtained from a commercial instrument, and the two were found to be comparable. To further improve the design and enhance its field portability, this dissertation also presents a framework for the instrumentation necessary for a portable digital PCR platform to achieve higher throughput with lower power. Both systems were designed so that they can easily couple with any upstream platform capable of providing nucleic acid for analysis using standard fluidic connections. Consequently, these instruments can be used not only in environmental applications but in portable diagnostics applications as well.
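The quantitative readout described above ultimately rests on standard real-time PCR analysis: a standard curve relates the threshold cycle (Ct) to the initial template copy number. The sketch below is a minimal illustration of that conventional calculation, not the instrument's actual software; the dilution series and Ct values are hypothetical.

```python
import numpy as np

# Hypothetical standard curve: Ct measured for serial dilutions of a
# template with known starting copy number.
known_copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
measured_ct  = np.array([33.1, 29.8, 26.4, 23.1, 19.7])

# Ct is linear in log10(copies): Ct = m*log10(N0) + b
m, b = np.polyfit(np.log10(known_copies), measured_ct, 1)

# Amplification efficiency follows from the slope: E = 10^(-1/m) - 1
efficiency = 10 ** (-1.0 / m) - 1.0

def quantify(ct_unknown):
    """Invert the standard curve to estimate starting copy number."""
    return 10 ** ((ct_unknown - b) / m)

print(f"slope = {m:.2f}, efficiency = {efficiency:.1%}")
print(f"sample at Ct = 25.0 -> {quantify(25.0):.2e} copies")
```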
ContributorsRay, Tathagata (Author) / Youngbull, Cody (Thesis advisor) / Goryll, Michael (Thesis advisor) / Blain Christen, Jennifer (Committee member) / Yu, Hongyu (Committee member) / Arizona State University (Publisher)
Created2013
Description
Automating aspects of biocuration through biomedical information extraction could significantly impact biomedical research by enabling greater biocuration throughput and improving the feasibility of a wider scope. An important step in biomedical information extraction systems is named entity recognition (NER), where mentions of entities such as proteins and diseases are located within natural-language text and their semantic type is determined. This step is critical for later tasks in an information extraction pipeline, including normalization and relationship extraction. BANNER is a benchmark biomedical NER system using linear-chain conditional random fields and the rich feature set approach. A case study with BANNER locating genes and proteins in biomedical literature is described. The first corpus for disease NER adequate for use as training data is introduced, and employed in a case study of disease NER. The first corpus locating adverse drug reactions (ADRs) in user posts to a health-related social website is also described, and a system to locate and identify ADRs in social media text is created and evaluated. The rich feature set approach to creating NER feature sets is argued to be subject to diminishing returns, implying that additional improvements may require more sophisticated methods for creating the feature set. This motivates the first application of multivariate feature selection with filters and false discovery rate analysis to biomedical NER, resulting in a feature set at least 3 orders of magnitude smaller than the set created by the rich feature set approach. Finally, two novel approaches to NER by modeling the semantics of token sequences are introduced. The first method focuses on the sequence content by using language models to determine whether a sequence resembles entries in a lexicon of entity names or text from an unlabeled corpus more closely. The second method models the distributional semantics of token sequences, determining the similarity between a potential mention and the token sequences from the training data by analyzing the contexts where each sequence appears in a large unlabeled corpus. The second method is shown to improve the performance of BANNER on multiple data sets.
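For readers unfamiliar with the rich feature set approach to linear-chain CRF NER, the sketch below shows its flavor using the third-party sklearn-crfsuite package rather than BANNER itself; the token features and the toy BIO-labeled corpus are illustrative stand-ins, a tiny subset of what a real rich feature set would contain.

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(sent, i):
    """A small illustrative slice of a 'rich feature set' for one token."""
    w = sent[i]
    feats = {
        "lower": w.lower(),
        "is_upper": w.isupper(),
        "is_title": w.istitle(),
        "has_digit": any(c.isdigit() for c in w),
        "prefix3": w[:3],
        "suffix3": w[-3:],
    }
    # Neighboring-token features give the linear chain local context.
    if i > 0:
        feats["prev_lower"] = sent[i - 1].lower()
    if i < len(sent) - 1:
        feats["next_lower"] = sent[i + 1].lower()
    return feats

# Toy corpus with BIO labels (B-GENE/I-GENE/O), purely illustrative.
sents  = [["BRCA1", "mutations", "cause", "cancer", "."]]
labels = [["B-GENE", "O", "O", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))
```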
ContributorsLeaman, James Robert (Author) / Gonzalez, Graciela (Thesis advisor) / Baral, Chitta (Thesis advisor) / Cohen, Kevin B (Committee member) / Liu, Huan (Committee member) / Ye, Jieping (Committee member) / Arizona State University (Publisher)
Created2013
Description
Biological systems are complex in many dimensions, as endless transportation and communication networks all function simultaneously. Our ability to intervene within both healthy and diseased systems is tied directly to our ability to understand and model core functionality. Progress in increasingly accurate and thorough high-throughput measurement technologies has provided a deluge of data from which we may attempt to infer a representation of the true genetic regulatory system. A gene regulatory network model, if accurate enough, may allow us to perform hypothesis testing in the form of computational experiments. Of great importance to modeling accuracy is the acknowledgment of biological contexts within the models -- i.e., recognizing the heterogeneous nature of the true biological system and the data it generates. This marriage of engineering, mathematics, and computer science with systems biology creates a cycle of progress between computer simulation and lab experimentation, rapidly translating interventions and treatments for patients from the bench to the bedside. This dissertation first discusses the landscape for modeling the biological system, then explores the identification of targets for intervention in Boolean network models of biological interactions, and examines context specificity both in new graphical depictions of models embodying context-specific genomic regulation and in novel analysis approaches designed to reveal embedded contextual information. Overall, the dissertation covers a spectrum of biological modeling with a goal of therapeutic intervention, with both formal and informal notions of biological context, in such a way as to enable future work to have an even greater impact in terms of direct patient benefit on an individualized level.
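For intuition about the Boolean network models mentioned above, the sketch below simulates a toy three-gene network under synchronous updates and enumerates its attractors, which are the usual objects of intervention analysis. The update rules are hypothetical and not drawn from the dissertation.

```python
from itertools import product

# Hypothetical 3-gene network: each rule maps the current state to the
# next value of one gene (synchronous update).
rules = {
    "A": lambda s: s["B"] and not s["C"],  # A activated by B, repressed by C
    "B": lambda s: not s["A"],             # B repressed by A
    "C": lambda s: s["A"] or s["B"],       # C activated by A or B
}

def step(state):
    return {g: rules[g](state) for g in rules}

def attractor(state):
    """Iterate until a state repeats; return the cycle in canonical form."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    i = seen.index(state)
    return tuple(sorted(tuple(sorted(s.items())) for s in seen[i:]))

# Enumerate attractors over the full 2^3 state space.
attractors = {attractor(dict(zip("ABC", bits)))
              for bits in product([False, True], repeat=3)}
print(f"{len(attractors)} attractor(s) found")
```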
ContributorsVerdicchio, Michael (Author) / Kim, Seungchan (Thesis advisor) / Baral, Chitta (Committee member) / Stolovitzky, Gustavo (Committee member) / Collofello, James (Committee member) / Arizona State University (Publisher)
Created2013
Description
GaN high electron mobility transistors (HEMTs) based on the III-V nitride material system have been under extensive investigation because of their superb performance as high power RF devices. A two-dimensional electron gas (2-DEG) with a charge density ten times higher than that of GaAs-based HEMTs, and mobility much higher than in Si, enables the low on-resistance required for RF devices. Self-heating issues with GaN HEMTs and a lack of understanding of various phenomena are hindering their widespread commercial development. There is a need to understand device operation by developing a model which can be used to optimize the electrical and thermal characteristics of GaN HEMT designs for high power and high frequency operation. In this thesis, a physical simulation model of the AlGaN/GaN HEMT is developed using the commercially available ATLAS software from SILVACO Int., based on the energy balance/hydrodynamic carrier transport equations. The model is calibrated against experimental data. Transfer and output characteristics are the key focus of the analysis, along with the saturation drain current. The resulting I-V curves showed close correspondence with experimental results. Various combinations of electron mobility, velocity saturation, momentum and energy relaxation times, and gate work functions were attempted to improve the I-V curve correlation. Thermal effects were also investigated to gain a better understanding of the role of self-heating effects on the electrical characteristics of GaN HEMTs. The temperature profiles across the device were observed; hot spots were found along the channel in the gate-drain spacing. These preliminary results indicate that thermal effects do have an impact on the electrical device characteristics at large biases, even though the amount of self-heating is underestimated with respect to thermal particle-based simulations that also solve the energy balance equations for acoustic and optical phonons (and thus properly account for the formation of the hot spot). The decrease in drain current is due to the decrease in saturation carrier velocity. The necessity of including hydrodynamic/energy balance transport models for accurate simulations is demonstrated. Possible ways of improving model accuracy are discussed in conjunction with future research.
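The coupling between dissipated power and drain current described above can be illustrated with a zeroth-order electro-thermal feedback loop: power raises the channel temperature through a thermal resistance, temperature lowers the saturation velocity, and the current is recomputed until the loop converges. The model and all parameter values below are illustrative placeholders, not the ATLAS hydrodynamic model used in the thesis.

```python
# Zeroth-order self-heating loop for a HEMT-like device (toy model).
R_TH  = 40.0    # K/W thermal resistance (illustrative)
T_AMB = 300.0   # K ambient temperature
V_DS  = 20.0    # V drain bias
I_ISO = 1.0     # A, isothermal saturation current at T_AMB (illustrative)
ALPHA = 1.5     # exponent of v_sat ~ (T/300)^-ALPHA (illustrative)

def drain_current(T):
    # Saturation current scales with saturation velocity, which drops
    # with lattice temperature in this toy model.
    return I_ISO * (T / T_AMB) ** (-ALPHA)

T, I = T_AMB, I_ISO
for _ in range(100):
    I_new = drain_current(T)
    T_new = T_AMB + R_TH * I_new * V_DS   # dissipated power heats the channel
    if abs(T_new - T) < 1e-6:
        break
    T, I = T_new, I_new

print(f"channel T = {T:.1f} K, current droop = {1 - I / I_ISO:.1%}")
```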
ContributorsChowdhury, Towhid (Author) / Vasileska, Dragica (Thesis advisor) / Goodnick, Stephen (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created2013
Description
Distributed inference has applications in a wide range of fields such as source localization, target detection, environmental monitoring, and healthcare. In this dissertation, distributed inference schemes which use bounded transmit power are considered. The performance of the proposed schemes is studied for a variety of inference problems. In the first part of the dissertation, a distributed detection scheme where the sensors transmit with constant modulus signals over a Gaussian multiple access channel is considered. The deflection coefficient of the proposed scheme is shown to depend on the characteristic function of the sensing noise, and the error exponent for the system is derived using large deviation theory. Optimization of the deflection coefficient and error exponent is considered with respect to a transmission phase parameter for a variety of sensing noise distributions, including impulsive ones. The proposed scheme is also favorably compared with existing amplify-and-forward (AF) and detect-and-forward (DF) schemes. The effect of fading is shown to be detrimental to the detection performance, and simulations are provided to corroborate the analytical results. The second part of the dissertation studies a distributed inference scheme which uses bounded transmission functions over a Gaussian multiple access channel. The conditions on the transmission functions under which consistent estimation and reliable detection are possible are characterized. For the distributed estimation problem, an estimation scheme that uses bounded transmission functions is proved to be strongly consistent provided that the variance of the noise samples is bounded and that the transmission function is one-to-one. The proposed estimation scheme is compared with the amplify-and-forward technique, and its robustness to impulsive sensing noise distributions is highlighted. It is also shown that bounded transmissions suffer from inconsistent estimates if the sensing noise variance goes to infinity. For the distributed detection problem, similar results are obtained by studying the deflection coefficient. Simulations corroborate the analytical results. In the third part of this dissertation, the problem of estimating the average of samples distributed at the nodes of a sensor network is considered. A distributed average consensus algorithm in which every sensor transmits with bounded peak power is proposed. In the presence of communication noise, it is shown that the nodes reach consensus asymptotically to a finite random variable whose expectation is the desired sample average of the initial observations, with a variance that depends on the step size of the algorithm and the variance of the communication noise. The asymptotic performance is characterized by deriving the asymptotic covariance matrix using results from stochastic approximation theory. It is shown that using bounded transmissions results in slower convergence compared to the linear consensus algorithm based on the Laplacian heuristic. Simulations corroborate the analytical findings. Finally, a robust distributed average consensus algorithm in which every sensor performs nonlinear processing at the receiver is proposed. It is shown that nonlinearity at the receiver nodes makes the algorithm robust to a wide range of channel noise distributions, including impulsive ones. The nodes are shown to reach consensus asymptotically, with results similar to the case of transmit nonlinearity. Simulations corroborate these analytical findings and highlight the robustness of the proposed algorithm.
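To make the first scheme concrete, the sketch below Monte-Carlos constant-modulus transmissions over a Gaussian multiple access channel and estimates the deflection coefficient of the fusion statistic. The signal model is deliberately simplified (no fading) and all parameter values are illustrative, not the dissertation's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
K, TRIALS = 50, 20000      # number of sensors, Monte Carlo trials
OMEGA, SIGMA_V = 1.0, 1.0  # transmission phase parameter, sensing noise std
CHAN_STD = 0.5             # multiple-access-channel noise std

def fusion_stat(theta):
    """Superposition of constant-modulus signals exp(j*w*(theta + noise))."""
    v = rng.normal(0, SIGMA_V, size=(TRIALS, K))      # sensing noise
    x = np.exp(1j * OMEGA * (theta + v)).sum(axis=1)  # Gaussian MAC sum
    n = rng.normal(0, CHAN_STD, (TRIALS, 2)) @ np.array([1, 1j])
    return np.real(x + n)  # fusion center uses the real part

t0 = fusion_stat(0.0)  # H0: theta = 0
t1 = fusion_stat(1.0)  # H1: theta = 1

# Deflection coefficient: D = (E[T|H1] - E[T|H0])^2 / Var[T|H0]
D = (t1.mean() - t0.mean()) ** 2 / t0.var()
print(f"estimated deflection coefficient: {D:.2f}")
```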
ContributorsDasarathan, Sivaraman (Author) / Tepedelenlioğlu, Cihan (Thesis advisor) / Papandreou-Suppappola, Antonia (Committee member) / Reisslein, Martin (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created2013
Description
Currently, to interact with computer-based systems one needs to learn the specific interface language of that system. In most cases, interaction would be much easier if it could be done in natural language. For that, we need a module which understands natural language and automatically translates it to the interface language of the system. The NL2KR (Natural Language to Knowledge Representation) v.1 system is a prototype of such a system. It is a learning-based system that learns new meanings of words in terms of lambda-calculus formulas, given an initial lexicon of some words and their meanings and a training corpus of sentences with their translations. As part of this thesis, we take the prototype NL2KR v.1 system and enhance various components of it to make it usable for somewhat substantial and useful interface languages. We revamped the lexicon-learning components, the Inverse-lambda and Generalization modules, and redesigned the lexicon-learning algorithm which uses these components to learn new meanings of words. Similarly, we redeveloped the system's inbuilt parser in Answer Set Programming (ASP) and also integrated an external parser with the system. Apart from this, we added new features such as configurable system settings and a memory cache in the learning component of the NL2KR system. These enhancements helped the system learn more meanings of words, boosted its performance by reducing computation time by a factor of 8, and improved its usability. We evaluated the NL2KR system on the iRODS domain. iRODS is a rule-oriented data system which helps in managing large sets of computer files using policies. It provides a rule-oriented interface language whose syntactic structure is like that of a procedural programming language (e.g., C). However, direct translation of natural language (NL) to this interface language is difficult. So, for automatic translation of NL to this language, we define a simple intermediate Policy Declarative Language (IPDL) to represent the knowledge in the policies, which can then be directly translated to iRODS rules. We developed a corpus of 100 policy statements and manually translated them into IPDL. This corpus is then used for the evaluation of the NL2KR system, on which we performed 10-fold cross validation. Furthermore, using this corpus, we illustrate how the different components of the NL2KR system work.
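The lambda-calculus word meanings that NL2KR learns compose by function application (beta reduction). The toy sketch below mimics that composition directly with Python closures; the words and the target logical-form strings are hypothetical illustrations, not NL2KR's learned lexicon.

```python
# Toy lexicon: word meanings as lambda-calculus terms (Python closures).
# The target representation here is a simple logical-form string.
lexicon = {
    "austin": "austin",
    "texas":  "texas",
    "in":     lambda obj: lambda subj: f"in({subj},{obj})",
    "is":     lambda pred: pred,   # copula as identity (a simplification)
}

def apply(fn, arg):
    """Beta reduction: apply a lambda meaning to an argument meaning."""
    return fn(arg)

# "austin is in texas" composes to in(austin,texas)
meaning = apply(apply(lexicon["is"], apply(lexicon["in"], lexicon["texas"])),
                lexicon["austin"])
print(meaning)  # -> in(austin,texas)
```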
ContributorsKumbhare, Kanchan Ravishankar (Author) / Baral, Chitta (Thesis advisor) / Ye, Jieping (Committee member) / Li, Baoxin (Committee member) / Arizona State University (Publisher)
Created2013
Description
Linear Temporal Logic (LTL) is gaining popularity as a high-level specification language for robot motion planning due to its expressive power and the scalability of LTL control synthesis algorithms. This formalism, however, requires expert knowledge, which makes it inaccessible to non-expert users. This thesis introduces a graphical specification environment for creating high-level motion plans to control robots in the field by converting a visual representation of the motion/task plan into an LTL specification. The visual interface is built on the Android tablet platform and provides functionality to create task plans through a set of well-defined gestures and on-screen controls. It uses the notion of waypoints to quickly and efficiently describe the motion plan and enables a variety of complex LTL specifications to be described succinctly and intuitively by the user, without requiring knowledge or understanding of LTL. Thus, it opens avenues for its use by personnel in military, warehouse management, and search-and-rescue missions. This thesis describes the construction of LTL specifications for various robot navigation scenarios using the visual interface, and it leverages existing LTL-based motion planners to carry out the task plan on a robot.
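The waypoint-to-LTL translation such an interface performs can be sketched as string construction: an ordered list of waypoints becomes a nested sequencing formula, optionally conjoined with safety constraints. The sketch below is a minimal illustration of this standard encoding, not the app's actual generator.

```python
def waypoints_to_ltl(waypoints, avoid=None):
    """Ordered visits: F(p1 & F(p2 & ... F(pn))), plus G(!q) safety terms."""
    formula = ""
    for p in reversed(waypoints):
        formula = f"F({p} & {formula})" if formula else f"F({p})"
    for q in (avoid or []):
        formula += f" & G(!{q})"
    return formula

# Visit w1, then w2, then w3, while always avoiding the obstacle region.
print(waypoints_to_ltl(["w1", "w2", "w3"], avoid=["obstacle"]))
# -> F(w1 & F(w2 & F(w3))) & G(!obstacle)
```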
ContributorsSrinivas, Shashank (Author) / Fainekos, Georgios (Thesis advisor) / Baral, Chitta (Committee member) / Burleson, Winslow (Committee member) / Arizona State University (Publisher)
Created2013
Description
This research emphasizes the use of low-energy and low-temperature post-processing to improve the performance and lifetime of thin films and thin film transistors, by applying the fundamentals of the interaction of materials with conductive heating and electromagnetic radiation. A single-frequency microwave anneal is used to rapidly recrystallize the damage induced during ion implantation in Si substrates. Volumetric heating of the sample in the presence of the microwave field facilitates quick absorption of radiation to promote recrystallization at the amorphous-crystalline interface, along with electrical activation of the dopants due to relocation to substitutional sites. Structural and electrical characterization confirm recrystallization of heavily implanted Si within 40 seconds of anneal time, with minimal dopant diffusion compared to rapid-thermal-annealed samples. The use of microwave annealing to improve the performance of multilayer thin film devices, e.g., thin film transistors (TFTs), requires extensive study of the interaction of the individual layers with electromagnetic radiation. This issue has been addressed by developing a detailed understanding of the thin films and interfaces in TFTs through the study of reliability and failure mechanisms under extensive stress testing. Electrical and ambient stresses such as illumination, thermal, and mechanical stresses are applied to the mixed-oxide-based thin film transistors, which are explored because of the high mobilities of the mixed oxide (indium zinc oxide, indium gallium zinc oxide) channel layer material. A semiconductor parameter analyzer is employed to extract transfer characteristics, from which the mobility, subthreshold, and threshold voltage parameters of the transistors are derived. Low-temperature post-processing anneals compatible with polymer substrates are performed in several ambients (oxygen, forming gas, and vacuum) at 150 °C as a preliminary step. Analysis of the results before and after the low-temperature anneals, using device physics fundamentals, assists in categorizing the defects leading to failure/degradation as oxygen vacancies, thermally activated defects within the bandgap, channel-dielectric interface defects, and acceptor-like or donor-like trap states. Microwave annealing has been confirmed to enhance the quality of thin films; future work entails extending the use of electromagnetic radiation in controlled ambients to facilitate a quick post-fabrication anneal that improves the functionality and lifetime of these low-temperature-fabricated TFTs.
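Mobility and threshold voltage extraction from transfer characteristics typically uses the square-law saturation model, I_D = (W/2L)*mu*C_i*(V_GS - V_T)^2, so that sqrt(I_D) is linear in V_GS. The sketch below fits that line to hypothetical data; the device geometry, dielectric capacitance, and current values are placeholders, not measurements from the thesis.

```python
import numpy as np

# Hypothetical saturation-regime transfer curve of an oxide TFT.
v_gs = np.array([4.0, 6.0, 8.0, 10.0, 12.0])          # V
i_d  = np.array([0.4, 3.6, 10.0, 19.6, 32.4]) * 1e-6  # A

W, L = 100e-6, 10e-6  # channel width/length in m (placeholder)
C_I  = 3.45e-4        # gate dielectric capacitance, F/m^2 (placeholder)

# Square law: sqrt(I_D) = sqrt(mu*C_i*W/(2L)) * (V_GS - V_T)
slope, intercept = np.polyfit(v_gs, np.sqrt(i_d), 1)

v_t = -intercept / slope                    # x-intercept gives V_T
mu  = slope ** 2 * 2 * L / (W * C_I) * 1e4  # m^2/(V s) -> cm^2/(V s)

print(f"V_T = {v_t:.2f} V, mobility = {mu:.1f} cm^2/Vs")
```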
ContributorsVemuri, Rajitha (Author) / Alford, Terry L. (Thesis advisor) / Theodore, N David (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created2013
Description
Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods, which originate from propositional satisfiability solving. The wide adoption of ASP has motivated several extensions to its modeling language in order to enhance expressivity, such as incorporating aggregates and interfaces with ontologies. Also, in order to overcome the grounding bottleneck of computation in ASP, there is increasing interest in integrating ASP with other computing paradigms, such as Constraint Programming (CP) and Satisfiability Modulo Theories (SMT). Due to the non-monotonic nature of the ASP semantics, such enhancements turned out to be non-trivial, and the existing extensions are not fully satisfactory. We observe that one main reason for the difficulties is rooted in the propositional semantics of ASP, which is limited in handling first-order constructs (such as aggregates and ontologies) and functions (such as constraint variables in CP and SMT) in natural ways. This dissertation presents a unifying view of these extensions by treating them as instances of formulas with generalized quantifiers and intensional functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow generalized quantifiers, which cover aggregates, DL-atoms, constraints, and SMT theory atoms as special cases. Using this unifying framework, we study and relate different extensions of ASP. We also present a tight integration of ASP with SMT, based on which we enhance the action language C+ to handle reasoning about continuous changes. Our framework yields a systematic approach to studying and extending non-monotonic languages.
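For readers unfamiliar with the stable model semantics underlying ASP, the sketch below enumerates the stable models of a tiny ground normal program by the textbook guess-and-check method: compute the Gelfond-Lifschitz reduct with respect to a candidate set, take the reduct's least model, and accept the candidate if the two coincide. The example program is illustrative, not from the dissertation.

```python
from itertools import chain, combinations

# Ground normal rules: (head, positive body, negative body).
# p :- not q.   q :- not p.   r :- p.
program = [("p", [], ["q"]), ("q", [], ["p"]), ("r", ["p"], [])]
atoms = {a for h, pos, neg in program for a in [h, *pos, *neg]}

def least_model(positive_rules):
    """Fixpoint (minimal model) of a negation-free program."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    # the candidate; delete negative literals from the remaining rules.
    reduct = [(h, pos) for h, pos, neg in program
              if not set(neg) & candidate]
    return least_model(reduct) == candidate

subsets = chain.from_iterable(combinations(sorted(atoms), r)
                              for r in range(len(atoms) + 1))
print([set(s) for s in subsets if is_stable(set(s))])
# Expected stable models: {q} and {p, r}
```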
ContributorsMeng, Yunsong (Author) / Lee, Joohyung (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Fainekos, Georgios (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created2013
Description
Carrier lifetime is one of the few parameters which can give information about the low defect densities in today's semiconductors. In principle there is no lower limit to the defect density determined by lifetime measurements; no other technique can easily detect defect densities as low as 10^9 - 10^10 cm^-3 in a simple, contactless, room-temperature measurement. In practice, however, recombination lifetime (τ_r) measurements such as photoconductance decay (PCD) and surface photovoltage (SPV), which are widely used for the characterization of bulk wafers, face serious limitations when applied to thin epitaxial layers, where the layer thickness is smaller than the minority carrier diffusion length L_n. Other methods such as microwave photoconductance decay (µ-PCD), photoluminescence (PL), and frequency-dependent SPV, where the generated excess carriers are confined to the epitaxial layer width by using short excitation wavelengths, require complicated configurations and extensive surface passivation processes that make them time-consuming and unsuitable for process-screening purposes. Generation lifetime (τ_g), typically measured with pulsed MOS capacitors (MOS-C) as test structures, has been shown to be an eminently suitable quantity for the characterization of thin epitaxial layers. It is for these reasons that the IC community, largely concerned with unipolar MOS devices, uses lifetime measurements as a "process cleanliness monitor." However, when dealing with ultraclean epitaxial wafers, the classic MOS-C technique measures an effective generation lifetime τ_g,eff which is dominated by surface generation and hence cannot be used for screening impurity densities. I have developed a modified pulsed MOS technique for measuring the generation lifetime in ultraclean thin p/p+ epitaxial layers which can be used to detect metallic impurities with densities as low as 10^10 cm^-3. The widely used classic version has been shown to be unable to effectively detect such low impurity densities due to the domination of surface generation, whereas the modified version can suitably be used as a metallic impurity density monitoring tool for such cases.
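Generation lifetime from a pulsed MOS-C is conventionally extracted with a Zerbst plot: the capacitance-time recovery transient is replotted as -d(C_ox/C)^2/dt versus (C_F/C - 1), whose slope is proportional to n_i/(N_A*τ_g). The sketch below follows the textbook Zerbst treatment (as in Schroder's Semiconductor Material and Device Characterization), with the prefactor 2*n_i*C_ox/(N_A*C_F); all device parameters are placeholders, and the synthetic transient is generated from the same simple bulk-generation model, so the extraction recovers the input lifetime self-consistently rather than reproducing any measurement from the dissertation.

```python
import numpy as np

# Placeholder Si MOS-C parameters (SI units).
NI, NA  = 1e16, 1e21      # intrinsic / doping density, m^-3
TAU_G   = 1e-4            # "true" generation lifetime to recover, s
EPS_S   = 1.04e-10        # Si permittivity, F/m
C_OX    = 3.45e-4         # oxide capacitance per area, F/m^2
W_F, W0 = 1.0e-6, 3.0e-6  # final and initial (deep-depleted) widths, m

def cap(W):  # series oxide + depletion capacitance per area
    return 1.0 / (1.0 / C_OX + W / EPS_S)

# Synthesize the C-t recovery transient (bulk generation only, Euler).
dt, W, Cs = 0.01, W0, []
for _ in range(12000):
    Cs.append(cap(W))
    W += -(cap(W) / C_OX) * (NI / NA) * (W - W_F) / TAU_G * dt
C = np.array(Cs)
t = np.arange(len(C)) * dt
C_F = cap(W_F)

# Zerbst plot: y = -d(C_ox/C)^2/dt vs x = C_F/C - 1;
# slope = 2*n_i*C_ox / (N_A*C_F*tau_g) in this bulk-only model.
y = -np.gradient((C_OX / C) ** 2, t)
x = C_F / C - 1.0
slope = np.polyfit(x[x > 0.05], y[x > 0.05], 1)[0]

tau_g = 2 * NI * C_OX / (NA * C_F * slope)
print(f"extracted tau_g = {tau_g * 1e6:.0f} us (true: {TAU_G * 1e6:.0f} us)")
```

In a real measurement the Zerbst plot's nonzero intercept carries the surface generation term, which is exactly the contribution the modified technique described above is designed to separate out.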
ContributorsElhami Khorasani, Arash (Author) / Alford, Terry (Thesis advisor) / Goryll, Michael (Committee member) / Bertoni, Mariana (Committee member) / Arizona State University (Publisher)
Created2013