This collection includes most of the ASU Theses and Dissertations from 2011 to the present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Each record includes degree information, committee members, an abstract, and any supporting data or media.

In addition to the electronic copies in the ASU Digital Repository, ASU Theses and Dissertations can also be found in the ASU Library Catalog.

Dissertations and theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Description

Developing new non-traditional device models is gaining popularity as silicon-based electrical devices approach their scaling limits. Membrane systems, also called P systems, are a class of biological computation models inspired by the way cells process chemical signals. Spiking Neural P systems (SNP systems), a particular kind of membrane system, are inspired by the way neurons in the brain interact using electrical spikes. Compared to traditional Boolean logic, SNP systems not only perform similar functions but also offer a more promising path to reliable computation. Two basic neuron types, Low Pass (LP) neurons and High Pass (HP) neurons, are introduced. These two basic types of neurons suffice to build an arbitrary SNP neuron, which leads to the conclusion that they are Turing complete, since SNP systems have been proved Turing complete. The two basic neuron types are further used as elements to construct general-purpose arithmetic circuits such as adders, subtractors, and comparators. In this thesis, erroneous behaviors of neurons are discussed. Transmission error (spike loss) is proved to be equivalent to threshold error, which makes the threshold-error analysis more general. To improve reliability, a new structure called a motif is proposed. Compared to a Triple Modular Redundancy improvement, the motif design demonstrates its efficiency and effectiveness in both single-neuron and arithmetic-circuit analyses. DRAM-based CMOS circuits are used to implement the two basic neuron types, and their functionality is verified using SPICE simulations. The motif-improved adder and comparator, compared to conventional Boolean logic designs, are much more reliable, with lower leakage and smaller silicon area. This leads to the conclusion that SNP systems could provide a more promising solution for reliable computation than conventional Boolean logic.
Contributors: An, Pei (Author) / Cao, Yu (Thesis advisor) / Barnaby, Hugh (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2013
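As an illustration of the two primitives this abstract introduces, the sketch below models LP and HP neurons as simple spike-count threshold functions. The firing semantics (HP fires when the spike count meets or exceeds its threshold; LP fires when at least one spike arrives but the count stays at or below its threshold) are assumptions for illustration, not the thesis's exact definitions.

```python
def hp_neuron(threshold, spikes):
    """Hypothetical high-pass (HP) neuron: fires iff the number of
    incoming spikes meets or exceeds its threshold (assumed semantics)."""
    return int(sum(spikes) >= threshold)

def lp_neuron(threshold, spikes):
    """Hypothetical low-pass (LP) neuron: fires iff at least one spike
    arrives but the count stays at or below its threshold (assumed)."""
    n = sum(spikes)
    return int(0 < n <= threshold)

# Composing an XOR-like gate from one LP primitive (illustrative only):
def xor2(a, b):
    return lp_neuron(1, [a, b])  # fires on exactly one incoming spike

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor2(a, b))  # 00->0, 01->1, 10->1, 11->0
```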
Description

Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods, which originated from work on propositional satisfiability solvers. The wide adoption of ASP has motivated several extensions to its modeling language in order to enhance expressivity, such as incorporating aggregates and interfaces with ontologies. Also, in order to overcome the grounding bottleneck of computation in ASP, there is increasing interest in integrating ASP with other computing paradigms, such as Constraint Programming (CP) and Satisfiability Modulo Theories (SMT). Due to the non-monotonic nature of the ASP semantics, such enhancements turned out to be non-trivial, and the existing extensions are not fully satisfactory. We observe that one main reason for the difficulties is rooted in the propositional semantics of ASP, which is limited in handling first-order constructs (such as aggregates and ontologies) and functions (such as constraint variables in CP and SMT) in natural ways. This dissertation presents a unifying view on these extensions by treating them as instances of formulas with generalized quantifiers and intensional functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow generalized quantifiers, which cover aggregates, DL-atoms, constraints, and SMT theory atoms as special cases. Using this unifying framework, we study and relate different extensions of ASP. We also present a tight integration of ASP with SMT, based on which we enhance the action language C+ to handle reasoning about continuous changes. Our framework yields a systematic approach to studying and extending non-monotonic languages.
Contributors: Meng, Yunsong (Author) / Lee, Joohyung (Thesis advisor) / Ahn, Gail-Joon (Committee member) / Baral, Chitta (Committee member) / Fainekos, Georgios (Committee member) / Lifschitz, Vladimir (Committee member) / Arizona State University (Publisher)
Created: 2013
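The stable model (answer set) semantics this abstract builds on can be made concrete with a small sketch. The brute-force enumerator below computes stable models of a ground normal program via the Gelfond-Lifschitz reduct; a real ASP solver is far more sophisticated, and the rule encoding here is purely illustrative.

```python
from itertools import chain, combinations

# A ground normal rule "head :- pos, not neg" encoded as a tuple
# (head, frozenset(pos), frozenset(neg)); the encoding is illustrative.

def reduct(rules, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negative body
    intersects the candidate set, then drop the negative literals."""
    return [(h, pos) for (h, pos, neg) in rules if not (neg & candidate)]

def least_model(pos_rules):
    """Least model of a positive program by fixpoint iteration."""
    m, changed = set(), True
    while changed:
        changed = False
        for h, pos in pos_rules:
            if pos <= m and h not in m:
                m.add(h)
                changed = True
    return m

def stable_models(rules, atoms):
    """A set of atoms is stable iff it is the least model of its reduct."""
    subsets = chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1))
    return [set(s) for s in subsets
            if least_model(reduct(rules, set(s))) == set(s)]

# p :- not q.   q :- not p.   -- two stable models, {p} and {q}
rules = [("p", frozenset(), frozenset({"q"})),
         ("q", frozenset(), frozenset({"p"}))]
print(stable_models(rules, ["p", "q"]))  # [{'p'}, {'q'}]
```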
Description

A fully automated logic design methodology for radiation-hardened-by-design (RHBD) high-speed logic using fine-grained triple modular redundancy (TMR) is presented. The hardening techniques used in the cell library are described and evaluated, with a focus on both layout techniques that mitigate total ionizing dose (TID) and latchup issues and flip-flop designs that mitigate single event transient (SET) and single event upset (SEU) issues. The base TMR self-correcting master-slave flip-flop is described and compared to more traditional hardening techniques. Additional refinements are presented, including testability features that disable the self-correction to allow detection of manufacturing defects. The circuit approach is validated for hardness using both heavy-ion and proton broad-beam testing. For synthesis and automatic place and route, the methodology and circuits leverage commercial logic design automation tools. These tools are glued together with custom CAD tools designed to enable easy conversion of standard single-redundant hardware description language (HDL) files into hardened TMR circuitry. The flow allows hardening of any synthesizable logic at clock frequencies comparable to unhardened designs and supports standard low-power techniques, e.g., clock gating and supply voltage scaling.
Contributors: Hindman, Nathan (Author) / Clark, Lawrence T (Thesis advisor) / Holbert, Keith E. (Committee member) / Barnaby, Hugh (Committee member) / Allee, David (Committee member) / Arizona State University (Publisher)
Created: 2012
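The self-correction at the heart of the TMR flip-flop described above rests on majority voting over three redundant copies. The snippet below shows only that logical principle; the thesis's contribution is a circuit-level master-slave flip-flop design, not this gate-level toy.

```python
def majority(a: int, b: int, c: int) -> int:
    """Two-of-three majority vote: any single upset copy is outvoted."""
    return (a & b) | (b & c) | (a & c)

# A single-event upset flips one of three redundant copies of a stored 1:
copies = (1, 1, 0)
print(majority(*copies))  # 1 -- the flipped copy is masked and corrected
```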
Description

The dissolution of metal layers such as silver into chalcogenide glass layers such as germanium selenide changes the resistivity of the metal and chalcogenide films to a great extent. It is known that the incorporation of the metal can be achieved by ultraviolet light exposure or thermal processes. In this work, the use of metal dissolution by exposure to gamma radiation has been explored for radiation sensor applications. Test structures were designed, and a process flow was developed for prototype sensor fabrication. The test structures were designed such that sensitivity to radiation could be studied. The focus is on the effect of gamma rays as well as ultraviolet light on silver dissolution in germanium selenide (Ge30Se70) chalcogenide glass. Ultraviolet radiation testing was used prior to gamma exposure to assess the basic mechanism. The test structures were electrically characterized before and after irradiation to assess the resistance change due to metal dissolution. A change in resistance was observed post-irradiation and was found to be dependent on the radiation dose. The structures were also characterized using atomic force microscopy, and roughness measurements were made before and after irradiation. A change in roughness of the silver films on Ge30Se70 was observed following exposure, indicating a loss of continuity of the film, which causes the increase in silver film resistance following irradiation. Recovery of the initial resistance in the structures was also observed after the radiation stress was removed. This recovery is explained by photo-stimulated deposition of silver from the chalcogenide at room temperature, confirmed by the re-appearance of silver dendrites on the chalcogenide surface. The results demonstrate that it is possible to use the metal dissolution effect in radiation sensing applications.
Contributors: Chandran, Ankitha (Author) / Kozicki, Michael N (Thesis advisor) / Holbert, Keith E. (Committee member) / Barnaby, Hugh (Committee member) / Arizona State University (Publisher)
Created: 2012
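A first-order way to see why silver dissolution raises the film resistance reported above is the geometric resistance formula R = ρL/(Wt): thinning the continuous metal layer raises R even before the percolation effects of a discontinuous film, which this toy picture ignores, take over. The dimensions and dissolved fraction below are made-up illustration values, not measurements from the thesis.

```python
RHO_AG = 1.6e-6  # ohm*cm, approximate bulk resistivity of silver

def film_resistance(rho, length_cm, width_cm, thickness_cm):
    """R = rho * L / (W * t) for a uniform conducting film."""
    return rho * length_cm / (width_cm * thickness_cm)

before = film_resistance(RHO_AG, 0.1, 0.01, 100e-7)  # 100 nm Ag film
after = film_resistance(RHO_AG, 0.1, 0.01, 20e-7)    # 80% dissolved away
print(before, after)  # 1.6 ohm -> 8.0 ohm: resistance rises 5x as it thins
```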
Description

Scaling of the classical planar MOSFET below 20 nm gate length faces not only technological difficulties but also limitations imposed by short channel effects, gate and junction leakage current due to quantum tunneling, threshold voltage variation induced by high body doping, and carrier mobility degradation. Non-classical multiple-gate structures such as double-gate (DG) FinFETs and surrounding-gate field-effect transistors (SGFETs) have good electrostatic integrity and are an alternative to planar MOSFETs for sub-20 nm technology nodes. Circuit design with these devices needs compact models for SPICE simulation. In this work, physics-based compact models for the common-gate symmetric DG-FinFET, independent-gate asymmetric DG-FinFET, and SGFET are developed. Despite the complex device structure and boundary conditions for the Poisson-Boltzmann equation, the core structure of the DG-FinFET and SGFET models is kept similar to surface-potential-based compact models for planar MOSFETs such as SP and PSP. TCAD simulations show differences between the transient behavior and the capacitance-voltage characteristics of bulk and SOI FinFETs if the gate-voltage swing includes the accumulation region. This effect can be captured by a compact model of FinFETs only if it includes the contribution of both types of carriers in the Poisson-Boltzmann equation. An accurate implicit input voltage equation valid in all regions of operation is proposed for common-gate symmetric DG-FinFETs with intrinsic or lightly doped bodies. A closed-form algorithm is developed for solving the new input voltage equation including ambipolar effects. The algorithm is verified for both the surface potential and its derivatives and includes a previously published analytical approximation for the surface potential as a special case when ambipolar effects can be neglected. The symmetric linearization method for common-gate symmetric DG-FinFETs is developed in a form free of the charge-sheet approximation present in its original formulation for bulk MOSFETs. The accuracy of the proposed technique is verified by comparison with exact results. An alternative and computationally efficient description of the boundary between the trigonometric and hyperbolic solutions of the Poisson-Boltzmann equation for the independent-gate asymmetric DG-FinFET is developed in terms of the Lambert W function, and an efficient numerical algorithm is proposed for solving the input voltage equation. Analytical expressions for the terminal charges of an independent-gate asymmetric DG-FinFET are derived. The new charge model is C-infinity continuous, valid in weak as well as strong inversion of both channels, and does not involve the charge-sheet approximation. This is accomplished by developing the symmetric linearization method in a form that does not require identical boundary conditions at the two Si-SiO2 interfaces and allows for volume inversion in the DG-FinFET. Verification of the model is performed with both numerical computations and 2D TCAD simulations under a wide range of biasing conditions. The model is implemented in a standard circuit simulator through Verilog-A code. Simulation examples for both digital and analog circuits verify good model convergence and demonstrate the capabilities of new circuit topologies that can be implemented using independent-gate asymmetric DG-FinFETs.
Contributors: Dessai, Gajanan (Author) / Gildenblat, Gennady (Committee member) / McAndrew, Colin (Committee member) / Cao, Yu (Committee member) / Barnaby, Hugh (Committee member) / Arizona State University (Publisher)
Created: 2012
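For orientation, a common textbook form of the one-dimensional Poisson-Boltzmann equation for a lightly doped DG-FinFET body, with both carrier types retained as the abstract requires, is sketched below. The notation (Vn and Vp for the electron and hole quasi-Fermi potentials, phi_t for the thermal voltage) is assumed for illustration and need not match the thesis.

```latex
% 1D Poisson-Boltzmann across the fin body (coordinate x), both carriers:
\[
  \frac{d^{2}\psi}{dx^{2}}
    = \frac{q\, n_i}{\varepsilon_{\mathrm{Si}}}
      \left[ e^{(\psi - V_n)/\phi_t} \;-\; e^{-(\psi - V_p)/\phi_t} \right]
\]
% Keeping only the electron term recovers the single-carrier case whose
% closed-form solutions split into the trigonometric and hyperbolic
% branches mentioned in the abstract.
```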
Description

The medical industry has benefited greatly from electronic integration, resulting in the explosive growth of active medical implants. These devices often treat and monitor chronic health conditions and require very minimal power usage. A key part of these medical implants is an ultra-low power two-way wireless communication system, which enables both control of the implant and relay of the information collected. This research has focused on a high performance receiver for medical implant applications. One commonly quoted specification for comparing receivers is the energy required per bit. This metric is useful but incomplete, in that it ignores sensitivity level, bit error rate, and immunity to interferers. This study explores receiver architectures and converges upon a comprehensive solution, and that analysis is used to design and build a system for validation. The direct conversion receiver architecture, implemented for the MICS standard in a 0.18 µm CMOS process, consumes approximately 2 mW and is competitive with published research.
Contributors: Stevens, Mark (Author) / Kiaei, Sayfe (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Aberle, James T., 1961- (Committee member) / Barnaby, Hugh (Committee member) / Arizona State University (Publisher)
Created: 2012
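The energy-per-bit figure of merit quoted above is simply average power divided by data rate. The sketch below applies it to the reported 2 mW consumption; the 400 kb/s data rate is a hypothetical value chosen for illustration, not a figure from the thesis.

```python
def energy_per_bit(avg_power_w: float, bit_rate_bps: float) -> float:
    """Energy per bit (J/bit) = average power / data rate."""
    return avg_power_w / bit_rate_bps

# 2 mW receiver at an assumed 400 kb/s link rate:
print(energy_per_bit(2e-3, 400e3))  # 5e-09 J/bit, i.e., 5 nJ/bit
```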
Description

With the growth of IT products and sophisticated software in various operating systems, I observe that security risks in systems are constantly skyrocketing. Consequently, security assessment is now considered one of the primary security mechanisms for measuring the assurance of systems, since systems that are not compliant with security requirements may allow adversaries to access critical information by circumventing security practices. In order to ensure security, considerable effort has been spent developing security regulations that codify security best practices. Applying shared security standards to a system is critical to understanding vulnerabilities and preventing well-known threats from exploiting them. However, many end users tend to change the configurations of their systems without paying attention to security. Hence, it is not straightforward to protect systems from being changed by unwitting users in a timely manner. Detecting the installation of harmful applications is not sufficient, since attackers may exploit risky software as well as commonly used software. In addition, periodically checking the assurance of security configurations is disadvantageous in terms of time and cost, owing to zero-day attacks and timing attacks that can leverage the window between security checks. Therefore, an event-driven monitoring approach is critical to continuously assess the security of a target system without ignoring the window between security checks, and to lessen the burden of the exhaustive task of inspecting all configurations in the system. Furthermore, the system should be able to generate a vulnerability report for any change initiated by a user if such a change relates to requirements in the standards and turns out to be vulnerable. Assessing various systems in distributed environments also requires consistently applying standards to each environment; such a uniform, consistent assessment is important because the assessment approach for detecting security vulnerabilities may vary across applications and operating systems. In this thesis, I introduce an automated event-driven security assessment framework to overcome and accommodate the aforementioned issues. I also discuss the implementation details, which are based on commercial off-the-shelf technologies, and the testbed established to evaluate the approach. Finally, I describe evaluation results that demonstrate the effectiveness and practicality of the approach.
Contributors: Seo, Jeong-Jin (Author) / Ahn, Gail-Joon (Thesis advisor) / Yau, Stephen S. (Committee member) / Lee, Joohyung (Committee member) / Arizona State University (Publisher)
Created: 2014
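The abstract's key idea, re-assessing on every configuration change event rather than on a periodic schedule, can be sketched as follows. Every name here (the Rule shape, the on_config_change hook, the PWD-01 check) is a hypothetical illustration, not the thesis's actual framework or any real API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    rule_id: str                    # identifier tied to a security standard
    description: str
    check: Callable[[Dict], bool]   # returns True when compliant

class EventDrivenAssessor:
    """Re-assesses on each change event, closing the vulnerability
    window left open between periodic scans."""

    def __init__(self, rules: List[Rule]):
        self.rules = rules

    def on_config_change(self, config: Dict) -> List[str]:
        """Invoked by a change event; returns a vulnerability report."""
        return [f"{r.rule_id}: {r.description}"
                for r in self.rules if not r.check(config)]

rules = [Rule("PWD-01", "minimum password length must be >= 12",
              lambda c: c.get("min_pwd_len", 0) >= 12)]
assessor = EventDrivenAssessor(rules)
print(assessor.on_config_change({"min_pwd_len": 8}))
# ['PWD-01: minimum password length must be >= 12']
```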
Description

Access control is necessary for information assurance in many of today's applications, such as banking and electronic health records. Access control breaches are critical security problems that can result from unintended and improper implementation of security policies. Security testing can help identify security vulnerabilities early and help security architects and security engineers avoid the unexpected, expensive cost of handling breaches. The process of security testing, which involves creating tests that effectively examine vulnerabilities, is a challenging task. Role-Based Access Control (RBAC) has been widely adopted to support fine-grained access control. However, in practice, due to its complexity, including role management, role hierarchies with hundreds of roles, and their associated privileges and users, systematically testing RBAC systems is crucial to ensure security in various domains ranging from cyber-infrastructure to mission-critical applications. In this thesis, we introduce i) a security testing technique for RBAC systems considering the principle of maximum privileges, the structure of the role hierarchy, and a new security test coverage criterion; ii) an MTBDD (Multi-Terminal Binary Decision Diagram) based representation of RBAC security policy, including the RHMTBDD (Role Hierarchy MTBDD), to efficiently generate effective positive and negative security test cases; and iii) a security testing framework that takes an XACML-based RBAC security policy as input, parses it into an RHMTBDD representation, and then generates positive and negative test cases. We also demonstrate the efficacy of our approach through case studies.
Contributors: Gupta, Poonam (Author) / Ahn, Gail-Joon (Thesis advisor) / Collofello, James (Committee member) / Huang, Dijiang (Committee member) / Arizona State University (Publisher)
Created: 2014
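The positive/negative test-case idea above can be illustrated without the MTBDD machinery: compute each role's effective permissions through the hierarchy, then emit granted pairs as positive cases and everything else as negative cases. The toy policy below is invented for illustration and is unrelated to the thesis's case studies.

```python
# Toy policy: senior roles inherit junior roles' permissions.
hierarchy = {"doctor": ["employee"], "employee": []}  # doctor >= employee
direct_perms = {"doctor": {"write_record"}, "employee": {"read_record"}}

def effective_perms(role):
    """Direct permissions plus everything inherited down the hierarchy."""
    perms = set(direct_perms.get(role, set()))
    for junior in hierarchy.get(role, []):
        perms |= effective_perms(junior)
    return perms

def generate_tests(roles, all_perms):
    """Positive cases must be granted; negative cases must be denied."""
    cases = []
    for role in roles:
        granted = effective_perms(role)
        cases += [(role, p, "grant") for p in sorted(granted)]
        cases += [(role, p, "deny") for p in sorted(all_perms - granted)]
    return cases

print(generate_tests(["doctor", "employee"],
                     {"read_record", "write_record"}))
# doctor: grant both; employee: grant read_record, deny write_record
```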
Description

The advent of threshold logic simplifies traditional Boolean logic to a single-level, multi-input function. The threshold logic latch (TLL), one implementation of threshold logic, is functionally equivalent to a multi-input threshold function with an edge-triggered flip-flop; it stands out for improving area as well as both dynamic and leakage power consumption, providing an attractive design alternative. Accordingly, a TLL standard cell library is designed. Through technology mapping, a hybrid circuit is generated by absorbing the logic cone backward from each flip-flop to obtain the smallest remaining feeder. With the scan test methodology adopted, design for testability (DFT) is proposed, including scan element design and scan chain insertion. The test synthesis flow is then introduced, based on the Cadence tool RTL Compiler. Test application is the process of applying vectors and analyzing the responses, which is mainly a matter of testbench design; a parameterized, generic, self-checking Verilog testbench is designed for static fault detection. Test development refers to fault modeling and test generation. First, functional truth-table test generation on TLL cells is proposed. Before the truth-table test of the threshold function, the dependence on the sequence of applied vectors, i.e., the dependence of the current state on the previous state, must be eliminated. A transition test (dynamic pattern) on all weak inputs is proved able to test the reset function, which is supposed to erase the history in the reset phase before every evaluation phase. The remaining vectors in the truth table, except the weak inputs, are then applied statically (static patterns). Second, dynamic patterns for all weak inputs are proposed to detect structural transistor-level faults analyzed in the TLL cell, under the single-fault assumption and considering stuck-at faults, stuck-on faults, and stuck-open faults. Containing those patterns, the functional test covers all testable structural faults inside the TLL. Third, at the scope of the whole hybrid netlist, a three-step test generation procedure is proposed: scan chain test; test of feeders and other scan elements except TLLs; and functional pattern test of TLL cells. Implementation of this procedure is discussed in the automatic test pattern generation (ATPG) chapter.
Contributors: Hu, Yang (Author) / Vrudhula, Sarma (Thesis advisor) / Barnaby, Hugh (Committee member) / Yu, Shimeng (Committee member) / Arizona State University (Publisher)
Created: 2013
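A threshold function of the kind the TLL evaluates fires when a weighted sum of its inputs meets or exceeds a threshold, collapsing several Boolean levels into one. A minimal sketch, with weights and threshold chosen for illustration rather than taken from the thesis's cell library:

```python
def threshold_gate(weights, threshold, inputs):
    """Single-level threshold function: 1 iff sum(w_i * x_i) >= T."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Three-input majority as one threshold gate: weights (1,1,1), T = 2.
for x in [(0, 0, 1), (0, 1, 1), (1, 1, 1)]:
    print(x, "->", threshold_gate((1, 1, 1), 2, x))
# (0,0,1) -> 0, (0,1,1) -> 1, (1,1,1) -> 1
```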
Description

New technologies enable the exploration of space, high-fidelity defense systems, lightning-fast intercontinental communication systems, and medical technologies that extend and improve patients' lives. The basis for these technologies is high-reliability electronics devised to meet stringent design goals and to operate consistently for many years deployed in the field. An ongoing concern for engineers is the consequence of ionizing radiation exposure, specifically total dose effects. For many of these applications, there is a likelihood of exposure to radiation, which can result in device degradation and potentially failure. While total dose effects and the resulting degradation are a well-studied field, and methodologies to help mitigate degradation have been developed, there is still a need for simulation techniques that help designers understand total dose effects within their designs. To that end, the work presented here details simulation techniques to analyze as well as predict the total dose response of a circuit. In this dissertation, total dose effects in CMOS technology are broken into two sub-categories: intra-device and inter-device effects. Intra-device effects degrade the performance of both n-channel and p-channel transistors, while inter-device effects result in loss of device isolation. Multiple case studies are presented for which total dose degradation is of concern, and through the simulation techniques, the individual device and circuit responses are modeled post-irradiation. The use of these simulation techniques allows circuit designers to predictively simulate total dose effects, enabling focused design changes that increase the radiation tolerance of high-reliability electronics.
Contributors: Schlenvogt, Garrett (Author) / Barnaby, Hugh (Thesis advisor) / Goodnick, Stephen (Committee member) / Vasileska, Dragica (Committee member) / Holbert, Keith E. (Committee member) / Arizona State University (Publisher)
Created: 2014
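One standard first-order intra-device total dose effect that simulation techniques like those above would capture is the threshold-voltage shift from oxide-trapped charge, dVth = -q*Not/Cox. The sketch below evaluates that textbook relation; the oxide thickness and trapped-charge density are illustrative values, not data from the dissertation.

```python
EPS_OX = 3.45e-13  # F/cm, permittivity of SiO2 (3.9 * 8.85e-14 F/cm)
Q_E = 1.602e-19    # C, elementary charge

def vth_shift(n_ot_per_cm2: float, t_ox_cm: float) -> float:
    """First-order shift from oxide-trapped charge assumed to sit at the
    Si/SiO2 interface: dVth = -q * Not / Cox, with Cox = eps_ox / t_ox."""
    c_ox = EPS_OX / t_ox_cm  # F/cm^2
    return -Q_E * n_ot_per_cm2 / c_ox

# Illustrative: 1e12 trapped holes/cm^2 in a 10 nm gate oxide:
print(vth_shift(1e12, 10e-7))  # about -0.46 V
```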