This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media.

In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog.

Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu.

Displaying 1 - 10 of 81
Description
In recent years we have witnessed a shift towards multi-processor system-on-chips (MPSoCs) to address the demands of embedded devices (such as cell phones, GPS devices, luxury car features, etc.). Highly optimized MPSoCs are well suited to tackle the complex application demands of the end user. These MPSoCs incorporate a constellation of heterogeneous processing elements (PEs) (general purpose PEs and application-specific integrated circuits (ASICs)). A typical MPSoC is composed of an application processor, such as an ARM Cortex-A9 with a cache-coherent memory hierarchy, and several application sub-systems. Each of these sub-systems is composed of highly optimized instruction processors, graphics/DSP processors, and custom hardware accelerators. Typically, these sub-systems utilize scratchpad memories (SPM) rather than support cache coherency. The overall architecture is an integration of the various sub-systems through a high-bandwidth system-level interconnect (such as a Network-on-Chip (NoC)). The shift to MPSoCs has been fueled by three major factors: demand for high performance, the use of component libraries, and short design turnaround time. As customers continue to desire more and more complex applications on their embedded devices, the performance demand for these devices continues to increase. Designers have turned to MPSoCs to address this demand. By using pre-made IP libraries, designers can quickly piece together an MPSoC that will meet the application demands of the end user with minimal time spent designing new hardware. Additionally, the use of MPSoCs allows designers to generate new devices very quickly, thus reducing the time to market. In this work, a complete MPSoC synthesis design flow is presented. We first present a technique to address the synthesis of the interconnect architecture (particularly the Network-on-Chip (NoC)). We then address the synthesis of the memory architecture of an MPSoC sub-system. Lastly, we present a co-synthesis technique to generate the functional and memory architectures simultaneously. The validity and quality of each synthesis technique are demonstrated through extensive experimentation.
Contributors: Leary, Glenn (Author) / Chatha, Karamvir S (Thesis advisor) / Vrudhula, Sarma (Committee member) / Shrivastava, Aviral (Committee member) / Beraha, Rudy (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Scaling of the classical planar MOSFET below 20 nm gate length is facing not only technological difficulties but also limitations imposed by short channel effects, gate and junction leakage current due to quantum tunneling, threshold voltage variation induced by high body doping, and carrier mobility degradation. Non-classical multiple-gate structures such as double-gate (DG) FinFETs and surrounding-gate field-effect transistors (SGFETs) have good electrostatic integrity and are an alternative to planar MOSFETs for sub-20 nm technology nodes. Circuit design with these devices requires compact models for SPICE simulation. In this work, physics-based compact models for the common-gate symmetric DG-FinFET, independent-gate asymmetric DG-FinFET, and SGFET are developed. Despite the complex device structure and boundary conditions for the Poisson-Boltzmann equation, the core structure of the DG-FinFET and SGFET models is kept similar to that of surface-potential-based compact models for planar MOSFETs such as SP and PSP. TCAD simulations show differences in the transient behavior and the capacitance-voltage characteristics of bulk and SOI FinFETs if the gate-voltage swing includes the accumulation region. This effect can be captured by a compact model of FinFETs only if it includes the contribution of both types of carriers in the Poisson-Boltzmann equation. An accurate implicit input voltage equation valid in all regions of operation is proposed for common-gate symmetric DG-FinFETs with intrinsic or lightly doped bodies. A closed-form algorithm is developed for solving the new input voltage equation including ambipolar effects. The algorithm is verified for both the surface potential and its derivatives, and includes a previously published analytical approximation for the surface potential as a special case when ambipolar effects can be neglected. The symmetric linearization method for common-gate symmetric DG-FinFETs is developed in a form free of the charge-sheet approximation present in its original formulation for bulk MOSFETs. The accuracy of the proposed technique is verified by comparison with exact results. An alternative and computationally efficient description of the boundary between the trigonometric and hyperbolic solutions of the Poisson-Boltzmann equation for the independent-gate asymmetric DG-FinFET is developed in terms of the Lambert W function. An efficient numerical algorithm is proposed for solving the input voltage equation. Analytical expressions for the terminal charges of an independent-gate asymmetric DG-FinFET are derived. The new charge model is C-infinity continuous, valid for weak as well as strong inversion conditions of both channels, and does not involve the charge-sheet approximation. This is accomplished by developing the symmetric linearization method in a form that does not require identical boundary conditions at the two Si-SiO2 interfaces and allows for volume inversion in the DG-FinFET. Verification of the model is performed with both numerical computations and 2D TCAD simulations under a wide range of biasing conditions. The model is implemented in a standard circuit simulator through Verilog-A code. Simulation examples for both digital and analog circuits verify good model convergence and demonstrate the capabilities of new circuit topologies that can be implemented using independent-gate asymmetric DG-FinFETs.
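For orientation on the ambipolar formulation discussed above, a hedged sketch of the Poisson-Boltzmann equation for an intrinsic (undoped) fin body, keeping both electron and hole contributions, is (the symbols and normalization are generic textbook choices, not taken from the dissertation):

$$ \frac{d^{2}\psi}{dx^{2}} = \frac{q\,n_i}{\varepsilon_{\mathrm{Si}}}\left[\exp\!\left(\frac{q(\psi-\phi_n)}{kT}\right) - \exp\!\left(\frac{q(\phi_p-\psi)}{kT}\right)\right] $$

Here \(\psi\) is the electrostatic potential across the fin thickness, \(\phi_n\) and \(\phi_p\) are the electron and hole quasi-Fermi potentials, \(n_i\) is the intrinsic carrier concentration, and \(\varepsilon_{\mathrm{Si}}\) is the silicon permittivity. Retaining both exponentials is what lets a compact model reproduce the accumulation-region behavior noted above; dropping the hole term recovers the usual electron-only approximation.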
Contributors: Dessai, Gajanan (Author) / Gildenblat, Gennady (Committee member) / McAndrew, Colin (Committee member) / Cao, Yu (Committee member) / Barnaby, Hugh (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Test cost has become a significant portion of device cost and a bottleneck in high volume manufacturing. Increasing integration density and shrinking feature sizes increase test time/cost and reduce observability. Test engineers have to put in a tremendous effort in order to maintain test cost within an acceptable budget. Unfortunately, there is not a single straightforward solution to the problem. Products that are tested span several application domains and distinct customer profiles. Some products are required to operate for long periods of time, while others are required to be low cost. The multitude of constraints and goals makes it impossible to find a single solution that works for all cases. Hence, test development/optimization is typically design/circuit dependent and even process specific. Therefore, test optimization cannot be performed using a single test approach, but necessitates a diversity of approaches. This work aims at addressing test cost minimization and test quality improvement at various levels. In the first chapter of the work, we investigate pre-silicon strategies, such as design for test and pre-silicon statistical simulation optimization. In the second chapter, we investigate efficient post-silicon test strategies, such as adaptive test, adaptive multi-site test, outlier analysis, and process shift detection/tracking.
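Outlier analysis of the kind listed above is often illustrated with a simple robust screen on a parametric test measurement. The sketch below is a generic median/MAD rule, not the specific method developed in the dissertation; the threshold and data are hypothetical.

```python
import numpy as np

def flag_outliers(measurements, k=4.0):
    """Flag parametric-test outliers with a robust median/MAD screen.

    measurements : 1-D array, one parametric test result per device.
    k            : threshold in robust-sigma units (hypothetical choice).
    Returns a boolean array, True where a device looks like an outlier.
    """
    x = np.asarray(measurements, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))      # median absolute deviation
    robust_sigma = 1.4826 * mad           # MAD -> sigma for Gaussian data
    if robust_sigma == 0.0:               # degenerate case: no spread at all
        return np.zeros_like(x, dtype=bool)
    return np.abs(x - med) > k * robust_sigma

# Example: 1000 in-spec devices plus a few shifted parts
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(1.20, 0.02, 1000), [1.45, 0.95, 1.38]])
print(flag_outliers(data).sum(), "suspected outliers")
```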
Contributors: Yilmaz, Ender (Author) / Ozev, Sule (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Cao, Yu (Committee member) / Christen, Jennifer Blain (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Non-volatile memories (NVM) are widely used in modern electronic devices due to their non-volatility, low static power consumption, and high storage density. While Flash memories are the dominant NVM technology, resistive memories such as phase change random access memory (PRAM) and spin torque transfer magnetic random access memory (STT-MRAM) are gaining ground. All these technologies suffer from reliability degradation due to process variations, structural limits, and material property shift. To address the reliability concerns of these NVM technologies, multi-level low cost solutions are proposed for each of them. My approach consists of first building a comprehensive error model. Next, the error characteristics are exploited to develop low cost multi-level strategies to compensate for the errors. For instance, for NAND Flash memory, I first characterize errors due to threshold voltage variations as a function of the number of program/erase cycles. Next, a flexible product code is designed to migrate to a stronger ECC scheme as the number of program/erase cycles increases. An adaptive data refresh scheme is also proposed to improve memory reliability with low energy cost for applications with different data update frequencies. For PRAM, soft error and hard error models are built based on shifts in the resistance distributions. Next, I developed a multi-level error control approach involving bit interleaving and subblock flipping at the architecture level, threshold resistance tuning at the circuit level, and programming current profile tuning at the device level. This approach reduced the error rate significantly, so that a low cost ECC scheme was sufficient to satisfy the memory reliability constraint. I also studied the reliability of a PRAM+DRAM hybrid memory system and analyzed the tradeoffs between memory performance, programming energy, and lifetime. For STT-MRAM, I first developed an error model based on process variations. I then developed a multi-level approach to reduce the error rates that consisted of increasing the W/L ratio of the access transistor, increasing the voltage difference across the memory cell, and adjusting the current profile during the write operation. This approach enabled the use of a low cost BCH based ECC scheme to achieve very low block failure rates.
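For context on how a raw bit error rate maps to the block failure rates mentioned above, a standard back-of-the-envelope relation for a t-error-correcting block code (such as BCH) over an n-bit code block with independent bit errors of probability p is (a generic textbook expression, not a result from this dissertation):

$$ P_{\mathrm{block}} = \sum_{i=t+1}^{n} \binom{n}{i}\, p^{i} (1-p)^{\,n-i} $$

Any device-, circuit-, or architecture-level technique that lowers p therefore allows a code with smaller t (lower decoder cost and check-bit overhead) to meet the same block-failure-rate target, which is the rationale behind a multi-level approach.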
Contributors: Yang, Chengen (Author) / Chakrabarti, Chaitali (Thesis advisor) / Cao, Yu (Committee member) / Ogras, Umit Y. (Committee member) / Bakkaloglu, Bertan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
With increasing demand for System on Chip (SoC) and System in Package (SiP) design in computer and communication technologies, the integrated inductor, an essential passive component, has been widely used in numerous integrated circuits (ICs) such as voltage regulators and RF circuits. In this work, a soft ferromagnetic core material, amorphous Co-Zr-Ta-B, was incorporated into on-chip and in-package inductors in order to scale down inductors and improve inductor performance in both inductance density and quality factor. With two layers of 500 nm Co-Zr-Ta-B films, a 3.5X increase in inductance and a 3.9X increase in quality factor over inductors without magnetic films were measured at frequencies as high as 1 GHz. With lamination technology, up to a 9.1X increase in inductance and more than a 5X increase in quality factor (Q) were obtained from stripline inductors incorporating 50 nm by 10 laminated films, with a peak Q at 300 MHz. It was also demonstrated that this peak Q can be pushed toward high frequency, as far as 1 GHz, by a combination of patterning the magnetic films into fine bars and lamination. The role of magnetic vias in magnetic flux and eddy current control was investigated by both simulation and experiment using different patterning techniques and by altering the magnetic via width. Finger-shaped magnetic vias were designed and integrated into on-chip RF inductors, improving the frequency of peak quality factor from 400 MHz to 800 MHz without sacrificing inductance enhancement. Eddy current and magnetic flux density in different areas of the magnetic vias were analyzed by HFSS 3D EM simulation. With optimized magnetic vias, a high frequency response of up to 2 GHz was achieved. Furthermore, the effect of an applied magnetic field on on-chip inductors was investigated for high power applications. It was observed that as the applied magnetic field along the hard axis (HA) increases, the inductance maintains a similar value at low fields but decreases at larger fields until the magnetic films become saturated. The high frequency quality factor showed an opposite trend, which is correlated with the reduction of ferromagnetic resonance absorption in the magnetic film. In addition, experiments showed that this field-dependent inductance change varied with different patterned magnetic film structures, including bars/slots and finger structures. Magnetic properties of Co-Zr-Ta-B films on standard organic package substrates, including ABF and polyimide, were also characterized. Effects of substrate roughness and stress were analyzed and simulated, providing strategies for integrating Co-Zr-Ta-B into package inductors and improving inductor performance. Stripline and spiral inductors with Co-Zr-Ta-B films were fabricated on both ABF and polyimide substrates. A maximum 90% inductance increase in the hundreds-of-MHz frequency range was achieved in stripline inductors, which are suitable for power delivery applications. Spiral inductors with Co-Zr-Ta-B films showed an 18% inductance increase with a quality factor of 4 at frequencies up to 3 GHz.
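For readers outside the inductor-design field, the quality factor quoted throughout this abstract is, for a simple series inductor model, the standard ratio (a generic definition, not specific to this work):

$$ Q = \frac{\omega L}{R_s} = \frac{2\pi f L}{R_s} $$

where \(L\) is the inductance, \(R_s\) the effective series resistance including magnetic-film losses, and \(f\) the operating frequency; a magnetic film therefore improves \(Q\) only if the inductance gain outpaces the loss it adds.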
Contributors: Wu, Hao (Author) / Yu, Hongbin (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Cao, Yu (Committee member) / Chickamenahalli, Shamala (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
The design and development of analog/mixed-signal (AMS) integrated circuits (ICs) is becoming increasingly expensive, complex, and lengthy. Rapid prototyping and emulation of analog ICs will be significant in the design and testing of complex analog systems. A new approach, the Programmable ANalog Device Array (PANDA), which maps any AMS design problem to transistor-level programmable hardware, is proposed. This approach enables fast system-level validation and a reduction in post-silicon bugs, minimizing design risk and cost. The unique features of the approach include 1) transistor-level programmability that emulates each transistor's behavior in an analog design, achieving very fine granularity of reconfiguration; 2) programmable switches that are treated as a design component during analog transistor emulation and optimized with the reconfiguration matrix; and 3) compensation of AC performance degradation by boosting the bias current. Based on these principles, a digitally controlled PANDA platform is designed at the 45nm node that can map AMS modules across 22nm to 90nm technology nodes. A systematic emulation approach to map any analog transistor to a 45nm PANDA cell is proposed, which achieves a transistor-level matching error of less than 5% for ID and less than 10% for Rout and Gm. Circuit-level analog metrics of a voltage-controlled oscillator (VCO) emulated by PANDA match those of the original designs in 22nm and 90nm nodes with less than a 5% error. Several other 90nm and 22nm analog blocks are successfully emulated by the 45nm PANDA platform, including a folded-cascode operational amplifier and a sample-and-hold module (S/H). Further capabilities of PANDA are demonstrated by the first full-chip PANDA silicon, implemented in a 65nm process. This system consists of a 24×25 cell array, reconfigurable interconnect, and configuration memory. Voltage and current reference circuits, op amps, and a VCO with a phase interpolation circuit are emulated by PANDA.
Contributors: Suh, Jounghyuk (Author) / Bakkaloglu, Bertan (Thesis advisor) / Cao, Yu (Committee member) / Ozev, Sule (Committee member) / Kozicki, Michael (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
In this thesis we deal with the problem of temporal logic robustness estimation. We present a dynamic programming algorithm for the robustness estimation problem of Metric Temporal Logic (MTL) formulas over a finite timed state sequence. This algorithm not only tests whether the MTL specification is satisfied by the given input, which is a finite system trajectory, but also quantifies to what extent the sequence satisfies or violates the MTL specification. The implementation of the algorithm is the DP-TALIRO toolbox for MATLAB. Currently it is used as the temporal logic robustness computing engine of S-TALIRO, a MATLAB tool that searches for trajectories of minimal robustness in Simulink/Stateflow models. DP-TALIRO is expected to have near-linear running time and a constant memory requirement, depending on the structure of the MTL formula. The DP-TALIRO toolbox also integrates new features not supported in its ancestor FW-TALIRO, such as parameter replacement, most related iteration, and most related predicate. A derivative of DP-TALIRO, DP-T-TALIRO, which applies the dynamic programming algorithm to time robustness computation, is also addressed in this thesis. We test the running time of DP-TALIRO and compare it with FW-TALIRO. Finally, we present an application where DP-TALIRO is used as the robustness computation core of S-TALIRO for a parameter estimation problem.
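To make the notion of robustness over a finite trace concrete, the sketch below evaluates the space robustness of two simple bounded MTL operators over a sampled trajectory. It is only a minimal illustration of the semantics, not the DP-TALIRO algorithm (which handles arbitrary nesting with a bottom-up dynamic-programming pass per subformula); the formula, predicate, and time bounds are hypothetical.

```python
import math

def rho_pred(x, c):
    """Robustness of the predicate x >= c: signed distance to the threshold."""
    return x - c

def rho_eventually(trace, times, a, b, c):
    """Robustness of  F_[a,b] (x >= c)  at time 0: max margin over the window."""
    vals = [rho_pred(x, c) for x, t in zip(trace, times) if a <= t <= b]
    return max(vals) if vals else -math.inf   # empty window -> unsatisfiable

def rho_always(trace, times, a, b, c):
    """Robustness of  G_[a,b] (x >= c)  at time 0: min margin over the window."""
    vals = [rho_pred(x, c) for x, t in zip(trace, times) if a <= t <= b]
    return min(vals) if vals else math.inf    # vacuously true over an empty window

# Example trajectory: positive robustness = satisfied with that margin,
# negative robustness = violated by that margin.
times = [0.0, 0.5, 1.0, 1.5, 2.0]
trace = [0.2, 0.8, 1.4, 1.1, 0.6]
print(rho_eventually(trace, times, 0.0, 2.0, c=1.0))  #  0.4 (best margin above 1.0)
print(rho_always(trace, times, 0.0, 2.0, c=1.0))      # -0.8 (worst violation of x >= 1.0)
```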
Contributors: Yang, Hengyi (Author) / Fainekos, Georgios (Thesis advisor) / Sarjoughian, Hessam S. (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2013
Description
Animals learn to choose a proper action among alternatives according to the circumstance. Through trial and error, animals improve their odds by making correct associations between their behavioral choices and external stimuli. While there has been an extensive literature on the theory of learning, it is still unclear how individual neurons and a neural network adapt as learning progresses. In this dissertation, single units in the medial and lateral agranular (AGm and AGl) cortices were recorded as rats learned a directional choice task. The task required the rat to make a left/right side lever press if a light cue appeared on the left/right side of the interface panel. Behavioral analysis showed that the rats' movement parameters during performance of directional choices became stereotyped very quickly (2-3 days), while learning to solve the directional choice problem took weeks. The entire learning process was further broken down into three stages, each having a similar number of recording sessions (days). Single-unit firing rate analysis revealed that 1) directional rate modulation was observed in both cortices; 2) the mean rate averaged over left and right trials in the neural ensemble each day did not change significantly across the three learning stages; and 3) the rate difference between left and right trials of the ensemble did not change significantly either. Moreover, for either left or right trials, the trial-to-trial firing variability of single neurons did not change significantly over the three stages. To explore the spatiotemporal neural pattern of the recorded ensemble, support vector machines (SVMs) were constructed each day to decode the direction of choice in single trials. Improved classification accuracy indicated enhanced discriminability between the neural patterns of left and right choices as learning progressed. When a restricted Boltzmann machine (RBM) model was used to extract features from the neural activity patterns, the results further supported the idea that neural firing patterns adapted during the three learning stages to facilitate the neural coding of directional choices. Taken together, these findings suggest a spatiotemporal neural coding scheme in a rat AGl and AGm neural ensemble that may be responsible for and contribute to learning the directional choice task.
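The single-trial decoding step described above can be sketched as a cross-validated linear SVM applied to an ensemble firing-rate matrix. The code below is a hedged illustration, not the dissertation's analysis pipeline; the array shapes, synthetic data, and parameters are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_neurons = 120, 30

# rates[i, j] = firing rate of neuron j on trial i (synthetic here)
labels = rng.integers(0, 2, n_trials)                 # 0 = left choice, 1 = right choice
rates = rng.poisson(5.0, (n_trials, n_neurons)).astype(float)
rates[:, :5] += 3.0 * labels[:, None]                 # give a few neurons direction tuning

# Linear SVM decoder with feature standardization, scored by 5-fold cross-validation
decoder = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc = cross_val_score(decoder, rates, labels, cv=5)
print(f"single-trial decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```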
Contributors: Mao, Hongwei (Author) / Si, Jennie (Thesis advisor) / Buneo, Christopher (Committee member) / Cao, Yu (Committee member) / Santello, Marco (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Software has a great impact on the energy efficiency of any computing system--it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores how software can influence the trade-off between energy consumption and system accuracy. In general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without greatly reducing accuracy. We introduce the Log-likelihood Ratio Test as a method to detect transitions, and explore how choices of sensor, feature calculations, and parameters concerning time segmentation affect the accuracy of this method. We discovered that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that does activity recognition. We discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can result in great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the "Great Compromise." We found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform. We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as the FFT or DCT does not compress the signal and would use more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature. For scalar features, energy consumption is inversely proportional to grouping size, so it decreases as grouping size increases. For features that depend on the grouping size, such as the FFT, energy increases with the logarithm of grouping size, so energy consumption increases slowly as grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption, and that the energy consumed for the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
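As a rough illustration of the transition-detection idea, the sketch below scores each candidate split point of a sensor stream with a two-window Gaussian log-likelihood ratio: a high score means the surrounding samples are better explained by two distributions (a transition) than by one. This is a generic sketch, not the thesis implementation; the window size, threshold-free scoring, and synthetic data are hypothetical.

```python
import numpy as np

def llr_transition_score(signal, w=32):
    """Two-window Gaussian log-likelihood ratio score for each candidate split.

    For every index t, compare one Gaussian fit to the 2*w samples around t
    (no transition) against separate Gaussians fit to the left and right
    windows (transition at t). Constant terms cancel in the ratio, leaving
    only the variance terms below.
    """
    x = np.asarray(signal, dtype=float)

    def nll(seg):
        # Gaussian max log-likelihood of n samples ~ -n/2 * log(var) + const
        return 0.5 * len(seg) * np.log(np.var(seg) + 1e-12)

    scores = np.zeros(len(x))
    for t in range(w, len(x) - w):
        left, right, both = x[t - w:t], x[t:t + w], x[t - w:t + w]
        scores[t] = nll(both) - (nll(left) + nll(right))
    return scores

# Synthetic accelerometer-like stream: a quiet segment followed by an active one
rng = np.random.default_rng(2)
stream = np.concatenate([rng.normal(0.0, 0.2, 200), rng.normal(0.0, 1.0, 200)])
scores = llr_transition_score(stream, w=32)
print("most likely transition near sample", int(np.argmax(scores)))  # ~200
```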
Contributors: Boyd, Jeffrey Michael (Author) / Sundaram, Hari (Thesis advisor) / Li, Baoxin (Thesis advisor) / Shrivastava, Aviral (Committee member) / Turaga, Pavan (Committee member) / Arizona State University (Publisher)
Created: 2014
Description
Stream processing has emerged as an important model of computation especially in the context of multimedia and communication sub-systems of embedded System-on-Chip (SoC) architectures. The dataflow nature of streaming applications allows them to be most naturally expressed as a set of kernels iteratively operating on continuous streams of data. The kernels are computationally intensive and are mainly characterized by real-time constraints that demand high throughput and data bandwidth with limited global data reuse. Conventional architectures fail to meet these demands due to their poorly matched execution models and the overheads associated with instruction and data movements.

This work presents StreamWorks, a multi-core embedded architecture for energy-efficient stream computing. The basic processing element in the StreamWorks architecture is the StreamEngine (SE), which is responsible for iteratively executing a stream kernel. The SE introduces an instruction locking mechanism that exploits the iterative nature of the kernels and enables fine-grain instruction reuse. Each instruction in an SE is locked to a Reservation Station (RS) and revitalizes itself after execution, thus never retiring from the RS. The entire kernel is hosted in RS Banks (RSBs) close to the functional units for energy-efficient instruction delivery. The dataflow semantics of stream kernels are captured by a context-aware dataflow execution mode that efficiently exploits the instruction-level parallelism (ILP) and data-level parallelism (DLP) within stream kernels.

Multiple SEs are grouped together to form a StreamCluster (SC) and communicate via a local interconnect. A novel software FIFO virtualization technique with split-join functionality is proposed for efficient and scalable stream communication across SEs. The proposed communication mechanism exploits the task-level parallelism (TLP) of the stream application. The performance and scalability of the communication mechanism are evaluated against existing data movement schemes for scratchpad-based multi-core architectures. Further, overlay schemes and architectural support are proposed that allow hosting any number of kernels on the StreamWorks architecture. The proposed overlay schemes for code management support kernel (context) switching for the most common use cases and can be adapted for any multi-core architecture that uses software-managed local memories.
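As a loose, host-language illustration of split-join stream communication (a conceptual sketch under assumed semantics, not the StreamWorks FIFO virtualization itself): a producer pushes into one logical FIFO, a splitter deals items round-robin to per-worker lanes, and a joiner merges results back in the original order.

```python
from collections import deque

class SplitJoinFifo:
    """Toy software FIFO with round-robin split to N lanes and in-order join.

    The real mechanism in the dissertation virtualizes FIFOs across the
    scratchpad memories of multiple StreamEngines; this only shows the
    split-join ordering semantics.
    """
    def __init__(self, n_ways):
        self.n_ways = n_ways
        self.lanes = [deque() for _ in range(n_ways)]
        self._push_idx = 0   # next lane to receive an item (split)
        self._pop_idx = 0    # next lane to deliver an item (join)

    def push(self, item):
        self.lanes[self._push_idx].append(item)
        self._push_idx = (self._push_idx + 1) % self.n_ways

    def lane(self, i):
        """Expose lane i so worker i can consume/produce its share."""
        return self.lanes[i]

    def pop(self):
        item = self.lanes[self._pop_idx].popleft()
        self._pop_idx = (self._pop_idx + 1) % self.n_ways
        return item

# Producer streams 8 items, two "workers" square their lanes in place,
# and the joiner reads the results back in the original order.
fifo = SplitJoinFifo(n_ways=2)
for x in range(8):
    fifo.push(x)
for i in range(2):
    fifo.lanes[i] = deque(v * v for v in fifo.lane(i))
print([fifo.pop() for _ in range(8)])   # [0, 1, 4, 9, 16, 25, 36, 49]
```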

The performance and energy efficiency of the StreamWorks architecture are evaluated on stream kernel and application benchmarks by implementing the architecture in a TSMC 45nm process and comparing it with a low-power RISC core and a contemporary accelerator.
Contributors: Panda, Amrit (Author) / Chatha, Karam S. (Thesis advisor) / Wu, Carole-Jean (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2014