Description
Characterization of standard cells is one of the crucial steps in IC design. Scaling of CMOS technology has led to timing uncertainties such as cross-coupling noise due to interconnect parasitics, skew variation due to voltage jitter, and the proximity effect of multiple-input switching (MIS). Due to increased operating frequency and process variation, the probability of MIS occurrence and setup/hold failure within a clock cycle is high. The delay variation due to temporal proximity of MIS is significant for multiple-input gates in the standard cell library. The shortest paths are affected most by MIS because they lack an averaging effect; thus, sensitive designs such as SRAM row and column decoder circuits have a high probability of MIS impact. Traditional static timing analysis (STA) assumes a single-input switching (SIS) scenario, which is not adequate to capture gate delay accurately, as the delay variation due to temporal proximity of MIS is ~15%-45%. On the other hand, considering all possible MIS scenarios for characterization is computationally intensive and produces huge data volume. Various modeling techniques have been developed to characterize the MIS effect. Some require coefficient extraction through multiple SPICE simulations and do not discuss a speed-up approach, or they apply models with complicated algorithms to account for the MIS effect. The STA flow accounts for process variation through an uncertainty parameter to improve product yield. Some MIS delay variability models account for MIS variation through a table look-up approach, resulting in huge data volume, or do not consider propagation of the required arrival time (RAT) in the design flow. Thus, there is a need for a methodology that models the MIS effect with fewer computational resources and integrates it into the design flow without trading off accuracy. A finite-point-based analytical model for the MIS effect is proposed for multiple-input logic gates, and a similar approach is extended to setup/hold characterization of sequential elements. Integration of MIS variation into the design flow is explored. The proposed methodology is validated using benchmark circuits at the 45nm technology node under process variation. Experimental results show significant reduction in runtime and data volume with ~10% error compared to SPICE simulation.
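
As a rough illustration of the finite-point idea described above (a handful of characterized skew points standing in for an exhaustive MIS sweep), here is a minimal Python sketch; the delay function, coefficients, and skew points are hypothetical stand-ins, not the dissertation's model:

```python
import numpy as np

def mis_delay(skew_ps, d_sis_ps=50.0, peak_variation=0.30, sigma_ps=20.0):
    """Illustrative MIS gate-delay model (hypothetical, not the thesis's).

    Delay deviates most from the single-input-switching (SIS) value when
    the two inputs switch simultaneously (skew = 0) and decays back to the
    SIS delay as the skew grows.
    """
    return d_sis_ps * (1.0 - peak_variation * np.exp(-(skew_ps / sigma_ps) ** 2))

# "Finite-point" characterization: simulate only a few skews in SPICE,
# then interpolate everything in between instead of sweeping densely.
char_skews = np.array([-60.0, -20.0, 0.0, 20.0, 60.0])  # ps, hypothetical
char_delays = mis_delay(char_skews)                      # stand-in for SPICE data
dense_skews = np.linspace(-60, 60, 121)
est_delays = np.interp(dense_skews, char_skews, char_delays)
print(f"SIS delay ~{mis_delay(1e6):.1f} ps, "
      f"worst-case MIS delay ~{est_delays.min():.1f} ps")
```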
ContributorsSubramaniam, Anupama R (Author) / Cao, Yu (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Roveda, Janet (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created2012
Description
To extend the lifetime of complementary metal-oxide-semiconductor (CMOS) technology, emerging process techniques are being proposed to conquer the manufacturing difficulties. New structures and materials with electrical properties superior to traditional CMOS are proposed, such as strain technology and the feedback field-effect transistor (FB-FET). To continue the design success and make an impact on leading products, advanced circuit design exploration must begin concurrently with early silicon development. Therefore, an accurate and scalable model is desired that correctly captures those effects and is flexible enough to extend to alternative process choices. For example, strain technology has been successfully integrated into CMOS fabrication to improve transistor performance, but the stress is non-uniformly distributed in the channel, leading to systematic performance variations. In this dissertation, a new layout-dependent stress model is proposed as a function of layout, temperature, and other device parameters. Furthermore, a method of layout decomposition is developed to partition the layout into a set of simple patterns for model extraction. These solutions significantly reduce the complexity in stress modeling and simulation. On the other hand, semiconductor devices with self-feedback mechanisms are emerging as promising alternatives to CMOS. The Fe-FET was proposed to improve switching by integrating a ferroelectric material as the gate insulator in a MOSFET structure. Under particular circumstances, the ferroelectric capacitance is effectively negative, due to the negative slope of its polarization-electric field curve. This property makes the ferroelectric layer a voltage amplifier that boosts surface potential, achieving fast transitions. A new threshold voltage model for the Fe-FET is developed, and it is further revealed that the impact of random dopant fluctuation (RDF) can be suppressed. Furthermore, the through-silicon via (TSV), a key technology that enables the 3D integration of chips, is studied. The TSV structure is usually a cylindrical metal-oxide-semiconductor (MOS) capacitor. A piecewise capacitance model is proposed for 3D interconnect simulation. Due to the mismatch in coefficients of thermal expansion (CTE) among materials, thermal stress is observed in the TSV process and impacts neighboring devices. The stress impact is investigated to support the interaction between silicon process and IC design at the early stage.
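
A minimal sketch of the negative-capacitance property mentioned above, using a textbook Landau model of the ferroelectric; the coefficients are illustrative placeholders, not values from the dissertation:

```python
import numpy as np

# Landau free energy U = alpha*P^2 + beta*P^4 gives E = dU/dP; where
# dE/dP < 0 the ferroelectric capacitance is effectively negative.
alpha = -1.0e9   # m/F, negative for a ferroelectric below Tc (illustrative)
beta = 8.0e10    # m^5/(F*C^2), illustrative

P = np.linspace(-0.15, 0.15, 7)          # polarization, C/m^2
E = 2 * alpha * P + 4 * beta * P**3      # electric field from Landau model
dEdP = 2 * alpha + 12 * beta * P**2      # inverse ferroelectric capacitance
for p, dedp in zip(P, dEdP):
    sign = "negative" if dedp < 0 else "positive"
    print(f"P = {p:+.2f} C/m^2 -> dE/dP = {dedp:+.2e}  ({sign} capacitance region)")
```

Near P = 0 the slope dE/dP is negative, which is exactly the voltage-amplification regime the abstract describes.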
ContributorsWang, Chi-Chao (Author) / Cao, Yu (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Clark, Lawrence (Committee member) / Schroder, Dieter (Committee member) / Arizona State University (Publisher)
Created2011
Description
In this thesis, two methodologies are proposed for evaluating the fault response of analog/RF circuits. These approaches evaluate the response of the faulty circuit in terms of specifications/measurements. The faulty response can be used to evaluate important test metrics such as fail probability, fault coverage, and yield coverage of given measurements under process variations. Once the models for the faulty and fault-free circuits are generated, one needs to perform only Monte Carlo sampling (as opposed to Monte Carlo simulation) to compute these statistical parameters with high accuracy. The first method adaptively determines the order of the model, based on the error budget for computing the statistical metrics and the position of the threshold(s), to decide how precisely the necessary models need to be extracted. The second method exploits hierarchy in process variations through a hybrid of heuristics and localized linear models. In experiments on an LNA and a mixer, the adaptive model-order selection procedure reduces the number of necessary simulations by 7.54x and 7.03x, respectively, in the computation of fail probability for an error budget of 2%. In experiments on the LNA, the hybrid approach reduces the number of necessary simulations by 21.9x and 17x for the four- and six-output-parameter cases, with improved accuracy in test statistics estimation.
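
A minimal Python sketch of the sampling-versus-simulation distinction above: a few expensive "simulations" fit a cheap surrogate model, which is then Monte Carlo sampled a million times to estimate fail probability. The circuit function, spec limit, and parameter spreads are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive circuit simulation: gain of a hypothetical LNA
# as a function of two process parameters (not the thesis's actual model).
def spice_like_gain(vth_shift, beta_scale):
    return 15.0 - 8.0 * vth_shift + 3.0 * (beta_scale - 1.0)

# Step 1: a handful of "simulations" to fit a linear response-surface model.
train = rng.normal(size=(20, 2)) * [0.05, 0.03] + [0.0, 1.0]
y = np.array([spice_like_gain(v, b) for v, b in train])
A = np.column_stack([np.ones(len(train)), train])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Step 2: Monte Carlo *sampling* of the cheap model under process variation.
samples = rng.normal(size=(1_000_000, 2)) * [0.05, 0.03] + [0.0, 1.0]
gain = np.column_stack([np.ones(len(samples)), samples]) @ coeffs
spec_limit = 14.5  # dB, hypothetical pass/fail threshold
print(f"estimated fail probability: {np.mean(gain < spec_limit):.4f}")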
ContributorsSubrahmaniyan Radhakrishnan, Gurusubrahmaniyan (Author) / Ozev, Sule (Thesis advisor) / Blain Christen, Jennifer (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created2010
Description
In recent years, the proliferation of deep neural networks (DNNs) has revolutionized the field of artificial intelligence, enabling advancements in various domains. With the emergence of efficient learning techniques such as quantization and distributed learning, DNN systems have become increasingly accessible for deployment on edge devices. This accessibility brings significant benefits, including real-time inference on the edge, which mitigates communication latency, and on-device learning, which addresses privacy concerns and enables continuous improvement. However, the resource limitations of edge devices pose challenges in equipping them with robust safety protocols, making them vulnerable to various attacks. Two notable attacks that affect edge DNN systems are Bit-Flip Attacks (BFA) and architecture stealing attacks. BFA compromises the integrity of DNN models, while architecture stealing attacks aim to extract valuable intellectual property by reverse engineering the model's architecture. Furthermore, in Split Federated Learning (SFL) scenarios, where training occurs on distributed edge devices, Model Inversion (MI) attacks can reconstruct clients' data, and Model Extraction (ME) attacks can extract sensitive model parameters. This thesis aims to address these four attack scenarios and develop effective defense mechanisms. To defend against BFA, both passive and active defensive strategies are discussed. Furthermore, for both model inference and training, architecture stealing attacks are mitigated through novel defense techniques, ensuring the integrity and confidentiality of edge DNN systems. In the context of SFL, the thesis showcases defense mechanisms against MI attacks for both supervised and self-supervised learning applications. Additionally, the research investigates ME attacks in SFL and proposes countermeasures to enhance resistance against potential ME attackers. By examining and addressing these attack scenarios, this research contributes to the security and privacy enhancement of edge DNN systems. The proposed defense mechanisms enable safer deployment of DNN models on resource-constrained edge devices, facilitating the advancement of real-time applications, preserving data privacy, and fostering the widespread adoption of edge computing technologies.
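
For concreteness, here is a minimal sketch of the single primitive behind a bit-flip attack: flipping one bit of a stored quantized weight. The weight values and target bit are arbitrary; real BFAs search for the most damaging bits rather than flipping at random:

```python
import numpy as np

def flip_bit(weights_int8, index, bit):
    """Flip one bit of one stored int8 weight (illustrative BFA step)."""
    w = weights_int8.copy()
    w.view(np.uint8)[index] ^= np.uint8(1 << bit)
    return w

w = np.array([23, -41, 7, 102], dtype=np.int8)
w_attacked = flip_bit(w, index=3, bit=7)  # flip the sign/MSB of one weight
print(w, "->", w_attacked)                # 102 becomes -26: a large swing
```

A single such flip in a vulnerable weight can cause a large accuracy drop, which is why defenses target both detecting and tolerating these perturbations.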
ContributorsLi, Jingtao (Author) / Chakrabarti, Chaitali (Thesis advisor) / Fan, Deliang (Committee member) / Cao, Yu (Committee member) / Trieu, Ni (Committee member) / Arizona State University (Publisher)
Created2023
Description
Resistive random-access memory (RRAM), or the memristor, is an emerging technology used in neuromorphic computing to overcome the traditional von Neumann bottleneck by merging the processing and memory units. Two-dimensional (2D) materials with non-volatile switching behavior can be used as the switching layer of RRAMs, exhibiting superior behavior compared to conventional oxide-based RRAMs. The use of 2D materials allows scaling the resistive switching layer thickness to sub-nanometer dimensions, enabling devices to operate with low switching voltages and high programming speeds, and offering large improvements in efficiency and performance as well as ultra-dense integration. This dissertation presents an extensive study of linear and logistic regression algorithms implemented on 1-transistor-1-resistor (1T1R) memristor crossbar arrays. For this task, a simulation platform is used that wraps circuit-level simulations of 1T1R crossbars around a physics-based RRAM model to elucidate the impact of device variability on algorithm accuracy, convergence rate, and precision. Moreover, a smart pulsing strategy is proposed for the practical implementation of synaptic weight updates that can accelerate training in real crossbar architectures. Next, this dissertation reports on the hardware implementation of the analog dot-product operation on arrays of 2D hexagonal boron nitride (h-BN) memristors. This extends beyond previous work that studied isolated device characteristics, toward analog neural network accelerators based on 2D memristor arrays. Wafer-level fabrication of the memristor arrays is enabled by large-area transfer of CVD-grown few-layer h-BN films. The dot-product operation shows excellent linearity and repeatability with low read energy consumption, and minimal error and deviation over various measurement cycles. Moreover, the successful implementation of stochastic linear and logistic regression algorithms in 2D h-BN memristor hardware is presented for the classification of noisy images. Additionally, the electrical performance of the novel 2D h-BN memristor for spiking neural network (SNN) applications is extensively investigated. Then, using the experimental behavior of the h-BN memristor as the artificial synapse, an unsupervised SNN is simulated for an image classification task. A novel and simple spike-timing-dependent plasticity (STDP)-based dropout technique is presented to enhance the recognition task of the h-BN memristor-based SNN.
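
A minimal sketch of the analog dot product and the device-variability question studied here: column currents follow V·G by Ohm's and Kirchhoff's laws, and conductance variation perturbs the result. The array size and lognormal variability model are assumptions, not measured h-BN data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Analog dot product on a 1T1R crossbar: column current = V @ G.
G_target = rng.uniform(1e-6, 1e-4, size=(64, 16))              # programmed conductances, S
G_actual = G_target * rng.lognormal(0.0, 0.05, G_target.shape)  # device-to-device variability
V_in = rng.uniform(0.0, 0.2, size=64)                           # read voltages on rows, V

I_ideal = V_in @ G_target    # what the algorithm intends
I_real = V_in @ G_actual     # what the hardware returns
err = np.abs(I_real - I_ideal) / np.abs(I_ideal)
print(f"mean column error from variability: {err.mean():.2%}")
```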
ContributorsAfshari, Sahra (Author) / Sanchez Esqueda, Ivan (Thesis advisor) / Barnaby, Hugh J (Committee member) / Seo, Jae-Sun (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created2023
Description
With the fast development of industry comes indelible pollution of the natural environment. As a consequence, air quality is getting worse, which seriously affects people's health. With such concerns, continuous air quality monitoring and prediction are necessary. Traditional air quality monitoring methods cannot use large amounts of historical data to make accurate predictions; moreover, traditional prediction methods can only roughly predict the air quality level over a short time. With the development of artificial intelligence algorithms [1] and high-performance computing, the latest mathematical methods and algorithms are able to generate much more accurate predictions based on long-term past data. This master's thesis project explores the development of deep-learning-based air quality prediction using real sensor-network time-series air quality data from the STAIR system [3].
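
A minimal sketch of the kind of sequence model such a project might use: an LSTM over sliding windows of multi-sensor readings predicting the next value. The architecture, sensor count, and synthetic data are assumptions, not the thesis's actual model or STAIR data:

```python
import torch
import torch.nn as nn

class AQPredictor(nn.Module):
    """Sequence-to-one air-quality predictor (illustrative sketch)."""
    def __init__(self, n_sensors=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, sensors)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict the next reading

model = AQPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 24, 8)               # 16 windows of 24 hours, 8 sensors
y = torch.randn(16, 1)                   # next-hour targets (synthetic here)
for _ in range(5):                        # a few illustrative training steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.3f}")
```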
ContributorsZhou, Zeming (Author) / Fan, Deliang (Thesis advisor) / Cao, Yu (Committee member) / Yu, Haofei (Committee member) / Arizona State University (Publisher)
Created2023
Description
Convolutional neural networks (CNNs) achieve high accuracy on large datasets but require significant computation and storage for training and testing. While many applications demand low-latency and energy-efficient processing of images, deploying these complex algorithms on hardware is a challenging task. This dissertation first presents a compiler-based CNN training accelerator using DDR3 and HBM2 memory. An optimized RTL library is implemented to perform training-specific tasks, and an RTL compiler is developed to generate FPGA-synthesizable RTL based on user-defined constraints. High Bandwidth Memory (HBM) provides efficient off-chip communication and improves training performance. The impact of HBM2 on CNN training workloads is analyzed and comprehensively compared with DDR3. For training ResNet-20/VGG-like CNNs on the CIFAR-10 dataset, the proposed CNN training accelerator on a Stratix-10 GX FPGA (DDR3) demonstrates 479 GOPS performance, and on a Stratix-10 MX FPGA (HBM) it shows 4.5×/9.7× energy-efficiency improvement compared to a Tesla V100 GPU. Next, an FPGA online learning accelerator is presented. Adopting model segmentation techniques from Progressive Segmented Training (PST), the online learning accelerator achieves a 4.2× reduction in training latency. Furthermore, this dissertation presents an 8-bit floating-point (FP8) training processor which implements (1) highly parallel tensor cores that maintain high PE utilization, (2) hardware-efficient channel gating for dynamic output activation sparsity, (3) dynamic weight sparsity based on group Lasso, and (4) gradient skipping based on FP prediction error. The 28nm prototype chip demonstrates significant improvements in FLOPs reduction (7.3×), energy efficiency (16.4 TFLOPS/W), and overall training latency speedup (4.7×) for both supervised and self-supervised training tasks. In addition to the training accelerators, this dissertation also presents CNN inference accelerators on ASIC (FixyNN) and FPGA (FixyFPGA). FixyNN consists of a fixed-weight feature extractor that generates ubiquitous CNN features and a conventional programmable CNN accelerator. In the fixed-weight feature extractor, the network weights are hard-coded into the hardware and used as a fixed operand for multiplication. Experimental results demonstrate that FixyNN achieves very high energy efficiency of up to 26.6 TOPS/W, and FixyFPGA achieves 2.34× higher GOPS on ImageNet classification. In summary, this dissertation comprehensively discusses novel architectures for high-performance and energy-efficient ASIC/FPGA CNN inference/training accelerators.
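
A minimal sketch of the group-Lasso idea mentioned in (3) above, with one group per output channel so that entire channels can be driven to zero and skipped in hardware; the layer shape and penalty weight are illustrative:

```python
import torch
import torch.nn as nn

def group_lasso_penalty(conv_weight):
    """Group-Lasso penalty with one group per output channel.

    Pushing a whole channel's L2 norm to zero lets hardware skip that
    channel entirely (structured, hardware-friendly sparsity).
    """
    # conv weight shape: (out_channels, in_channels, kH, kW)
    return conv_weight.flatten(start_dim=1).norm(dim=1).sum()

conv = nn.Conv2d(16, 32, kernel_size=3)
x = torch.randn(4, 16, 8, 8)
task_loss = conv(x).pow(2).mean()            # stand-in for the real task loss
loss = task_loss + 1e-3 * group_lasso_penalty(conv.weight)
loss.backward()                              # gradients now favor empty channels
```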
ContributorsKolala Venkataramaniah, Shreyas (Author) / Seo, Jae-Sun (Thesis advisor) / Cao, Yu (Committee member) / Chakrabarti, Chaitali (Committee member) / Fan, Deliang (Committee member) / Arizona State University (Publisher)
Created2022
Description
Many of the advanced integrated circuits in the past used monolithic-grade die due to power, performance, and cost considerations. Today, heterogeneous integration of multiple dies into a single package is possible because of advancements in packaging. These heterogeneous multi-chiplet systems provide high performance at minimum fabrication cost. The main challenge is to interconnect these chiplets while keeping power and performance close to monolithic grade. Intel's Advanced Interface Bus (AIB) is a short-reach interface that offers high-bandwidth, power-efficient, low-latency, and cost-effective on-package connectivity between chiplets. It supports flexible interconnection of the chiplets with high-speed data transfer. Specifically, it is a die-to-die parallel interface implemented with multiple configurable channels routed between micro-bumps. In this work, the AIB model is synthesized at the 65nm technology node and a performance model is generated. This model generates area, power, and latency results for multiple technology nodes using technology scaling methods. For all nodes, the area, power, and latency values increase linearly with frequency and the number of channels. The bandwidth also increases linearly with the number of input/output lanes, which is a function of the micro-bump pitch. Next, the AIB performance model is integrated with the benchmarking simulator Scalable In-Memory Acceleration With Mesh (SIAM) to realize a scalable chiplet-based end-to-end system. The Ground-Referenced Signaling (GRS) driver model in SIAM is replaced with the AIB model, and an end-to-end evaluation of deep neural network (DNN) performance is carried out for two contemporary DNN models. Comparative analysis between SIAM with GRS and SIAM with AIB shows that while the area of the AIB transmitter is smaller than that of the GRS transmitter, the AIB transmitter offers higher bandwidth at the expense of higher energy. Furthermore, SIAM with AIB provides more realistic timing numbers, since the NoP driver latency is also taken into consideration.
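
A toy first-order cost model reflecting the linear trends reported above; all coefficients (area and power per channel, lanes per channel) are hypothetical placeholders, not the synthesized AIB numbers:

```python
def aib_model(freq_ghz, n_channels, lanes_per_channel=20,
              area_per_ch_mm2=0.01, power_per_ch_ghz_mw=1.5):
    """Toy first-order AIB cost model (hypothetical coefficients).

    Mirrors the trends in the text: area and power scale linearly with
    channel count and frequency; bandwidth scales with total lanes.
    """
    area = n_channels * area_per_ch_mm2
    power = n_channels * freq_ghz * power_per_ch_ghz_mw          # mW
    bandwidth = n_channels * lanes_per_channel * freq_ghz * 2    # Gb/s, DDR
    return area, power, bandwidth

for ch in (4, 8, 16):
    a, p, bw = aib_model(freq_ghz=2.0, n_channels=ch)
    print(f"{ch:2d} channels: {a:.2f} mm^2, {p:.1f} mW, {bw:.0f} Gb/s")
```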
ContributorsCherian, Ninoo Susan (Author) / Chakrabarti, Chaitali (Thesis advisor) / Cao, Yu (Committee member) / Fan, Deliang (Committee member) / Arizona State University (Publisher)
Created2022
Description
Deep neural networks (DNNs), as the mainstream algorithms for various AI tasks, achieve higher accuracy at the cost of increased computational complexity and model size, posing great challenges to hardware platforms. This dissertation first tackles the design challenges of resistive random-access memory (RRAM) based in-memory computing (IMC) architectures. A new metric, model stability from the loss landscape, is proposed to help shed light on accuracy under variations and model compression, and to guide a novel variation-aware training (VAT) solution. The proposed method effectively improves post-mapping accuracy on multiple datasets. Next, a hybrid RRAM/SRAM IMC DNN inference accelerator is developed that integrates an RRAM-based IMC macro, a reconfigurable SRAM-based multiply-accumulate (MAC) macro, and a programmable shifter. The hybrid IMC accelerator fully recovers the inference accuracy after mapping. Furthermore, this dissertation investigates architectural optimizations for high IMC utilization, low on-chip communication cost, and low energy-delay product (EDP), including on-chip interconnect design, PE array utilization, and tile-to-router mapping and scheduling. The optimal choice of on-chip interconnect results in up to 6x improvement in energy-delay-area product for RRAM IMC architectures. Furthermore, the PE and NoC optimizations show up to 62% improvement in PE utilization, 78% reduction in area, and 78% lower energy-area product for a wide range of modern DNNs. Finally, this dissertation proposes a novel chiplet-based IMC benchmarking simulator, SIAM, and a heterogeneous chiplet IMC architecture to address the limitations of a monolithic DNN accelerator. SIAM utilizes model-based and cycle-accurate simulation to provide a scalable and flexible architecture, and is calibrated against a published silicon result, SIMBA, from Nvidia. The heterogeneous architecture utilizes a custom mapping with a bank of big and little chiplets and a hybrid network-on-package (NoP) to optimize utilization, interconnect bandwidth, and energy efficiency. The proposed big-little chiplet-based RRAM IMC architecture significantly improves energy efficiency at lower area compared to conventional GPUs. In summary, this dissertation comprehensively investigates novel methods that encompass device, circuits, architecture, packaging, and algorithms to design scalable, high-performance, and energy-efficient IMC architectures.
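
A minimal sketch of variation-aware training's core mechanism: injecting weight noise during the forward pass so the trained model tolerates conductance variation after mapping. The multiplicative Gaussian noise model and sigma are assumptions, not the dissertation's VAT formulation:

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Variation-aware layer sketch: each forward pass perturbs the weights
    the way RRAM conductance variation would (hypothetical noise model)."""
    def __init__(self, *args, sigma=0.05, **kwargs):
        super().__init__(*args, **kwargs)
        self.sigma = sigma

    def forward(self, x):
        noise = torch.randn_like(self.weight) * self.sigma * self.weight.abs()
        return nn.functional.linear(x, self.weight + noise, self.bias)

layer = NoisyLinear(64, 10)
out = layer(torch.randn(8, 64))   # training against this noise improves
out.sum().backward()              # post-mapping (noisy) accuracy
```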
ContributorsKrishnan, Gokul (Author) / Cao, Yu (Thesis advisor) / Seo, Jae-Sun (Committee member) / Chakrabarti, Chaitali (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created2022
Description
The Internet of Things (IoT) has become a popular topic in industry over recent years; it describes an ecosystem of internet-connected devices, or things, that enrich everyday life by improving our productivity and efficiency. The primary components of the IoT ecosystem are hardware, software, and services. While the software and services of an IoT system focus on data collection and processing to make decisions, the underlying hardware is responsible for sensing the information, preprocessing it, and transmitting it to the servers. Since the IoT ecosystem is still in its infancy, there is a great need for rapid prototyping platforms that would help accelerate the hardware design process. However, depending on the target IoT application, different sensors are required to sense signals such as heart rate, temperature, pressure, and acceleration, so there is a great need for reconfigurable platforms that can prototype different sensor-interfacing circuits.

This thesis primarily focuses on two important hardware aspects of an IoT system: (a) an FPAA-based reconfigurable sensing front-end system and (b) an FPGA-based reconfigurable processing system. To enable reconfiguration capability for any sensor type, the Programmable ANalog Device Array (PANDA), a transistor-level analog reconfigurable platform, is proposed. The CAD tools required for implementing front-end circuits on the platform are also developed. To demonstrate the capability of the platform in silicon, a small-scale array of 24×25 PANDA cells is fabricated in 65nm technology. Several analog circuit building blocks, including amplifiers, bias circuits, and filters, are prototyped on the platform, which demonstrates its effectiveness for rapidly prototyping IoT sensor interfaces.

IoT systems typically use machine learning algorithms that run on servers to process the data in order to make decisions. Recently, embedded processors have been used to preprocess the data at the energy-constrained sensor node or at the IoT gateway, which saves considerable transmission energy and bandwidth. Using conventional CPU-based systems to implement these machine learning algorithms is not energy-efficient. Hence, an FPGA-based hardware accelerator is proposed, and an optimization methodology is developed to maximize the throughput of any convolutional neural network (CNN) based machine learning algorithm on a resource-constrained FPGA.
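
A minimal sketch of the kind of first-order, roofline-style estimate such a throughput-optimization methodology might build on; the DSP count, frequency, bandwidth, and layer statistics are hypothetical:

```python
def conv_layer_time_s(macs, dsp_count, freq_mhz, bytes_moved, bw_gbps):
    """Roofline-style per-layer latency estimate (illustrative, not the
    thesis's optimizer): a layer is either compute- or bandwidth-bound."""
    compute_s = macs / (dsp_count * freq_mhz * 1e6)   # assume 1 MAC/DSP/cycle
    memory_s = bytes_moved / (bw_gbps * 1e9)
    return max(compute_s, memory_s)

# Hypothetical CNN layers: (MAC count, bytes of weights + activations moved)
layers = [(231e6, 6.2e6), (462e6, 3.1e6), (115e6, 9.8e6)]
total = sum(conv_layer_time_s(m, dsp_count=1024, freq_mhz=200,
                              bytes_moved=b, bw_gbps=12.8) for m, b in layers)
print(f"estimated inference time: {total * 1e3:.2f} ms")
```

The optimization methodology then chooses loop unrolling and buffering parameters so that as many layers as possible sit at the compute bound rather than the memory bound.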
ContributorsSuda, Naveen (Author) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Ozev, Sule (Committee member) / Yu, Shimeng (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created2016