Matching Items (57)

Description
In recent years, we have observed the prevalence of stream applications in many embedded domains. Stream programs distinguish themselves from traditional sequential programming languages through well-defined independent actors, explicit data communication, and stable code/data access patterns. In order to achieve high performance and low power, scratch pad memory (SPM) has been introduced in today's embedded multicore processors. Current design frameworks for developing stream applications on SPM-enhanced embedded architectures typically do not include a compiler that can perform automatic partitioning, mapping, and scheduling under limited on-chip SPM capacities and memory access delays. Consequently, many designs are implemented manually, which leads to lengthy design times and inferior designs. In this work, optimization techniques that automatically compile stream programs onto embedded multicore architectures are proposed. As an initial case study, we implemented an automatic target recognition (ATR) algorithm on the IBM Cell Broadband Engine (BE). Then, integer linear programming (ILP) and heuristic approaches were proposed to schedule stream programs on a single-core embedded processor that has an SPM with code overlay. Later, ILP and heuristic approaches for Compiling Stream programs on SPM-enhanced Multicore Processors (CSMP) were studied. The proposed CSMP ILP and heuristic approaches do not optimize for cycles in stream applications. Further, the number of software pipeline stages in the implementation depends on the actor-to-processing-engine (PE) mapping and is uncontrollable. We next presented a Retiming technique for Throughput optimization on Embedded Multi-core processors (RTEM). The RTEM approach inherently handles cycles and can accept an upper bound on the number of software pipeline stages to be generated. We further enhanced RTEM by incorporating unrolling (URSTEM), which preserves all the beneficial properties of the RTEM heuristic and also scales with the number of PEs through unrolling.
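To make the actor-to-PE mapping problem concrete, here is a minimal, hypothetical sketch of a greedy load-balancing heuristic; the actor names and cycle counts are invented, and the dissertation's actual CSMP/RTEM techniques rely on ILP formulations and retiming rather than this toy rule.

```python
# Illustrative sketch only: a greedy load-balancing heuristic for mapping
# stream actors onto processing engines (PEs). Actor names, workloads, and
# the greedy rule are hypothetical placeholders.

def map_actors_to_pes(actor_workloads, num_pes):
    """Assign each actor to the currently least-loaded PE.

    actor_workloads: dict mapping actor name -> estimated cycles per firing.
    Returns (mapping, per-PE loads). Pipeline throughput is limited by the
    most heavily loaded PE (the slowest software pipeline stage).
    """
    loads = [0] * num_pes
    mapping = {}
    # Place heavy actors first so they anchor the load balance.
    for actor, cycles in sorted(actor_workloads.items(),
                                key=lambda kv: -kv[1]):
        pe = min(range(num_pes), key=lambda p: loads[p])
        mapping[actor] = pe
        loads[pe] += cycles
    return mapping, loads

if __name__ == "__main__":
    # Hypothetical workloads (cycles per firing) for a small stream graph.
    workloads = {"src": 100, "fir1": 800, "fir2": 750, "fft": 1200, "sink": 150}
    mapping, loads = map_actors_to_pes(workloads, num_pes=3)
    print(mapping)     # actor -> PE index
    print(max(loads))  # bottleneck stage time bounds throughput
```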
Contributors: Che, Weijia (Author) / Chatha, Karam Singh (Thesis advisor) / Vrudhula, Sarma (Committee member) / Chakrabarti, Chaitali (Committee member) / Shrivastava, Aviral (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Negative bias temperature instability (NBTI) is a leading aging mechanism in modern digital and analog circuits. Recent NBTI data exhibit an excessive amount of randomness and fast recovery, which are difficult to handle with the conventional power-law model (t^n). Such discrepancies further challenge long-term reliability prediction under statistical variations and Dynamic Voltage Scaling (DVS) in real circuit operation. To overcome these barriers, the modeling effort in this work (1) practically explains the aging statistics due to randomness in the number of traps with a log(t) model, accurately predicting the mean and variance shift; (2) proposes a cycle-to-cycle model (from the first principles of trapping) to handle aging under multiple supply voltages, predicting the non-monotonic behavior under DVS; (3) presents a long-term model to estimate a tight upper bound on dynamic aging over multiple cycles; and (4) comprehensively validates the new set of aging models with 65nm statistical silicon data. Compared to previous models, the new set of aging models captures the aging variability and the essential role of the recovery phase under DVS, reducing unnecessary guard-banding during the design stage. With CMOS technology scaling, design for reliability has become an important step in the design cycle and has increased the need for efficient and accurate aging simulation methods during the design stage. NBTI-induced delay shifts in logic paths are asymmetric in nature, as opposed to the averaging effect due to recovery assumed in traditional aging analysis. Timing violations due to aging, in particular, are very sensitive to the standby operation regime of a digital circuit. In this report, by identifying the critical moments in circuit operation and considering the asymmetric aging effects, timing violations under the NBTI effect are correctly predicted. The unique contributions of the simulation flow include: (1) accurate modeling of the aging-induced delay shift due to threshold voltage (Vth) shift, using only the delay dependence on supply voltage from the cell library; (2) a simulation flow for asymmetric aging analysis, proposed and conducted at critical points in circuit operation; (3) investigation of setup and hold timing violations due to NBTI aging in logic and clock buffers in sequential circuits; and (4) testing of the proposed framework in VLSI applications such as DDR memory circuits. This methodology is comprehensively demonstrated with ISCAS89 benchmark circuits using a 45nm Nangate standard cell library characterized using predictive technology models. Our proposed design margin assessment provides design insights and enables resilient techniques for mitigating digital circuit aging.
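As a purely illustrative aid, the sketch below evaluates a generic log(t) trapping form next to the power-law form the abstract contrasts it with; the coefficients are invented placeholders, not the fitted 65nm silicon parameters.

```python
import math

# Minimal numerical sketch of a log(t) aging form for NBTI-induced
# threshold-voltage shift versus the conventional power-law t^n form.
# All coefficients (A, t0, B, n) are made-up illustrative values.

def dvth_log(t, A=5e-3, t0=1e-6):
    """Delta-Vth (V) under a log(t) trapping model: A * log(1 + t/t0)."""
    return A * math.log(1.0 + t / t0)

def dvth_powerlaw(t, B=1e-3, n=0.16):
    """Delta-Vth (V) under the conventional power-law model: B * t^n."""
    return B * t ** n

for t in (1.0, 1e3, 1e6, 1e8):  # stress time in seconds
    print(f"t={t:.0e}s  log-model={dvth_log(t)*1e3:6.1f} mV  "
          f"power-law={dvth_powerlaw(t)*1e3:6.1f} mV")
```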
Contributors: Velamala, Jyothi Bhaskarr (Author) / Cao, Yu (Thesis advisor) / Clark, Lawrence (Committee member) / Chakrabarti, Chaitali (Committee member) / Yu, Hongbin (Committee member) / Arizona State University (Publisher)
Created: 2012
Description
Thousands of high-resolution images are generated each day. Segmenting, classifying, and analyzing the contents of these images are the key steps in image understanding. This thesis focuses on image segmentation and classification and their applications in synthetic, texture, natural, biomedical, and industrial images. A robust level-set-based multi-region and texture image segmentation approach is proposed in this thesis to tackle most of the challenges in existing multi-region segmentation methods, including computational complexity and sensitivity to initialization. Medical image analysis helps in understanding biological processes and disease pathologies. In this thesis, two cell evolution analysis schemes are proposed for cell cluster extraction in order to analyze cell migration, cell proliferation, and cell dispersion in different cancer cell images. The proposed schemes accurately segment both the cell cluster area and the individual cells inside and outside the cell cluster area. The method is currently used by different cell biology labs to study the behavior of cancer cells, which helps in drug discovery. Defects can cause failures in motherboards, processors, and semiconductor units. An automatic defect detection and classification methodology is very desirable in many industrial applications: it produces consistent results, simplifies processing, shortens processing time, and reduces cost. In this thesis, three defect detection and classification schemes are proposed to automatically detect and classify different defects in semiconductor unit images. The first scheme detects and classifies the solder balls in processor sockets as either defective (Non-Wet) or non-defective; it produces a 96% classification rate and saves 89% of the operator's time. The second scheme detects and measures voids inside solder balls of different boards and products. The third scheme detects different defects in the die area of semiconductor unit images, such as cracks, scratches, foreign materials, fingerprints, and stains. The three proposed schemes give high accuracy and are inexpensive to implement compared to existing high-cost state-of-the-art machines.
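The following toy sketch shows one plausible way to quantify void area by intensity thresholding on a synthetic image; the threshold, geometry, and the assumption that voids image brighter than solid solder are illustrative stand-ins for the far more elaborate detection schemes summarized above.

```python
import numpy as np

# Illustrative sketch: measuring void area inside a solder-ball region by
# simple intensity thresholding. The threshold values and the synthetic
# image are hypothetical, not the dissertation's actual scheme.

def void_fraction(gray, ball_mask, void_thresh=200):
    """Fraction of the solder-ball area occupied by voids.

    Assumes voids appear brighter than solid solder, so pixels above
    `void_thresh` inside the ball mask are counted as void.
    """
    void_pixels = np.logical_and(ball_mask, gray > void_thresh)
    return void_pixels.sum() / max(ball_mask.sum(), 1)

# Synthetic 64x64 test image: a dark ball containing one small bright void.
img = np.full((64, 64), 80, dtype=np.uint8)
yy, xx = np.mgrid[:64, :64]
ball = (yy - 32) ** 2 + (xx - 32) ** 2 < 24 ** 2
img[ball] = 120
img[(yy - 30) ** 2 + (xx - 35) ** 2 < 5 ** 2] = 230  # the void
print(f"void fraction: {void_fraction(img, ball):.3f}")
```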
Contributors: Said, Asaad F (Author) / Karam, Lina (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Tepedelenlioğlu, Cihan (Committee member) / Patel, Nital (Committee member) / Arizona State University (Publisher)
Created: 2010
Description
To extend the lifetime of complementary metal-oxide-semiconductor (CMOS) technology, emerging process techniques are being proposed to conquer the manufacturing difficulties. New structures and materials with electrical properties superior to traditional CMOS are proposed, such as strain technology and the ferroelectric field-effect transistor (Fe-FET). To continue the design success and make an impact on leading products, advanced circuit design exploration must begin concurrently with early silicon development. Therefore, an accurate and scalable model is desired to correctly capture those effects and be flexible enough to extend to alternative process choices. For example, strain technology has been successfully integrated into CMOS fabrication to improve transistor performance, but the stress is non-uniformly distributed in the channel, leading to systematic performance variations. In this dissertation, a new layout-dependent stress model is proposed as a function of layout, temperature, and other device parameters. Furthermore, a method of layout decomposition is developed to partition the layout into a set of simple patterns for model extraction. These solutions significantly reduce the complexity of stress modeling and simulation. On the other hand, semiconductor devices with self-feedback mechanisms are emerging as promising alternatives to CMOS. The Fe-FET was proposed to improve switching by integrating a ferroelectric material as the gate insulator in a MOSFET structure. Under particular circumstances, the ferroelectric capacitance is effectively negative, due to the negative slope of its polarization-electric field curve. This property makes the ferroelectric layer a voltage amplifier that boosts the surface potential, achieving fast transitions. A new threshold voltage model for the Fe-FET is developed, and it is further revealed that the impact of random dopant fluctuation (RDF) can be suppressed. Furthermore, the through-silicon via (TSV), a key technology that enables the 3D integration of chips, is studied. The TSV structure is usually a cylindrical metal-oxide-semiconductor (MOS) capacitor. A piecewise capacitance model is proposed for 3D interconnect simulation. Due to the mismatch in coefficients of thermal expansion (CTE) among materials, thermal stress is observed in the TSV process and impacts neighboring devices. The stress impact is investigated to support the interaction between the silicon process and IC design at an early stage.
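To illustrate the negative-capacitance amplification mentioned above, here is a hedged back-of-the-envelope sketch treating the gate stack as a simple capacitive divider; the capacitance values are arbitrary, and this textbook relation is offered only as intuition, not as the dissertation's threshold voltage model.

```python
# Back-of-the-envelope sketch of why an effectively negative ferroelectric
# capacitance amplifies surface potential. Treating the gate stack as a
# capacitive divider, d(psi_s)/d(Vg) = C_fe / (C_fe + C_s); with C_fe < 0
# and |C_fe| > C_s the gain exceeds 1. The values below are arbitrary
# illustrative numbers, not extracted device parameters.

def surface_potential_gain(c_fe, c_s):
    """Small-signal gain from gate voltage to surface potential."""
    return c_fe / (c_fe + c_s)

c_s = 1.0  # semiconductor capacitance (arbitrary units)
for c_fe in (3.0, -3.0, -1.5):
    g = surface_potential_gain(c_fe, c_s)
    print(f"C_fe={c_fe:+.1f}: gain={g:.2f}"
          + ("  (amplification)" if g > 1 else ""))
```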
Contributors: Wang, Chi-Chao (Author) / Cao, Yu (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Clark, Lawrence (Committee member) / Schroder, Dieter (Committee member) / Arizona State University (Publisher)
Created: 2011
Description
Convolutional neural networks (CNNs) achieve high accuracy on large datasets but require significant computation and storage resources for training and testing. While many applications demand low-latency and energy-efficient processing of images, deploying these complex algorithms on hardware is a challenging task. This dissertation first presents a compiler-based CNN training accelerator using DDR3 and HBM2 memory. An optimized RTL library is implemented to perform training-specific tasks, and an RTL compiler is developed to generate FPGA-synthesizable RTL based on user-defined constraints. High Bandwidth Memory (HBM) provides efficient off-chip communication and improves the training performance. The impact of HBM2 on CNN training workloads is analyzed and comprehensively compared with DDR3. For training ResNet-20/VGG-like CNNs on the CIFAR-10 dataset, the proposed CNN training accelerator on a Stratix-10 GX FPGA (DDR3) demonstrates 479 GOPS performance, and on a Stratix-10 MX FPGA (HBM) shows 4.5×/9.7× energy-efficiency improvement compared to a Tesla V100 GPU. Next, an FPGA online learning accelerator is presented. Adopting model segmentation techniques from Progressive Segmented Training (PST), the online learning accelerator achieved a 4.2× reduction in training latency. Furthermore, this dissertation presents an 8-bit floating-point (FP8) training processor which implements (1) highly parallel tensor cores that maintain high PE utilization, (2) hardware-efficient channel gating for dynamic output-activation sparsity, (3) dynamic weight sparsity based on group Lasso, and (4) gradient skipping based on FP prediction error. The 28nm prototype chip demonstrates significant improvements in FLOPs reduction (7.3×), energy efficiency (16.4 TFLOPS/W), and overall training latency speedup (4.7×) for both supervised and self-supervised training tasks. In addition to the training accelerators, this dissertation also presents a CNN inference accelerator on ASIC (FixyNN) and FPGA (FixyFPGA). FixyNN consists of a fixed-weight feature extractor that generates ubiquitous CNN features and a conventional programmable CNN accelerator. In the fixed-weight feature extractor, the network weights are hard-coded into hardware and used as a fixed operand for the multiplication. Experimental results demonstrate that FixyNN can achieve very high energy efficiency of up to 26.6 TOPS/W, and FixyFPGA achieves 2.34× higher GOPS on ImageNet classification. In summary, this dissertation comprehensively discusses novel architectures of high-performance and energy-efficient ASIC/FPGA CNN inference/training accelerators.
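As a loose functional sketch of the channel-gating idea named in item (2), the snippet below computes a cheap partial sum to predict which outputs ReLU would zero out and skips the full computation for them; the split ratio, threshold, and shapes are invented, and the actual hardware scheme is not reproduced here.

```python
import numpy as np

# Illustrative sketch of output-activation channel gating: compute a cheap
# partial sum over a subset of input channels, and skip the remaining
# channels when the partial sum predicts the output will be pruned by ReLU.
# base_frac, thresh, and the shapes are hypothetical knobs.

def gated_dot(x, w, base_frac=0.25, thresh=0.0):
    """Per-output gating on a flattened dot product.

    x: (C,) input activations; w: (K, C) weights for K output channels.
    Returns outputs and the fraction of full computations skipped.
    """
    c_base = max(1, int(x.size * base_frac))
    partial = w[:, :c_base] @ x[:c_base]          # cheap predictor
    out = np.zeros(w.shape[0])
    compute = partial > thresh                    # likely to survive ReLU
    out[compute] = np.maximum(w[compute] @ x, 0)  # full MAC only where needed
    return out, 1.0 - compute.mean()

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
w = rng.standard_normal((64, 256))
out, skipped = gated_dot(x, w)
print(f"skipped {skipped:.0%} of full-channel computations")
```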
Contributors: Kolala Venkataramaniah, Shreyas (Author) / Seo, Jae-Sun (Thesis advisor) / Cao, Yu (Committee member) / Chakrabarti, Chaitali (Committee member) / Fan, Deliang (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Adversarial threats to deep learning are increasingly becoming a concern due to the ubiquitous deployment of deep neural networks (DNNs) in many security-sensitive domains. Among the existing threats, adversarial weight perturbation is an emerging class of threats that attempts to perturb the weight parameters of DNNs to breach security and privacy. In this thesis, the first weight perturbation attack introduced is called the Bit-Flip Attack (BFA), which can maliciously flip a small number of bits within a computer's main memory system storing the DNN weight parameters to achieve malicious objectives. Our developed algorithm can achieve three specific attack objectives: (i) un-targeted accuracy degradation attack, (ii) targeted attack, and (iii) Trojan attack. Moreover, BFA utilizes the rowhammer technique to demonstrate the bit-flip attack on an actual computer prototype. While the bit-flip attack is conducted in a white-box setting, the subsequent contribution of this thesis is to develop another novel weight perturbation attack in a black-box setting. Consequently, this thesis discusses a new study of DNN model vulnerabilities in a multi-tenant Field Programmable Gate Array (FPGA) cloud under a strict black-box framework. This newly developed attack framework injects faults from the malicious tenant by duplicating specific DNN weight packages during data transmission between the off-chip memory and on-chip buffer of a victim FPGA. The proposed attack is also experimentally validated in a multi-tenant cloud FPGA prototype. In the final part, the focus shifts toward deep learning model privacy, popularly known as model extraction, which can steal partial DNN weight parameters remotely with the aid of a memory side-channel attack. In addition, a novel training algorithm is designed to utilize the partially leaked DNN weight bit information, making the model extraction attack more effective. The algorithm effectively leverages the partially leaked bit information and generates a substitute prototype of the victim model with almost identical performance to the victim.
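To show only the bit-level mechanics behind such attacks, here is a minimal sketch of what a single flip does to an 8-bit two's-complement weight; the weights and the chosen bit are arbitrary, and the gradient-guided search BFA uses to pick vulnerable bits is deliberately not shown.

```python
import numpy as np

# Minimal sketch of the effect a single bit flip has on an 8-bit quantized
# weight, illustrating why a few flips can be damaging. Bit-level mechanics
# only; the search for which bits to flip is not part of this sketch.

def flip_bit(weights_int8, index, bit):
    """Flip one bit of one two's-complement int8 weight."""
    w = weights_int8.copy()
    u = w.view(np.uint8)            # reinterpret the same bytes, no copy
    u[index] ^= np.uint8(1 << bit)  # flip the chosen bit in place
    return w

w = np.array([37, -5, 102, -88], dtype=np.int8)
w_attacked = flip_bit(w, index=0, bit=7)  # flip the sign/MSB of weight 0
print(w[0], "->", w_attacked[0])          # 37 -> -91: a large magnitude jump
```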
Contributors: Rakin, Adnan Siraj (Author) / Fan, Deliang (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Seo, Jae-Sun (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2022
Description
Artificial Intelligence (AI) and Machine Learning (ML) techniques have come a long way since their inception and have been used to build intelligent systems for a wide range of applications in everyday life. However, they are very computation-intensive and require the transfer of large volumes of data from memory to the computation units. This memory access time constitutes a significant part of the computational latency and is a performance bottleneck. To address this limitation and the ever-growing demand for implementation in hand-held and edge devices, in-memory computing (IMC) based AI/ML hardware accelerators have emerged. First, the dissertation presents an IMC static random access memory (SRAM) based hardware modeling and optimization framework. A unified systematic study closely models the IMC hardware and investigates how a number of design variables and non-idealities (e.g., device mismatch and ADC quantization) affect the Deep Neural Network (DNN) accuracy of the IMC design. The framework allows co-optimized selection of different design variables, accounting for sources of noise in IMC hardware, and robust implementation of a high-accuracy DNN. Next, it presents a k-nearest neighbor (kNN) hardware accelerator in 65nm Complementary Metal-Oxide-Semiconductor (CMOS) technology. The accelerator combines an IMC SRAM that is developed for binarized deep neural networks with other digital hardware that performs top-k sorting. The simulated kNN accelerator design processes up to 17.9 million query vectors per second while consuming 11.8 mW, demonstrating >4.8× energy-efficiency improvement over prior works. This dissertation also presents a novel floating-point precision IMC (FP-IMC) macro with a hybrid architecture that configurably supports two floating-point (FP) precisions. Implementing FP-precision MAC has been a challenge owing to its complexity. The design is implemented in 28nm CMOS and taped out, with the chip demonstrating 12.1 TFLOPS/W and 66.1 TFLOPS/W for 8-bit floating point (FP8) and block floating point (BF8), respectively. Finally, another iteration of the FP design is presented that is modeled to support multiple precision modes from FP8 up to FP32. Two approaches to the architectural design were compared, illustrating the throughput versus area-overhead trade-off. The simulated design shows a 2.1× normalized energy-efficiency improvement compared to the on-chip implementation of the FP-IMC.
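Purely as intuition for the two non-idealities named above, the sketch below injects Gaussian per-cell mismatch into an analog column sum and quantizes the result with a uniform ADC; the mismatch sigma, ADC bit-width, and clipping range are made-up knobs rather than the framework's calibrated models.

```python
import numpy as np

# Illustrative model of two IMC non-idealities: per-cell device mismatch
# (Gaussian current variation) and ADC quantization of the analog column
# sum. sigma, adc_bits, and the clipping range are hypothetical values.

def imc_column_mac(x_bin, w_bin, sigma=0.05, adc_bits=5, rng=None):
    """One analog column MAC: binary inputs/weights, noisy cells, ADC readout.

    x_bin, w_bin: arrays of 0/1. The ideal result is the dot product.
    """
    rng = rng or np.random.default_rng()
    cells = x_bin * w_bin * (1.0 + sigma * rng.standard_normal(x_bin.size))
    analog_sum = cells.sum()
    # Uniform ADC over [0, len(x)] with 2**adc_bits levels.
    levels = 2 ** adc_bits - 1
    step = x_bin.size / levels
    return round(np.clip(analog_sum, 0, x_bin.size) / step) * step

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 256)
w = rng.integers(0, 2, 256)
print(f"ideal={int(x @ w)}, readout={imc_column_mac(x, w, rng=rng):.1f}")
```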
Contributors: Saikia, Jyotishman (Author) / Seo, Jae-Sun (Thesis advisor) / Chakrabarti, Chaitali (Thesis advisor) / Fan, Deliang (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
This thesis addresses the problems of (a) scheduling multiple streaming jobs with soft deadline constraints to minimize the risk/energy consumption in heterogeneous Systems-on-Chip (SoCs), and (b) training a neural network model with high accuracy and low training time using split federated learning (SFL) with heterogeneous clients. Designing a scheduler for heterogeneous SoCs built with different types of processing elements (PEs) is quite challenging, especially when it has to balance the conflicting requirements of low energy consumption, low risk, and high throughput for randomly streaming jobs at run time. Two probabilistic deadline-aware schedulers are designed for heterogeneous SoCs for such jobs with soft deadline constraints, with the goals of optimizing job-level risk and energy efficiency. The key idea of the probabilistic scheduler is to calculate the task-to-PE allocation probabilities when a job arrives in the system. This allocation probability, generated by a manually designed or neural network (NN) based allocation function, is used to compute the intra-job and inter-job contentions to derive the task-level slack. The tasks are allocated to the PEs that can complete the task within the task-level slack with minimum risk or minimum energy consumption. SFL is an edge-friendly decentralized NN training scheme, where the model is split and only a small client-side model is trained in the clients. The communication overhead in SFL is significant since the intermediate activations and gradients of every sample are transmitted in every epoch. Two communication reduction methods have been proposed, namely, loss-aware selective updating to reduce the number of training epochs and a bottleneck layer (BL) to reduce the feature size. Next, the SFL system is trained with heterogeneous clients having different data rates and operating on non-IID data. The communication time of clients in the low-end group with slow data rates dominates the training time. To reduce the training time without sacrificing accuracy significantly, HeteroSFL is built with HetBL and bidirectional knowledge sharing (BDKS). HetBL compresses data with different factors in the low- and high-end groups using narrow and wide bottleneck layers, respectively. BDKS is proposed to mitigate the label distribution skew across different groups. BDKS can also be applied in Federated Learning to address the label distribution skew.
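To make the probabilistic-allocation idea tangible, here is a hypothetical toy scheduler that turns per-PE finish-time scores into allocation probabilities with a softmax and rejects PEs that would miss the task-level slack; the scoring rule and all numbers are invented, not the dissertation's manually designed or NN-based allocation functions.

```python
import numpy as np

# Toy sketch of probabilistic task-to-PE allocation: score each PE for an
# arriving task, softmax the scores into allocation probabilities, and only
# consider PEs whose expected finish time fits the task-level slack.
# Speeds, queue depths, and the slack rule are hypothetical illustrations.

def allocate(task_cycles, slack, pe_speed, pe_queue, temp=1.0, rng=None):
    """Return a chosen PE index, or None if no PE can meet the slack."""
    rng = rng or np.random.default_rng()
    finish = (pe_queue + task_cycles) / pe_speed  # expected finish times
    feasible = finish <= slack
    if not feasible.any():
        return None                               # deadline at risk
    score = -finish                               # prefer earlier finish
    score[~feasible] = -np.inf                    # never pick infeasible PEs
    p = np.exp((score - score[feasible].max()) / temp)
    p /= p.sum()
    return int(rng.choice(len(pe_speed), p=p))

pe_speed = np.array([1.0, 2.0, 0.5])   # cycles per time unit
pe_queue = np.array([50.0, 400.0, 10.0])
print(allocate(task_cycles=100.0, slack=250.0, pe_speed=pe_speed,
               pe_queue=pe_queue, rng=np.random.default_rng(0)))
```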
Contributors: Chen, Xing (Author) / Chakrabarti, Chaitali (Thesis advisor, Committee member) / Ogras, Umit (Committee member) / Fan, Deliang (Committee member) / Zhang, Jeff (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
In recent years, the proliferation of deep neural networks (DNNs) has revolutionized the field of artificial intelligence, enabling advancements in various domains. With the emergence of efficient learning techniques such as quantization and distributed learning, DNN systems have become increasingly accessible for deployment on edge devices. This accessibility brings significant benefits, including real-time inference on the edge, which mitigates communication latency, and on-device learning, which addresses privacy concerns and enables continuous improvement. However, the resource limitations of edge devices pose challenges in equipping them with robust safety protocols, making them vulnerable to various attacks. Two notable attacks that affect edge DNN systems are Bit-Flip Attacks (BFA) and architecture stealing attacks. BFA compromises the integrity of DNN models, while architecture stealing attacks aim to extract valuable intellectual property by reverse engineering the model's architecture. Furthermore, in Split Federated Learning (SFL) scenarios, where training occurs on distributed edge devices, Model Inversion (MI) attacks can reconstruct clients' data, and Model Extraction (ME) attacks can extract sensitive model parameters. This thesis aims to address these four attack scenarios and develop effective defense mechanisms. To defend against BFA, both passive and active defensive strategies are discussed. Furthermore, for both model inference and training, architecture stealing attacks are mitigated through novel defense techniques, ensuring the integrity and confidentiality of edge DNN systems. In the context of SFL, the thesis showcases defense mechanisms against MI attacks for both supervised and self-supervised learning applications. Additionally, the research investigates ME attacks in SFL and proposes countermeasures to enhance resistance against potential ME attackers. By examining and addressing these attack scenarios, this research contributes to the security and privacy enhancement of edge DNN systems. The proposed defense mechanisms enable safer deployment of DNN models on resource-constrained edge devices, facilitating the advancement of real-time applications, preserving data privacy, and fostering the widespread adoption of edge computing technologies.
Contributors: Li, Jingtao (Author) / Chakrabarti, Chaitali (Thesis advisor) / Fan, Deliang (Committee member) / Cao, Yu (Committee member) / Trieu, Ni (Committee member) / Arizona State University (Publisher)
Created: 2023
Description
Efficient visual sensing plays a pivotal role in enabling high-precision applications in augmented reality and low-power Internet of Things (IoT) devices. This dissertation addresses the primary challenges that hinder energy efficiency in visual sensing: the bottleneck of pixel traffic across camera and memory interfaces, and the energy-intensive analog readout process in image sensors. To overcome the bottleneck of pixel traffic, this dissertation proposes a visual sensing pipeline architecture that enables application developers to dynamically adapt the spatial resolution and update rates for specific regions within the scene. By selectively capturing and processing high-resolution frames only where necessary, the system significantly reduces the energy consumption associated with memory traffic. This is achieved by encoding only the relevant pixels from commercial image sensors with standard raster-scan pixel readout patterns, thus minimizing the data stored in memory. The stored rhythmic pixel region stream is decoded into traditional frame-based representations, enabling seamless integration into existing video pipelines. Moreover, the system includes runtime support that allows flexible specification of the region labels, giving developers fine-grained control over the resolution adaptation process. Experimental evaluations conducted on a Xilinx Field Programmable Gate Array (FPGA) platform demonstrate substantial reductions of 43-64% in interface traffic, while maintaining controllable task accuracy. In addition to the pixel traffic bottleneck, the dissertation tackles the energy-intensive analog readout process in image sensors. To address this, the dissertation proposes aggressive scaling of the analog voltage supplied to the camera. Extensive characterization of off-the-shelf sensors demonstrates that analog voltage scaling can significantly reduce sensor power, albeit at the expense of image quality. To mitigate this trade-off, this research develops a pipeline that allows application developers to adapt the sensor voltage on a frame-by-frame basis. A voltage controller is integrated into the existing Raspberry Pi (RPi) based video streaming pipeline, generating the sensor voltage. On top of that, the system provides a software interface for vision applications to specify the desired voltage levels. Evaluation of the system across a range of voltage-scaling policies on popular vision tasks demonstrates that the technique can deliver up to 73% sensor power savings while maintaining reasonable task fidelity.
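As a rough illustration of region-based encoding, the sketch below stores only pixels inside developer-specified regions of interest and decodes them back onto a frame-shaped canvas; the region coordinates and fill value are invented, and the real pipeline additionally varies per-region resolution and update rate.

```python
import numpy as np

# Illustrative sketch of region-based pixel encoding: from a full
# raster-scan frame, store only the pixels covered by regions of interest,
# then decode back into a frame-shaped canvas for a conventional pipeline.
# Region coordinates and the fill value are hypothetical.

def encode(frame, regions):
    """Keep only ROI pixels. regions: list of (y0, y1, x0, x1)."""
    return [(r, frame[r[0]:r[1], r[2]:r[3]].copy()) for r in regions]

def decode(packets, shape, fill=0):
    """Rebuild a frame-shaped canvas from stored region packets."""
    canvas = np.full(shape, fill, dtype=np.uint8)
    for (y0, y1, x0, x1), patch in packets:
        canvas[y0:y1, x0:x1] = patch
    return canvas

frame = np.random.default_rng(0).integers(0, 256, (480, 640), dtype=np.uint8)
rois = [(100, 200, 150, 300), (300, 360, 400, 520)]
packets = encode(frame, rois)
stored = sum(p.size for _, p in packets)
print(f"stored {stored / frame.size:.1%} of the frame's pixels")
out = decode(packets, frame.shape)
```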
Contributors: Kodukula, Venkatesh (Author) / LiKamWa, Robert (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Brunhaver, John (Committee member) / Nambi, Akshay (Committee member) / Arizona State University (Publisher)
Created: 2023