Description
Recent advances in Deep Learning (DL) have demonstrated its great potential to surpass, or come close to, human-level performance across multiple domains. Consequently, there is a rising demand to deploy state-of-the-art DL algorithms, e.g., Deep Neural Networks (DNNs), in real-world applications to free humans from repetitive work. On the one hand, the impressive performance achieved by DNNs is normally accompanied by intensive memory and power usage, due to enormous model sizes and high computation workloads, which significantly hampers their deployment on resource-limited cyber-physical systems or edge devices. Thus, the urgent demand for enhancing the inference efficiency of DNNs has attracted great research interest across various communities. On the other hand, scientists and engineers still have insufficient knowledge of the principles of DNNs, which means they are mostly treated as black boxes. Under such circumstances, the DNN is like "the sword of Damocles": its security, or fault-tolerance capability, is an essential concern that cannot be circumvented.

Motivated by the aforementioned concerns, this dissertation comprehensively investigates the emerging efficiency and security issues of DNNs from both software and hardware design perspectives. From the efficiency perspective, model compression via quantization, the foundational technique for efficient inference of a target DNN, is elaborated. In order to maximize the inference performance boost, the deployment of the quantized DNN on a revolutionary Computing-in-Memory based neural accelerator is presented in a cross-layer (device/circuit/system) fashion. From the security perspective, the well-known adversarial attack is investigated, spanning from its original input-attack form (a.k.a. adversarial example generation) to its parameter-attack variant.
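To give a flavor of the quantization technique named above, here is a minimal sketch of uniform symmetric weight quantization in NumPy; the bit-width and the per-tensor scaling scheme are illustrative assumptions, not the dissertation's exact method.

```python
import numpy as np

def quantize_weights(w, n_bits=8):
    """Uniform symmetric quantization of a weight tensor.

    Maps float weights onto integer levels centered at zero, then
    rescales back to floats ("fake quantization") for accuracy studies.
    """
    qmax = 2 ** (n_bits - 1) - 1          # e.g. 127 for 8-bit
    scale = np.abs(w).max() / qmax        # per-tensor scale (an assumption)
    w_int = np.clip(np.round(w / scale), -qmax, qmax)
    return w_int * scale, w_int, scale

w = np.random.randn(64, 64).astype(np.float32)
w_q, w_int, s = quantize_weights(w, n_bits=4)
print("max quantization error:", np.abs(w - w_q).max())
```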
ContributorsHe, Zhezhi (Author) / Fan, Deliang (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Cao, Yu (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created2020
Description
Respiratory behavior provides effective information for characterizing lung functionality, including respiratory rate, respiratory profile, and respiratory volume. Current methods have limited capability for continuous characterization of respiratory behavior and primarily target the measurement of respiratory rate, which has relatively little value in clinical application. In this dissertation, a wireless wearable sensor on a paper substrate is developed to continuously characterize respiratory behavior and deliver clinically relevant parameters, contributing to asthma control. Based on anatomical analysis and experimental results, the optimum site for the wireless wearable sensor is midway between the xiphoid process and the costal margin, corresponding to the abdomen-apposed rib cage. At this site, the sensor measures the linear strain change during respiration and converts it to lung volume utilizing distance-elapsed ultrasound. An on-board low-power Bluetooth module transmits the temporal lung volume change to a smartphone, where a custom-programmed app computes and displays the clinically relevant parameters, such as forced vital capacity (FVC), forced expiratory volume delivered in the first second (FEV1), and the FEV1/FVC ratio. Enhanced by a simple yet effective machine-learning algorithm, a system consisting of two wireless wearable sensors accurately extracts respiratory features and classifies respiratory behavior within four postures across different subjects, demonstrating that respiratory behaviors are individual- and posture-dependent and contributing to the monitoring of posture-related respiratory diseases. Continuous and accurate monitoring of respiratory behaviors can track the progression of respiratory disorders and diseases, enabling timely and objective approaches to control and management.
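As a rough illustration of how the clinical parameters named above could be derived from a lung-volume stream, here is a minimal sketch; the sampling rate, the expiration-onset detection, and the function names are assumptions for illustration, not the app's actual implementation.

```python
import numpy as np

def fvc_fev1(volume, fs=50):
    """Estimate FVC, FEV1, and their ratio from a forced-expiration
    lung-volume trace (liters), sampled at fs Hz.

    Assumes the trace peaks at full inspiration, then decreases
    during the forced expiration.
    """
    start = int(np.argmax(volume))             # expiration onset: peak volume (assumption)
    exhaled = volume[start] - volume[start:]   # cumulative exhaled volume
    fvc = float(exhaled.max())                 # total volume forcefully exhaled
    one_sec = min(start + fs, len(volume) - 1) # index one second after onset
    fev1 = float(volume[start] - volume[one_sec])
    return fvc, fev1, fev1 / fvc

# Synthetic example: exponential emptying of a ~4 L vital capacity
t = np.arange(0, 6, 1 / 50)
vol = 4.0 * np.exp(-t / 0.7)
print(fvc_fev1(vol))                           # FEV1/FVC ~ 0.76 for this curve
```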
ContributorsChen, Ang (Author) / Cao, Yu (Thesis advisor) / Bakkaloglu, Bertan (Committee member) / Allee, David (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created2020
Description
The recording of biosignals enables physicians to correctly diagnose diseases and prescribe treatment. Existing wireless systems have failed to effectively replace the conventional wired methods due to their large sizes, high power consumption, and the need to replace batteries. This thesis aims to alleviate these issues by presenting a series of wireless fully-passive sensors for the acquisition of biosignals, including neuropotentials, biopotentials, and intracranial pressure (ICP), in addition to a stimulator for the pacing of engineered cardiac cells. In contrast to existing wireless biosignal recording systems, the proposed wireless sensors do not contain batteries or high-power electronics such as amplifiers or digital circuitry. Instead, the RFID tag-like sensors utilize a unique radiofrequency (RF) backscattering mechanism to enable wireless and battery-free telemetry of biosignals with extremely low power consumption. This characteristic minimizes the risk of heat-induced tissue damage and avoids the need for any transcranial/transcutaneous wires, and thus significantly enhances long-term safety and reliability. For neuropotential recording, a small (9 mm x 8 mm), biocompatible, and flexible wireless recorder is developed and verified by in vivo acquisition of two types of neural signals, the somatosensory evoked potential (SSEP) and interictal epileptic discharges (IEDs). For wireless multichannel neural recording, a novel time-multiplexed multichannel recording method based on an inductor-capacitor delay circuit is presented and tested, realizing simultaneous wireless recording from 11 channels in a completely passive manner. For biopotential recording, a wearable and flexible wireless sensor is developed, achieving real-time wireless acquisition of ECG, EMG, and EOG signals. For ICP monitoring, a very small (5 mm x 4 mm) wireless ICP sensor is designed and verified both in vitro through a benchtop setup and in vivo through real-time ICP recording in rats. Finally, for cardiac cell stimulation, a flexible wireless passive stimulator, capable of delivering stimulation currents as high as 60 mA, is developed, demonstrating successful control over the contraction of engineered cardiac cells. The studies conducted in this thesis provide information and guidance for the future translation of wireless fully-passive telemetry methods into actual clinical application, especially in the field of implantable and wearable electronics.
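To give a flavor of the time-multiplexed multichannel scheme mentioned above, the sketch below demultiplexes a single backscattered stream into channels using known per-channel slot positions; the frame layout and function names are illustrative assumptions, not the actual LC-delay circuit design.

```python
import numpy as np

def demux_channels(stream, n_channels=11, frame_len=None):
    """Recover per-channel samples from a time-multiplexed stream.

    In an LC-delay scheme, each channel's response arrives at a fixed,
    known offset within every interrogation frame, so demultiplexing
    reduces to slicing each frame at those offsets.
    """
    if frame_len is None:
        frame_len = n_channels                 # one slot per channel (assumption)
    n_frames = len(stream) // frame_len
    frames = stream[:n_frames * frame_len].reshape(n_frames, frame_len)
    return frames[:, :n_channels].T            # shape: (channels, time)

# Synthetic check: interleave 11 sinusoids, then recover them
t = np.arange(200)
truth = np.sin(2 * np.pi * 0.01 * t[None, :] * (1 + np.arange(11)[:, None]))
muxed = truth.T.reshape(-1)                    # time-multiplex the channel slots
recovered = demux_channels(muxed)
print(np.allclose(recovered, truth))           # True
```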
ContributorsLiu, Shiyi (Author) / Christen, Jennifer (Thesis advisor) / Nikkhah, Mehdi (Committee member) / Phillips, Stephen (Committee member) / Cao, Yu (Committee member) / Goryll, Michael (Committee member) / Arizona State University (Publisher)
Created2020
Description
The past decade has seen a tremendous surge in running machine learning (ML) functions on mobile devices, from mere novelty applications to now indispensable features for the next generation of devices.

While mobile platform capabilities range widely, long battery life and reliability are common design concerns that are crucial for remaining competitive.

Consequently, state-of-the-art mobile platforms have become highly heterogeneous, combining powerful CPUs with GPUs to accelerate the computation of deep neural networks (DNNs), the most common structures for performing ML operations.

But traditional von Neumann architectures are not optimized for the high memory bandwidth and massively parallel computation that DNNs demand, which has propelled research into non-von Neumann architectures that can support them.

Re-imagining computer architectures to perform efficient DNN computations requires focusing on the prohibitive demands presented by DNNs and alleviating them. The two central challenges for efficient computation are (1) the large memory storage and movement required by the DNN's weights and (2) the massively parallel multiplications needed to compute the DNN's output.

Introducing sparsity into DNNs, where a certain percentage of either the weights or the outputs of the DNN are zero, greatly helps with both challenges. This, along with algorithm-hardware co-design to compress DNNs, is demonstrated to provide efficient solutions that greatly reduce the power consumption of the hardware that computes DNNs. Additionally, exploring emerging technologies such as non-volatile memories and 3-D stacking of silicon, in conjunction with algorithm-hardware co-designed architectures, will pave the way for the next generation of mobile devices.

Towards the objectives stated above, our specific contributions include (a) an architecture based on a resistive crosspoint array that can update all stored values and compute a matrix-vector multiplication in parallel within a single cycle, (b) a framework for training DNNs with block-wise sparsity to drastically reduce the memory storage and total number of computations required to compute the output of DNNs, (c) an exploration of hardware implementations of sparse DNNs and architectural guidelines for reducing the power consumption of implementations in monolithic 3D integrated circuits, and (d) a prototype accelerator chip in 65nm CMOS for long short-term memory (LSTM) networks trained with the proposed block-wise sparsity scheme.
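To make contribution (b) more concrete, here is a minimal sketch of block-wise weight sparsity: the weight matrix is tiled into blocks and whole blocks are zeroed by magnitude, so hardware can store and fetch only the surviving blocks. The block size and pruning ratio are illustrative assumptions, not the dissertation's trained scheme.

```python
import numpy as np

def blockwise_prune(w, block=(8, 8), sparsity=0.75):
    """Zero out whole blocks of a weight matrix by block L2 norm.

    Keeping zeros in aligned blocks (rather than scattered) lets an
    accelerator skip entire blocks of storage and computation.
    """
    rows, cols = w.shape
    br, bc = block
    assert rows % br == 0 and cols % bc == 0
    blocks = w.reshape(rows // br, br, cols // bc, bc)
    norms = np.linalg.norm(blocks, axis=(1, 3))        # one norm per block
    cutoff = np.quantile(norms, sparsity)              # prune the weakest blocks
    mask = (norms > cutoff)[:, None, :, None]
    return (blocks * mask).reshape(rows, cols)

w = np.random.randn(64, 128)
w_sparse = blockwise_prune(w)
print("fraction of zero weights:", np.mean(w_sparse == 0))  # ~0.75
```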
ContributorsKadetotad, Deepak Vinayak (Author) / Seo, Jae-Sun (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Vrudhula, Sarma (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created2019
Description
Artificial intelligence is one of the leading technologies that mimics the problem-solving and decision-making capabilities of the human brain. Machine learning algorithms, especially deep learning algorithms, are leading the way in terms of performance and robustness. They are used for various purposes, mainly computer vision, speech recognition, and object detection. These algorithms are usually evaluated on accuracy, and they utilize full floating-point precision (32 bits). The hardware would require a large amount of power and area to accommodate many parameters at full precision. In this exploratory work, a convolutional autoencoder is quantized for operation with an event-based camera. The model is designed so that the autoencoder can work on-chip, which would substantially decrease the processing latency. Different quantization methods are used to quantize and binarize the weights and activations of this neural network model to make it portable and power-efficient. A sparsity term is added to make the model as robust and energy-efficient as possible. By selectively quantizing the layers of the encoder, the network model was able to recoup the accuracy lost from binarizing the weights and activations. This method of recouping accuracy gives enough flexibility to put the network on-chip for real-time processing with systems like event-based cameras. Lately, computer vision, and especially object detection, has made great strides in detection accuracy, and the algorithms can detect and predict objects in real time. However, end-to-end deployment of these algorithms is challenging due to their large parameter counts and processing requirements. A change to the Non-Maximum Suppression algorithm in SSD (Single Shot Detector)-MobileNet-V1 resulted in lower computational complexity without any change in the output quality metric. The Mean Average Precision (mAP) calculated suggests that this method can be implemented in the post-processing of other networks.
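For reference, the baseline greedy Non-Maximum Suppression step that the work above modifies looks roughly like the sketch below; the abstract does not spell out the specific modification, so this is only the conventional starting point, with a standard IoU threshold and box format.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop
    every remaining box that overlaps it by more than iou_thresh."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        best, rest = order[0], order[1:]
        keep.append(int(best))
        order = rest[iou(boxes[best], boxes[rest]) <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))   # -> [0, 2]
```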
ContributorsKuzhively, Ajay Balu (Author) / Cao, Yu (Thesis advisor) / Seo, Jae-Sun (Committee member) / Fan, Deliang (Committee member) / Arizona State University (Publisher)
Created2021
Description
Due to high DRAM access latency and energy, several convolutional neural network (CNN) accelerators face performance and energy-efficiency challenges, which are critical for embedded implementations. As these applications exploit larger datasets, their memory accesses are increasing. As a result, it is difficult to predict the combined dynamic random access memory (DRAM) workload behavior, which can sabotage memory optimizations in software. To understand, and ultimately reduce, the impact of high-latency, high-energy external memory access on CNN accelerators, simulators such as RAMULATOR and VAMPIRE have been proposed in prior work. In this work, we utilize these simulators to benchmark external memory access in CNN accelerators. Experiments are performed by generating trace files based on the number of parameters and the data precision, and also by using a trace file generated from data for a CNN accelerator on the Altera Arria 10 GX 1150 FPGA, to complete the end-to-end workflow with the aforementioned simulators. In addition, certain modifications were made to the default VAMPIRE code to implement functionality such as PREA (Precharge All) and REF (Refresh). Then, precalculated energies were computed for DDR3, DDR4, and HBM based on the Micron power model and entered into the DRAM specification file provided to the VAMPIRE tool. An experimental study comparing DDR3, DDR4, and HBM showed that DDR4 is nearly 31% more energy-efficient than DDR3, and HBM is 54% more energy-efficient than DDR3. Modeling and experimental analysis were also performed on a large dataset, which was then split into smaller sets; the results from the small sets, multiplied by the number of sets, nearly matched the results for the large dataset. Finally, a GUI was developed by wrapping both simulators. The GUI provides user-friendly access, allowing the parameters to be analyzed without much prior knowledge of the tools' inner workings.
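As an illustration of the kind of energy bookkeeping described above, the sketch below estimates DRAM energy from command counts in a trace using per-command energy figures; the numbers, trace format, and names are placeholders for illustration, not values from the Micron model or the VAMPIRE specification format.

```python
from collections import Counter

# Placeholder per-command energies in nanojoules (illustrative only;
# real values come from the device datasheet / Micron power model).
ENERGY_NJ = {
    "DDR3": {"ACT": 2.5, "RD": 1.8, "WR": 1.9, "PRE": 1.2, "REF": 25.0},
    "DDR4": {"ACT": 1.7, "RD": 1.2, "WR": 1.3, "PRE": 0.9, "REF": 18.0},
}

def trace_energy(trace_lines, dram="DDR4"):
    """Sum per-command energy over a trace of '<cycle> <CMD> ...' lines."""
    counts = Counter(line.split()[1] for line in trace_lines if line.strip())
    table = ENERGY_NJ[dram]
    return sum(table[cmd] * n for cmd, n in counts.items() if cmd in table)

trace = ["100 ACT 0x1F00", "104 RD 0x1F00", "110 RD 0x1F40", "130 PRE 0x1F00"]
print(trace_energy(trace, "DDR3"), "nJ vs", trace_energy(trace, "DDR4"), "nJ")
```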
ContributorsPannala, Manvitha (Author) / Cao, Yu (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created2021
Description
Artificial Intelligence (AI) and Machine Learning (ML) techniques have come a long way since their inception and have been used to build intelligent systems for a wide range of applications in everyday life. However, they are very computation-intensive and require the transfer of large volumes of data from memory to the computation units. This memory access time constitutes a significant part of the computational latency and is a performance bottleneck. To address this limitation, and the ever-growing demand for implementation in hand-held and edge devices, in-memory computing (IMC) based AI/ML hardware accelerators have emerged. First, this dissertation presents an IMC static random access memory (SRAM) based hardware modeling and optimization framework. A unified systematic study closely models the IMC hardware and investigates how a number of design variables and non-idealities (e.g., device mismatch and ADC quantization) affect the Deep Neural Network (DNN) accuracy of the IMC design. The framework allows co-optimized selection of the different design variables, accounting for the sources of noise in IMC hardware, and robust implementation of a high-accuracy DNN. Next, it presents a kNN hardware accelerator in 65nm Complementary Metal-Oxide-Semiconductor (CMOS) technology. The accelerator combines an IMC SRAM developed for binarized deep neural networks with digital hardware that performs top-k sorting. The simulated k-Nearest-Neighbor accelerator design processes up to 17.9 million query vectors per second while consuming 11.8 mW, demonstrating a >4.8x energy-efficiency improvement over prior works. This dissertation also presents a novel floating-point precision IMC (FP-IMC) macro with a hybrid architecture that configurably supports two floating-point (FP) precisions. Implementing FP-precision MAC has been a challenge owing to its complexity. The design is implemented in 28nm CMOS and taped out, with the chip demonstrating 12.1 TFLOPS/W and 66.1 TFLOPS/W for 8-bit Floating Point (FP8) and Block Floating Point (BF8), respectively. Finally, another iteration of the FP design is presented that is modeled to support multiple precision modes, from FP8 up to FP32. Two approaches to the architectural design are compared, illustrating the trade-off between throughput and area overhead. The simulated design shows a 2.1x normalized energy-efficiency improvement compared to the on-chip implementation of the FP-IMC.
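To illustrate the style of non-ideality modeling described above, here is a minimal sketch of a behavioral IMC matrix-vector multiply with device mismatch and ADC quantization injected; the noise magnitude, ADC resolution, and column-summing model are illustrative assumptions, not the framework's calibrated models.

```python
import numpy as np

def imc_mvm(x, w, adc_bits=5, mismatch_sigma=0.02, rng=np.random):
    """Behavioral model of an IMC column MAC.

    Each weight cell is perturbed by multiplicative device mismatch,
    the analog column sum is computed ideally, and the result is
    digitized by a uniform ADC of adc_bits resolution.
    """
    w_dev = w * (1 + mismatch_sigma * rng.standard_normal(w.shape))
    analog = x @ w_dev                              # analog column sums
    full_scale = np.abs(x) @ np.abs(w)              # per-column dynamic range
    full_scale = np.maximum(full_scale, 1e-12)
    levels = 2 ** adc_bits - 1
    lsb = 2 * full_scale / levels                   # ADC step per column
    return np.round(analog / lsb) * lsb             # quantized output

x = np.sign(np.random.randn(256))                   # binary input vector
w = np.sign(np.random.randn(256, 64))               # binary weight array
ideal = x @ w
noisy = imc_mvm(x, w)
print("RMS error vs ideal MVM:", np.sqrt(np.mean((ideal - noisy) ** 2)))
```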
ContributorsSaikia, Jyotishman (Author) / Seo, Jae-Sun (Thesis advisor) / Chakrabarti, Chaitali (Thesis advisor) / Fan, Deliang (Committee member) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created2023