Matching Items (9)

Description

Deep learning (DL) has proved to be one of the most important developments to date, with far-reaching impacts in numerous fields such as robotics, computer vision, surveillance, speech processing, machine translation, and finance. DL models are now widely used for countless applications because of their ability to generalize from real-world data, their robustness to noise in previously unseen data, and their high inference accuracy. With the ability to learn useful features from raw sensor data, deep learning algorithms have out-performed traditional AI algorithms and pushed the boundaries of what can be achieved with AI. In this work, we demonstrate the power of deep learning by developing a neural network to automatically detect cough instances from audio recorded in unconstrained environments. For this, 24-hour-long recordings from 9 different patients are collected and carefully labeled by medical personnel. A pre-processing algorithm is proposed to convert the event-based cough dataset into a more informative dataset with the start and end of each cough, and data augmentation is introduced to regularize the training procedure. The proposed neural network achieves 92.3% leave-one-out accuracy on data captured in the real world.
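
As a rough illustration of the evaluation protocol above, the sketch below computes leave-one-patient-out accuracy, assuming per-frame features, binary cough labels, and a patient identifier for every frame; the dissertation's neural network is replaced by a simple scikit-learn classifier as a stand-in, so all names and parameters are hypothetical.

```python
# Leave-one-patient-out (LOPO) evaluation sketch; the classifier is a
# placeholder for the cough-detection network described in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def leave_one_patient_out(X, y, patient_ids):
    """Train on all patients but one, test on the held-out patient, average."""
    accuracies = []
    for held_out in np.unique(patient_ids):
        train = patient_ids != held_out
        model = LogisticRegression(max_iter=1000)   # hypothetical stand-in model
        model.fit(X[train], y[train])
        preds = model.predict(X[~train])
        accuracies.append(accuracy_score(y[~train], preds))
    return float(np.mean(accuracies))
```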

Deep neural networks are composed of multiple layers that are compute- and memory-intensive. This makes it difficult to execute these algorithms in real time with low power consumption on existing general-purpose computers. In this work, we propose hardware accelerators for a traditional AI algorithm based on random forest trees and for two representative deep convolutional neural networks (AlexNet and VGG). With the proposed acceleration techniques, ~30x performance improvement over a CPU was achieved for random forest trees. For deep CNNs, we demonstrate that much higher performance can be achieved through architecture space exploration, using any optimization algorithm with system-level performance and area models of the hardware primitives as inputs and the goal of minimizing latency under given resource constraints. With this method, ~30 GOPS performance was achieved on Stratix V FPGA boards.
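
The sketch below illustrates the kind of architecture space exploration described above: enumerate candidate designs, estimate latency and resource usage with simple analytical models, and keep the lowest-latency configuration that fits the FPGA budget. The parameter ranges and the latency/resource models are illustrative assumptions, not the dissertation's actual models.

```python
# Exhaustive design-space search over a toy accelerator configuration space.
from itertools import product

def latency_model(pe_rows, pe_cols, ops_total, freq_mhz=200):
    macs_per_cycle = pe_rows * pe_cols
    return ops_total / (2 * macs_per_cycle * freq_mhz * 1e6)   # seconds (assumed model)

def resource_model(pe_rows, pe_cols, buffer_kb):
    dsps = pe_rows * pe_cols        # assume one DSP per MAC unit
    brams = buffer_kb // 2          # assume 2 KB of buffer per BRAM block
    return dsps, brams

def explore(ops_total, dsp_budget, bram_budget):
    best = None
    for rows, cols, buf in product([4, 8, 16, 32], [4, 8, 16, 32], [64, 128, 256]):
        dsps, brams = resource_model(rows, cols, buf)
        if dsps > dsp_budget or brams > bram_budget:
            continue                               # violates the resource constraints
        lat = latency_model(rows, cols, ops_total)
        if best is None or lat < best[0]:
            best = (lat, rows, cols, buf)
    return best                                    # lowest-latency feasible design
```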

Hardware acceleration of DL algorithms alone is not always the most efficient approach, nor is it always sufficient to achieve the desired performance. There is considerable headroom for performance improvement when the algorithms are designed with hardware limitations and bottlenecks in mind. This work achieves hardware-software co-optimization for the Non-Maximal Suppression (NMS) algorithm through the proposed algorithmic changes and a matching hardware architecture.

With CMOS scaling coming to an end and memory bandwidth bottlenecks increasing, CMOS-based systems might not scale enough to accommodate the requirements of the more complicated and deeper neural networks of the future. In this work, we explore RRAM crossbars and arrays as a compact, high-performance, and energy-efficient alternative to CMOS accelerators for deep learning training and inference. We propose and implement RRAM periphery read and write circuits and achieve ~3000x performance improvement in online dictionary learning compared to a CPU.

This work also examines realistic RRAM devices and their non-idealities. We present an in-depth study of the effects of RRAM non-idealities on inference accuracy when a pretrained model is mapped to RRAM-based accelerators. To mitigate this issue, we propose Random Sparse Adaptation (RSA), a novel scheme that tunes the model to the faults of the specific RRAM array onto which it is mapped. The proposed method achieves inference accuracy much higher than the traditional Read-Verify-Write (R-V-W) method and recovers lost inference accuracy 100x-1000x faster than R-V-W. Using 32-bit high-precision RSA cells, we achieved ~10% higher accuracy on faulty RRAM arrays compared to mapping a deep network to a 32-level RRAM array with no variations.
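
A greatly simplified sketch of the Random Sparse Adaptation idea follows: most weights stay on the faulty, quantized RRAM array, while a small random subset is mirrored in high-precision on-chip memory and fine-tuned to recover accuracy. The fault model, selection rate, and quantization below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def map_to_rram(w, levels=32, fault_rate=0.01, sigma=0.05):
    """Quantize weights to a limited number of conductance levels and inject
    device variation plus stuck-at faults (all parameters are assumptions)."""
    lo, hi = w.min(), w.max()
    q = np.round((w - lo) / (hi - lo) * (levels - 1))
    w_rram = q / (levels - 1) * (hi - lo) + lo
    w_rram += rng.normal(0.0, sigma * (hi - lo), w.shape)   # device-to-device variation
    stuck = rng.random(w.shape) < fault_rate                # stuck-at-low faults
    w_rram[stuck] = lo
    return w_rram

def random_sparse_adaptation(w_pretrained, w_rram, rate=0.05):
    """Keep a random 'rate' fraction of weights in high-precision memory;
    only those entries are updated during the adaptation fine-tuning."""
    mask = rng.random(w_pretrained.shape) < rate
    w_effective = np.where(mask, w_pretrained, w_rram)
    return w_effective, mask
```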
Contributors: Mohanty, Abinash (Author) / Cao, Yu (Thesis advisor) / Seo, Jae-Sun (Committee member) / Vrudhula, Sarma (Committee member) / Chakrabarti, Chaitali (Committee member) / Arizona State University (Publisher)
Created: 2018
Description

Neural networks are increasingly becoming attractive solutions for automated systems within the automotive, aerospace, and military industries. Since many applications in such fields are both real-time and safety-critical, strict performance and reliability constraints must be considered. To achieve high performance, specialized architectures are required. Given that over 90% of the workload in modern neural network topologies is dominated by matrix multiplication, accelerating that algorithm becomes of paramount importance. Modern neural network accelerators, such as Xilinx's Deep Processing Unit (DPU), adopt efficient systolic-like architectures. Thanks to their high degree of parallelism and design flexibility, Field-Programmable Gate Arrays (FPGAs) are among the most promising devices for speeding up matrix multiplication and neural network computation. However, SRAM-based FPGAs are also known to suffer from radiation-induced upsets in their configuration memories, so hardening strategies must be put in place to achieve high reliability. Traditional modular redundancy of inherently expensive modules is not always feasible due to limited resource availability on target devices, making more efficient and cleverly designed hardening methods a necessity. For instance, Algorithm-Based Fault Tolerance (ABFT) exploits algorithm characteristics to deliver error detection/correction capabilities at significantly lower cost. First, experimental results with Xilinx's DPU indicate that failure rates can be over twice as high as the limits specified for terrestrial applications, demonstrating the undeniable need for hardening in the state-of-the-art neural network accelerator for FPGAs. Next, an extensive multi-level fault propagation analysis is presented, and an ultra-low-cost algorithm-based error detection strategy for matrix multiplication is proposed. By considering the specifics of the FPGA fault model, this novel hardening method decreases implementation costs by over a polynomial degree compared to state-of-the-art solutions. A corresponding architectural implementation is suggested, incurring area and energy overheads lower than 1% for the vast majority of systolic array dimensions. Finally, the impact of fundamental design decisions, such as data precision in processing elements and the overall degree of parallelism, on the reliability of hypothetical neural network accelerators is experimentally investigated. A novel way of predicting the compound failure rate of inherently inaccurate algorithms/applications in the presence of radiation is also provided.
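
For context, the sketch below shows classical algorithm-based fault tolerance for matrix multiplication: a column checksum is appended to A and a row checksum to B, and checksum mismatches in the product both detect and locate a corrupted element. It illustrates the general ABFT principle rather than the specific ultra-low-cost variant proposed in the dissertation.

```python
import numpy as np

def abft_matmul(A, B, tol=1e-6):
    Ac = np.vstack([A, A.sum(axis=0, keepdims=True)])    # column-checksum row appended to A
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])    # row-checksum column appended to B
    Cf = Ac @ Br                                         # full-checksum product
    C = Cf[:-1, :-1]                                     # the actual result A @ B
    row_err = np.abs(Cf[:-1, -1] - C.sum(axis=1)) > tol  # rows whose checksum disagrees
    col_err = np.abs(Cf[-1, :-1] - C.sum(axis=0)) > tol  # columns whose checksum disagrees
    faulty = np.argwhere(np.outer(row_err, col_err))     # intersection locates the element
    return C, faulty
```
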
Contributors: Libano, Fabiano (Author) / Brunhaver, John (Thesis advisor) / Clark, Lawrence (Committee member) / Quinn, Heather (Committee member) / Rech, Paolo (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

With the extensive technological progress made in the areas of drives, sensors, and processing, exoskeletons and other wearable devices have become more feasible. However, stringent requirements with regard to size and weight continue to exert a strong influence on the system-wide design of these devices and present many obstacles to a successful solution. On the other hand, while the area of controls has seen significant progress, a large potential for improvement remains. This dissertation approaches the design and control of wearable devices from a systems perspective and provides a framework to successfully overcome the often-encountered obstacles with optimal solutions. The electronics, drive, and control system designs for the HeSA hip exoskeleton project and the APEx hip exoskeleton project are presented as examples of how this framework is used to design wearable devices. In the area of control algorithms, a real-time implementation of the Fast Fourier Transform (FFT) is presented as an alternative approach to extracting amplitude and frequency information from a time-varying signal. In comparison to the peak search method (PSM), the FFT allows basic gait signal information to be extracted at a faster rate because the time window can be chosen to be shorter than one fundamental gait period. The FFT is implemented on a 16-bit processor, and the results show real-time detection of amplitude and frequency coefficients at an update rate of 50 Hz. Finally, a novel neural-network-based approach to detecting human gait activities is presented. Existing neural networks often require vast amounts of data along with significant computing resources. Using Neural Ordinary Differential Equations (Neural ODEs), it is possible to distinguish between seven different daily activities using a significantly smaller dataset, lower system resource usage, and a time window of only 0.1 seconds.
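
The sketch below shows one way the FFT can extract the dominant frequency and its amplitude from a short window of a gait signal, as an alternative to peak searching; the sampling rate, windowing, and normalization are illustrative assumptions rather than the dissertation's exact implementation.

```python
import numpy as np

def gait_fft(signal, fs=100.0):
    """Return (dominant frequency in Hz, amplitude) for one window of samples."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                                  # remove the DC offset
    w = np.hanning(len(x))                            # reduce spectral leakage
    spectrum = np.fft.rfft(x * w)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    k = int(np.argmax(np.abs(spectrum[1:]))) + 1      # dominant non-DC bin
    amplitude = 2.0 * np.abs(spectrum[k]) / w.sum()   # correct for the window gain
    return freqs[k], amplitude
```
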
Contributors: Boehler, Alexander (Author) / Sugar, Thomas (Thesis advisor) / Redkar, Sangram (Committee member) / Hollander, Kevin (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

This dissertation presents novel solutions for improving the generalization capabilities of deep learning-based computer vision models. Neural networks are known to suffer a large drop in performance when tested on samples from a distribution different from the one on which they were trained. The proposed solutions, based on latent space geometry and meta-learning, address this issue by improving the robustness of these models to distribution shifts. Through geometrical alignment, state-of-the-art domain adaptation and source-free test-time adaptation strategies are developed. Additionally, geometrical alignment allows classifiers to be progressively adapted to new, unseen test domains without retraining the feature extractors. The dissertation also presents algorithms for enabling in-the-wild generalization without access to any samples from the target domain. Other causes of poor generalization, such as data scarcity in critical applications and training data with high levels of noise and variance, are also explored. To address data scarcity in fine-grained computer vision tasks such as object detection, novel context-aware augmentations are proposed. While the first four chapters focus on general-purpose computer vision models, strategies are also developed to improve robustness in specific applications. The efficiency of training autonomous agents for visual navigation is improved by incorporating semantic knowledge, and the integration of domain experts' knowledge allows for the realization of a low-cost, minimally invasive, generalizable automated rehabilitation system. Lastly, new tools for explainability and model introspection using counterfactual explainers trained with interval-based uncertainty calibration objectives are presented.
Contributors: Thopalli, Kowshik (Author) / Turaga, Pavan (Thesis advisor) / Thiagarajan, Jayaraman J (Committee member) / Li, Baoxin (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2023
Description

Convolutional neural networks (CNNs) achieve high accuracy on large datasets but require significant computation and storage for training/testing. While many applications demand low-latency and energy-efficient processing of images, deploying these complex algorithms on hardware is a challenging task. This dissertation first presents a compiler-based CNN training accelerator using DDR3 and HBM2 memory. An optimized RTL library is implemented to perform training-specific tasks, and an RTL compiler is developed to generate FPGA-synthesizable RTL based on user-defined constraints. High Bandwidth Memory (HBM) provides efficient off-chip communication and improves training performance. The impact of HBM2 on CNN training workloads is analyzed and comprehensively compared with DDR3. For training ResNet-20/VGG-like CNNs on the CIFAR-10 dataset, the proposed CNN training accelerator demonstrates 479 GOPS performance on a Stratix-10 GX FPGA (DDR3) and shows 4.5x/9.7x energy-efficiency improvement on a Stratix-10 MX FPGA (HBM) compared to a Tesla V100 GPU. Next, an FPGA online learning accelerator is presented. Adopting model segmentation techniques from Progressive Segmented Training (PST), the online learning accelerator achieves a 4.2x reduction in training latency. Furthermore, this dissertation presents an 8-bit floating-point (FP8) training processor which implements (1) highly parallel tensor cores that maintain high PE utilization, (2) hardware-efficient channel gating for dynamic output activation sparsity, (3) dynamic weight sparsity based on group Lasso, and (4) gradient skipping based on FP prediction error. The 28nm prototype chip demonstrates significant improvements in FLOPs reduction (7.3x), energy efficiency (16.4 TFLOPS/W), and overall training latency speedup (4.7x) for both supervised and self-supervised training tasks. In addition to the training accelerators, this dissertation also presents CNN inference accelerators on ASIC (FixyNN) and FPGA (FixyFPGA). FixyNN consists of a fixed-weight feature extractor that generates ubiquitous CNN features and a conventional programmable CNN accelerator. In the fixed-weight feature extractor, the network weights are hard-coded into hardware and used as a fixed operand for multiplication. Experimental results demonstrate that FixyNN achieves very high energy efficiency of up to 26.6 TOPS/W, and FixyFPGA achieves 2.34x higher GOPS on ImageNet classification. In summary, this dissertation comprehensively discusses novel architectures of high-performance and energy-efficient ASIC/FPGA CNN inference/training accelerators.
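
As an illustration of the group-Lasso weight sparsity mentioned above, the sketch below computes a channel-wise group-Lasso penalty for a convolution weight tensor and prunes the channels the regularizer drives toward zero. Tensor shapes, the regularization coefficient, and the pruning threshold are assumptions, not details of the prototype chip.

```python
import numpy as np

def group_lasso_penalty(weights, lam=1e-4):
    """weights: (out_channels, in_channels, kh, kw); one group per output channel."""
    flat = weights.reshape(weights.shape[0], -1)
    group_norms = np.linalg.norm(flat, axis=1)    # L2 norm of each output channel
    return lam * group_norms.sum()                # added to the training loss

def prune_zero_channels(weights, threshold=1e-3):
    """Drop output channels whose group norm has collapsed below the threshold."""
    flat = weights.reshape(weights.shape[0], -1)
    keep = np.linalg.norm(flat, axis=1) > threshold
    return weights[keep], keep
```
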
Contributors: Kolala Venkataramaniah, Shreyas (Author) / Seo, Jae-Sun (Thesis advisor) / Cao, Yu (Committee member) / Chakrabarti, Chaitali (Committee member) / Fan, Deliang (Committee member) / Arizona State University (Publisher)
Created: 2022
Description

While machine/deep learning algorithms have been successfully used in many practical applications, including object detection and image/video classification, accurate, fast, and low-power hardware implementation of such algorithms is still a challenging task, especially for mobile systems such as the Internet of Things, autonomous vehicles, and smart drones.

This work presents an energy-efficient, programmable application-specific integrated circuit (ASIC) accelerator for object detection. The proposed ASIC supports multi-class detection (face/traffic sign/car license plate/pedestrian) of many objects (up to 50) in one image with different sizes (6 down-/11 up-scaling) and high accuracy (87% on face detection datasets). The accelerator is composed of an integral channel detector with 2,000 classifiers across five rigid boosted templates that together form a strong object detector. By jointly optimizing the algorithm and an efficient hardware architecture, the prototype chip implemented in 65nm demonstrates real-time object detection at 20-50 frames/s with 22.5-181.7mW (0.54-1.75nJ/pixel) at a 0.58-1.1V supply.
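
The sketch below evaluates a boosted ensemble of decision stumps over pre-computed channel features for a single detection window, in the spirit of the integral channel detector described above; the stump format, early-rejection margin, and score threshold are hypothetical placeholders.

```python
def eval_boosted_window(features, stumps, score_threshold=0.0, reject_margin=5.0):
    """features: 1-D vector of channel-feature sums for one window.
    stumps: iterable of (feature_index, split_value, left_score, right_score)."""
    score = 0.0
    for idx, split, left, right in stumps:
        score += left if features[idx] < split else right
        if score < score_threshold - reject_margin:   # soft-cascade early rejection
            return False, score
    return score >= score_threshold, score
```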



To reduce computation without accuracy degradation, this work then proposes an energy-efficient deep convolutional neural network (DCNN) accelerator based on a novel conditional computing scheme that integrates convolution with the subsequent max-pooling operations. In this way, the total number of bit-wise convolutions can be reduced by ~2x without affecting the output feature values. This work also develops an optimized dataflow that exploits sparsity, maximizes data re-use, and minimizes off-chip memory access, improving upon existing hardware designs; the total off-chip memory access is reduced by 2.12x. Post-layout simulation results in 40nm show that the proposed DCNN accelerator achieves a peak 7.35 TOPS/W for VGG-16.

A number of recent efforts have attempted to design custom inference engines based on various approaches, including systolic architectures, near-memory processing, and in-memory computing concepts. This work provides a comprehensive comparison of these approaches in a unified framework. It also presents a proposed energy-efficient in-memory computing accelerator for deep neural networks (DNNs) that integrates many instances of in-memory computing macros with an ensemble of peripheral digital circuits, supporting configurable multi-bit activations and large-scale DNNs seamlessly while substantially improving chip-level energy efficiency. The proposed accelerator is fully designed in 65nm and demonstrates ultra-low energy consumption for DNNs.
Contributors: Kim, Minkyu (Author) / Seo, Jae-Sun (Thesis advisor) / Cao, Yu Kevin (Committee member) / Vrudhula, Sarma (Committee member) / Ogras, Umit Y. (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

Deep learning is a sub-field of machine learning in which models are developed to imitate the workings of the human brain in processing data and creating patterns for decision making. This dissertation is focused on developing deep learning models for medical imaging analysis across different modalities and tasks, including detection, segmentation, and classification. Imaging modalities including digital mammography (DM), magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT) are studied in the dissertation for various medical applications. The first phase of the research develops a novel shallow-deep convolutional neural network (SD-CNN) model for improved breast cancer diagnosis. This model takes one type of medical image as input and synthesizes different modalities as additional feature sources; both the original image and the synthetic image are used for feature generation. The proposed architecture is validated in the application of breast cancer diagnosis and shown to outperform competing models. Motivated by the success of the first phase, the second phase focuses on improving medical image synthesis performance with an advanced deep learning architecture. A new architecture named deep residual inception encoder-decoder network (RIED-Net) is proposed. RIED-Net has the advantages of preserving pixel-level information and cross-modality feature transfer. The applicability of RIED-Net is validated in breast cancer diagnosis and Alzheimer's disease (AD) staging. Recognizing that medical imaging research often involves multiple inter-related tasks, namely detection, segmentation, and classification, the third phase of the research develops a multi-task deep learning model. Specifically, a feature-transfer-enabled multi-task deep learning model (FT-MTL-Net) is proposed to transfer high-resolution features from the segmentation task to the low-resolution feature-based classification task. The application of FT-MTL-Net to breast cancer detection, segmentation, and classification using DM images is studied. As a continuing effort to explore transfer learning in deep models for medical applications, the last phase develops a deep learning model that transfers both features and knowledge from a pre-training age-prediction task to the new domain of predicting conversion from mild cognitive impairment (MCI) to AD. It is validated in the application of predicting MCI patients' conversion to AD with 3D MRI images.
Contributors: Gao, Fei (Author) / Wu, Teresa (Thesis advisor) / Li, Jing (Committee member) / Yan, Hao (Committee member) / Patel, Bhavika (Committee member) / Arizona State University (Publisher)
Created: 2019
Description

To optimize solar cell performance, it is necessary to properly design the doping profile in the absorber layer of the solar cell. For CdTe solar cells, Cu is used to provide p-type doping. Hence, having an estimator that, given the diffusion parameter set (time and temperature) and the doping concentration at the junction, gives the junction depth of the absorber layer is essential in the design process of CdTe solar cells (and other cell technologies). In this work, this is called a forward (direct) estimation process. The backward (inverse) problem is then the one in which, given the junction depth and the desired concentration of Cu doping at the CdTe/CdS heterointerface, the estimator gives the time and/or temperature needed to achieve the desired doping profiles. This is called a backward (inverse) estimation process. Such estimators, both forward and backward, do not exist in the literature for solar cell technology. To train the Machine Learning (ML) estimator, it is necessary to first generate a large dataset using the PVRD-FASP Solver, which has been validated via comparison with experimental values; this large dataset needs to be generated only once. Next, Machine Learning (ML), Deep Learning (DL), and Artificial Intelligence (AI) techniques are used to extract the actual Cu doping profiles that result from the diffusion, annealing, and cool-down steps in the fabrication sequence of CdTe solar cells. Two deep learning neural network models are used: (1) a Multilayer Perceptron Artificial Neural Network (MLPANN) model using the Keras Application Programming Interface (API) with a TensorFlow backend, and (2) a Radial Basis Function Network (RBFN) model, to predict the Cu doping profiles for different temperatures and durations of the annealing process. Excellent agreement between the simulated results obtained with the PVRD-FASP Solver and the predicted values is obtained. It is important to mention that generating the Cu doping profiles from the initial conditions with the PVRD-FASP Solver takes a significant amount of time, because solving the drift-diffusion-reaction model is mathematically a stiff problem and leads to numerical instabilities if the time steps are not small enough, which in turn affects the time needed to complete one simulation run. Generating the same profiles with Machine Learning (ML) is almost instantaneous, so the trained estimator can serve as an excellent simulation tool to guide future fabrication of optimal doping profiles in CdTe solar cells.
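
A minimal sketch of the MLPANN regressor is given below, assuming Keras with a TensorFlow backend as named above; the input features (time, temperature, Cu concentration at the junction), layer sizes, and a 200-point discretized profile output are illustrative assumptions rather than the trained model's actual configuration.

```python
import numpy as np
from tensorflow import keras

def build_mlpann(n_inputs=3, n_depth_points=200):
    model = keras.Sequential([
        keras.layers.Input(shape=(n_inputs,)),   # e.g. anneal time, temperature, junction [Cu]
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(n_depth_points)       # doping value at each discretized depth point
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Typical use (X_train: process parameters, Y_train: solver-generated profiles):
# model = build_mlpann()
# model.fit(X_train, Y_train, epochs=200, batch_size=32, validation_split=0.1)
```
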
Contributors: Salman, Ghaith (Author) / Vasileska, Dragica (Thesis advisor) / Goodnick, Stephen M. (Thesis advisor) / Ringhofer, Christian (Committee member) / Banerjee, Ayan (Committee member) / Arizona State University (Publisher)
Created: 2021
Description

Many real-world engineering problems require simulations to evaluate design objectives and constraints. Often, due to the complexity of the system model, simulations can be prohibitive in terms of computation time. One approach to overcome this issue is to construct a surrogate model, which approximates the original model. The focus of this work is on data-driven surrogate models, in which empirical approximations of the output are computed from the input parameters. Recently, neural networks (NNs) have re-emerged as a popular method for constructing data-driven surrogate models. Although NNs achieve excellent accuracy and are widely used, they pose their own challenges. This work addresses two common challenges: the need for (1) hardware acceleration and (2) uncertainty quantification (UQ) in the presence of input variability. The high demand for inference of deep NNs on cloud servers and edge devices calls for the design of low-power custom hardware accelerators. The first part of this work describes the design of an energy-efficient long short-term memory (LSTM) accelerator. The overarching goal is to aggressively reduce the power consumption and area of the LSTM components using approximate computing, and then use architectural-level techniques to boost the performance. The proposed design is synthesized and placed and routed as an application-specific integrated circuit (ASIC). The results demonstrate that this accelerator is 1.2x more energy-efficient and 3.6x more area-efficient than the baseline LSTM. In the second part of this work, a robust framework is developed based on an alternate data-driven surrogate model, referred to as polynomial chaos expansion (PCE), for addressing UQ. In contrast to many existing approaches, no assumptions are made on the elements of the function space, and UQ is a function of the expansion coefficients. Moreover, the sensitivity of the output with respect to any subset of the input variables can be computed analytically by post-processing the PCE coefficients. This provides a systematic and incremental method for pruning or changing the order of the model. The framework is evaluated on several real-world applications from different domains and is extended to classification tasks as well.
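
The sketch below illustrates the PCE workflow for standard-normal inputs: fit the expansion coefficients by least squares on model evaluations, then read first-order Sobol sensitivity indices directly off the coefficients, as described above. The basis (probabilists' Hermite), total degree, and fitting method are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from itertools import product
from math import factorial

def pce_fit(X, y, degree=3):
    """X: (n_samples, d) standard-normal inputs; y: (n_samples,) model outputs."""
    d = X.shape[1]
    alphas = [a for a in product(range(degree + 1), repeat=d) if sum(a) <= degree]
    Phi = np.column_stack([
        np.prod([hermeval(X[:, i], np.eye(a_i + 1)[a_i]) for i, a_i in enumerate(a)], axis=0)
        for a in alphas
    ])
    coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return alphas, coeffs

def first_order_sobol(alphas, coeffs):
    """Variance share of each input, computed from the PCE coefficients alone."""
    norms = np.array([np.prod([factorial(a_i) for a_i in a]) for a in alphas])
    var_terms = coeffs**2 * norms
    total_var = var_terms[1:].sum()            # exclude the constant (mean) term
    d = len(alphas[0])
    return [var_terms[[k for k, a in enumerate(alphas)
                       if a[i] > 0 and sum(a) == a[i]]].sum() / total_var
            for i in range(d)]
```
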
Contributors: Azari, Elham (Author) / Vrudhula, Sarma (Thesis advisor) / Fainekos, Georgios (Committee member) / Ren, Fengbo (Committee member) / Yang, Yezhou (Committee member) / Arizona State University (Publisher)
Created: 2021