Matching Items (3)
Filtering by:
- All Subjects: Deep Neural Networks
- Creators: Seo, Jae-Sun
Description
Deep neural networks (DNNs) have shown tremendous success in various cognitive tasks, such as image classification and speech recognition. However, their use on resource-constrained edge devices has been limited by high computation and large memory requirements.
To overcome these challenges, recent works have extensively investigated model compression techniques such as element-wise sparsity, structured sparsity, and quantization. Most of these works apply the techniques in isolation; very few studies have applied quantization and structured sparsity together to a DNN model.
This thesis co-optimizes structured sparsity and quantization constraints on DNN models during training. Specifically, a combined exploration of quantization and structured-compression settings identifies an optimal configuration of 2-bit weights and 2-bit activations coupled with 4X structured compression. The optimal DNN model achieves a 50X weight-memory reduction compared to an uncompressed floating-point DNN. This saving is significant: applying only structured sparsity constraints yields 2X memory savings, and applying only quantization constraints yields 16X. The algorithm has been validated on both high- and low-capacity DNNs, and on wide-sparse and deep-sparse DNN models. Experiments demonstrate that deep-sparse DNNs outperform shallow-dense DNNs, with memory savings that vary with DNN precision and sparsity levels. The work further proposes a Pareto-optimal approach to systematically extract optimal DNN models from a large set of sparse and dense candidates. The resulting 11 optimal designs were then evaluated on overall DNN memory, which includes both activation memory and weight memory. For the low-sparsity DNNs there is only a small change in the memory footprint of the optimal designs; for high-sparsity DNNs, however, activation memory cannot be ignored.
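The co-optimization described above can be pictured in a few lines of NumPy. The sketch below is illustrative only (the thesis enforces these constraints during training, whereas this applies them post hoc to a single weight matrix, and the block size and norm-based selection rule are assumptions): whole column blocks are pruned for 4X structured compression, then the survivors are quantized to 2 bits.

import numpy as np

def structured_prune(W, compression=4, block=8):
    # Rank column blocks by L2 norm; keep the strongest 1/compression fraction,
    # zeroing whole blocks so the sparsity pattern stays hardware-friendly.
    n_blocks = W.shape[1] // block
    norms = np.array([np.linalg.norm(W[:, b*block:(b+1)*block])
                      for b in range(n_blocks)])
    keep = np.argsort(norms)[-(n_blocks // compression):]
    mask = np.zeros_like(W)
    for b in keep:
        mask[:, b*block:(b+1)*block] = 1.0
    return W * mask

def quantize(W, bits=2):
    # Uniform symmetric quantization: 2 bits gives levels {-1, 0, +1} * scale.
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(W).max() / levels
    return np.clip(np.round(W / scale), -levels, levels) * scale

W = np.random.randn(64, 64).astype(np.float32)
W_compressed = quantize(structured_prune(W), bits=2)
# Nominal saving vs. 32-bit dense weights: (32 / 2) * 4 = 64X upper bound;
# the thesis reports 50X in practice.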
Contributors: Srivastava, Gaurav (Author) / Seo, Jae-Sun (Thesis advisor) / Chakrabarti, Chaitali (Committee member) / Berisha, Visar (Committee member) / Arizona State University (Publisher)
Created: 2018
Description
Speech recognition and keyword detection are becoming increasingly popular applications for mobile systems. While deep neural network (DNN) implementations of these systems perform very well, their large memory and compute requirements make deployment on a mobile device quite challenging. This thesis presents techniques to reduce the memory and computation cost of keyword detection and speech recognition DNNs.
The first technique represents all weights and biases with a small number of bits and maps all nodal computations to fixed-point arithmetic with minimal degradation in accuracy. Experiments on the Resource Management (RM) database show that, for the keyword detection network, representing the weights with 5 bits yields a 6-fold memory reduction relative to a floating-point implementation with very little loss in performance. Similarly, for the speech recognition network, 6-bit weights yield a 5-fold memory reduction while maintaining an error rate comparable to floating point. Additional memory reduction is achieved by weight pruning, in which weights are classified as sensitive or insensitive and the sensitive weights are represented with higher precision. Combining the two techniques reduces the memory footprint by 81–84% for the speech recognition and keyword detection networks, respectively.
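The mixed-precision idea can be sketched as follows (assumptions flagged: weight magnitude stands in for the thesis's actual sensitivity criterion, and the bit widths and sensitive fraction are examples, not the thesis's settings):

import numpy as np

def fixed_point(W, bits):
    # Uniform fixed-point representation with the given bit width.
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(W).max() / levels
    return np.round(W / scale) * scale

def sensitivity_quantize(W, frac_sensitive=0.1, hi_bits=8, lo_bits=5):
    # "Sensitive" weights (here: largest magnitude, a stand-in criterion)
    # keep higher precision; the remaining weights use fewer bits.
    thresh = np.quantile(np.abs(W), 1.0 - frac_sensitive)
    sensitive = np.abs(W) >= thresh
    return np.where(sensitive,
                    fixed_point(W, hi_bits),
                    fixed_point(W, lo_bits))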
Further memory reduction is achieved by judiciously dropping connections for large blocks of weights. This technique, termed coarse-grain sparsification, introduces hardware-aware sparsity during DNN training, leading to efficient weight-memory compression and a significant reduction in the number of computations during classification, without loss of accuracy. Keyword detection and speech recognition DNNs trained with 75% of the weights dropped and classified with 5–6 bit weight precision reduced the weight-memory requirement by ~95% compared to a fully connected network with double precision, while showing similar keyword detection accuracy and word error rate.
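Coarse-grain sparsification can be pictured as a fixed, hardware-friendly block mask applied to the weight matrix throughout training. A minimal sketch (block size and random block selection are illustrative assumptions; the thesis picks blocks during training):

import numpy as np

def coarse_grain_mask(shape, block=(16, 16), drop_frac=0.75, seed=0):
    # Zero out whole aligned blocks of weights. Dropping blocks rather than
    # individual weights lets hardware skip entire memory rows and MACs.
    rng = np.random.default_rng(seed)
    rows, cols = shape[0] // block[0], shape[1] // block[1]
    keep = (rng.random((rows, cols)) >= drop_frac).astype(np.float32)
    return np.kron(keep, np.ones(block, dtype=np.float32))

W = np.random.randn(128, 128).astype(np.float32)
mask = coarse_grain_mask(W.shape)   # fixed once, reused every forward pass
W_sparse = W * mask                 # ~75% of the blocks are dropped

The ~95% figure is consistent with keeping 25% of the weights at 5–6 bits rather than 100% at 64-bit double precision: 0.25 × 6/64 ≈ 2.3% of the original footprint, before any bookkeeping overhead.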
Contributors: Arunachalam, Sairam (Author) / Chakrabarti, Chaitali (Thesis advisor) / Seo, Jae-Sun (Thesis advisor) / Cao, Yu (Committee member) / Arizona State University (Publisher)
Created: 2016
Description
The information era has brought many technological advancements over the past few decades, leading to an exponential increase in the creation of digital images and videos. Nearly every digital image passes through some image processing algorithm for compression, transmission, or storage, and the data loss incurred along the way leaves a degraded image. Quality assessment has therefore become essential for keeping this degradation in check. Image Quality Assessment (IQA) algorithms have been researched and developed over the last several decades to predict quality scores that agree with human judgments of quality. Modern IQA algorithms achieve good prediction accuracy, but their development has not focused on computational performance: the existing serial implementation requires a relatively large run time, on the order of seconds for a single frame.
Hardware acceleration using field-programmable gate arrays (FPGAs) provides a reconfigurable computing fabric that can be tailored to a broad range of applications. Programming FPGAs, however, has usually required expertise in hardware description languages (HDLs) or high-level synthesis (HLS) tools. OpenCL, an open standard for cross-platform, parallel programming of heterogeneous systems, together with the Altera OpenCL SDK, enables developers to exploit an FPGA's potential without extensive hardware knowledge. Hence, this thesis focuses on accelerating the computationally intensive part of the most apparent distortion (MAD) algorithm on an FPGA using OpenCL. The results are compared with a CPU implementation to evaluate performance and efficiency gains.
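To give a flavor of the OpenCL host-plus-kernel model the abstract refers to, here is a minimal sketch. It is hypothetical: it uses pyopencl rather than the Altera SDK's C host API, and a simple squared-error kernel stands in for the MAD filtering stage actually accelerated in the thesis.

import numpy as np
import pyopencl as cl

# Stand-in kernel: per-pixel squared error, a building block of distortion maps.
KERNEL_SRC = """
__kernel void sq_err(__global const float *ref,
                     __global const float *dst,
                     __global float *out) {
    int gid = get_global_id(0);
    float d = ref[gid] - dst[gid];
    out[gid] = d * d;
}
"""

ctx = cl.create_some_context()   # selects a platform/device (FPGA, GPU, or CPU)
queue = cl.CommandQueue(ctx)
# Note: FPGA flows typically compile kernels offline (e.g., to .aocx);
# runtime source compilation is shown here for simplicity.
prg = cl.Program(ctx, KERNEL_SRC).build()

ref = np.random.rand(512 * 512).astype(np.float32)
dst = np.random.rand(512 * 512).astype(np.float32)
mf = cl.mem_flags
ref_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=ref)
dst_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=dst)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, ref.nbytes)

prg.sq_err(queue, ref.shape, None, ref_buf, dst_buf, out_buf)  # one work-item per pixel
out = np.empty_like(ref)
cl.enqueue_copy(queue, out, out_buf)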
Contributors: Gunavelu Mohan, Aswin (Author) / Sohoni, Sohum (Thesis advisor) / Ren, Fengbo (Thesis advisor) / Seo, Jae-Sun (Committee member) / Arizona State University (Publisher)
Created: 2017