Implications of Deep Compression with Complex Neural Networks
Lily Young1, James Richardson York2, Byeong Kil Lee3
1Lily Young, Department of Electrical and Computer Engineering, University of Colorado Colorado Springs, 1420 Austin Bluffs Parkway, Colorado Springs, CO 80918, USA.
2James Richardson York, Department of Electrical and Computer Engineering, University of Colorado Colorado Springs, 1420 Austin Bluffs Parkway, Colorado Springs, CO 80918, USA.
3Byeong Kil Lee, Department of Electrical and Computer Engineering, University of Colorado Colorado Springs, 1420 Austin Bluffs Parkway, Colorado Springs, CO 80918, USA.
Manuscript received on 12 May 2023 | Revised Manuscript received on 22 May 2023 | Manuscript Accepted on 15 July 2023 | Manuscript published on 30 July 2023 | PP: 1-6 | Volume-13 Issue-3, July 2023 | Retrieval Number: 100.1/ijsce.C36130713323 | DOI: 10.35940/ijsce.C3613.0713323
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: Deep learning and neural networks have become increasingly popular in the area of artificial intelligence. These models can solve complex problems such as image recognition and language processing. However, for many applications, the memory utilization and power consumption of these networks can be very large. This has motivated research into techniques that compress the size of these models while retaining accuracy and performance. One such technique is the deep compression three-stage pipeline, consisting of pruning, trained quantization, and Huffman coding. In this paper, we apply the principles of deep compression to multiple complex networks in order to compare the effectiveness of deep compression in terms of compression ratio and the quality of the compressed network. While the deep compression pipeline works effectively for CNN and RNN models, reducing network size with only small performance degradation, it does not work properly for more complicated networks such as GANs. In our GAN experiments, compression caused excessive performance degradation. For complex neural networks, careful analysis is needed to discover which parameters allow a GAN to be compressed without loss in output quality.
Keywords: Neural Network, Network Compression, Pruning, Quantization, CNN, RNN, GAN.
Scope of the Article: Deep learning
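The first two stages of the pipeline described in the abstract can be illustrated with a minimal sketch: magnitude-based pruning zeros out small weights, and weight sharing via 1-D k-means replaces each surviving weight with one of a few shared centroid values (so only a short codebook index need be stored). This is an illustrative sketch only, not the authors' implementation; the function names, the one-shot 90% sparsity target, and the 16-cluster codebook are assumptions chosen for demonstration, and the retraining steps of the original pipeline are omitted.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """One-shot magnitude pruning: zero out the smallest-magnitude weights.
    `sparsity` is the fraction of weights to remove (assumed parameter)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask, mask

def quantize_shared(weights, n_clusters=16):
    """Weight sharing via simple 1-D k-means: cluster the surviving
    (nonzero) weights and snap each to its nearest centroid, so the
    layer can be stored as a codebook plus small indices."""
    nonzero = weights[weights != 0]
    # Linear (min-to-max) centroid initialization, as a simple choice.
    centroids = np.linspace(nonzero.min(), nonzero.max(), n_clusters)
    for _ in range(20):  # a few Lloyd iterations
        idx = np.argmin(np.abs(nonzero[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            members = nonzero[idx == k]
            if members.size:
                centroids[k] = members.mean()
    quantized = weights.copy()
    nz_mask = weights != 0
    nz_idx = np.argmin(np.abs(weights[nz_mask][:, None] - centroids[None, :]), axis=1)
    quantized[nz_mask] = centroids[nz_idx]
    return quantized, centroids

# Demo on a random weight matrix standing in for one layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned, mask = magnitude_prune(w, sparsity=0.9)    # keep ~10% of weights
quantized, codebook = quantize_shared(pruned, n_clusters=16)
print(f"nonzero fraction: {mask.mean():.2f}")
print(f"distinct surviving values: {np.unique(quantized[mask]).size}")
```

After these two stages, the distinct-value count is bounded by the codebook size, which is what makes the final Huffman-coding stage effective: a small alphabet of index values with a skewed distribution compresses well.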