μL2Q: An Ultra-Low Loss Quantization Method for DNN Compression

Cheng Gong, Tao Li, Ye Lu, Cong Hao, Xiaofan Zhang, Deming Chen, Yao Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Data quantization has proven to be an effective method for compressing deep neural networks (DNNs) by using fewer bits to represent parameters and intermediate data. The bit width of the data directly affects the memory footprint, computing capability, and energy consumption during the computation of DNN models. Although there have been numerous studies on data quantization, there is still no quantitative analysis of the existing quantization methods, which results in empirical quantization with unpredictable DNN accuracy loss. To address this problem, we propose an effective method, called ultra-low loss quantization (μL2Q), that provides DNN quantization schemes based on comprehensive quantitative data analysis. μL2Q transforms the original data into a data space with a standard normal distribution, and then finds the optimal parameters that minimize the quantization loss for a targeted bit width. In addition, we integrate the proposed μL2Q into Caffe, a popular machine learning framework, for convenient end-to-end DNN design and training. Compared to state-of-the-art DNN compression designs, μL2Q shows the greatest ability to maintain DNN accuracy after quantization. In our experiments, the proposed method delivers 4.42%, 16.70%, 1.95%, and 8.26%/5.63% accuracy improvements on Lenet-5, Cifarnet, VGG7-64, and Resnet-18 (Top-1/Top-5), respectively, compared to state-of-the-art solutions with the same compression ratio.
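The abstract's core mechanism lends itself to a short illustration: standardize the weights so they approximately follow N(0, 1), choose the uniform quantization step that minimizes the L2 loss for the target bit width, and map the result back. The NumPy sketch below is our own illustrative rendering of that idea, not the authors' implementation; the function name `ul2q_quantize`, the symmetric level placement, and the grid search over the step size (the paper derives the optimum for the standard normal distribution rather than searching) are all assumptions.

```python
import numpy as np

def ul2q_quantize(w, bits):
    """Illustrative sketch of the μL2Q idea (hypothetical helper, not the
    paper's code): standardize weights toward N(0, 1), uniformly quantize
    with an L2-loss-minimizing step size, then rescale to the original space."""
    mu, sigma = w.mean(), w.std()
    z = (w - mu) / sigma                      # transform to ~N(0, 1) space
    n_levels = 2 ** bits                      # representable values at this bit width

    def quantize(z, lam):
        # Symmetric uniform quantizer: n_levels points spaced lam apart,
        # offset by lam/2 so the levels straddle zero.
        q = np.clip(np.round(z / lam - 0.5), -n_levels // 2, n_levels // 2 - 1)
        return (q + 0.5) * lam

    # Grid search for the step size minimizing the quantization L2 loss;
    # the paper instead uses the optimal step size for N(0, 1) directly.
    best_lam = min(np.linspace(0.05, 4.0, 200),
                   key=lambda lam: float(np.sum((z - quantize(z, lam)) ** 2)))
    return quantize(z, best_lam) * sigma + mu  # map back to original space
```

For example, `ul2q_quantize(np.random.randn(1000).astype(np.float32), bits=2)` returns an array taking only four distinct values, which is the property that shrinks the memory footprint referenced in the abstract.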

Original language: English (US)
Title of host publication: 2019 International Joint Conference on Neural Networks, IJCNN 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728119854
DOIs
State: Published - Jul 2019
Event: 2019 International Joint Conference on Neural Networks, IJCNN 2019 - Budapest, Hungary
Duration: Jul 14 2019 → Jul 19 2019

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2019-July

Conference

Conference: 2019 International Joint Conference on Neural Networks, IJCNN 2019
Country/Territory: Hungary
City: Budapest
Period: 7/14/19 → 7/19/19

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
