TY - GEN
T1 - UCNN: Exploiting Computational Reuse in Deep Neural Networks via Weight Repetition
T2 - 45th ACM/IEEE Annual International Symposium on Computer Architecture, ISCA 2018
AU - Hegde, Kartik
AU - Yu, Jiyong
AU - Agrawal, Rohit
AU - Yan, Mengjia
AU - Pellauer, Michael
AU - Fletcher, Christopher W.
N1 - Acknowledgements: We thank Joel Emer and Angshuman Parashar for many helpful discussions. We would also like to thank the anonymous reviewers and our shepherd, Hadi Esmaeilzadeh, for their valuable feedback. This work was partially supported by NSF award CCF-1725734.
PY - 2018/7/19
Y1 - 2018/7/19
N2 - Convolutional Neural Networks (CNNs) have begun to permeate all corners of electronic society (from voice recognition to scene generation) due to their high accuracy and machine efficiency per operation. At their core, CNN computations are made up of multi-dimensional dot products between weight and input vectors. This paper studies how weight repetition—when the same weight occurs multiple times in or across weight vectors—can be exploited to save energy and improve performance during CNN inference. This generalizes a popular line of work to improve efficiency from CNN weight sparsity, as reducing computation due to repeated zero weights is a special case of reducing computation due to repeated weights. To exploit weight repetition, this paper proposes a new CNN accelerator called the Unique Weight CNN Accelerator (UCNN). UCNN uses weight repetition to reuse CNN sub-computations (e.g., dot products) and to reduce CNN model size when stored in off-chip DRAM—both of which save energy. UCNN further improves performance by exploiting sparsity in weights. We evaluate UCNN with an accelerator-level cycle and energy model and with an RTL implementation of the UCNN processing element. On three contemporary CNNs, UCNN improves throughput-normalized energy consumption by 1.2× ∼ 4×, relative to a similarly provisioned baseline accelerator that uses Eyeriss-style sparsity optimizations. At the same time, the UCNN processing element adds only 17-24% area overhead relative to the same baseline.
AB - Convolutional Neural Networks (CNNs) have begun to permeate all corners of electronic society (from voice recognition to scene generation) due to their high accuracy and machine efficiency per operation. At their core, CNN computations are made up of multi-dimensional dot products between weight and input vectors. This paper studies how weight repetition—when the same weight occurs multiple times in or across weight vectors—can be exploited to save energy and improve performance during CNN inference. This generalizes a popular line of work to improve efficiency from CNN weight sparsity, as reducing computation due to repeated zero weights is a special case of reducing computation due to repeated weights. To exploit weight repetition, this paper proposes a new CNN accelerator called the Unique Weight CNN Accelerator (UCNN). UCNN uses weight repetition to reuse CNN sub-computations (e.g., dot products) and to reduce CNN model size when stored in off-chip DRAM—both of which save energy. UCNN further improves performance by exploiting sparsity in weights. We evaluate UCNN with an accelerator-level cycle and energy model and with an RTL implementation of the UCNN processing element. On three contemporary CNNs, UCNN improves throughput-normalized energy consumption by 1.2× ∼ 4×, relative to a similarly provisioned baseline accelerator that uses Eyeriss-style sparsity optimizations. At the same time, the UCNN processing element adds only 17-24% area overhead relative to the same baseline.
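N1 - A minimal sketch, assuming a Python-level view of the idea the abstract describes: a dot product can be factorized over unique weight values, so repeated weights cost at most one multiply each and zero weights (sparsity) drop out as a special case. This sketch is illustrative only; the function and variable names are hypothetical and not taken from the paper or its artifact.

    # Illustrative sketch of dot-product factorization via weight repetition.
    from collections import defaultdict

    def factorized_dot(weights, inputs):
        # Group (sum) the input values that share the same weight value.
        groups = defaultdict(float)
        for w, x in zip(weights, inputs):
            groups[w] += x                  # adds only, no multiplies yet
        # One multiply per unique non-zero weight; the zero-weight group
        # contributes nothing, which is the sparsity special case.
        return sum(w * s for w, s in groups.items() if w != 0)

    weights = [0.5, 0.0, 0.5, -1.0, 0.5]    # repeated and zero weights
    inputs  = [1.0, 2.0, 3.0,  4.0, 5.0]
    direct = sum(w * x for w, x in zip(weights, inputs))
    assert abs(factorized_dot(weights, inputs) - direct) < 1e-9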
UR - http://www.scopus.com/inward/record.url?scp=85055863792&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85055863792&partnerID=8YFLogxK
U2 - 10.1109/ISCA.2018.00062
DO - 10.1109/ISCA.2018.00062
M3 - Conference contribution
AN - SCOPUS:85055863792
T3 - Proceedings - International Symposium on Computer Architecture
SP - 674
EP - 687
BT - Proceedings - 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture, ISCA 2018
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 2 June 2018 through 6 June 2018
ER -