SparseTrain: Leveraging dynamic sparsity in software for training DNNs on general-purpose SIMD processors

Zhangxiaowen Gong, Houxiang Ji, Christopher W. Fletcher, Christopher J. Hughes, Josep Torrellas

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Our community has improved the efficiency of deep learning applications by exploiting sparsity in inputs. Most of that work, though, is for inference, where weight sparsity is known statically, and/or for specialized hardware. In this paper, we propose SparseTrain, a software-only scheme to leverage dynamic sparsity during training on general-purpose SIMD processors. SparseTrain exploits zeros introduced by the ReLU activation function to both feature maps and their gradients. Exploiting such sparsity is challenging because the sparsity degree is moderate and the locations of zeros change over time.

SparseTrain identifies zeros in a dense data representation and performs vectorized computation. Variations of the scheme are applicable to all major components of training: forward propagation, backward propagation by inputs, and backward propagation by weights. Our experiments on a 6-core Intel Skylake-X server show that SparseTrain is very effective. In end-to-end training of VGG16, ResNet-34, and ResNet-50 with ImageNet, SparseTrain outperforms a highly-optimized direct convolution on the non-initial convolutional layers by 2.19x, 1.37x, and 1.31x, respectively. SparseTrain also benefits inference. It accelerates the non-initial convolutional layers of the aforementioned models by 1.88x, 1.64x, and 1.44x, respectively.
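The abstract describes identifying zeros in a dense data representation and skipping their work while keeping the remaining computation vectorized. A minimal sketch of that idea, for a 1-D direct convolution (the function name and structure are illustrative assumptions, not the paper's actual kernel):

```python
import numpy as np

def sparse_aware_conv1d(x, w):
    """1-D direct (valid) convolution that skips zero inputs.

    Hypothetical sketch of the dynamic-sparsity idea: each input
    element is tested against zero at run time; a nonzero element is
    broadcast and scattered into its window of outputs with a single
    vector scale-and-add, while zero elements (e.g. produced by ReLU)
    contribute nothing and their work is skipped entirely.
    """
    K = len(w)
    out = np.zeros(len(x) - K + 1)
    for i, xi in enumerate(x):
        if xi == 0.0:  # dynamic-sparsity check: skip zeros
            continue
        # x[i] contributes to out[j] with weight w[i - j],
        # for j in [i - K + 1, i], clipped to the output range
        lo = max(0, i - K + 1)
        hi = min(len(out) - 1, i)
        # vectorized (SIMD-friendly) scale-and-add over the window
        out[lo:hi + 1] += xi * w[i - hi:i - lo + 1][::-1]
    return out
```

For a mostly-zero input, the loop body executes only for the nonzero elements, which mirrors the paper's claim that moderate, dynamically-located sparsity can still be exploited without converting to a sparse format:

```python
x = np.array([0.0, 2.0, 0.0, 3.0, 0.0])
w = np.array([1.0, 10.0, 100.0])
# matches a dense valid correlation
np.allclose(sparse_aware_conv1d(x, w), np.correlate(x, w, mode="valid"))
```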

Original language: English (US)
Title of host publication: PACT 2020 - Proceedings of the ACM International Conference on Parallel Architectures and Compilation Techniques
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 279-292
Number of pages: 14
ISBN (Electronic): 9781450380751
DOIs
State: Published - Sep 30 2020
Event: 2020 ACM International Conference on Parallel Architectures and Compilation Techniques, PACT 2020 - Virtual, Online, United States
Duration: Oct 3 2020 - Oct 7 2020

Publication series

Name: Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT
ISSN (Print): 1089-795X

Conference

Conference: 2020 ACM International Conference on Parallel Architectures and Compilation Techniques, PACT 2020
Country: United States
City: Virtual, Online
Period: 10/3/20 - 10/7/20

Keywords

  • CPU
  • Convolution
  • Deep neural networks
  • Sparsity
  • Training

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture

