Google Neural Network Models for Edge Devices: Analyzing and Mitigating Machine Learning Inference Bottlenecks

Amirali Boroumand, Saugata Ghose, Berkin Akin, Ravi Narayanaswami, Geraldo F. Oliveira, Xiaoyu Ma, Eric Shiu, Onur Mutlu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Emerging edge computing platforms often contain machine learning (ML) accelerators that can accelerate inference for a wide range of neural network (NN) models. These models are designed to fit within the limited area and energy constraints of the edge computing platforms, each targeting various applications (e.g., face detection, speech recognition, translation, image captioning, video analytics). To understand how edge ML accelerators perform, we characterize the performance of a commercial Google Edge TPU, using 24 Google edge NN models (which span a wide range of NN model types) and analyzing each NN layer within each model. We find that the Edge TPU suffers from three major shortcomings: (1) it operates significantly below peak computational throughput, (2) it operates significantly below its theoretical energy efficiency, and (3) its memory system is a large energy and performance bottleneck. Our characterization reveals that the one-size-fits-all, monolithic design of the Edge TPU ignores the high degree of heterogeneity both across different NN models and across different NN layers within the same NN model, leading to the shortcomings we observe. We propose a new acceleration framework called Mensa. Mensa incorporates multiple heterogeneous edge ML accelerators (including both on-chip and near-data accelerators), each of which caters to the characteristics of a particular subset of NN models and layers. During NN inference, for each NN layer, Mensa decides which accelerator to schedule the layer on, taking into account both the optimality of each accelerator for the layer and layer-to-layer communication costs. Our comprehensive analysis of the Google edge NN models shows that all of the layers naturally group into a small number of clusters, which allows us to design an efficient implementation of Mensa for these models with only three specialized accelerators. 
Averaged across all 24 Google edge NN models, Mensa improves energy efficiency and throughput by 3.0x and 3.1x over the Edge TPU, and by 2.4x and 4.3x over Eyeriss v2, a state-of-the-art accelerator.
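The scheduling idea described in the abstract — picking, for each NN layer, the accelerator that best suits it while accounting for layer-to-layer communication — can be sketched as a small dynamic program. The following is an illustrative sketch only, not the paper's actual algorithm: the function name, cost tables, and numbers are all hypothetical, and a real implementation would derive costs from the characterization data the paper describes.

```python
# Hypothetical sketch of Mensa-style layer-to-accelerator scheduling.
# All names and cost numbers are illustrative, not taken from the paper.
# For a sequence of NN layers, pick one accelerator per layer so that the
# total (execution cost + inter-accelerator communication cost) is minimal,
# using a Viterbi-style dynamic program over the layer sequence.

def schedule(exec_cost, comm_cost):
    """exec_cost[i][a]: cost of running layer i on accelerator a.
    comm_cost[a][b]: cost of moving activations from accelerator a to b.
    Returns (total_cost, per-layer accelerator assignment)."""
    n_layers = len(exec_cost)
    n_acc = len(exec_cost[0])
    # best[a] = min cost of scheduling layers 0..i with layer i on accelerator a
    best = list(exec_cost[0])
    back = []  # back-pointers for reconstructing the assignment
    for i in range(1, n_layers):
        new_best, ptrs = [], []
        for a in range(n_acc):
            # cheapest predecessor accelerator, counting the transfer cost
            prev = min(range(n_acc), key=lambda b: best[b] + comm_cost[b][a])
            new_best.append(best[prev] + comm_cost[prev][a] + exec_cost[i][a])
            ptrs.append(prev)
        best = new_best
        back.append(ptrs)
    # follow the back-pointers from the cheapest final accelerator
    a = min(range(n_acc), key=lambda x: best[x])
    total = best[a]
    path = [a]
    for ptrs in reversed(back):
        a = ptrs[a]
        path.append(a)
    return total, path[::-1]

# Example: 3 layers, 2 accelerators (e.g., one compute-centric, one near-data)
exec_cost = [[1, 4], [5, 2], [1, 6]]  # illustrative per-layer costs
comm_cost = [[0, 3], [3, 0]]          # illustrative transfer costs
print(schedule(exec_cost, comm_cost))  # → (7, [0, 0, 0])
```

In this toy example, layer 1 is cheaper on accelerator 1 in isolation (2 vs. 5), but the two transfers it would incur (3 + 3) outweigh the saving, so the scheduler keeps all layers on accelerator 0 — the same kind of trade-off between per-layer optimality and communication cost that the abstract attributes to Mensa.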

Original language: English (US)
Title of host publication: Proceedings - 30th International Conference on Parallel Architectures and Compilation Techniques, PACT 2021
Editors: Jaejin Lee, Albert Cohen
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 159-172
Number of pages: 14
ISBN (Electronic): 9781665442787
DOIs
State: Published - 2021
Event: 30th International Conference on Parallel Architectures and Compilation Techniques, PACT 2021 - Virtual, Online, United States
Duration: Sep 26, 2021 to Sep 29, 2021

Publication series

Name: Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT
Volume: 2021-September
ISSN (Print): 1089-795X

Conference

Conference: 30th International Conference on Parallel Architectures and Compilation Techniques, PACT 2021
Country/Territory: United States
City: Virtual, Online
Period: 9/26/21 to 9/29/21

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
