PUMA

A Programmable Ultra-efficient Memristor-based Accelerator for Machine Learning Inference

Aayush Ankit, Izzat El Hajj, Sai Rahul Chalamalasetti, Geoffrey Ndu, Martin Foltin, R. Stanley Williams, Paolo Faraboschi, Wen-Mei W Hwu, John Paul Strachan, Kaushik Roy, Dejan S. Milojicic

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Memristor crossbars are circuits capable of performing analog matrix-vector multiplications, overcoming the fundamental energy efficiency limitations of digital logic. They have been shown to be effective in special-purpose accelerators for a limited set of neural network applications. We present the Programmable Ultra-efficient Memristor-based Accelerator (PUMA) which enhances memristor crossbars with general purpose execution units to enable the acceleration of a wide variety of Machine Learning (ML) inference workloads. PUMA's microarchitecture techniques exposed through a specialized Instruction Set Architecture (ISA) retain the efficiency of in-memory computing and analog circuitry, without compromising programmability. We also present the PUMA compiler which translates high-level code to PUMA ISA. The compiler partitions the computational graph and optimizes instruction scheduling and register allocation to generate code for large and complex workloads to run on thousands of spatial cores. We have developed a detailed architecture simulator that incorporates the functionality, timing, and power models of PUMA's components to evaluate performance and energy consumption. A PUMA accelerator running at 1 GHz can reach area and power efficiency of 577 GOPS/s/mm² and 837 GOPS/s/W, respectively. Our evaluation of diverse ML applications from image recognition, machine translation, and language modelling (5M-800M synapses) shows that PUMA achieves up to 2,446× energy and 66× latency improvement for inference compared to state-of-the-art GPUs. Compared to an application-specific memristor-based accelerator, PUMA incurs small energy overheads at similar inference latency and added programmability.
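The crossbar operation the abstract refers to can be illustrated with a small numerical sketch (not code from the paper; the function name and values are hypothetical). Each crossbar cell stores a conductance; driving the rows with voltages makes each column sum its cell currents, so the array computes a matrix-vector product in a single analog step via Ohm's law and Kirchhoff's current law:

```python
# Idealized memristor crossbar model: G[i][j] is the conductance programmed
# into the cell at row i, column j; V[i] is the voltage applied to row i.
# Column j collects current I_j = sum_i V[i] * G[i][j], i.e. the crossbar
# performs an analog matrix-vector multiplication in one step.

def crossbar_mvm(G, V):
    """Return the column currents of a crossbar with conductances G driven by row voltages V."""
    rows, cols = len(G), len(G[0])
    assert len(V) == rows, "one drive voltage per row"
    return [sum(V[i] * G[i][j] for i in range(rows)) for j in range(cols)]

# Example 2x3 crossbar: conductances in siemens, voltages in volts.
G = [[1.0, 0.5, 0.0],
     [2.0, 1.0, 3.0]]
V = [0.5, 1.0]
print(crossbar_mvm(G, V))  # [2.5, 1.25, 3.0]
```

This captures only the ideal behavior; the paper's contribution is the surrounding digital machinery (ISA, general-purpose execution units, compiler) that makes such arrays programmable for diverse ML workloads.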

Original language: English (US)
Title of host publication: ASPLOS 2019 - 24th International Conference on Architectural Support for Programming Languages and Operating Systems
Publisher: Association for Computing Machinery
Pages: 715-731
Number of pages: 17
ISBN (Electronic): 9781450362405
DOI: 10.1145/3297858.3304049
State: Published - Apr 4 2019
Event: 24th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2019 - Providence, United States
Duration: Apr 13 2019 – Apr 17 2019

Publication series

Name: International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS

Conference

Conference: 24th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2019
Country: United States
City: Providence
Period: 4/13/19 – 4/17/19


Keywords

  • accelerators
  • machine learning
  • memristors
  • neural networks

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Hardware and Architecture

Cite this

Ankit, A., El Hajj, I., Rahul Chalamalasetti, S., Ndu, G., Foltin, M., Williams, R. S., ... Milojicic, D. S. (2019). PUMA: A Programmable Ultra-efficient Memristor-based Accelerator for Machine Learning Inference. In ASPLOS 2019 - 24th International Conference on Architectural Support for Programming Languages and Operating Systems (pp. 715-731). (International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS). Association for Computing Machinery. https://doi.org/10.1145/3297858.3304049

@inproceedings{313143e4e4b34d1c86fe03fed54ea332,
title = "PUMA: A Programmable Ultra-efficient Memristor-based Accelerator for Machine Learning Inference",
abstract = "Memristor crossbars are circuits capable of performing analog matrix-vector multiplications, overcoming the fundamental energy efficiency limitations of digital logic. They have been shown to be effective in special-purpose accelerators for a limited set of neural network applications. We present the Programmable Ultra-efficient Memristor-based Accelerator (PUMA) which enhances memristor crossbars with general purpose execution units to enable the acceleration of a wide variety of Machine Learning (ML) inference workloads. PUMA's microarchitecture techniques exposed through a specialized Instruction Set Architecture (ISA) retain the efficiency of in-memory computing and analog circuitry, without compromising programmability. We also present the PUMA compiler which translates high-level code to PUMA ISA. The compiler partitions the computational graph and optimizes instruction scheduling and register allocation to generate code for large and complex workloads to run on thousands of spatial cores. We have developed a detailed architecture simulator that incorporates the functionality, timing, and power models of PUMA's components to evaluate performance and energy consumption. A PUMA accelerator running at 1 GHz can reach area and power efficiency of 577 GOPS/s/mm² and 837 GOPS/s/W, respectively. Our evaluation of diverse ML applications from image recognition, machine translation, and language modelling (5M-800M synapses) shows that PUMA achieves up to 2,446× energy and 66× latency improvement for inference compared to state-of-the-art GPUs. Compared to an application-specific memristor-based accelerator, PUMA incurs small energy overheads at similar inference latency and added programmability.",
keywords = "accelerators, machine learning, memristors, neural networks",
author = "Aayush Ankit and {El Hajj}, Izzat and {Rahul Chalamalasetti}, Sai and Geoffrey Ndu and Martin Foltin and Williams, {R. Stanley} and Paolo Faraboschi and Hwu, {Wen-Mei W} and {Paul Strachan}, John and Kaushik Roy and Milojicic, {Dejan S.}",
year = "2019",
month = "4",
day = "4",
doi = "10.1145/3297858.3304049",
language = "English (US)",
series = "International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS",
publisher = "Association for Computing Machinery",
pages = "715--731",
booktitle = "ASPLOS 2019 - 24th International Conference on Architectural Support for Programming Languages and Operating Systems",

}

TY - GEN

T1 - PUMA

T2 - A Programmable Ultra-efficient Memristor-based Accelerator for Machine Learning Inference

AU - Ankit, Aayush

AU - El Hajj, Izzat

AU - Rahul Chalamalasetti, Sai

AU - Ndu, Geoffrey

AU - Foltin, Martin

AU - Williams, R. Stanley

AU - Faraboschi, Paolo

AU - Hwu, Wen-Mei W

AU - Paul Strachan, John

AU - Roy, Kaushik

AU - Milojicic, Dejan S.

PY - 2019/4/4

Y1 - 2019/4/4

N2 - Memristor crossbars are circuits capable of performing analog matrix-vector multiplications, overcoming the fundamental energy efficiency limitations of digital logic. They have been shown to be effective in special-purpose accelerators for a limited set of neural network applications. We present the Programmable Ultra-efficient Memristor-based Accelerator (PUMA) which enhances memristor crossbars with general purpose execution units to enable the acceleration of a wide variety of Machine Learning (ML) inference workloads. PUMA's microarchitecture techniques exposed through a specialized Instruction Set Architecture (ISA) retain the efficiency of in-memory computing and analog circuitry, without compromising programmability. We also present the PUMA compiler which translates high-level code to PUMA ISA. The compiler partitions the computational graph and optimizes instruction scheduling and register allocation to generate code for large and complex workloads to run on thousands of spatial cores. We have developed a detailed architecture simulator that incorporates the functionality, timing, and power models of PUMA's components to evaluate performance and energy consumption. A PUMA accelerator running at 1 GHz can reach area and power efficiency of 577 GOPS/s/mm² and 837 GOPS/s/W, respectively. Our evaluation of diverse ML applications from image recognition, machine translation, and language modelling (5M-800M synapses) shows that PUMA achieves up to 2,446× energy and 66× latency improvement for inference compared to state-of-the-art GPUs. Compared to an application-specific memristor-based accelerator, PUMA incurs small energy overheads at similar inference latency and added programmability.

AB - Memristor crossbars are circuits capable of performing analog matrix-vector multiplications, overcoming the fundamental energy efficiency limitations of digital logic. They have been shown to be effective in special-purpose accelerators for a limited set of neural network applications. We present the Programmable Ultra-efficient Memristor-based Accelerator (PUMA) which enhances memristor crossbars with general purpose execution units to enable the acceleration of a wide variety of Machine Learning (ML) inference workloads. PUMA's microarchitecture techniques exposed through a specialized Instruction Set Architecture (ISA) retain the efficiency of in-memory computing and analog circuitry, without compromising programmability. We also present the PUMA compiler which translates high-level code to PUMA ISA. The compiler partitions the computational graph and optimizes instruction scheduling and register allocation to generate code for large and complex workloads to run on thousands of spatial cores. We have developed a detailed architecture simulator that incorporates the functionality, timing, and power models of PUMA's components to evaluate performance and energy consumption. A PUMA accelerator running at 1 GHz can reach area and power efficiency of 577 GOPS/s/mm² and 837 GOPS/s/W, respectively. Our evaluation of diverse ML applications from image recognition, machine translation, and language modelling (5M-800M synapses) shows that PUMA achieves up to 2,446× energy and 66× latency improvement for inference compared to state-of-the-art GPUs. Compared to an application-specific memristor-based accelerator, PUMA incurs small energy overheads at similar inference latency and added programmability.

KW - accelerators

KW - machine learning

KW - memristors

KW - neural networks

UR - http://www.scopus.com/inward/record.url?scp=85064688330&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85064688330&partnerID=8YFLogxK

U2 - 10.1145/3297858.3304049

DO - 10.1145/3297858.3304049

M3 - Conference contribution

T3 - International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS

SP - 715

EP - 731

BT - ASPLOS 2019 - 24th International Conference on Architectural Support for Programming Languages and Operating Systems

PB - Association for Computing Machinery

ER -