Unleashing the Potential of PIM: Accelerating Large Batched Inference of Transformer-Based Generative Models

Jaewan Choi, Jaehyun Park, Kwanhee Kyung, Nam Sung Kim, Jung Ho Ahn

Research output: Contribution to journal › Article › peer-review

Abstract

Transformer-based generative models, such as GPT, summarize an input sequence by generating key/value (KV) matrices through attention and then generate the corresponding output sequence by utilizing these matrices once per output token. Both input and output sequences are becoming longer, which improves context understanding and conversation quality, and inference requests are typically batched to improve serving throughput. These trends allow the model weights to be reused effectively, increasing the relative importance of sequence generation, especially of processing the KV matrices through attention. We identify that conventional computing platforms (e.g., GPUs) are inefficient at this attention stage of inference: each request generates different KV matrices, so the stage has a low operation-per-byte ratio regardless of the batch size, and the aggregate size of the KV matrices can even surpass that of the entire model weights. This motivates us to propose AttAcc, which exploits the fact that the KV matrices are written once during summarization but read many times (proportional to the output sequence length), each time multiplied by the embedding vector corresponding to an output token. The volume of data entering and leaving AttAcc can be orders of magnitude smaller than what must be read internally for attention. We design AttAcc with multiple processing-in-memory devices, each multiplying the embedding vector with the portion of the KV matrices residing in that device, saving external (inter-device) bandwidth and energy consumption.
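
To make the memory-bound nature of the attention stage concrete, the following is a minimal NumPy sketch of per-token attention over a cached KV matrix during generation. The sizes, variable names, and the helper function are illustrative assumptions, not taken from the paper; the point is that each decode step performs two matrix-vector products against the request's own KV cache, so batching more requests does not raise the operation-per-byte ratio of this step.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the paper)
d = 128          # head dimension
L_ctx = 2048     # tokens already summarized/generated for this request

# Per-request KV cache written once during summarization, reused every decode step
K = np.random.randn(L_ctx, d).astype(np.float32)   # key matrix
V = np.random.randn(L_ctx, d).astype(np.float32)   # value matrix

def decode_step_attention(q, K, V):
    """One output token's attention: two GEMVs over this request's KV cache.

    Roughly 4*L_ctx*d FLOPs while reading ~2*L_ctx*d cached values, i.e.
    only a few operations per element read. Because every request in a
    batch has a *different* K and V, batching cannot amortize these reads.
    """
    scores = K @ q / np.sqrt(d)            # GEMV: (L_ctx, d) x (d,)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                    # softmax over the context
    return V.T @ probs                      # GEMV: (d, L_ctx) x (L_ctx,)

q = np.random.randn(d).astype(np.float32)   # query for the current output token
out = decode_step_attention(q, K, V)
```

Under this view, placing the two GEMVs inside the memory devices that already hold K and V, as AttAcc does with processing-in-memory, means only the small query and output vectors cross the external interface rather than the full KV matrices.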

Original language: English (US)
Pages (from-to): 113-116
Number of pages: 4
Journal: IEEE Computer Architecture Letters
Volume: 22
Issue number: 2
DOIs
State: Published - Jul 1 2023

Keywords

  • Transformer-based generative model
  • attention
  • processing-in-memory

ASJC Scopus subject areas

  • Hardware and Architecture
