Abstract
This article provides an overview of recently proposed deep in-memory architectures (DIMAs) in SRAM for energy- and latency-efficient hardware realization of machine learning (ML) algorithms. DIMA tackles the data movement problem in von Neumann architectures head-on by deeply embedding mixed-signal computations into a conventional memory array. In doing so, it trades off its computational signal-to-noise ratio (compute SNR) against energy and latency and therefore represents an analog form of approximate computing. DIMA exploits the inherent error immunity of ML algorithms and SNR budgeting methods to operate its analog circuitry in a low-swing/low-compute-SNR regime, thereby achieving a >100× reduction in the energy-delay product (EDP) over an equivalent von Neumann architecture with no loss in inference accuracy. This article describes DIMA's computational pipeline, provides a Shannon-inspired rationale for its robustness to process, temperature, and voltage variations, and offers design guidelines for managing its analog nonidealities. DIMA's versatility, effectiveness, and practicality, demonstrated via multiple silicon IC prototypes in a 65-nm CMOS process, are described. A DIMA-based instruction set architecture (ISA) that realizes an end-to-end application-to-architecture mapping for accelerating diverse ML algorithms is also presented. Finally, DIMA's fundamental tradeoff between energy and accuracy in the low-compute-SNR regime is analyzed to determine energy-optimum design parameters.
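The abstract's closing claim, that energy-optimum design parameters follow from the energy-accuracy tradeoff, can be made concrete with a simplified model. The sketch below is illustrative only and not taken from the paper; it assumes per-operation energy scales with the square of the analog signal swing V_sw (for a switched capacitance C), and that a binary decision on the noisy analog output fails with Gaussian tail probability Q(√SNR), where σ² is the analog noise variance.

```latex
% Illustrative model (assumed, not from the paper): energy grows with the
% square of the analog swing V_sw, and a binary decision on the noisy
% analog output fails with Gaussian tail probability Q(sqrt(SNR)).
\begin{align}
  E &\propto C\,V_{\mathrm{sw}}^{2}, &
  \mathrm{SNR} &= \frac{V_{\mathrm{sw}}^{2}}{\sigma^{2}}, &
  p_{\mathrm{err}} &\approx Q\!\left(\sqrt{\mathrm{SNR}}\right).
\end{align}
% For a target error rate p^*, the smallest admissible swing is
% V_sw^* = sigma * Q^{-1}(p^*), giving an energy floor of
\begin{equation}
  E_{\min} \propto C\,\sigma^{2}\left[Q^{-1}(p^{*})\right]^{2}.
\end{equation}
```

Under these assumptions, lowering the swing saves energy quadratically while accuracy degrades only through the slowly varying Q(·) tail, which is the qualitative reason an error-tolerant ML workload can admit an energy-optimal operating point deep in the low-compute-SNR regime.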
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Article number | 9252843 |
| Pages (from-to) | 2251-2275 |
| Number of pages | 25 |
| Journal | Proceedings of the IEEE |
| Volume | 108 |
| Issue number | 12 |
| DOIs | |
| State | Published - Dec 2020 |
| Externally published | Yes |
Keywords
- Accelerator
- artificial intelligence
- energy efficiency
- in-memory computing
- machine learning (ML)
- non-von Neumann
ASJC Scopus subject areas
- General Computer Science
- Electrical and Electronic Engineering