TY - GEN
T1 - Hardware architecture and software stack for PIM based on commercial DRAM technology
T2 - 48th ACM/IEEE Annual International Symposium on Computer Architecture, ISCA 2021
AU - Lee, Sukhan
AU - Kang, Shin Haeng
AU - Lee, Jaehoon
AU - Kim, Hyeonsu
AU - Lee, Eojin
AU - Seo, Seungwoo
AU - Yoon, Hosang
AU - Lee, Seungwon
AU - Lim, Kyounghwan
AU - Shin, Hyunsung
AU - Kim, Jinhyun
AU - O, Seongil
AU - Iyer, Anand
AU - Wang, David
AU - Sohn, Kyomin
AU - Kim, Nam Sung
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/6
Y1 - 2021/6
N2 - Emerging applications such as deep neural networks demand high off-chip memory bandwidth. However, under the stringent physical constraints of chip packages and system boards, further increasing off-chip memory bandwidth has become very expensive. Moreover, transferring data across the memory hierarchy constitutes a large fraction of systems' total energy consumption, and this fraction has steadily grown with stagnant technology scaling and the poor data-reuse characteristics of such emerging applications. To increase bandwidth and energy efficiency cost-effectively, researchers have begun to revisit past processing-in-memory (PIM) architectures and advance them further, especially by exploiting recent integration technologies such as 2.5D/3D stacking. Despite these recent advances, no major memory manufacturer has yet developed even proof-of-concept silicon, let alone a product, because past PIM architectures often require changes to host processors and/or application code that memory manufacturers cannot easily govern. In this paper, we tackle the aforementioned challenges and propose an innovative yet practical PIM architecture. To demonstrate its practicality and effectiveness at the system level, we implement it in a 20nm DRAM technology, integrate it with an unmodified commercial processor, develop the necessary software stack, and run existing applications without changing their source code. Our system-level evaluation shows that our PIM improves the performance of memory-bound neural network kernels and applications by 11.2× and 3.5×, respectively. On top of the performance improvement, PIM also reduces the energy per bit transfer by 3.5× and the overall energy efficiency of the system running the applications by 3.2×.
KW - Accelerator
KW - DRAM
KW - Neural network
KW - Processing in memory
UR - http://www.scopus.com/inward/record.url?scp=85111464344&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85111464344&partnerID=8YFLogxK
U2 - 10.1109/ISCA52012.2021.00013
DO - 10.1109/ISCA52012.2021.00013
M3 - Conference contribution
AN - SCOPUS:85111464344
T3 - Proceedings - International Symposium on Computer Architecture
SP - 43
EP - 56
BT - Proceedings - 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture, ISCA 2021
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 14 June 2021 through 19 June 2021
ER -