TY - GEN
T1 - The Road to Widely Deploying Processing-in-Memory
T2 - 2022 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2022
AU - Ghose, Saugata
N1 - ACKNOWLEDGMENTS We thank our many collaborators for their contributions to the works discussed above, including (but not limited to) Minh S. Q. Truong, Amirali Boroumand, Damla Senol Cali, Ryan Wong, Yiqiu Sun, Liting Shen, Alexander Glass, Eric Chen, Deanyone Su, Alison Hoffmann, Ziyi Zuo, L. Richard Carley, James A. Bain, Geraldo F. Oliveira, Nastaran Hajinazar, Juan Gómez-Luna, Jeremie S. Kim, Can Alkan, Sreenivas Subramoney, Gurpreet S. Kalsi, Eric Shiu, Parthasarathy Ranganathan, and Onur Mutlu. The RACER work was funded in part by a seed grant from the Wilton E. Scott Institute for Energy Innovation, and by the Data Storage Systems Center at Carnegie Mellon University. Minh S. Q. Truong is supported by an Apple Ph.D. Fellowship in Integrated Systems.
PY - 2022
Y1 - 2022
N2 - Processing-in-memory (PIM) refers to a computing paradigm where some or all of the computation for an application is moved closer to where the data resides (e.g., in main memory). While PIM has been the subject of ongoing research since the 1970s [8], [11], [17], [19], [26], [28], [29], [33], it has experienced a resurgence in the last decade due to (1) the pressing need to reduce the energy and latency overheads associated with data movement between the CPU and memory in conventional systems [6], [18], and (2) recent innovations in memory technologies that can enable PIM integration (e.g., [13]-[16], [20], [21], [24], [31]). Recently released products and prototypes, ranging from programmable near-memory processing units [7], [36] to custom near-bank accelerators for machine learning [22], [23], [30] and analog compute support within memory arrays [9], [27], have demonstrated the viability of manufacturing PIM architectures.
AB - Processing-in-memory (PIM) refers to a computing paradigm where some or all of the computation for an application is moved closer to where the data resides (e.g., in main memory). While PIM has been the subject of ongoing research since the 1970s [8], [11], [17], [19], [26], [28], [29], [33], it has experienced a resurgence in the last decade due to (1) the pressing need to reduce the energy and latency overheads associated with data movement between the CPU and memory in conventional systems [6], [18], and (2) recent innovations in memory technologies that can enable PIM integration (e.g., [13]-[16], [20], [21], [24], [31]). Recently released products and prototypes, ranging from programmable near-memory processing units [7], [36] to custom near-bank accelerators for machine learning [22], [23], [30] and analog compute support within memory arrays [9], [27], have demonstrated the viability of manufacturing PIM architectures.
KW - hardware/software co-design
KW - processing in memory
KW - processing near memory
KW - processing using memory
KW - system integration
UR - http://www.scopus.com/inward/record.url?scp=85139716320&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139716320&partnerID=8YFLogxK
U2 - 10.1109/ISVLSI54635.2022.00057
DO - 10.1109/ISVLSI54635.2022.00057
M3 - Conference contribution
AN - SCOPUS:85139716320
T3 - Proceedings of IEEE Computer Society Annual Symposium on VLSI, ISVLSI
SP - 259
EP - 260
BT - Proceedings - 2022 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2022
PB - IEEE Computer Society
Y2 - 4 July 2022 through 6 July 2022
ER -