TY - GEN
T1 - Heterogeneous Data-Centric Architectures for Modern Data-Intensive Applications
T2 - 2022 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2022
AU - Oliveira, Geraldo F.
AU - Boroumand, Amirali
AU - Ghose, Saugata
AU - Gomez-Luna, Juan
AU - Mutlu, Onur
N1 - Funding Information:
We thank SAFARI Research Group members for valuable feedback and the stimulating intellectual environment they provide. We acknowledge the generous gifts provided by our industrial partners, including ASML, Facebook, Google, Huawei, Intel, Microsoft, and VMware. We acknowledge support from the Semiconductor Research Corporation and the ETH Future Computing Laboratory.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Today's computing systems require moving data back and forth between computing resources (e.g., CPUs, GPUs, accelerators) and off-chip main memory so that computation can take place on the data. Unfortunately, this data movement is a major bottleneck for system performance and energy consumption [1], [2]. One promising execution paradigm that alleviates the data movement bottleneck in modern and emerging applications is processing-in-memory (PIM) [2]-[12], where the cost of data movement to/from main memory is reduced by placing computation capabilities close to memory. In the data-centric PIM paradigm, the logic close to memory has access to data with significantly higher memory bandwidth, lower latency, and lower energy consumption than processors/accelerators in existing processor-centric systems.
AB - Today's computing systems require moving data back and forth between computing resources (e.g., CPUs, GPUs, accelerators) and off-chip main memory so that computation can take place on the data. Unfortunately, this data movement is a major bottleneck for system performance and energy consumption [1], [2]. One promising execution paradigm that alleviates the data movement bottleneck in modern and emerging applications is processing-in-memory (PIM) [2]-[12], where the cost of data movement to/from main memory is reduced by placing computation capabilities close to memory. In the data-centric PIM paradigm, the logic close to memory has access to data with significantly higher memory bandwidth, lower latency, and lower energy consumption than processors/accelerators in existing processor-centric systems.
KW - accelerator
KW - databases
KW - machine learning
KW - neural networks
KW - processing in memory
UR - http://www.scopus.com/inward/record.url?scp=85140893532&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85140893532&partnerID=8YFLogxK
U2 - 10.1109/ISVLSI54635.2022.00060
DO - 10.1109/ISVLSI54635.2022.00060
M3 - Conference contribution
AN - SCOPUS:85140893532
T3 - Proceedings of IEEE Computer Society Annual Symposium on VLSI, ISVLSI
SP - 273
EP - 278
BT - Proceedings - 2022 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2022
PB - IEEE Computer Society
Y2 - 4 July 2022 through 6 July 2022
ER -