TY - GEN
T1 - Hardware-Assisted Virtualization of Neural Processing Units for Cloud Platforms
AU - Xue, Yuqi
AU - Liu, Yiqi
AU - Nai, Lifeng
AU - Huang, Jian
N1 - We thank the anonymous reviewers for their helpful comments and feedback. We thank Haoyang Zhang for his insightful discussion on the NeuISA design. This work was partially supported by NSF grant CCF-1919044, NSF CAREER Award CNS-2144796, and the Hybrid Cloud and AI program at the IBM-Illinois Discovery Accelerator Institute (IIDAI).
PY - 2024
Y1 - 2024
AB - Cloud platforms today have been deploying hardware accelerators like neural processing units (NPUs) to power machine learning (ML) inference services. To maximize resource utilization while ensuring reasonable quality of service, a natural approach is to virtualize NPUs for efficient resource sharing among multi-tenant ML services. However, virtualizing NPUs for modern cloud platforms is not easy. This is due not only to the lack of system abstraction support for NPU hardware, but also to the lack of architectural and ISA support for enabling fine-grained dynamic operator scheduling on virtualized NPUs. We present Neu10, a holistic NPU virtualization framework. We investigate virtualization techniques for NPUs across the entire software and hardware stack. Neu10 consists of (1) a flexible NPU abstraction called vNPU, which enables fine-grained virtualization of the heterogeneous compute units in a physical NPU (pNPU); (2) a vNPU resource allocator that enables a pay-as-you-go computing model and flexible vNPU-to-pNPU mappings for improved resource utilization and cost-effectiveness; and (3) an ISA extension of modern NPU architectures that facilitates fine-grained tensor operator scheduling for multiple vNPUs. We implement Neu10 on a production-level NPU simulator. Our experiments show that Neu10 improves the throughput of ML inference services by up to 1.4×, reduces tail latency by up to 4.6×, and improves NPU utilization by 1.2× on average, compared to state-of-the-art NPU sharing approaches.
KW - machine learning accelerator
KW - neural processing unit
KW - virtualization
UR - http://www.scopus.com/inward/record.url?scp=85213296791&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85213296791&partnerID=8YFLogxK
U2 - 10.1109/MICRO61859.2024.00011
DO - 10.1109/MICRO61859.2024.00011
M3 - Conference contribution
AN - SCOPUS:85213296791
T3 - Proceedings of the Annual International Symposium on Microarchitecture, MICRO
SP - 1
EP - 16
BT - Proceedings - 2024 57th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2024
PB - IEEE Computer Society
T2 - 57th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2024
Y2 - 2 November 2024 through 6 November 2024
ER -