TY - GEN
T1 - FLIN: Enabling Fairness and Enhancing Performance in Modern NVMe Solid State Drives
T2 - 45th ACM/IEEE Annual International Symposium on Computer Architecture, ISCA 2018
AU - Tavakkol, Arash
AU - Sadrosadati, Mohammad
AU - Ghose, Saugata
AU - Kim, Jeremie S.
AU - Luo, Yixin
AU - Wang, Yaohua
AU - Ghiasi, Nika Mansouri
AU - Orosa, Lois
AU - Gómez-Luna, Juan
AU - Mutlu, Onur
N1 - Funding Information:
Acknowledgments: We thank the anonymous referees of ISCA 2018 and ASPLOS 2018. We thank SAFARI group members for feedback and the stimulating research environment. We thank our industrial partners, especially Alibaba, Google, Huawei, Intel, Microsoft, and VMware, for their generous support. Lois Orosa was supported by FAPESP fellowship 2016/18929-4.
Publisher Copyright:
© 2018 IEEE.
Copyright:
Copyright 2018 Elsevier B.V., All rights reserved.
PY - 2018/7/19
Y1 - 2018/7/19
N2 - Modern solid-state drives (SSDs) use new host-interface protocols, such as NVMe, to provide applications with fast access to storage. These new protocols make use of a concept known as the multi-queue SSD (MQ-SSD), where the SSD has direct access to the application-level I/O request queues. This removes most of the OS software stack that was used in older protocols to control how and when the I/O requests were dispatched to storage devices. Unfortunately, while the elimination of the OS software stack leads to a significant performance improvement, we show in this paper that it introduces a new problem: unfairness. This is because the elimination of the OS software stack eliminates the mechanisms that were used to provide fairness among applications in older SSDs. To study application-level unfairness, we perform experiments using four real state-of-the-art MQ-SSDs. We demonstrate that the lack of fair scheduling mechanisms leads to high unfairness among concurrently-executing applications due to the interference among them. For instance, when one of these applications issues many more I/O requests than others, the other applications are slowed down significantly. We perform a comprehensive analysis of interference in real MQ-SSDs, and find four major interference sources: (1) the intensity of requests sent by each application, (2) differences in request access patterns, (3) the ratio of reads to writes, and (4) garbage collection. To alleviate unfairness in MQ-SSDs, we propose the Flash-Level INterference-aware scheduler (FLIN). FLIN is a lightweight I/O request scheduling mechanism that provides fairness among requests from different applications. FLIN uses a three-stage scheduling algorithm that protects against all four major sources of interference, while respecting the application-level priorities assigned by the host. FLIN is implemented fully within the SSD controller firmware, requiring no new hardware, and has negligible (<0.06%) storage cost. Compared to a state-of-the-art I/O scheduler, FLIN improves the fairness and performance of a wide range of enterprise and datacenter storage workloads, with an average improvement of 70% and 47%, respectively.
AB - Modern solid-state drives (SSDs) use new host-interface protocols, such as NVMe, to provide applications with fast access to storage. These new protocols make use of a concept known as the multi-queue SSD (MQ-SSD), where the SSD has direct access to the application-level I/O request queues. This removes most of the OS software stack that was used in older protocols to control how and when the I/O requests were dispatched to storage devices. Unfortunately, while the elimination of the OS software stack leads to a significant performance improvement, we show in this paper that it introduces a new problem: unfairness. This is because the elimination of the OS software stack eliminates the mechanisms that were used to provide fairness among applications in older SSDs. To study application-level unfairness, we perform experiments using four real state-of-the-art MQ-SSDs. We demonstrate that the lack of fair scheduling mechanisms leads to high unfairness among concurrently-executing applications due to the interference among them. For instance, when one of these applications issues many more I/O requests than others, the other applications are slowed down significantly. We perform a comprehensive analysis of interference in real MQ-SSDs, and find four major interference sources: (1) the intensity of requests sent by each application, (2) differences in request access patterns, (3) the ratio of reads to writes, and (4) garbage collection. To alleviate unfairness in MQ-SSDs, we propose the Flash-Level INterference-aware scheduler (FLIN). FLIN is a lightweight I/O request scheduling mechanism that provides fairness among requests from different applications. FLIN uses a three-stage scheduling algorithm that protects against all four major sources of interference, while respecting the application-level priorities assigned by the host. FLIN is implemented fully within the SSD controller firmware, requiring no new hardware, and has negligible (<0.06%) storage cost. Compared to a state-of-the-art I/O scheduler, FLIN improves the fairness and performance of a wide range of enterprise and datacenter storage workloads, with an average improvement of 70% and 47%, respectively.
UR - http://www.scopus.com/inward/record.url?scp=85055864294&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85055864294&partnerID=8YFLogxK
U2 - 10.1109/ISCA.2018.00041
DO - 10.1109/ISCA.2018.00041
M3 - Conference contribution
AN - SCOPUS:85055864294
T3 - Proceedings - International Symposium on Computer Architecture
SP - 397
EP - 410
BT - Proceedings - 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture, ISCA 2018
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 2 June 2018 through 6 June 2018
ER -