TY - JOUR
T1 - Jointly optimizing preprocessing and inference for DNN-based visual analytics
AU - Kang, Daniel
AU - Mathur, Ankit
AU - Veeramacheneni, Teja
AU - Bailis, Peter
AU - Zaharia, Matei
N1 - Funding Information:
We thank Sahaana Suri, Kexin Rong, and members of the Stanford Infolab for their feedback on early drafts. This research was supported in part by affiliate members and other supporters of the Stanford DAWN project—Ant Financial, Facebook, Google, Infosys, NEC, and VMware—as well as Toyota Research Institute, Northrop Grumman, Amazon Web Services, Cisco, and the NSF under CAREER grant CNS-1651570. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. Toyota Research Institute ("TRI") provided funds to assist the authors with their research but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.
Publisher Copyright:
© 2020, VLDB Endowment. All rights reserved.
PY - 2020
Y1 - 2020
N2 - While deep neural networks (DNNs) are an increasingly popular way to query large corpora of data, their significant runtime remains an active area of research. As a result, researchers have proposed systems and optimizations to reduce these costs by allowing users to trade off accuracy and speed. In this work, we examine end-to-end DNN execution in visual analytics systems on modern accelerators. Through a novel measurement study, we show that the preprocessing of data (e.g., decoding, resizing) can be the bottleneck in many visual analytics systems on modern hardware. To address the bottleneck of preprocessing, we introduce two optimizations for end-to-end visual analytics systems. First, we introduce novel methods of achieving accuracy and throughput trade-offs by using natively present, low-resolution visual data. Second, we develop a runtime engine for efficient visual DNN inference. This runtime engine a) efficiently pipelines preprocessing and DNN execution for inference, b) places preprocessing operations on the CPU or GPU in a hardware- and input-aware manner, and c) efficiently manages memory and threading for high throughput execution. We implement these optimizations in a novel system, Smol, and evaluate Smol on eight visual datasets. We show that its optimizations can achieve up to 5.9× end-to-end throughput improvements at a fixed accuracy over recent work in visual analytics.
AB - While deep neural networks (DNNs) are an increasingly popular way to query large corpora of data, their significant runtime remains an active area of research. As a result, researchers have proposed systems and optimizations to reduce these costs by allowing users to trade off accuracy and speed. In this work, we examine end-to-end DNN execution in visual analytics systems on modern accelerators. Through a novel measurement study, we show that the preprocessing of data (e.g., decoding, resizing) can be the bottleneck in many visual analytics systems on modern hardware. To address the bottleneck of preprocessing, we introduce two optimizations for end-to-end visual analytics systems. First, we introduce novel methods of achieving accuracy and throughput trade-offs by using natively present, low-resolution visual data. Second, we develop a runtime engine for efficient visual DNN inference. This runtime engine a) efficiently pipelines preprocessing and DNN execution for inference, b) places preprocessing operations on the CPU or GPU in a hardware- and input-aware manner, and c) efficiently manages memory and threading for high throughput execution. We implement these optimizations in a novel system, Smol, and evaluate Smol on eight visual datasets. We show that its optimizations can achieve up to 5.9× end-to-end throughput improvements at a fixed accuracy over recent work in visual analytics.
UR - http://www.scopus.com/inward/record.url?scp=85097302945&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85097302945&partnerID=8YFLogxK
U2 - 10.14778/3425879.3425881
DO - 10.14778/3425879.3425881
M3 - Article
AN - SCOPUS:85097302945
SN - 2150-8097
VL - 14
SP - 87
EP - 100
JO - Proceedings of the VLDB Endowment
JF - Proceedings of the VLDB Endowment
IS - 2
ER -