TY - JOUR
T1 - BlazeIt: Optimizing Declarative Aggregation and Limit Queries for Neural Network-Based Video Analytics
T2 - 46th International Conference on Very Large Data Bases, VLDB 2020
AU - Kang, Daniel
AU - Bailis, Peter
AU - Zaharia, Matei
N1 - Funding Information:
This research was supported in part by affiliate members and other supporters of the Stanford DAWN project (Ant Financial, Facebook, Google, Infosys, Intel, NEC, SAP, Teradata, and VMware) as well as Toyota Research Institute, Keysight Technologies, Amazon Web Services, Cisco, and the NSF under CAREER grant CNS-1651570. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.
Publisher Copyright:
© VLDB Endowment.
PY - 2019/12/9
Y1 - 2019/12/9
N2 - Recent advances in neural networks (NNs) have enabled automatic querying of large volumes of video data with high accuracy. While these deep NNs can produce accurate annotations of an object's position and type in video, they are computationally expensive and require complex, imperative deployment code to answer queries. Prior work uses approximate filtering to reduce the cost of video analytics, but does not handle two important classes of queries, aggregation and limit queries; moreover, these approaches still require complex code to deploy. To address the computational and usability challenges of querying video at scale, we introduce BLAZEIT, a system that optimizes queries of spatiotemporal information of objects in video. BLAZEIT accepts queries via FRAMEQL, a declarative extension of SQL for video analytics that enables video-specific query optimization. We introduce two new query optimization techniques in BLAZEIT that are not supported by prior work. First, we develop methods of using NNs as control variates to quickly answer approximate aggregation queries with error bounds. Second, we present a novel search algorithm for cardinality-limited video queries. Through these optimizations, BLAZEIT can deliver up to 83x speedups over the recent literature on video processing.
AB - Recent advances in neural networks (NNs) have enabled automatic querying of large volumes of video data with high accuracy. While these deep NNs can produce accurate annotations of an object's position and type in video, they are computationally expensive and require complex, imperative deployment code to answer queries. Prior work uses approximate filtering to reduce the cost of video analytics, but does not handle two important classes of queries, aggregation and limit queries; moreover, these approaches still require complex code to deploy. To address the computational and usability challenges of querying video at scale, we introduce BLAZEIT, a system that optimizes queries of spatiotemporal information of objects in video. BLAZEIT accepts queries via FRAMEQL, a declarative extension of SQL for video analytics that enables video-specific query optimization. We introduce two new query optimization techniques in BLAZEIT that are not supported by prior work. First, we develop methods of using NNs as control variates to quickly answer approximate aggregation queries with error bounds. Second, we present a novel search algorithm for cardinality-limited video queries. Through these optimizations, BLAZEIT can deliver up to 83x speedups over the recent literature on video processing.
UR - http://www.scopus.com/inward/record.url?scp=85079322677&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85079322677&partnerID=8YFLogxK
U2 - 10.14778/3372716.3372725
DO - 10.14778/3372716.3372725
M3 - Conference article
AN - SCOPUS:85079322677
SN - 2150-8097
VL - 13
SP - 533
EP - 546
JO - Proceedings of the VLDB Endowment
JF - Proceedings of the VLDB Endowment
IS - 4
Y2 - 31 August 2020 through 4 September 2020
ER -