A Demonstration of Willump: A Statistically-Aware End-to-end Optimizer for Machine Learning Inference

Peter Kraft, Daniel Kang, Deepak Narayanan, Shoumik Palkar, Peter Bailis, Matei Zaharia

Research output: Contribution to journal › Article › peer-review

Abstract

Systems for ML inference are widely deployed today, but they typically optimize ML inference workloads using techniques designed for conventional data serving workloads and miss critical opportunities to leverage the statistical nature of ML. In this demo, we present Willump, an optimizer for ML inference that introduces statistically-motivated optimizations targeting ML applications whose performance bottleneck is feature computation. Willump automatically cascades feature computation for classification queries: Willump classifies most data inputs using only high-value, low-cost features selected by a cost model, improving query performance by up to 5× without statistically significant accuracy loss. In this demo, we use interactive and easily-downloadable Jupyter notebooks to show VLDB attendees which applications Willump can speed up, how to use Willump, and how Willump produces such large performance gains.
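The cascade described above can be sketched in a few lines. The following is a minimal illustration (not Willump's actual implementation) of the idea: an approximate model classifies each input using only cheap, high-value features, and the expensive features are computed only when that model is not confident. All function names and thresholds here are illustrative assumptions.

```python
# Sketch of a feature-computation cascade for binary classification.
# Assumption: features are split into a cheap subset (selected by a cost
# model in Willump) and an expensive full set; the approximate model
# reports a confidence used to decide whether to take the fast path.

def cheap_features(x):
    # Low-cost features (e.g., simple lookups or arithmetic).
    return [x["a"]]

def expensive_features(x):
    # Full feature set, including costly computations (e.g., remote joins).
    return [x["a"], x["b"]]

def approx_model(feats):
    # Returns (label, confidence) from cheap features only.
    score = feats[0]
    confidence = abs(score - 0.5) * 2  # distance from the decision boundary
    return (1 if score >= 0.5 else 0, confidence)

def full_model(feats):
    # Full model over all features.
    score = sum(feats) / len(feats)
    return 1 if score >= 0.5 else 0

def cascade_classify(x, threshold=0.6):
    label, confidence = approx_model(cheap_features(x))
    if confidence >= threshold:
        return label  # fast path: cheap features sufficed
    return full_model(expensive_features(x))  # slow path: compute everything

# A confident input takes the fast path; an ambiguous one falls through.
fast = cascade_classify({"a": 0.9, "b": 0.1})
slow = cascade_classify({"a": 0.55, "b": 0.9})
print(fast, slow)
```

Because most inputs in many workloads are easy to classify, the expensive features are computed only for the ambiguous minority, which is the source of the speedups the paper reports.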

Original language: English (US)
Pages (from-to): 2833-2836
Number of pages: 4
Journal: Proceedings of the VLDB Endowment
Volume: 13
Issue number: 12
DOIs
State: Published - 2020
Externally published: Yes

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • General Computer Science

