Interpreting black box predictions using Fisher kernels

Rajiv Khanna, Been Kim, Joydeep Ghosh, Oluwasanmi Koyejo

Research output: Contribution to conference › Paper › peer-review


Research in both machine learning and psychology suggests that salient examples can help humans interpret learning models. To this end, we take a novel look at black box interpretation of test predictions in terms of training examples. Our goal is to ask “which training examples are most responsible for a given set of predictions?” To answer this question, we use Fisher kernels as the defining feature embedding of each data point, combined with Sequential Bayesian Quadrature (SBQ) for efficient selection of examples. In contrast to prior work, our method can seamlessly handle any sized subset of test predictions in a principled way. We theoretically analyze our approach, providing novel convergence bounds for SBQ over discrete candidate atoms. Our approach recovers the application of influence functions for interpretability as a special case, yielding novel insights from this connection. We also present applications of the proposed approach to three use cases: cleaning training data, fixing mislabeled examples, and data summarization.
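The Fisher-kernel embedding referenced in the abstract can be illustrated with a small sketch. Below, each training example is mapped to the gradient of its log-likelihood with respect to the model parameters, and the Fisher kernel is the inner product of these gradients scaled by the inverse empirical Fisher information. This is a minimal illustration for a toy logistic-regression model, not the paper's implementation; all variable names are assumptions.

```python
import numpy as np

# Toy setup: a "trained" logistic-regression model on random data.
rng = np.random.default_rng(0)
n, d = 20, 3
X = rng.normal(size=(n, d))
w = rng.normal(size=d)                     # illustrative parameters
y = (X @ w + 0.1 * rng.normal(size=n) > 0).astype(float)

def grad_log_lik(X, y, w):
    """Per-example gradient of log p(y | x, w) for logistic regression."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # predicted probabilities
    return (y - p)[:, None] * X            # shape (n, d): one gradient per example

# Fisher-kernel embedding: one gradient row per data point.
G = grad_log_lik(X, y, w)

# Empirical Fisher information (regularized for invertibility).
F = (G.T @ G) / n + 1e-6 * np.eye(d)

# Fisher kernel matrix: K[i, j] = g_i^T F^{-1} g_j.
K = G @ np.linalg.solve(F, G.T)            # shape (n, n), symmetric PSD
```

Under this embedding, examples whose parameter-gradients point in similar directions are "similar" to the model, which is what makes the kernel a natural basis for selecting training examples responsible for a prediction.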

Original language: English (US)
State: Published - 2020
Event: 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019 - Naha, Japan
Duration: Apr 16 2019 - Apr 18 2019


Conference: 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019

ASJC Scopus subject areas

  • Artificial Intelligence
  • Statistics and Probability

