Predictive approximate Bayesian computation via saddle points

Yingxiang Yang, Bo Dai, Negar Kiyavash, Niao He

Research output: Contribution to journal › Conference article

Abstract

Approximate Bayesian computation (ABC) is an important methodology for Bayesian inference when the likelihood function is intractable. Sampling-based ABC algorithms such as rejection- and K2-ABC are inefficient when the parameters have high dimensions, while regression-based algorithms such as K- and DR-ABC are hard to scale. In this paper, we introduce an optimization-based ABC framework that addresses these deficiencies. Leveraging a generative model for posterior and joint distribution matching, we show that ABC can be framed as saddle point problems, whose objectives can be accessed directly with samples. We present the predictive ABC algorithm (P-ABC), and provide a probabilistically approximately correct (PAC) bound for its learning consistency. Numerical experiments show that P-ABC outperforms both K2- and DR-ABC significantly.
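For context on the sampling-based baselines the abstract mentions, the following is a minimal rejection-ABC sketch on a toy Gaussian model. All names (`simulate`, `summary`, `rejection_abc`) and the model itself are illustrative assumptions, not from the paper; the sketch only shows why acceptance-based sampling degrades as the parameter dimension grows, which is the inefficiency P-ABC is designed to avoid.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    """Toy simulator: n observations from N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

def summary(x):
    """Summary statistic: the sample mean."""
    return x.mean()

def rejection_abc(observed, prior_sampler, eps, n_draws=10000):
    """Keep prior draws whose simulated summary lies within eps of the data's.

    The acceptance region shrinks rapidly in higher dimensions, which is
    why rejection-style ABC becomes inefficient there.
    """
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if abs(summary(simulate(theta)) - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

observed = simulate(2.0)  # "data" generated with true theta = 2.0
posterior = rejection_abc(observed, lambda: rng.uniform(-5, 5), eps=0.2)
```

The accepted draws approximate the posterior over `theta`; their mean concentrates near the true value 2.0, at the cost of discarding most of the 10,000 simulations.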

Original language: English (US)
Pages (from-to): 10260-10270
Number of pages: 11
Journal: Advances in Neural Information Processing Systems
ISSN: 1049-5258
Volume: 2018-December
State: Published - Jan 1 2018
Event: 32nd Conference on Neural Information Processing Systems, NeurIPS 2018 - Montreal, Canada
Duration: Dec 2 2018 - Dec 8 2018


ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Predictive approximate Bayesian computation via saddle points. / Yang, Yingxiang; Dai, Bo; Kiyavash, Negar; He, Niao.

In: Advances in Neural Information Processing Systems, Vol. 2018-December, 01.01.2018, p. 10260-10270.

