Blending learning and inference in conditional random fields

Tamir Hazan, Alexander G. Schwing, Raquel Urtasun

Research output: Contribution to journal › Article › peer-review

Abstract

Conditional random fields maximize the log-likelihood of training labels given the training data, e.g., objects given images. In many cases the training labels are structures that consist of a set of variables, and the computational complexity of estimating their likelihood is exponential in the number of variables. Learning algorithms relax this computational burden using approximate inference nested as a sub-procedure. In this paper we describe the objective function for nested learning and inference in conditional random fields. The devised objective maximizes the log-beliefs, i.e., probability distributions over subsets of training variables that agree on their marginal probabilities. This objective is concave and involves two types of variables, related to the learning and inference tasks respectively. Importantly, we then show how to blend the learning and inference procedures and reach the identical optimum much faster. The proposed algorithm achieves state-of-the-art results in various computer vision applications.
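For context, here is a minimal sketch of the standard CRF maximum-likelihood objective that the abstract starts from; the notation (theta, phi, D) is assumed for illustration and is not taken from the paper itself.

```latex
% Minimal sketch (notation assumed, not from the paper):
% \theta = model parameters, \phi = joint feature map,
% D = set of training pairs (x, y),
% \hat{y} ranges over all joint assignments of the label variables.
\max_{\theta}\; \sum_{(x,y)\in D}
    \Big( \theta^{\top}\phi(x,y)
          - \log \sum_{\hat{y}} \exp\!\big(\theta^{\top}\phi(x,\hat{y})\big) \Big)
```

The inner sum ranges over exponentially many joint label assignments, which is the computational burden the abstract describes; the paper's belief-based objective replaces this exact log-partition term with a concave surrogate over marginal beliefs.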

Original language: English (US)
Pages (from-to): 1-25
Number of pages: 25
Journal: Journal of Machine Learning Research
Volume: 17
State: Published - Dec 1 2016

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability

