Easy minimax estimation with random forests for human pose estimation

P. Daphne Tsatsoulis, David Forsyth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We describe a straightforward method for human parsing that is competitive with the state of the art on standard datasets. Unlike state-of-the-art methods, ours does not search for individual body parts or poselets. Instead, a regression forest is used to predict a body configuration in body-space. The output of this regression forest is then combined in a novel way: instead of averaging the output of each tree in the forest, we use minimax to calculate optimal weights for the trees. This optimal weighting improves performance on rare poses and improves the generalization of our method to different datasets. Our paper demonstrates a unique advantage of random forest representations: minimax estimation is straightforward, with no significant retraining burden.
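To make the tree-weighting idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of re-weighting a regression forest's trees by minimax rather than averaging them. It assumes a single-output regression target, synthetic stand-in data, and scikit-learn/SciPy; the weights minimize the worst-case absolute error on a held-out set via a small linear program.

```python
# Hypothetical sketch: minimax re-weighting of regression-forest trees.
# Instead of averaging tree outputs, solve a linear program for per-tree
# weights that minimize the worst-case absolute error on a held-out set.
# All data and dimensions below are illustrative, not from the paper.
import numpy as np
from scipy.optimize import linprog
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))           # stand-in features (e.g. image descriptors)
y = 2.0 * X[:, 0] + np.sin(X[:, 1])      # stand-in 1-D "body-space" target
X_tr, X_val, X_te = X[:200], X[200:300], X[300:]
y_tr, y_val, y_te = y[:200], y[200:300], y[300:]

forest = RandomForestRegressor(n_estimators=20, random_state=0).fit(X_tr, y_tr)

def per_tree_predictions(forest, X):
    """Return an (n_samples, n_trees) matrix with one column per tree."""
    return np.stack([tree.predict(X) for tree in forest.estimators_], axis=1)

def minimax_tree_weights(P, y):
    """Solve min_w max_i |P[i] @ w - y[i]|  s.t.  w >= 0, sum(w) = 1.

    Cast as an LP over the decision vector [w_1..w_T, z], where the
    auxiliary variable z bounds the worst-case residual.
    """
    n, T = P.shape
    c = np.r_[np.zeros(T), 1.0]                    # objective: minimize z
    A_ub = np.block([[P, -np.ones((n, 1))],        #  P w - z <= y
                     [-P, -np.ones((n, 1))]])      # -P w - z <= -y
    b_ub = np.r_[y, -y]
    A_eq = np.r_[np.ones(T), 0.0].reshape(1, -1)   # weights sum to one
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (T + 1), method="highs")
    return res.x[:T]

w = minimax_tree_weights(per_tree_predictions(forest, X_val), y_val)
pred_avg = forest.predict(X_te)                         # usual uniform average
pred_minimax = per_tree_predictions(forest, X_te) @ w   # minimax-weighted combination
print("max |error|, average :", np.max(np.abs(pred_avg - y_te)))
print("max |error|, minimax :", np.max(np.abs(pred_minimax - y_te)))
```

Because the weights are fit on held-out data with the trees frozen, swapping the combination rule requires only solving this LP, which illustrates the paper's point that minimax estimation adds no significant retraining burden.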

Original language: English (US)
Title of host publication: Computer Vision - ECCV 2014 Workshops, Proceedings
Editors: Michael M. Bronstein, Carsten Rother, Lourdes Agapito
Publisher: Springer
Pages: 669-684
Number of pages: 16
ISBN (Electronic): 9783319161778
DOIs
State: Published - 2015
Event: 13th European Conference on Computer Vision, ECCV 2014 - Zurich, Switzerland
Duration: Sep 6, 2014 – Sep 12, 2014

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 8925
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 13th European Conference on Computer Vision, ECCV 2014
Country/Territory: Switzerland
City: Zurich
Period: 9/6/14 – 9/12/14

Keywords

  • Human pose estimation
  • Minimax
  • Regression
  • Regression forests

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
