iCaps: Iterative Category-Level Object Pose and Shape Estimation

Xinke Deng, Junyi Geng, Timothy Bretl, Yu Xiang, Dieter Fox

Research output: Contribution to journal › Article › peer-review

Abstract

This letter proposes iCaps, a category-level 6D object pose and shape estimation approach that tracks the 6D poses of unseen objects within a category and estimates their 3D shapes. We develop a category-level auto-encoder network that takes depth images as input, where the feature embeddings from the auto-encoder encode the poses of objects in the category. The auto-encoder is used within a particle filter framework to estimate and track the 6D poses of objects in the category. By exploiting an implicit shape representation based on signed distance functions, we build a LatentNet to estimate a latent representation of the 3D shape given the estimated pose of an object. The estimated pose and shape are then used to refine each other iteratively. Our category-level 6D object pose and shape estimation pipeline requires only 2D detection and segmentation for initialization. We evaluate our approach on a publicly available dataset and demonstrate its effectiveness. In particular, our method achieves comparably high accuracy on shape estimation.
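To make the iterative structure described above concrete, the sketch below shows one plausible form of the pose-then-shape loop in Python: a particle filter updates the 6D pose using auto-encoder embeddings of depth crops, and a latent shape code (in the spirit of a signed-distance-function decoder) is then refined with the pose held fixed. All class names and interfaces here (AutoEncoder, ParticleFilter, LatentNet, refine_shape) are hypothetical stand-ins for the learned components named in the abstract, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the iterative pose/shape loop, assuming placeholder
# components in place of the learned networks. Nothing here reproduces the
# authors' actual code.

class AutoEncoder:
    """Category-level auto-encoder: depth crop -> pose-aware embedding."""
    def encode(self, depth_crop):
        return np.random.randn(128)           # placeholder embedding

    def codebook_similarity(self, embedding, rotations):
        # Compare the observed embedding against a codebook of embeddings
        # rendered at known rotations (a common auto-encoder pose design);
        # random scores stand in for the real similarity computation.
        return np.random.rand(len(rotations))

class ParticleFilter:
    """Particle filter over 6D pose hypotheses (translation shown here)."""
    def __init__(self, n_particles=100):
        self.translations = np.zeros((n_particles, 3))
        self.weights = np.full(n_particles, 1.0 / n_particles)

    def predict(self):
        # Diffuse particles with a simple motion model.
        self.translations += 0.001 * np.random.randn(*self.translations.shape)

    def update(self, likelihoods):
        self.weights *= likelihoods
        self.weights /= self.weights.sum()

    def estimate(self):
        t = self.weights @ self.translations  # weighted mean translation
        R = np.eye(3)                         # placeholder rotation estimate
        return R, t

class LatentNet:
    """Predicts a latent shape code from a pose-aligned depth observation."""
    def predict(self, depth_crop, pose):
        return np.zeros(64)                   # placeholder latent code

def refine_shape(latent, depth_points, pose, steps=10):
    # With an SDF decoder, the latent code would be optimized so the decoded
    # surface matches the observed depth points under the estimated pose;
    # returned unchanged here as a stub.
    return latent

def icaps_step(pf, ae, latentnet, depth_crop, depth_points, rotations, latent):
    """One iteration: update the pose, then update the shape given the pose."""
    pf.predict()
    emb = ae.encode(depth_crop)
    pf.update(ae.codebook_similarity(emb, rotations))
    pose = pf.estimate()
    if latent is None:
        latent = latentnet.predict(depth_crop, pose)   # initialize shape
    latent = refine_shape(latent, depth_points, pose)  # refine with pose fixed
    return pose, latent
```

The alternating structure is the key design choice the abstract describes: a better pose makes the depth observations easier to explain with a shape code, and a better shape in turn sharpens the pose likelihood on the next iteration.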

Original language: English (US)
Pages (from-to): 1784-1791
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 7
Issue number: 2
DOIs
State: Published - Apr 1 2022

Keywords

  • Category-level 6D pose estimation
  • Deep learning for visual perception
  • Object shape estimation
  • Perception for grasping and manipulation
  • RGB-D perception

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence
