Locating 3D Object Proposals: A Depth-Based Online Approach

Ramanpreet Singh Pahwa, Jiangbo Lu, Nianjuan Jiang, Tian Tsong Ng, Minh N. Do

Research output: Contribution to journal › Article › peer-review


2D object proposals, quickly detected regions in an image that likely contain an object of interest, are an effective approach for improving the computational efficiency and accuracy of object detection in color images. In this paper, we propose a novel online method that generates 3D object proposals in an RGB-D video sequence. Our main observation is that depth images provide important information about the geometry of the scene. Diverging from the traditional goal of 2D object proposals, which is high recall, we aim for precise 3D proposals. We leverage per-frame depth information and multiview scene information to obtain accurate 3D object proposals. Efficient yet robust registration enables us to combine multiple frames of a scene in near real time and generate 3D bounding boxes for potential 3D regions of interest. Using standard metrics, such as precision-recall (P-R) curves and the F-measure, we show that the proposed approach is significantly more accurate than current state-of-the-art techniques. Our online approach can be integrated into simultaneous localization and mapping-based video processing for quick 3D object localization. Our method takes less than a second per frame in MATLAB on the UW-RGBD scene data set using a single CPU thread and thus has the potential to be used in low-power chips in unmanned aerial vehicles, quadcopters, and drones.
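The abstract evaluates 3D proposals with precision-recall curves and the F-measure. As a minimal illustration of how such metrics can be computed for axis-aligned 3D bounding boxes, the sketch below greedily matches proposals to ground-truth boxes by volumetric intersection-over-union (the greedy matching strategy and the 0.25 IoU threshold are illustrative assumptions, not the paper's exact evaluation protocol):

```python
def iou_3d(a, b):
    """Volumetric IoU of two axis-aligned 3D boxes.

    Each box is (xmin, ymin, zmin, xmax, ymax, zmax).
    """
    # Overlap extent along each of the three axes (clamped at zero).
    extents = [max(min(a[i + 3], b[i + 3]) - max(a[i], b[i]), 0.0)
               for i in range(3)]
    inter = extents[0] * extents[1] * extents[2]
    vol = lambda x: (x[3] - x[0]) * (x[4] - x[1]) * (x[5] - x[2])
    return inter / (vol(a) + vol(b) - inter)


def precision_recall_f(proposals, ground_truth, iou_thresh=0.25):
    """Precision, recall, and F-measure under greedy one-to-one matching.

    A proposal counts as a true positive if it overlaps an unmatched
    ground-truth box with IoU >= iou_thresh.
    """
    matched, tp = set(), 0
    for p in proposals:
        best, best_iou = None, iou_thresh
        for i, g in enumerate(ground_truth):
            if i in matched:
                continue
            ov = iou_3d(p, g)
            if ov >= best_iou:
                best, best_iou = i, ov
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(proposals) if proposals else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    denom = precision + recall
    f_measure = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f_measure
```

Sweeping `iou_thresh` (or a proposal confidence threshold) over a range of values yields the points of a P-R curve.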

Original language: English (US)
Pages (from-to): 626-639
Number of pages: 14
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Issue number: 3
State: Published - Mar 2018


Keywords

  • Depth cameras
  • object proposals
  • robot vision

ASJC Scopus subject areas

  • Media Technology
  • Electrical and Electronic Engineering


