Abstract
While great strides have been made in detecting and localizing specific objects in natural images, the bottom-up segmentation of unknown, generic objects remains a difficult challenge. We believe that occlusion can provide a strong cue for object segmentation and "pop-out", but detecting an object's occlusion boundaries using appearance alone is a difficult problem in itself. If the camera or the scene is moving, however, that motion provides an additional powerful indicator of occlusion. Thus, we use standard appearance cues (e.g. brightness/color gradient) in addition to motion cues that capture subtle differences in the relative surface motion (i.e. parallax) on either side of an occlusion boundary. We describe a learned local classifier and global inference approach which provide a framework for combining and reasoning about these appearance and motion cues to estimate which region boundaries of an initial over-segmentation correspond to object/occlusion boundaries in the scene. Through results on a dataset which contains short videos with labeled boundaries, we demonstrate the effectiveness of motion cues for this task.
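The pipeline sketched in the abstract — per-boundary appearance and motion cues fed to a learned local classifier over the fragments of an over-segmentation — can be illustrated with a minimal sketch. The feature choices, the logistic form of the classifier, and all function and parameter names below are assumptions made for illustration, not the paper's actual implementation; the global inference stage is omitted.

```python
# Hypothetical sketch (not the paper's method): score each boundary fragment
# of an over-segmentation by combining appearance cues with a parallax cue.
import numpy as np

def boundary_features(brightness_grad, color_grad, motion_left, motion_right):
    """Stack per-fragment cues into a feature vector.

    brightness_grad, color_grad: appearance cues averaged along the fragment
    (assumed precomputed scalars).
    motion_left, motion_right: mean 2-D flow of the regions on either side;
    their difference approximates relative surface motion (parallax).
    """
    parallax = np.linalg.norm(np.asarray(motion_left) - np.asarray(motion_right))
    return np.array([brightness_grad, color_grad, parallax])

def occlusion_probability(features, weights, bias):
    """Local logistic classifier: P(fragment lies on an occlusion boundary)."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

# Example: strong parallax across the fragment despite a modest appearance edge.
feats = boundary_features(brightness_grad=0.3, color_grad=0.2,
                          motion_left=(1.5, 0.0), motion_right=(0.1, 0.0))
print(occlusion_probability(feats, weights=np.array([1.0, 1.0, 2.0]), bias=-2.0))
```

In the paper's framework these local scores would then be combined by a global inference step over the boundary graph; the weights here are placeholders rather than learned values.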
| Original language | English (US) |
| --- | --- |
| DOIs | |
| State | Published - 2007 |
| Externally published | Yes |
| Event | 2007 IEEE 11th International Conference on Computer Vision, ICCV - Rio de Janeiro, Brazil. Duration: Oct 14 2007 → Oct 21 2007 |
Other
| Other | 2007 IEEE 11th International Conference on Computer Vision, ICCV |
| --- | --- |
| Country/Territory | Brazil |
| City | Rio de Janeiro |
| Period | 10/14/07 → 10/21/07 |
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition