Foreground object detection in highly dynamic scenes using saliency

Kai Hsiang Lin, Pooya Khorrami, Jiangping Wang, Mark Hasegawa-Johnson, Thomas S. Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we propose a novel saliency-based algorithm to detect foreground regions in highly dynamic scenes. We first convert input video frames to multiple patch-based feature maps. Then, we apply temporal saliency analysis to the pixels of each feature map. For each temporal set of co-located pixels, the feature distance of a point from its kth nearest neighbor is used to compute the temporal saliency. By computing and combining temporal saliency maps of different features, we obtain foreground likelihood maps. A simple segmentation method based on adaptive thresholding is applied to detect the foreground objects. We test our algorithm on image sequences of dynamic scenes, including public datasets and a new challenging wildlife dataset we constructed. The experimental results demonstrate that the proposed algorithm achieves state-of-the-art performance.
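
The sketch below is not the authors' code; it is a minimal illustration of the temporal-saliency idea described in the abstract, assuming per-pixel patch features are already extracted into arrays. All function names, the feature stack layout, and parameters such as k and the thresholding rule are assumptions made for illustration only.

```python
# Illustrative sketch (not the published implementation) of temporal saliency:
# for each temporal stack of co-located patch features, saliency is the
# distance from the current frame's feature vector to its k-th nearest
# neighbor among the co-located features of the other frames.
import numpy as np

def temporal_saliency(feature_stack, k=3):
    """feature_stack: (T, H, W, D) array of patch-based features over a
    temporal window of T frames. Returns an (H, W) saliency map for the
    last frame, using the k-th nearest-neighbor feature distance."""
    T, H, W, D = feature_stack.shape
    current = feature_stack[-1]          # features of the frame being scored
    history = feature_stack[:-1]         # co-located features from past frames
    # Distance from the current feature to every past co-located feature.
    dists = np.linalg.norm(history - current[None], axis=-1)   # (T-1, H, W)
    # k-th smallest distance per pixel (1-indexed k, clamped to the window):
    # a large value means the feature is far from its nearest temporal
    # neighbors, i.e. likely foreground.
    idx = min(k, T - 1) - 1
    return np.sort(dists, axis=0)[idx]   # (H, W)

def foreground_likelihood(feature_stacks, k=3):
    """Combine temporal saliency maps from several feature types
    (e.g. color, gradient) by simple averaging after normalization."""
    maps = []
    for stack in feature_stacks:
        s = temporal_saliency(stack, k)
        s = (s - s.min()) / (s.max() - s.min() + 1e-8)   # normalize to [0, 1]
        maps.append(s)
    return np.mean(maps, axis=0)

def segment(likelihood, scale=1.5):
    """Adaptive threshold: pixels whose likelihood exceeds
    mean + scale * std are marked as foreground."""
    thr = likelihood.mean() + scale * likelihood.std()
    return likelihood > thr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy feature types over a 10-frame window of 64x64 frames, 8-D features.
    stacks = [rng.normal(size=(10, 64, 64, 8)) for _ in range(2)]
    mask = segment(foreground_likelihood(stacks, k=3))
    print("foreground pixels:", int(mask.sum()))
```

The simple mean-plus-standard-deviation threshold above stands in for the adaptive thresholding step mentioned in the abstract, whose exact form is not specified there.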

Original language: English (US)
Title of host publication: 2014 IEEE International Conference on Image Processing, ICIP 2014
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1125-1129
Number of pages: 5
ISBN (Electronic): 9781479957514
DOIs
State: Published - Jan 28 2014

Publication series

Name: 2014 IEEE International Conference on Image Processing, ICIP 2014

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
