TY - GEN
T1 - YouTube-VOS: Sequence-to-Sequence Video Object Segmentation
T2 - 15th European Conference on Computer Vision, ECCV 2018
AU - Xu, Ning
AU - Yang, Linjie
AU - Fan, Yuchen
AU - Yang, Jianchao
AU - Yue, Dingcheng
AU - Liang, Yuchen
AU - Price, Brian
AU - Cohen, Scott
AU - Huang, Thomas
N1 - Funding Information:
Acknowledgement. This research was partially supported by gift funding from Snap Inc. and a UIUC Andrew T. Yang Research and Entrepreneurship Award to the Beckman Institute for Advanced Science & Technology, UIUC.
Publisher Copyright:
© 2018, Springer Nature Switzerland AG.
PY - 2018
Y1 - 2018
N2 - Learning long-term spatial-temporal features is critical for many video analysis tasks. However, existing video segmentation methods predominantly rely on static image segmentation techniques, and methods that capture temporal dependency for segmentation have to depend on pretrained optical flow models, leading to suboptimal solutions for the problem. End-to-end sequential learning to explore spatial-temporal features for video segmentation is largely limited by the scale of available video segmentation datasets; even the largest video segmentation dataset contains only 90 short video clips. To solve this problem, we build a new large-scale video object segmentation dataset called the YouTube Video Object Segmentation dataset (YouTube-VOS). Our dataset contains 3,252 YouTube video clips and 78 categories covering common objects and human activities (these are the statistics at the time of submission; see our website for updated statistics). This is by far the largest video object segmentation dataset to our knowledge, and we have released it at https://youtube-vos.org. Based on this dataset, we propose a novel sequence-to-sequence network to fully exploit long-term spatial-temporal information in videos for segmentation. We demonstrate that our method achieves the best results on our YouTube-VOS test set and comparable results on DAVIS 2016 relative to current state-of-the-art methods. Experiments show that the large-scale dataset is indeed a key factor in the success of our model.
AB - Learning long-term spatial-temporal features is critical for many video analysis tasks. However, existing video segmentation methods predominantly rely on static image segmentation techniques, and methods that capture temporal dependency for segmentation have to depend on pretrained optical flow models, leading to suboptimal solutions for the problem. End-to-end sequential learning to explore spatial-temporal features for video segmentation is largely limited by the scale of available video segmentation datasets; even the largest video segmentation dataset contains only 90 short video clips. To solve this problem, we build a new large-scale video object segmentation dataset called the YouTube Video Object Segmentation dataset (YouTube-VOS). Our dataset contains 3,252 YouTube video clips and 78 categories covering common objects and human activities (these are the statistics at the time of submission; see our website for updated statistics). This is by far the largest video object segmentation dataset to our knowledge, and we have released it at https://youtube-vos.org. Based on this dataset, we propose a novel sequence-to-sequence network to fully exploit long-term spatial-temporal information in videos for segmentation. We demonstrate that our method achieves the best results on our YouTube-VOS test set and comparable results on DAVIS 2016 relative to current state-of-the-art methods. Experiments show that the large-scale dataset is indeed a key factor in the success of our model.
KW - Large-scale dataset
KW - Spatial-temporal information
KW - Video object segmentation
UR - http://www.scopus.com/inward/record.url?scp=85055081492&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85055081492&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-01228-1_36
DO - 10.1007/978-3-030-01228-1_36
M3 - Conference contribution
AN - SCOPUS:85055081492
SN - 9783030012274
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 603
EP - 619
BT - Computer Vision – ECCV 2018 – 15th European Conference, 2018, Proceedings
A2 - Ferrari, Vittorio
A2 - Sminchisescu, Cristian
A2 - Hebert, Martial
A2 - Weiss, Yair
PB - Springer
Y2 - 8 September 2018 through 14 September 2018
ER -