TY - GEN
T1 - Multi-scale orderless pooling of deep convolutional activation features
AU - Gong, Yunchao
AU - Wang, Liwei
AU - Guo, Ruiqi
AU - Lazebnik, Svetlana
PY - 2014
Y1 - 2014
N2 - Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets.
UR - http://www.scopus.com/inward/record.url?scp=84906352772&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84906352772&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-10584-0_26
DO - 10.1007/978-3-319-10584-0_26
M3 - Conference contribution
AN - SCOPUS:84906352772
SN - 9783319105833
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 392
EP - 407
BT - Computer Vision, ECCV 2014 - 13th European Conference, Proceedings
PB - Springer
T2 - 13th European Conference on Computer Vision, ECCV 2014
Y2 - 6 September 2014 through 12 September 2014
ER -
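
Note: the abstract above outlines the MOP-CNN pipeline (multi-scale patch extraction, per-scale orderless VLAD pooling, concatenation of all levels). Below is a minimal NumPy sketch of that idea for illustration only; it is not the authors' implementation. The cnn_activation callable, the pre-trained per-scale codebooks, and the patch_sizes/stride values are hypothetical placeholders, and further steps described in the paper (e.g., dimensionality reduction of patch activations before VLAD) are omitted.

import numpy as np

def vlad_encode(descriptors, centers):
    """Orderless VLAD pooling: assign each local descriptor to its nearest
    codebook center, accumulate residuals, then power- and L2-normalize."""
    k, d = centers.shape
    dists = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(1)
    vlad = np.zeros((k, d))
    for i, c in enumerate(assign):
        vlad[c] += descriptors[i] - centers[c]
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))        # signed square-root (power) normalization
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad            # L2 normalization

def extract_patches(image, patch_size, stride):
    """Densely sample square patches from an H x W x C image array."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

def mop_cnn(image, cnn_activation, codebooks, patch_sizes=(128, 64), stride=32):
    """Concatenate the global CNN activation (level 1) with per-scale VLAD
    encodings of patch-level activations (one hypothetical codebook per scale)."""
    features = [cnn_activation(image)]                  # level 1: whole image
    for size, centers in zip(patch_sizes, codebooks):   # levels 2..n: local patches
        acts = np.stack([cnn_activation(p)
                         for p in extract_patches(image, size, stride)])
        features.append(vlad_encode(acts, centers))
    return np.concatenate(features)

In this sketch, cnn_activation stands in for any function that maps an image or patch to a fixed-length activation vector, and codebooks is a list of k-means center matrices (one per patch scale) learned offline; both are assumptions introduced here for readability.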