TY  - CPAPER
T1 - From captions to visual concepts and back
AU - Fang, Hao
AU - Gupta, Saurabh
AU - Iandola, Forrest
AU - Srivastava, Rupesh K.
AU - Deng, Li
AU - Dollár, Piotr
AU - Gao, Jianfeng
AU - He, Xiaodong
AU - Mitchell, Margaret
AU - Platt, John C.
AU - Zitnick, C. Lawrence
AU - Zweig, Geoffrey
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/10/14
Y1 - 2015/10/14
AB - This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.
UR - http://www.scopus.com/inward/record.url?scp=84959250180&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84959250180&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2015.7298754
DO - 10.1109/CVPR.2015.7298754
M3 - Conference contribution
AN - SCOPUS:84959250180
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 1473
EP - 1482
BT - IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
PB - IEEE Computer Society
T2 - IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
Y2 - 7 June 2015 through 12 June 2015
ER -