Solving Visual Madlibs with Multiple Cues

Tatiana Tommasi, Arun Mallya, Bryan Plummer, Svetlana Lazebnik, Alexander C. Berg, Tamara L. Berg

Research output: Contribution to conference › Paper › peer-review

Abstract

This paper focuses on answering fill-in-the-blank style multiple choice questions from the Visual Madlibs dataset. Previous approaches to Visual Question Answering (VQA) have mainly used generic image features from networks trained on the ImageNet dataset, despite the wide scope of questions. In contrast, our approach employs features derived from networks trained for the specialized tasks of scene classification, person activity prediction, and person and object attribute prediction. We also present a method for selecting sub-regions of an image that are relevant for evaluating the appropriateness of a putative answer. Visual features are computed both from the whole image and from local regions, while sentences are mapped to a common space using a simple normalized canonical correlation analysis (CCA) model. Our results show a significant improvement over the previous state of the art, and indicate that answering different question types benefits from examining a variety of image cues and carefully choosing informative image sub-regions.
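As a rough illustration of the scoring scheme the abstract describes, the sketch below maps image and answer features into a shared space with CCA and ranks each multiple-choice candidate by its cosine similarity to the image. It uses scikit-learn's generic CCA as a stand-in for the paper's normalized CCA (the eigenvalue-based rescaling of projections is not reproduced), and all function and variable names are illustrative rather than taken from the authors' code.

# Minimal sketch of CCA-based answer scoring, assuming precomputed
# image and sentence feature matrices; not the authors' implementation.
import numpy as np
from sklearn.cross_decomposition import CCA

def fit_cca(image_feats, text_feats, n_components=128):
    # image_feats: (N, D_img) CNN features (whole-image or sub-region).
    # text_feats:  (N, D_txt) sentence embeddings of ground-truth answers.
    cca = CCA(n_components=n_components, max_iter=500)
    cca.fit(image_feats, text_feats)
    return cca

def score_candidates(cca, image_feat, candidate_feats):
    # Project both views into the learned common space.
    img_proj, cand_proj = cca.transform(
        image_feat.reshape(1, -1), candidate_feats
    )
    # L2-normalize so the dot product equals cosine similarity.
    img_proj /= np.linalg.norm(img_proj, axis=1, keepdims=True)
    cand_proj /= np.linalg.norm(cand_proj, axis=1, keepdims=True)
    return cand_proj @ img_proj.ravel()

# Usage: the predicted answer is the candidate with the highest score,
# e.g. best = int(np.argmax(score_candidates(cca, img_feat, choice_feats))).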

Original language: English (US)
Pages: 77.1-77.13
State: Published - 2016
Event: 27th British Machine Vision Conference, BMVC 2016 - York, United Kingdom
Duration: Sep 19, 2016 - Sep 22, 2016

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
