Learning Two-Branch Neural Networks for Image-Text Matching Tasks

Liwei Wang, Yin Li, Jing Huang, Svetlana Lazebnik

Research output: Contribution to journal › Article › peer-review


Image-language matching tasks have recently attracted a lot of attention in the computer vision field. These tasks include image-sentence matching, i.e., given an image query, retrieving relevant sentences and vice versa, and region-phrase matching or visual grounding, i.e., matching a phrase to relevant regions. This paper investigates two-branch neural networks for learning the similarity between these two data modalities. We propose two network structures that produce different output representations. The first one, referred to as an embedding network, learns an explicit shared latent embedding space with a maximum-margin ranking loss and novel neighborhood constraints. Compared to standard triplet sampling, we perform improved neighborhood sampling that takes neighborhood information into consideration while constructing mini-batches. The second network structure, referred to as a similarity network, fuses the two branches via element-wise product and is trained with regression loss to directly predict a similarity score. Extensive experiments show that our networks achieve high accuracies for phrase localization on the Flickr30K Entities dataset and for bi-directional image-sentence retrieval on Flickr30K and MSCOCO datasets.
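The two output structures described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the use of cosine similarity, the margin value, and the linear scoring head are all assumptions made for illustration. The first function shows a bidirectional max-margin ranking loss of the kind used by the embedding network (without the paper's neighborhood constraints); the second shows the similarity network's element-wise-product fusion followed by a learned scoring layer.

```python
import numpy as np

def l2norm(x):
    # Normalize each row to unit length so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def bidirectional_ranking_loss(img_emb, txt_emb, margin=0.2):
    """Max-margin ranking loss over a mini-batch (illustrative sketch).

    Assumes matching image-text pairs lie on the diagonal of the
    batch similarity matrix; all off-diagonal entries act as negatives.
    The hinge is applied in both retrieval directions (image-to-text
    and text-to-image).
    """
    sim = l2norm(img_emb) @ l2norm(txt_emb).T   # (n, n) cosine similarities
    pos = np.diag(sim)                          # scores of the matching pairs
    cost_i2t = np.maximum(0.0, margin + sim - pos[:, None])  # image query
    cost_t2i = np.maximum(0.0, margin + sim - pos[None, :])  # text query
    np.fill_diagonal(cost_i2t, 0.0)             # do not penalize the positives
    np.fill_diagonal(cost_t2i, 0.0)
    return cost_i2t.sum() + cost_t2i.sum()

def similarity_score(img_emb, txt_emb, w):
    """Similarity-network head (sketch): fuse the two branches by
    element-wise product, then a linear layer predicts a score that
    would be trained with a regression loss."""
    fused = l2norm(img_emb) * l2norm(txt_emb)   # (n, d) fused features
    return fused @ w                             # (n,) predicted scores
```

With perfectly aligned embeddings (e.g. matching identity rows) the ranking loss is zero, since every positive pair outscores every negative by more than the margin; embeddings that confuse two captions produce a positive loss.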

Original language: English (US)
Article number: 8268651
Pages (from-to): 394-407
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 2
State: Published - Feb 1 2019


Keywords

  • Deep learning
  • cross-modal retrieval
  • image-sentence retrieval
  • phrase localization
  • visual grounding

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics


