Synthesizing Spoken Descriptions of Images

Xinsheng Wang, Justin Van Der Hout, Jihua Zhu, Mark Hasegawa-Johnson, Odette Scharenborg

Research output: Contribution to journal › Article › peer-review


Image captioning technology has great potential in many scenarios. However, current text-based image captioning methods cannot be applied to roughly half of the world's languages because these languages lack a written form. To address this problem, the image-to-speech task was recently proposed: it generates spoken descriptions of images, bypassing text, via an intermediate representation consisting of phonemes (image-to-phoneme). Here, we present a comprehensive study of the image-to-speech task in which 1) several representative image-to-text generation methods are implemented for the image-to-phoneme task, 2) objective metrics are sought to evaluate the image-to-phoneme task, and 3) an end-to-end image-to-speech model is proposed that synthesizes spoken descriptions of images while bypassing both text and phonemes. Extensive experiments are conducted on the public benchmark database Flickr8k. The results demonstrate that 1) state-of-the-art image-to-text models can perform well on the image-to-phoneme task, 2) several evaluation metrics, including BLEU3, BLEU4, BLEU5, and ROUGE-L, can be used to evaluate image-to-phoneme performance, and 3) end-to-end image-to-speech generation bypassing text and phonemes is feasible.
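The BLEU-n scores mentioned in the abstract can be computed over phoneme sequences by treating each phoneme as a token. A minimal sketch of sentence-level BLEU with uniform n-gram weights is shown below; the phoneme strings and helper names are illustrative assumptions, not taken from the paper, and the authors' exact evaluation setup may differ (e.g. corpus-level scoring or different smoothing).

```python
# Hedged sketch: n-gram BLEU over phoneme sequences, each phoneme one token.
# The phoneme examples and function names are illustrative, not from the paper.
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with uniform weights over 1..max_n grams."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        # Tiny floor avoids log(0) when no n-gram matches (crude smoothing).
        precisions.append(max(overlap, 1e-9) / total)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * geo_mean

# Made-up ARPAbet-style phoneme tokens for illustration:
ref = "AH M AE N R AY D Z AH B AY K".split()
hyp = "AH M AE N R AY D Z AH B AY K".split()
print(round(bleu(hyp, ref, max_n=4), 3))  # identical sequences score 1.0
```

BLEU3/BLEU4/BLEU5 correspond to `max_n` of 3, 4, and 5; longer n-grams reward longer stretches of correctly ordered phonemes, which is why the abstract finds the higher-order variants informative for this task.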

Original language: English (US)
Pages (from-to): 3242-3254
Number of pages: 13
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
State: Published - 2021

Keywords
  • Image-to-speech generation
  • cross-modal captioning
  • multimodal modelling
  • speech synthesis

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Acoustics and Ultrasonics
  • Computational Mathematics
  • Electrical and Electronic Engineering


