Unsupervised Improvement of Audio-Text Cross-Modal Representations

Zhepei Wang, Cem Subakan, Krishna Subramani, Junkai Wu, Tiago Tavares, Fabio Ayres, Paris Smaragdis

Research output: Contribution to journal › Conference article › peer-review

Abstract

Recent advances in using language models to obtain cross-modal audio-text representations have overcome the limitations of conventional training approaches that use predefined labels. This has allowed the community to make progress in tasks like zero-shot classification, which would otherwise not be possible. However, learning such representations requires a large number of human-annotated audio-text pairs. In this paper, we study unsupervised approaches to improve the learning framework of such representations with unpaired text and audio. We explore domain-unspecific and domain-specific curation methods to create audio-text pairs that we use to further improve the model. We also show that when domain-specific curation is used in conjunction with a soft-labeled contrastive loss, we obtain significant improvements in zero-shot classification performance on downstream sound event classification and acoustic scene classification tasks.
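To make the abstract's central ingredient concrete, the sketch below shows how a CLIP/CLAP-style contrastive objective can be generalized from one-hot identity labels to soft pairing targets. This is a minimal illustration, not the paper's exact formulation: the function name, the shape and construction of the target matrix, and the temperature value are all assumptions made for exposition.

```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(audio_emb, text_emb, soft_targets, temperature=0.07):
    """Contrastive loss with soft targets (illustrative sketch, not the
    paper's exact method). audio_emb, text_emb: (B, D) embedding batches;
    soft_targets: (B, B) matrix whose rows sum to 1, e.g. pairing weights
    derived from cross-modal similarity rather than one-hot identity."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # (B, B) matrix of scaled cosine similarities between all pairs.
    logits = audio_emb @ text_emb.t() / temperature
    # Cross-entropy against the soft targets, symmetrized over the
    # audio-to-text and text-to-audio directions.
    loss_a2t = -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    loss_t2a = -(soft_targets.t() * F.log_softmax(logits.t(), dim=1)).sum(dim=1).mean()
    return 0.5 * (loss_a2t + loss_t2a)

# Usage: with one-hot targets this reduces to the standard paired
# contrastive loss; soft targets relax the hard pairing assumption.
B, D = 8, 512
audio, text = torch.randn(B, D), torch.randn(B, D)
loss = soft_contrastive_loss(audio, text, torch.eye(B))
```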

Original language: English (US)
Journal: IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
Volume: 2023-January
DOIs
State: Published - 2023
Event: 2023 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, WASPAA 2023 - New Paltz, United States
Duration: Oct 22, 2023 - Oct 25, 2023

Keywords

  • Audio-text representation learning
  • acoustic scene classification
  • contrastive learning
  • data augmentation
  • sound event classification

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Computer Science Applications
