Dual-Path Cross-Modal Attention for Better Audio-Visual Speech Extraction

Zhongweiyang Xu, Xulin Fan, Mark Hasegawa-Johnson

Research output: Contribution to journal › Conference article › peer-review

Abstract

Audiovisual target speaker extraction is the task of separating, from an audio mixture, the speaker whose face is visible in an accompanying video. Published approaches typically upsample the video or downsample the audio, then fuse the two streams using concatenation, multiplication, or cross-modal attention. This paper proposes, instead, to use a dual-path attention architecture in which the audio chunk length is comparable to the duration of a video frame. Audio is transformed by intra-chunk attention, concatenated to video features, then transformed by inter-chunk attention. Because of residual connections, the audio and video features remain logically distinct across multiple network layers, so dual-path audiovisual feature fusion can be performed repeatedly across multiple layers. When given 2- to 5-speaker mixtures constructed from the challenging LRS3 test set, results are about 7 dB better than ConvTasNet or AV-ConvTasNet, with the performance gap widening slightly as the number of speakers increases.
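The sketch below illustrates the fusion scheme described in the abstract: intra-chunk attention over audio, concatenation with per-frame video features, then inter-chunk attention, with residual connections keeping the two streams distinct so the layer can be stacked. It is a minimal PyTorch reading of the abstract only; the dimensions, residual/normalization wiring, and the way video features are pooled back out are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of one dual-path audio-visual fusion layer (assumptions noted above).
import torch
import torch.nn as nn


class DualPathAVLayer(nn.Module):
    def __init__(self, audio_dim=256, video_dim=256, n_heads=4):
        super().__init__()
        # Intra-chunk attention: attends over audio frames within one chunk.
        self.intra_attn = nn.MultiheadAttention(audio_dim, n_heads, batch_first=True)
        # Inter-chunk attention: attends across chunks on concatenated [audio; video] features.
        self.inter_attn = nn.MultiheadAttention(audio_dim + video_dim, n_heads, batch_first=True)
        self.intra_norm = nn.LayerNorm(audio_dim)
        self.inter_norm = nn.LayerNorm(audio_dim + video_dim)

    def forward(self, audio, video):
        """
        audio: (batch, n_chunks, chunk_len, audio_dim) -- chunk_len chosen so one chunk
               spans roughly one video frame.
        video: (batch, n_chunks, video_dim) -- one visual feature per video frame / chunk.
        """
        B, C, K, Da = audio.shape

        # 1) Intra-chunk attention with a residual, keeping the audio stream identifiable.
        a = audio.reshape(B * C, K, Da)
        a_out, _ = self.intra_attn(a, a, a)
        audio = audio + self.intra_norm(a_out).reshape(B, C, K, Da)

        # 2) Concatenate each chunk's video feature onto every audio frame in that chunk.
        v = video.unsqueeze(2).expand(B, C, K, video.shape[-1])
        av = torch.cat([audio, v], dim=-1)  # (B, C, K, Da + Dv)

        # 3) Inter-chunk attention: attend across chunks at each within-chunk position.
        av = av.permute(0, 2, 1, 3).reshape(B * K, C, -1)
        av_out, _ = self.inter_attn(av, av, av)
        av = (av + self.inter_norm(av_out)).reshape(B, K, C, -1).permute(0, 2, 1, 3)

        # 4) Split the streams back apart so audio and video stay logically distinct
        #    and the fusion can be repeated across stacked layers.
        audio_out, video_out = av[..., :Da], av[..., Da:]
        return audio_out, video_out.mean(dim=2)  # pool video back to one vector per chunk


# Usage: output shapes match the inputs, so layers can be stacked for repeated fusion.
layer = DualPathAVLayer()
audio = torch.randn(2, 50, 8, 256)  # 2 utterances, 50 chunks, 8 audio frames per chunk
video = torch.randn(2, 50, 256)     # one visual embedding per video frame / chunk
a_out, v_out = layer(audio, video)
```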

Keywords

  • Audio-Visual Attention
  • Target Speaker Extraction

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
