TY - GEN
T1 - Dual-Path Cross-Modal Attention for Better Audio-Visual Speech Extraction
AU - Xu, Zhongweiyang
AU - Fan, Xulin
AU - Hasegawa-Johnson, Mark
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Audiovisual target speaker extraction is the task of separating, from an audio mixture, the speaker whose face is visible in an accompanying video. Published approaches typically upsample the video or downsample the audio, then fuse the two streams using concatenation, multiplication, or cross-modal attention. This paper proposes, instead, to use a dual-path attention architecture in which the audio chunk length is comparable to the duration of a video frame. Audio is transformed by intra-chunk attention, concatenated to video features, then transformed by inter-chunk attention. Because of residual connections, the audio and video features remain logically distinct across multiple network layers; therefore, dual-path audiovisual feature fusion can be performed repeatedly across multiple layers. When given 2-5-speaker mixtures constructed from the challenging LRS3 test set, results are about 7 dB better than ConvTasNet or AV-ConvTasNet, with the performance gap widening slightly as the number of speakers increases.
AB - Audiovisual target speaker extraction is the task of separating, from an audio mixture, the speaker whose face is visible in an accompanying video. Published approaches typically upsample the video or downsample the audio, then fuse the two streams using concatenation, multiplication, or cross-modal attention. This paper proposes, instead, to use a dual-path attention architecture in which the audio chunk length is comparable to the duration of a video frame. Audio is transformed by intra-chunk attention, concatenated to video features, then transformed by inter-chunk attention. Because of residual connections, the audio and video features remain logically distinct across multiple network layers; therefore, dual-path audiovisual feature fusion can be performed repeatedly across multiple layers. When given 2-5-speaker mixtures constructed from the challenging LRS3 test set, results are about 7 dB better than ConvTasNet or AV-ConvTasNet, with the performance gap widening slightly as the number of speakers increases.
KW - Audio-Visual Attention
KW - Target Speaker Extraction
UR - http://www.scopus.com/inward/record.url?scp=86000380824&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=86000380824&partnerID=8YFLogxK
U2 - 10.1109/ICASSP49357.2023.10096732
DO - 10.1109/ICASSP49357.2023.10096732
M3 - Conference contribution
AN - SCOPUS:86000380824
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
BT - ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023
Y2 - 4 June 2023 through 10 June 2023
ER -