TY - GEN
T1 - Don't Separate, Learn to Remix
T2 - 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022
AU - Yang, Haici
AU - Firodiya, Shivani
AU - Bryan, Nicholas J.
AU - Kim, Minje
N1 - Publisher Copyright:
© 2022 IEEE
PY - 2022
Y1 - 2022
AB - The task of manipulating the level and/or effects of individual instruments to recompose a mixture of recordings, or remixing, is common across a variety of applications such as music production, audiovisual post-production, podcasts, and more. This process, however, traditionally requires access to individual source recordings, restricting the creative process. To work around this, source separation algorithms can separate a mixture into its respective components. Then, a user can adjust their levels and mix them back together. This two-step approach, however, still suffers from audible artifacts and motivates further work. In this work, we re-purpose Conv-TasNet, a well-known source separation model, into two neural remixing architectures that learn to remix directly rather than just to separate sources. We use an explicit loss term that directly measures remix quality and jointly optimize it with a separation loss. We evaluate our methods using the Slakh and MUSDB18 datasets and report remixing performance as well as the impact on source separation as a byproduct. Our results suggest that learning-to-remix significantly outperforms a strong separation baseline and is particularly useful for small volume changes.
KW - Music remix
KW - source separation
UR - http://www.scopus.com/inward/record.url?scp=85131243674&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85131243674&partnerID=8YFLogxK
DO - 10.1109/ICASSP43922.2022.9746077
M3 - Conference contribution
AN - SCOPUS:85131243674
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 116
EP - 120
BT - 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 23 May 2022 through 27 May 2022
ER -