TY - JOUR
T1 - Towards Practical Non-Adversarial Distribution Matching
AU - Gong, Ziyu
AU - Usman, Ben
AU - Zhao, Han
AU - Inouye, David I.
N1 - Z.G. and D.I. acknowledge support from NSF (IIS-2212097) and ARL (W911NF-2020-221). H.Z. was partly supported by the Defense Advanced Research Projects Agency (DARPA) under Cooperative Agreement Number: HR00112320012, an IBM-IL Discovery Accelerator Institute research award, and Amazon AWS Cloud Credit.
PY - 2024
Y1 - 2024
N2 - Distribution matching can be used to learn invariant representations with applications in fairness and robustness. Most prior works resort to adversarial matching methods but the resulting minimax problems are unstable and challenging to optimize. Non-adversarial likelihood-based approaches either require model invertibility, impose constraints on the latent prior, or lack a generic framework for distribution matching. To overcome these limitations, we propose a non-adversarial VAE-based matching method that can be applied to any model pipeline. We develop a set of alignment upper bounds for distribution matching (including a noisy bound) that have VAE-like objectives but with a different perspective. We carefully compare our method to prior VAE-based matching approaches both theoretically and empirically. Finally, we demonstrate that our novel matching losses can replace adversarial losses in standard invariant representation learning pipelines without modifying the original architectures—thereby significantly broadening the applicability of non-adversarial matching methods.
UR - http://www.scopus.com/inward/record.url?scp=85194190319&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85194190319&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85194190319
SN - 2640-3498
VL - 238
SP - 4276
EP - 4284
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 27th International Conference on Artificial Intelligence and Statistics, AISTATS 2024
Y2 - 2 May 2024 through 4 May 2024
ER -