An Empirical Study of Self-Supervised Learning with Wasserstein Distance

Makoto Yamada, Yuki Takezawa, Guillaume Houry, Kira Michaela Düsterwald, Deborah Sulem, Han Zhao, Yao-Hung Tsai

Research output: Contribution to journal › Article › peer-review

Abstract

In this study, we consider the problem of self-supervised learning (SSL) using the 1-Wasserstein distance on a tree structure (a.k.a. the Tree-Wasserstein distance (TWD)), where TWD is defined as the L1 distance between two tree-embedded vectors. In SSL methods, the cosine similarity is often used as an objective function, whereas the use of the Wasserstein distance has not been well studied. Because training with the Wasserstein distance is numerically challenging, this study empirically investigates strategies for optimizing SSL with the Wasserstein distance and identifies a stable training procedure. More specifically, we evaluate the combination of two types of TWD (total variation and ClusterTree) with several probability models, including the softmax function, the ArcFace probability model, and simplicial embedding. We also propose a simple yet effective Jeffrey divergence-based regularization method to stabilize optimization. Through empirical experiments on STL10, CIFAR10, CIFAR100, and SVHN, we find that a naive combination of the softmax function and TWD performs significantly worse than the standard SimCLR, and that a simple combination of TWD and SimSiam fails to train the model. We find that the model performance depends on the combination of TWD and probability model, and that the Jeffrey divergence regularization helps in model training. Finally, we show that an appropriate combination of TWD and probability model outperforms cosine similarity-based representation learning.
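To make the objective concrete, the short PyTorch sketch below (not taken from the paper) illustrates the total-variation instance of TWD, i.e., a star tree for which TWD(p, q) reduces to 0.5 * ||p - q||_1, combined with a softmax probability model and a Jeffrey (symmetrized KL) divergence regularizer. It covers only the positive-pair term of a SimCLR-style objective; the function names and hyperparameters (temperature, lam) are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn.functional as F

def tree_wasserstein_tv(p, q):
    # Total-variation instance of TWD: on a star tree with edge weights 1/2,
    # TWD(p, q) equals 0.5 * ||p - q||_1.
    return 0.5 * (p - q).abs().sum(dim=-1)

def jeffrey_divergence(p, q, eps=1e-8):
    # Jeffrey divergence: J(p, q) = KL(p || q) + KL(q || p).
    p = p.clamp_min(eps)
    q = q.clamp_min(eps)
    return (p * (p / q).log()).sum(-1) + (q * (q / p).log()).sum(-1)

def ssl_positive_pair_loss(z1, z2, temperature=0.1, lam=0.1):
    # Hypothetical positive-pair loss: softmax turns the two views' embeddings
    # into probability vectors, TWD measures their discrepancy, and the Jeffrey
    # divergence term regularizes training (lam is an assumed weight).
    p = F.softmax(z1 / temperature, dim=-1)
    q = F.softmax(z2 / temperature, dim=-1)
    return (tree_wasserstein_tv(p, q) + lam * jeffrey_divergence(p, q)).mean()

A full SimCLR-style loss would additionally contrast against negative pairs; this sketch only shows how the TWD and the regularizer enter the objective under the assumptions stated above.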
Original language: English (US)
Article number: 939
Journal: Entropy
Volume: 26
Issue number: 11
DOIs
State: Published - Nov 2024

Keywords

  • optimal transport
  • self-supervised learning
  • Wasserstein distance
