Self-supervised Metric Learning in Multi-View Data: A Downstream Task Perspective

Research output: Contribution to journal › Article › peer-review

Abstract

Self-supervised metric learning has been a successful approach for learning a distance from an unlabeled dataset. The resulting distance is broadly useful for improving various distance-based downstream tasks, even when no information from those tasks is used in the metric learning stage. To gain insight into this approach, we develop a statistical framework to study theoretically how self-supervised metric learning can benefit downstream tasks in the context of multi-view data. Under this framework, we show that the target distance of metric learning satisfies several properties desirable for downstream tasks. On the other hand, our investigation suggests that the target distance can be further improved by moderating the weights along each direction. In addition, our analysis precisely characterizes the improvement from self-supervised metric learning on four commonly used downstream tasks: sample identification, two-sample testing, k-means clustering, and k-nearest neighbor classification. When the distance is estimated from an unlabeled dataset, we establish an upper bound on the accuracy of the estimated distance and characterize the number of samples sufficient for downstream task improvement. Finally, numerical experiments are presented to support the theoretical results in the article. Supplementary materials for this article are available online.
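The abstract's setting can be made concrete with a small sketch. The example below is purely illustrative and assumes a simple two-view model in which each sample's views share a low-dimensional signal plus independent noise; the cross-covariance between views, which averages out view-specific noise, is used to weight directions and define a Mahalanobis-type distance. This is a stand-in for the idea of a learned target distance, not the estimator analyzed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-view data: two views of the same n samples,
# sharing an r-dimensional signal embedded in d dimensions.
n, d, r = 500, 10, 2
signal = rng.normal(size=(n, r)) @ rng.normal(size=(r, d))
view1 = signal + 0.5 * rng.normal(size=(n, d))
view2 = signal + 0.5 * rng.normal(size=(n, d))

# Cross-covariance between the two (centered) views: view-specific noise
# is independent across views, so it averages out and the shared-signal
# directions dominate.
C = (view1 - view1.mean(0)).T @ (view2 - view2.mean(0)) / n
M = (C + C.T) / 2  # symmetrize

# Project M onto the PSD cone so it defines a valid (pseudo-)metric.
eigvals, eigvecs = np.linalg.eigh(M)
M = eigvecs @ np.diag(np.clip(eigvals, 0.0, None)) @ eigvecs.T


def learned_dist(a, b):
    """Mahalanobis-type distance d(a, b) = sqrt((a - b)^T M (a - b))."""
    diff = a - b
    return float(np.sqrt(diff @ M @ diff))
```

Such a learned distance can then be dropped into any distance-based downstream procedure (nearest-neighbor search, k-means, two-sample tests) in place of the Euclidean distance.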

Original language: English (US)
Pages (from-to): 2454-2467
Number of pages: 14
Journal: Journal of the American Statistical Association
Volume: 118
Issue number: 544
State: Published - 2023

Keywords

  • Metric learning
  • Two-sample testing
  • k-means
  • k-nearest neighbor

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty
