Abstract
Self-supervised metric learning has been a successful approach for learning a distance from an unlabeled dataset. The resulting distance is broadly useful for improving various distance-based downstream tasks, even when no information from those tasks is used in the metric learning stage. To gain insight into this approach, we develop a statistical framework to theoretically study how self-supervised metric learning can benefit downstream tasks in the context of multi-view data. Under this framework, we show that the target distance of metric learning satisfies several desirable properties for the downstream tasks. On the other hand, our investigation suggests that the target distance can be further improved by moderating the weight assigned to each direction. In addition, our analysis precisely characterizes the improvement by self-supervised metric learning on four commonly used downstream tasks: sample identification, two-sample testing, k-means clustering, and k-nearest neighbor classification. When the distance is estimated from an unlabeled dataset, we establish an upper bound on the accuracy of the estimated distance and the number of samples sufficient for downstream task improvement. Finally, numerical experiments are presented to support the theoretical results in the article. Supplementary materials for this article are available online.
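To make the setting concrete, the sketch below illustrates the general idea described in the abstract, not the paper's actual estimator or theory: from unlabeled multi-view pairs, a Mahalanobis-type distance is estimated that downweights directions with large within-pair variation (the abstract's "moderating the weight assigned to each direction"), and that learned distance is then plugged into a downstream k-nearest neighbor classifier. The data-generating process, the inverse-within-pair-covariance estimator, and all names are hypothetical choices for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical multi-view data: x and x_prime are two noisy views of the
# same latent sample; some coordinates carry much more view noise than others.
n, d = 500, 10
latent = rng.normal(size=(n, d))
noise_scale = np.concatenate([0.2 * np.ones(d // 2), 2.0 * np.ones(d - d // 2)])
x = latent + rng.normal(size=(n, d)) * noise_scale
x_prime = latent + rng.normal(size=(n, d)) * noise_scale

# Illustrative self-supervised metric estimate (not the paper's method):
# invert the within-pair covariance so the learned distance downweights
# directions that vary across views and emphasizes directions preserved
# across views.
within = x - x_prime
M = np.linalg.inv(within.T @ within / n + 1e-6 * np.eye(d))

def learned_distance(u, v, M=M):
    diff = u - v
    return np.sqrt(diff @ M @ diff)

# Downstream k-NN classification with the learned distance plugged in.
labels = (latent[:, 0] > 0).astype(int)  # synthetic downstream labels
knn = KNeighborsClassifier(n_neighbors=5, metric=learned_distance)
knn.fit(x[:400], labels[:400])
print("k-NN accuracy with learned metric:", knn.score(x[400:], labels[400:]))
```

Passing a callable `metric` forces scikit-learn to use brute-force neighbor search, which is fine at this toy scale; the same learned distance could equally be supplied to k-means or a two-sample test, the other downstream tasks analyzed in the paper.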
| Original language | English (US) |
|---|---|
| Pages (from-to) | 2454-2467 |
| Number of pages | 14 |
| Journal | Journal of the American Statistical Association |
| Volume | 118 |
| Issue number | 544 |
| DOIs | |
| State | Published - 2023 |
Keywords
- Metric learning
- Two-sample testing
- k-means
- k-nearest neighbor
ASJC Scopus subject areas
- Statistics and Probability
- Statistics, Probability and Uncertainty