TY - JOUR
T1 - Self-supervised Metric Learning in Multi-View Data: A Downstream Task Perspective
T2 - Journal of the American Statistical Association
AU - Wang, Shulei
N1 - Publisher Copyright:
© 2022 American Statistical Association.
PY - 2022
AB - Self-supervised metric learning has been a successful approach for learning a distance from an unlabeled dataset. The resulting distance is broadly useful for improving various distance-based downstream tasks, even when no information from the downstream tasks is used in the metric learning stage. To gain insights into this approach, we develop a statistical framework to study theoretically how self-supervised metric learning can benefit downstream tasks in the context of multi-view data. Under this framework, we show that the target distance of metric learning satisfies several desired properties for the downstream tasks. On the other hand, our investigation suggests that the target distance can be further improved by moderating the weight of each direction. In addition, our analysis precisely characterizes the improvement delivered by self-supervised metric learning on four commonly used downstream tasks: sample identification, two-sample testing, k-means clustering, and k-nearest neighbor classification. When the distance is estimated from an unlabeled dataset, we establish an upper bound on the accuracy of the estimated distance and on the number of samples sufficient for improving the downstream tasks. Finally, numerical experiments are presented to support the theoretical results in the article. Supplementary materials for this article are available online.
KW - Metric learning
KW - Two-sample testing
KW - k-means
KW - k-nearest neighbor
UR - http://www.scopus.com/inward/record.url?scp=85130971728&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85130971728&partnerID=8YFLogxK
DO - 10.1080/01621459.2022.2057317
M3 - Article
AN - SCOPUS:85130971728
SN - 0162-1459
JF - Journal of the American Statistical Association
ER -