TY - GEN
T1 - Self-similarity Grouping: A Simple Unsupervised Cross Domain Adaptation Approach for Person Re-identification
T2 - 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019
AU - Fu, Yang
AU - Wei, Yunchao
AU - Wang, Guanshuo
AU - Zhou, Yuqian
AU - Shi, Honghui
AU - Huang, Thomas
N1 - Funding Information:
Acknowledgements: This work is in part supported by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) and ARC DECRA DE190101315.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - Domain adaptation in person re-identification (re-ID) has always been a challenging task. In this work, we explore how to harness the similar natural characteristics existing in the samples from the target domain for learning to conduct person re-ID in an unsupervised manner. Concretely, we propose a Self-similarity Grouping (SSG) approach, which exploits the potential similarity (from the global body to local parts) of unlabeled samples to build multiple clusters from different views automatically. These independent clusters are then assigned with labels, which serve as the pseudo identities to supervise the training process. We repeatedly and alternately conduct such a grouping and training process until the model is stable. Despite the apparent simplicity, our SSG outperforms the state-of-the-arts by more than 4.6% (DukeMTMC→Market1501) and 4.4% (Market1501→DukeMTMC) in mAP, respectively. Upon our SSG, we further introduce a clustering-guided semi-supervised approach named SSG++ to conduct the one-shot domain adaptation in an open set setting (i.e., the number of independent identities from the target domain is unknown). Without spending much effort on labeling, our SSG++ can further promote the mAP upon SSG by 10.7% and 6.9%, respectively. Our code is available at: https://github.com/OasisYang/SSG.
UR - http://www.scopus.com/inward/record.url?scp=85081898046&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081898046&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2019.00621
DO - 10.1109/ICCV.2019.00621
M3 - Conference contribution
AN - SCOPUS:85081898046
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 6111
EP - 6120
BT - Proceedings - 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 27 October 2019 through 2 November 2019
ER -