TY - GEN
T1 - The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction
AU - Thai, Anh
AU - Stojanov, Stefan
AU - Huang, Zixuan
AU - Rehg, James M.
N1 - Funding Information:
We would like to thank Miao Liu and Meera Hahn for the helpful discussion. This work was supported by NIH R01-MH114999 and NSF Award 1936970.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Continual learning has been extensively studied for classification tasks, with methods developed primarily to avoid catastrophic forgetting, a phenomenon in which earlier learned concepts are forgotten in favor of more recent samples. In this work, we present a set of continual 3D object shape reconstruction tasks, including complete 3D shape reconstruction from different input modalities as well as visible surface (2.5D) reconstruction, which surprisingly demonstrate positive knowledge transfer (both backward and forward) when trained with standard SGD alone and without additional heuristics. We provide evidence that continuously updated representation learning of single-view 3D shape reconstruction improves performance on learned and novel categories over time. We provide a novel analysis of knowledge transfer ability by examining the output distribution shift across sequential learning tasks. Finally, we show that the robustness of these tasks suggests their potential as a proxy representation learning task for continual classification. The codebase, dataset, and pretrained models released with this article can be found at https://github.com/rehg-lab/CLRec
AB - Continual learning has been extensively studied for classification tasks, with methods developed primarily to avoid catastrophic forgetting, a phenomenon in which earlier learned concepts are forgotten in favor of more recent samples. In this work, we present a set of continual 3D object shape reconstruction tasks, including complete 3D shape reconstruction from different input modalities as well as visible surface (2.5D) reconstruction, which surprisingly demonstrate positive knowledge transfer (both backward and forward) when trained with standard SGD alone and without additional heuristics. We provide evidence that continuously updated representation learning of single-view 3D shape reconstruction improves performance on learned and novel categories over time. We provide a novel analysis of knowledge transfer ability by examining the output distribution shift across sequential learning tasks. Finally, we show that the robustness of these tasks suggests their potential as a proxy representation learning task for continual classification. The codebase, dataset, and pretrained models released with this article can be found at https://github.com/rehg-lab/CLRec
KW - 3D Shape Reconstruction
KW - Continual Learning
UR - http://www.scopus.com/inward/record.url?scp=85149336748&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85149336748&partnerID=8YFLogxK
U2 - 10.1109/3DV57658.2022.00033
DO - 10.1109/3DV57658.2022.00033
M3 - Conference contribution
AN - SCOPUS:85149336748
T3 - Proceedings - 2022 International Conference on 3D Vision, 3DV 2022
SP - 209
EP - 218
BT - Proceedings - 2022 International Conference on 3D Vision, 3DV 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 10th International Conference on 3D Vision, 3DV 2022
Y2 - 12 September 2022 through 15 September 2022
ER -