TY - GEN
T1 - Do Pre-trained Models Benefit Equally in Continual Learning?
AU - Lee, Kuan-Ying
AU - Zhong, Yuanyi
AU - Wang, Yu-Xiong
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - Existing work on continual learning (CL) is primarily devoted to developing algorithms for models trained from scratch. Despite their encouraging performance on contrived benchmarks, these algorithms show a dramatic performance drop in real-world scenarios. Therefore, this paper advocates the systematic introduction of pre-training to CL, a general recipe for transferring knowledge to downstream tasks that is largely missing from the CL community. Our investigation reveals the multifaceted complexity of exploiting pre-trained models for CL, along three different axes: pre-trained models, CL algorithms, and CL scenarios. Perhaps most intriguingly, the improvements that CL algorithms gain from pre-training are highly inconsistent: an underperforming algorithm can become competitive and even state of the art when all algorithms start from a pre-trained model. This indicates that the current paradigm, in which all CL methods are compared under from-scratch training, does not faithfully reflect the true CL objective and the desired progress. In addition, we make several other important observations, including that 1) CL algorithms that exert less regularization benefit more from a pre-trained model; and 2) a stronger pre-trained model such as CLIP does not guarantee a better improvement. Based on these findings, we introduce a simple yet effective baseline that applies minimal regularization and leverages the more beneficial pre-trained model, coupled with a two-stage training pipeline. We recommend including this strong baseline in the future development of CL algorithms, given its demonstrated state-of-the-art performance. Our code is available at https://github.com/eric11220/pretrained-models-in-CL.
KW - Algorithms: Machine learning architectures, formulations, and algorithms (including transfer)
UR - http://www.scopus.com/inward/record.url?scp=85149015416&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85149015416&partnerID=8YFLogxK
U2 - 10.1109/WACV56688.2023.00642
DO - 10.1109/WACV56688.2023.00642
M3 - Conference contribution
AN - SCOPUS:85149015416
T3 - Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023
SP - 6474
EP - 6482
BT - Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 23rd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023
Y2 - 3 January 2023 through 7 January 2023
ER -