TY - GEN
T1 - Class-Incremental Exemplar Compression for Class-Incremental Learning
AU - Luo, Zilin
AU - Liu, Yaoyao
AU - Schiele, Bernt
AU - Sun, Qianru
N1 - The author gratefully acknowledges the support of the Lee Kong Chian (LKC) Fellowship fund awarded by Singapore Management University.
PY - 2023
Y1 - 2023
AB - Exemplar-based class-incremental learning (CIL) [36] finetunes the model with all samples of new classes but few-shot exemplars of old classes in each incremental phase, where the 'few-shot' abides by the limited memory budget. In this paper, we break this 'few-shot' limit based on a simple yet surprisingly effective idea: compressing exemplars by downsampling non-discriminative pixels and saving 'many-shot' compressed exemplars in the memory. Without needing any manual annotation, we achieve this compression by generating 0-1 masks on discriminative pixels from class activation maps (CAM) [49]. We propose an adaptive mask generation model called class-incremental masking (CIM) to explicitly resolve two difficulties of using CAM: 1) transforming the heatmaps of CAM to 0-1 masks with an arbitrary threshold leads to a trade-off between the coverage of discriminative pixels and the quantity of exemplars, as the total memory is fixed; and 2) optimal thresholds vary for different object classes, which is particularly obvious in the dynamic environment of CIL. We optimize the CIM model alternately with the conventional CIL model through a bilevel optimization problem [40]. We conduct extensive experiments on high-resolution CIL benchmarks including Food-101, ImageNet-100, and ImageNet-1000, and show that using the compressed exemplars by CIM can achieve a new state-of-the-art CIL accuracy, e.g., 4.8 percentage points higher than FOSTER [42] on 10-Phase ImageNet-1000. Our code is available at https://github.com/xfflzl/CIM-CIL.
KW - Transfer, meta, low-shot, continual, or long-tail learning
UR - http://www.scopus.com/inward/record.url?scp=85170836380&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85170836380&partnerID=8YFLogxK
U2 - 10.1109/CVPR52729.2023.01094
DO - 10.1109/CVPR52729.2023.01094
M3 - Conference contribution
AN - SCOPUS:85170836380
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 11371
EP - 11380
BT - Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
PB - IEEE Computer Society
T2 - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023
Y2 - 18 June 2023 through 22 June 2023
ER -