Data Efficient Incremental Learning Via Attentive Knowledge Replay (IEEE SMC 2023)

Official PyTorch implementation of the IEEE SMC 2023 paper "Data Efficient Incremental Learning Via Attentive Knowledge Replay".
You can visit our project website here.

Introduction

Class-incremental learning (CIL) tackles the problem of continuously optimizing a classification model to support a growing number of classes, where the data of novel classes arrive in streams. Recent works propose to keep representative exemplars of previously learnt classes and replay their knowledge afterward under certain memory constraints. However, training on a fixed set of exemplars whose proportion is heavily imbalanced relative to the new data leads to strong biases in the trained models. In this paper, we propose an attentive knowledge replay framework that refreshes the knowledge of previously learnt classes during incremental learning by generating virtual training samples from blended pairs of data. In particular, we design an attention module that learns to predict adaptive blending weights according to each pair's relative importance to the overall objective, where the importance is derived from the change of the image features over incremental phases. Our strategy of attentive knowledge replay encourages the model to learn smoother decision boundaries and thus improves its generalization beyond memorizing the exemplars. We validate our design in a standard class-incremental learning setup and demonstrate its flexibility in various settings.
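The core mechanism can be pictured as a learned, mixup-style blending step between exemplars of old classes and new-class samples. The sketch below is a minimal illustration of that idea only, not the module implemented in this repository: the `AttentiveBlend` class, its layer sizes, and the tensor shapes are hypothetical stand-ins for the paper's attention module and backbone features.

```python
import torch
import torch.nn as nn

class AttentiveBlend(nn.Module):
    """Toy attention head that predicts a per-pair blending weight in (0, 1)
    from the features of an exemplar and a new-class sample (illustrative only)."""
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, feat_exemplar, feat_new):
        # Predict how much of the exemplar to keep in the blended sample.
        return self.score(torch.cat([feat_exemplar, feat_new], dim=1))

def replay_batch(exemplars, new_samples, feat_exemplar, feat_new, attn):
    """Create virtual training samples by blending exemplar/new-sample pairs
    with weights predicted by the attention module (mixup-style blending)."""
    w = attn(feat_exemplar, feat_new).view(-1, 1, 1, 1)  # (B, 1, 1, 1)
    return w * exemplars + (1.0 - w) * new_samples

if __name__ == "__main__":
    feat_dim = 64
    attn = AttentiveBlend(feat_dim)
    exemplars = torch.randn(8, 3, 32, 32)    # stored old-class images
    new_samples = torch.randn(8, 3, 32, 32)  # incoming new-class images
    feat_exemplar = torch.randn(8, feat_dim) # backbone features of each side
    feat_new = torch.randn(8, feat_dim)
    virtual = replay_batch(exemplars, new_samples, feat_exemplar, feat_new, attn)
    print(virtual.shape)  # torch.Size([8, 3, 32, 32])
```

In the paper, the blending weights are additionally guided by how much each exemplar's features drift across incremental phases; the sketch omits that signal for brevity.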

Usage

Environment

Prerequisites

Python = 3.7.4

PyTorch = 1.4.0

CUDA = 12.0

Other requirements

pip install -r requirements.txt
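After installing the requirements, a quick sanity check (not part of the repository) can confirm the PyTorch and CUDA setup:

```python
import torch

print(torch.__version__)          # expect 1.4.0
print(torch.cuda.is_available())  # should print True on a CUDA-enabled machine
print(torch.version.cuda)         # CUDA version this PyTorch build targets
```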

Prepare Dataset

To Do

Evaluation

python __main__.py --c resnet32 --model icarl_attentive_replay --method attentive_replay --name [exp_name] -order 0 -init_base 10 -inc 10 --load_path [model_path_dir] --eval

Train

python __main__.py --c resnet32 --model icarl_attentive_replay --method attentive_replay --name [exp_name] -order 0 -init_base 10 -inc 10 --alpha 1.0 1.0 -e 90 -sc 50 64 -wp 20 -alr 2e-6 -lrds 1 -arl 0 -mp 0 -fn 1 -seed 20200920 --save
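
In both commands, replace `[exp_name]` with a name of your choice for the run and `[model_path_dir]` with the directory containing a saved model to evaluate. For example, a training run with a hypothetical run name might look like:

python __main__.py --c resnet32 --model icarl_attentive_replay --method attentive_replay --name smc_base10_inc10 -order 0 -init_base 10 -inc 10 --alpha 1.0 1.0 -e 90 -sc 50 64 -wp 20 -alr 2e-6 -lrds 1 -arl 0 -mp 0 -fn 1 -seed 20200920 --save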

Citation

If you find this work useful for your research, please cite:

@inproceedings{lee2023smc,
  title     = {Data Efficient Incremental Learning Via Attentive Knowledge Replay},
  author    = {Yi-Lun Lee and Dian-Shan Chen and Chen-Yu Lee and Yi-Hsuan Tsai and Wei-Chen Chiu},
  booktitle = {IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
  year      = {2023}
}

Acknowledgements

This code is based on link, and the ResNet for CIFAR10/100 is based on link.