Source code for the Continual Learning survey paper:
```bibtex
@article{de2019continual,
  title={A continual learning survey: Defying forgetting in classification tasks},
  author={De Lange, Matthias and Aljundi, Rahaf and Masana, Marc and Parisot, Sarah and Jia, Xu and Leonardis, Ale{\v{s}} and Slabaugh, Gregory and Tuytelaars, Tinne},
  journal={arXiv preprint arXiv:1909.08383},
  year={2019}
}
```
The code contains a generalizing framework for 11 SOTA methods and 4 baselines:
- Methods: SI, EWC, MAS, mean/mode-IMM, LWF, EBLL, PackNet, HAT, GEM, iCaRL
- Baselines:
  - Joint: Learn from all task data at once with a single head (multi-task learning baseline).
  - Finetuning: Standard SGD on each task in sequence, without any measure against forgetting.
  - Finetuning with Full Memory replay: Allocate replay memory dynamically to incoming tasks.
  - Finetuning with Partial Memory replay: Divide replay memory a priori over all tasks (see the sketch after this list).
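To make the difference between the two replay baselines concrete, here is a minimal sketch of the memory-division logic. It is an illustration only, not code from this repo; the function names are assumptions.

```python
# Illustrative sketch (not repo code): how Full vs. Partial memory replay
# could divide a fixed exemplar budget over a task sequence.

def partial_memory_split(total_capacity, n_tasks):
    """Divide memory a priori: every task gets a fixed, equal share."""
    return [total_capacity // n_tasks] * n_tasks

def full_memory_split(total_capacity, n_seen_tasks):
    """Allocate dynamically: the tasks seen so far share the full budget,
    so each task's share shrinks as new tasks arrive."""
    return [total_capacity // n_seen_tasks] * n_seen_tasks

if __name__ == "__main__":
    capacity, n_tasks = 900, 3
    print(partial_memory_split(capacity, n_tasks))   # [300, 300, 300] from the start
    for seen in range(1, n_tasks + 1):
        print(full_memory_split(capacity, seen))     # [900] -> [450, 450] -> [300, 300, 300]
```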
This source code is released under an Attribution-NonCommercial 4.0 International license; see the LICENSE file for details.
Reproducibility: Results from the paper can be obtained with src/main_'dataset'.sh. See src/main_tinyimagenet.sh for a full pipeline example.
Pipeline: Constructing a custom pipeline typically requires the following steps.
- Project Setup
  - For all requirements, see requirements.txt. The main packages can be installed as follows:

  ```bash
  conda create --name <ENV-NAME> python=3.7
  conda activate <ENV-NAME>

  # Main packages
  conda install -c conda-forge matplotlib tqdm
  conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

  # For GEM QP
  conda install -c omnia quadprog

  # For PackNet: torchnet
  pip install git+https://github.com/pytorch/tnt.git@master
  ```
- Set paths in 'config.init' (or leave the defaults); a sketch of the file follows below.
  - '{tr,test}_results_root_path': where to save training/testing results.
  - 'models_root_path': where to store initial models (to ensure the same initial model across methods).
  - 'ds_root_path': root path of your datasets.
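  A minimal sketch of what config.init might contain, assuming an INI layout; the section name is an assumption, and the values mirror the defaults listed under the project structure below:

  ```ini
  [DEFAULT]
  tr_results_root_path = src/results/train
  test_results_root_path = src/results/test
  models_root_path = src/data/models
  ds_root_path = src/data/datasets
  ```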
- Prepare dataset: see src/data/"dataset"_dataprep.py (e.g. src/data/tinyimgnet_dataprep.py)
- Train any of the 11 SOTA methods or 4 baselines.
  - Regularization-based/replay methods: A first-task model dump is run for Synaptic Intelligence (SI), as it acquires importance weights during training; all other methods start from this same initial model.
  - Baselines/parameter-isolation methods: Start the training sequence from scratch.
- Evaluate performance: the testing sequence for each task is saved in dictionary format under the test_results_root_path defined in config.init (see the loading sketch after this list).
- Plot the evaluation results using one of the configuration files in utilities/plot_configs.
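As a minimal sketch of inspecting the evaluation output, the saved dictionaries could be read back as below; the file path is hypothetical and the use of torch.load assumes the dictionary was written with torch.save, so check your test_results_root_path for the actual file names this repo writes.

```python
# Hypothetical sketch: inspecting a saved test-results dictionary.
# Path and keys are assumptions; torch.load assumes torch.save was used.
import torch

results = torch.load('src/results/test/SI/tinyimagenet/test_results.pth')
for task, metrics in results.items():
    print(task, metrics)
```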
- Find class "YourMethod" in methods/method.py. Implement the framework phases (documented in code).
- Implement your task-based training script in methods: methods/"YourMethodDir". The class "YourMethod" will call this code for training/eval/processing of a single task.
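To make these steps concrete, here is a hypothetical skeleton; the class structure, phase names, and signatures are assumptions for illustration only, as the actual framework phases are documented in methods/method.py.

```python
# Hypothetical skeleton (names and signatures are assumptions; the real
# framework phases are documented in methods/method.py).

class YourMethod:
    """Wrapper the framework calls once per task in the sequence."""

    def __init__(self, hyperparams):
        self.hyperparams = hyperparams

    def train_task(self, model, task_dataset, prev_model_path):
        """Train on a single task, e.g. by delegating to your script in
        methods/YourMethodDir, starting from the previous task's model."""
        raise NotImplementedError

    def eval_task(self, model_path, task_dataset):
        """Evaluate a stored model on one task and return a results dict."""
        raise NotImplementedError
```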
Project structure:
- src/data: datasets and automated preparation scripts for Tiny ImageNet and iNaturalist.
- src/framework: the novel task-incremental continual learning framework; main.py starts the training pipeline, and the --test argument runs evaluation with eval.py.
- src/methods: source code of all methods, plus the method.py wrapper.
- src/models: net.py, containing all model preprocessing.
- src/utilities: utils used across all modules, and plotting.
- Config:
  - src/data/{datasets,models}: default datasets and models directories (see config.init)
  - src/results/{train,test}: default training and testing results directories (see config.init)
- Please consider citing our work if you use this repo.
- Thanks to Huawei for funding this project.
- Thanks to the following repositories:
- If you want to join the Continual Learning community, check out https://www.continualai.org
- If you run into trouble, please open a GitHub issue.
- Have you defined your method in the framework and want to share it with the community? Send a pull request!