FACIL started as code for the paper:
Class-incremental learning: survey and performance evaluation
Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, Joost van de Weijer
(arxiv)
It allows reproducing the results in the paper and provides a (hopefully!) helpful framework to develop new methods for incremental learning and to analyse existing ones. Our idea is to expand the available approaches and tools with the help of the community. To help FACIL grow, don't forget to star this GitHub repository and share it with friends and coworkers!
We provide a framework based on class-incremental learning. However, task-incremental learning is also fully supported. By default, experiments report results on both task-aware and task-agnostic evaluation. Furthermore, if an experiment runs with one task on one dataset, the results are equivalent to 'common' supervised learning.
Setting | task-ID at train time | task-ID at test time | # of tasks
---|---|---|---
class-incremental learning | yes | no | ≥1
task-incremental learning | yes | yes | ≥1
non-incremental supervised learning | yes | yes | 1
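The distinction between the two evaluations can be sketched as follows. This is a toy illustration, not FACIL's actual API: the multi-head layout and the `evaluate` function are hypothetical. Task-aware evaluation restricts predictions to the ground-truth task's classes, while task-agnostic evaluation takes the argmax over all classes seen so far:

```python
import numpy as np

def evaluate(heads, features, targets, task_id=None):
    """Toy evaluation with one linear head per task (hypothetical layout).

    heads: list of (feature_dim, n_classes_t) weight matrices, one per task.
    task_id given  -> task-aware: predictions restricted to that task's classes.
    task_id absent -> task-agnostic: argmax over all heads' concatenated logits.
    """
    logits = [features @ w for w in heads]  # one logit block per task
    if task_id is not None:
        # offset maps the head-local argmax back to a global class index
        offset = sum(w.shape[1] for w in heads[:task_id])
        pred = logits[task_id].argmax(axis=1) + offset
    else:
        pred = np.concatenate(logits, axis=1).argmax(axis=1)
    return float((pred == targets).mean())
```

Task-aware accuracy is typically higher, since the model never confuses classes across tasks; the task-agnostic number is the one that matters for the class-incremental setting.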
Current available approaches include:
Finetuning • Freezing • Joint
LwF • iCaRL • EWC • PathInt • MAS • RWalk • EEIL • LwM • DMC • BiC • LUCIR • IL2M
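Several of these approaches (e.g. LwF, and the distillation terms in iCaRL, EEIL and BiC) regularize the new model toward the old one via knowledge distillation. As a rough illustration of the idea only, not FACIL's implementation, an LwF-style distillation loss can be sketched as:

```python
import numpy as np

def lwf_distillation_loss(old_logits, new_logits, T=2.0):
    """Illustrative LwF-style knowledge-distillation term.

    Softens both the frozen old model's logits and the current model's
    logits with temperature T, then takes the cross-entropy between them,
    encouraging the current model to keep mimicking the old model's
    outputs on previously learned classes.
    """
    def softmax(x):
        e = np.exp(x - x.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    p_old = softmax(old_logits / T)  # soft targets from the old model
    p_new = softmax(new_logits / T)  # current model's softened predictions
    return float(-(p_old * np.log(p_new + 1e-12)).sum(axis=1).mean())
```

In training, this term is added to the usual cross-entropy on the new task's data, weighted by a trade-off hyperparameter.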
Clone this GitHub repository:
git clone https://github.com/mmasana/FACIL.git
cd FACIL
Optionally, create an environment to run the code.
The library requirements of the code are detailed in requirements.txt. You can install them using pip with:
python3 -m pip install -r requirements.txt
A development environment based on the Conda distribution is also provided; all dependencies are listed in the environment.yml file. To create a new environment, check out the repository and type:

conda env create --file environment.yml --name FACIL

Notice: set the appropriate version of your CUDA driver for cudatoolkit in environment.yml.

The environment can then be activated and deactivated with:

conda activate FACIL
conda deactivate
To run the basic code:
python3 -u src/main_incremental.py
More options are explained in the src folder, including GridSearch usage, as well as more specific options on approaches, loggers, datasets and networks.
We provide scripts to reproduce the specific scenarios proposed in Class-incremental learning: survey and performance evaluation:
- CIFAR-100 (10 tasks) with ResNet-32 without exemplars
- CIFAR-100 (10 tasks) with ResNet-32 with fixed and growing memory
- MORE COMING SOON...
All scripts run their experiment 10 times so that the mean and standard deviation of the results can be calculated. Check out all available scripts in the scripts folder.
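Aggregating the per-run results is a minimal computation; the sketch below uses placeholder accuracy values, not numbers from the paper:

```python
import statistics

# Hypothetical final task-agnostic accuracies from 10 runs of one scenario
accs = [0.452, 0.448, 0.461, 0.455, 0.449,
        0.458, 0.450, 0.446, 0.462, 0.453]

mean = statistics.mean(accs)
std = statistics.stdev(accs)  # sample standard deviation over the 10 runs
print(f"{mean:.3f} ± {std:.3f}")
```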
Please check the MIT license that is listed in this repository.
If you want to cite the framework, feel free to use this preprint citation while we await publication:
@article{masana2020class,
title={Class-incremental learning: survey and performance evaluation},
author={Masana, Marc and Liu, Xialei and Twardowski, Bartlomiej and Menta, Mikel and Bagdanov, Andrew D and van de Weijer, Joost},
journal={arXiv preprint arXiv:2010.15277},
year={2020}
}
The basis of FACIL is made possible thanks to Marc Masana, Xialei Liu, Bartlomiej Twardowski and Mikel Menta. Code structure is inspired by HAT. Feel free to contribute or propose new features by opening an issue!