MEGA

Improved Schemes for Episodic Memory-based Lifelong Learning [arXiv]

Authors: Yunhui Guo*, Mingrui Liu*, Tianbao Yang, Tajana Rosing
*Equal contribution

NeurIPS 2020 (Spotlight)

@article{guo2020improved,
  title={Improved Schemes for Episodic Memory-based Lifelong Learning},
  author={Guo, Yunhui and Liu, Mingrui and Yang, Tianbao and Rosing, Tajana},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

Abstract

Current deep neural networks can achieve remarkable performance on a single task. However, when a deep neural network is continually trained on a sequence of tasks, it tends to gradually forget previously learned knowledge. This phenomenon is referred to as catastrophic forgetting and motivates the field called lifelong learning. Recently, episodic memory based approaches such as GEM [1] and A-GEM [2] have shown remarkable performance. In this paper, we provide the first unified view of episodic memory based approaches from an optimization perspective. This view leads to two improved schemes for episodic memory based lifelong learning, called MEGA-I and MEGA-II. MEGA-I and MEGA-II modulate the balance between old tasks and the new task by integrating the current gradient with the gradient computed on the episodic memory. Notably, we show that GEM and A-GEM are degenerate cases of MEGA-I and MEGA-II which consistently put the same emphasis on the current task, regardless of how the loss changes over time. Our proposed schemes address this issue with novel loss-balancing updating rules, which drastically improve the performance over GEM and A-GEM. Extensive experimental results show that the proposed schemes significantly advance the state of the art on four commonly used lifelong learning benchmarks, reducing the error by up to 18%.
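
For intuition, the snippet below is a minimal NumPy sketch of the loss-balanced gradient-mixing idea described in the abstract: the update direction combines the gradient on the current task with the gradient computed on the episodic memory, and the memory gradient is weighted by the ratio of the two losses. The coefficient choice (a loss ratio, with a small threshold eps for an almost-solved current task) is an illustrative assumption; the exact MEGA-I and MEGA-II rules are defined in the paper and implemented in this repository.

import numpy as np

def mixed_update_direction(grad_cur, grad_mem, loss_cur, loss_mem, eps=1e-3):
    """Combine the current-task gradient with the episodic-memory gradient.

    grad_cur, grad_mem : flattened parameter gradients (np.ndarray)
    loss_cur, loss_mem : scalar losses on the current batch and on the memory
    eps                : threshold below which the current task is treated as
                         (nearly) solved and only the memory gradient is used
    """
    if loss_cur < eps:
        # Current task is essentially fit; spend the whole step on not forgetting.
        return grad_mem
    # Weight the memory gradient by how large the memory loss is relative to the
    # current loss, so old tasks receive more emphasis as they start to be forgotten.
    return grad_cur + (loss_mem / loss_cur) * grad_mem

# Toy usage with random vectors standing in for backpropagated gradients.
rng = np.random.default_rng(0)
g_cur, g_mem = rng.normal(size=10), rng.normal(size=10)
params = rng.normal(size=10)
params -= 0.1 * mixed_update_direction(g_cur, g_mem, loss_cur=0.8, loss_mem=0.4)

By contrast, A-GEM keeps full weight on the current-task gradient and only adjusts it when it conflicts with the memory gradient, which is the fixed emphasis on the current task that the abstract refers to.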

Requirements

TensorFlow >= v1.9.0. The code is based on https://github.com/facebookresearch/agem.
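
To verify that a compatible TensorFlow version is installed, a quick check from Python (using the standard tf.__version__ attribute) is:

import tensorflow as tf
print(tf.__version__)  # should print 1.9.0 or later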

Training

To replicate the results of the paper on a particular dataset, execute the following (see the Note below for downloading the CUB and AWA datasets; the CUB and AWA runs pass an additional third argument, as shown in the examples):

$ ./replicate_results.sh <DATASET> <THREAD-ID> 

Example runs are:

$ ./replicate_results.sh MNIST 4     /* Train MEGA on MNIST */

$ ./replicate_results.sh CIFAR 3     /* Train MEGA on CIFAR */

$ ./replicate_results.sh CUB 3 0   /* Train MEGA on CUB */

$ ./replicate_results.sh AWA 7 0    /* Train MEGA on AWA */

Note

For the CUB and AWA experiments, download the datasets before running the script above. Run the following to download them:

$ ./download_cub_awa.sh

The plotting code is provided in the plotting_code/ folder. Update the paths in the plotting scripts to match your local setup before running them.

Results

[Figures: results on MNIST, CIFAR, CUB, and AWA, plus the legend]