This repository is maintained by Massimo Caccia and Timothée Lesort. Don't hesitate to send us an email to collaborate or to fix an entry ({massimo.p.caccia , t.lesort} at gmail.com). The automation script of this repo is adapted from Automatic_Awesome_Bibliography.
To contribute to the repository, please follow the process here
You can use our bib.tex directly in Overleaf with this link
- Classics
- Empirical Study
- Surveys
- Influentials
- New Settings or Metrics
- General Continual Learning Methods (SL and RL)
- Task-Agnostic Continual Learning
- Regularization Methods
- Distillation Methods
- Rehearsal Methods
- Generative Replay Methods
- Dynamic Architectures or Routing Methods
- Hybrid Methods
- Continual Few-Shot Learning
- Meta-Continual Learning
- Lifelong Reinforcement Learning
- Task-Agnostic Lifelong Reinforcement Learning
- Continual Generative Modeling
- Biologically-Inspired
- Miscellaneous
- Applications
- Thesis
- Libraries
- Workshops
## Classics

- Catastrophic forgetting in connectionist networks , (1999) by French, Robert M. [bib]
- Lifelong robot learning , (1995) by Thrun, Sebastian and Mitchell, Tom M [bib]
Argues that knowledge transfer is essential if robots are to learn control with moderate learning times
- Catastrophic Forgetting, Rehearsal and Pseudorehearsal , (1995) by Anthony Robins [bib]
- Catastrophic interference in connectionist networks: The sequential learning problem , (1989) by McCloskey, Michael and Cohen, Neal J [bib]
Introduces CL and reveals the catastrophic forgetting problem
## Empirical Study

- Effects of Auxiliary Knowledge on Continual Learning , (ICPR 2022) by Bellitto, Giovanni, Pennisi, Matteo, Palazzo, Simone, Bonicelli, Lorenzo, Boschini, Matteo, Calderara, Simone and Spampinato, Concetto [bib]
- Rethinking Experience Replay: a Bag of Tricks for Continual Learning , (ICPR 2021) by Buzzega, Pietro, Boschini, Matteo, Porrello, Angelo and Calderara, Simone [bib]
- A comprehensive study of class incremental learning algorithms for visual tasks , (Neural Networks 2021) by Eden Belouadah, Adrian Popescu and Ioannis Kanellos [bib]
- Online Continual Learning in Image Classification: An Empirical Survey, (2021) by Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim and Scott Sanner [bib]
- GDumb: A simple approach that questions our progress in continual learning, (ECCV 2020) by Prabhu, Ameya, Torr, Philip HS and Dokania, Puneet K [bib]
Introduces a very simple baseline that outperforms almost all methods on common CL benchmarks, suggesting that we need new and better benchmarks (a rough sketch of its greedy balanced buffer is given at the end of this section)
- Continual learning: A comparative study on how to defy forgetting in classification tasks , (2019) by Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh and Tinne Tuytelaars [bib]
Extensive empirical study of CL methods (in the multi-head setting)
- Three scenarios for continual learning , (arXiv 2019) by van de Ven, Gido M and Tolias, Andreas S [bib]
An extensive review of CL methods in three different scenarios (task-, domain-, and class-incremental learning)
- Continuous learning in single-incremental-task scenarios, (Neural Networks 2019) by Maltoni, Davide and Lomonaco, Vincenzo [bib]
- Towards Robust Evaluations of Continual Learning , (arXiv 2018) by Farquhar, Sebastian and Gal, Yarin [bib]
Proposes desiderata and re-examines the evaluation protocol
- Catastrophic forgetting: still a problem for DNNs, (ICANN 2018) by Pfülb, B, Gepperth, A, Abdullah, S and Krawczyk, A [bib]
- Measuring Catastrophic Forgetting in Neural Networks, (2017) by Kemker, R., McClure, M., Abitino, A., Hayes, T. and Kanan, C. [bib]
- CORe50: a New Dataset and Benchmark for Continuous Object Recognition , (CoRL 2017) by Vincenzo Lomonaco and Davide Maltoni [bib]
- An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks , (2013) by Goodfellow, I. J., Mirza, M., Xiao, D., Courville, A. and Bengio, Y. [bib]
Investigates CF in neural networks
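To make the GDumb baseline above concrete: it greedily maintains a class-balanced memory and, at test time, trains a model from scratch on that memory alone. A rough sketch of the greedy balancer (illustrative code, not the authors'; class names and eviction details are simplified assumptions):

```python
from collections import defaultdict

class GreedyBalancedBuffer:
    """GDumb-style memory: greedily keep a class-balanced sample of the stream.
    At eval time, a fresh model is trained from scratch on this buffer only."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.per_class = defaultdict(list)

    def add(self, x, y):
        counts = {c: len(v) for c, v in self.per_class.items()}
        if sum(counts.values()) < self.capacity:
            self.per_class[y].append(x)
        elif counts.get(y, 0) < max(counts.values()):
            # Buffer full: make room by evicting from the currently largest class.
            largest = max(counts, key=counts.get)
            self.per_class[largest].pop()
            self.per_class[y].append(x)
```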
## Surveys

- An Investigation of Replay-based Approaches for Continual Learning , (IJCNN 2021) by Bagus, Benedikt and Gepperth, Alexander [bib]
- Embracing Change: Continual Learning in Deep Neural Networks, (2020) by Hadsell, Raia, Rao, Dushyant, Rusu, Andrei and Pascanu, Razvan [bib]
- Towards Continual Reinforcement Learning: A Review and Perspectives, (2020) by Khimya Khetarpal, Matthew Riemer, Irina Rish and Doina Precup [bib]
A review on continual reinforcement learning
- Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges , (Information Fusion 2020) by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez [bib]
- A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning , (arXiv 2020) by Mundt, Martin, Hong, Yong Won, Pliushch, Iuliia and Ramesh, Visvanathan [bib]
Proposes a consolidated view to bridge continual learning, active learning and open-set recognition in DNNs
- Continual Lifelong Learning in Natural Language Processing: A Survey , (2020) by Magdalena Biesialska, Katarzyna Biesialska and Marta R. Costa-jussà [bib]
An extensive review of CL in Natural Language Processing (NLP)
- Continual lifelong learning with neural networks: A review , (Neural Networks 2019) by German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan and Stefan Wermter [bib]
An extensive review of CL
- Incremental learning algorithms and applications , (2016) by Gepperth, Alexander and Hammer, Barbara [bib]
A survey on incremental learning and the various applications fields
## Influentials

- Efficient Lifelong Learning with A-GEM , (ICLR 2019) by Chaudhry, Arslan, Ranzato, Marc’Aurelio, Rohrbach, Marcus and Elhoseiny, Mohamed [bib]
A more efficient version of GEM; introduces online continual learning (a minimal sketch of the A-GEM gradient projection is given at the end of this section)
- Towards Robust Evaluations of Continual Learning , (arXiv 2018) by Farquhar, Sebastian and Gal, Yarin [bib]
Proposes desiderata and re-examines the evaluation protocol
- Continual Learning in Practice , (NeurIPS Workshop 2018) by Diethe, Tom, Borchert, Tom, Thereska, Eno, Pigem, Borja de Balle and Lawrence, Neil [bib]
Proposes a reference architecture for a continual learning system
- Overcoming catastrophic forgetting in neural networks , (PNAS 2017) by Kirkpatrick, James, Pascanu, Razvan, Rabinowitz, Neil, Veness, Joel, Desjardins, Guillaume, Rusu, Andrei A, Milan, Kieran, Quan, John, Ramalho, Tiago, Grabska-Barwinska, Agnieszka and others [bib]
- Gradient Episodic Memory for Continual Learning , (NeurIPS 2017) by Lopez-Paz, David and Ranzato, Marc-Aurelio [bib]
A model that alleviates CF via constrained optimization
- Continual learning with deep generative replay , (NeurIPS 2017) by Shin, Hanul, Lee, Jung Kwon, Kim, Jaehong and Kim, Jiwon [bib]
Introduces generative replay
- An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks , (2013) by Goodfellow, I. J., Mirza, M., Xiao, D., Courville, A. and Bengio, Y. [bib]
Investigates CF in neural networks
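As a concrete illustration of the constrained-optimization view behind GEM and A-GEM above, here is a minimal sketch of the A-GEM gradient projection (flattened gradient vectors are assumed; the function name is illustrative, not the authors' code):

```python
import torch

def agem_project(g: torch.Tensor, g_ref: torch.Tensor) -> torch.Tensor:
    """If the current-task gradient g conflicts with the average gradient on
    memory samples g_ref (negative dot product), project g so that the loss
    on memory does not increase to first order."""
    dot = torch.dot(g, g_ref)
    if dot < 0:
        # Remove the component of g that points against g_ref.
        g = g - (dot / torch.dot(g_ref, g_ref)) * g_ref
    return g
```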
## New Settings or Metrics

- IIRC: Incremental Implicitly-Refined Classification , (CVPR 2021) by Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani and Sarath Chandar [bib]
A setup and benchmark to evaluate lifelong learning models in more real-life aligned scenarios.
- Sequoia - Towards a Systematic Organization of Continual Learning Research , (2021) by Fabrice Normandin, Florian Golemo, Oleksiy Ostapenko, Matthew Riemer, Pau Rodriguez, Julio Hurtado, Khimya Khetarpal, Timothée Lesort, Laurent Charlin, Irina Rish and Massimo Caccia [bib]
A library that unifies Continual Supervised and Continual Reinforcement Learning research
- Wandering Within a World: Online Contextualized Few-Shot Learning , (2020) by Mengye Ren, Michael L. Iuzzolino, Michael C. Mozer and Richard S. Zemel [bib]
Proposes a new continual few-shot setting where spatial and temporal context can be leveraged and unseen classes need to be predicted
- Defining Benchmarks for Continual Few-Shot Learning , (arXiv 2020) by Antoniou, Antreas, Patacchiola, Massimiliano, Ochal, Mateusz and Storkey, Amos [bib]
(title is a good enough summary)
- Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning , (2020) by Caccia, Massimo, Rodriguez, Pau, Ostapenko, Oleksiy, Normandin, Fabrice, Lin, Min, Caccia, Lucas, Laradji, Issam, Rish, Irina, Lacoste, Alexandre, Vazquez, David and Charlin, Laurent [bib]
Proposes a new approach to CL evaluation more aligned with real-life applications, bringing CL closer to Online Learning and Open-World learning
- Compositional Language Continual Learning , (ICLR 2020) by Yuanpeng Li, Liang Zhao, Kenneth Church and Mohamed Elhoseiny [bib]
Method for compositional continual learning of sequence-to-sequence models
- A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning , (arXiv 2020) by Mundt, Martin, Hong, Yong Won, Pliushch, Iuliia and Ramesh, Visvanathan [bib]
Proposes a consolidated view to bridge continual learning, active learning and open-set recognition in DNNs
- Don't forget, there is more than forgetting: new metrics for Continual Learning, (arXiv 2018) by Díaz-Rodríguez, Natalia, Lomonaco, Vincenzo, Filliat, David and Maltoni, Davide [bib]
Introduces a CL score that takes more than just forgetting into account (the standard accuracy and forgetting metrics are sketched below)
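For reference, the accuracy and forgetting metrics that most papers in this list report can be computed from the task-accuracy matrix; a minimal sketch (conventions vary slightly across papers, so treat this as one common definition rather than the definition):

```python
import numpy as np

def cl_metrics(R: np.ndarray):
    """R[i, j] = test accuracy on task j after training on tasks 0..i (T x T).
    Returns average accuracy after the last task, and average forgetting."""
    T = R.shape[0]
    avg_acc = R[-1].mean()
    # Forgetting: best accuracy ever reached on a task minus its final accuracy.
    forgetting = (np.mean([R[:-1, j].max() - R[-1, j] for j in range(T - 1)])
                  if T > 1 else 0.0)
    return avg_acc, forgetting
```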
## General Continual Learning Methods (SL and RL)

- Overcoming catastrophic forgetting in neural networks , (PNAS 2017) by Kirkpatrick, James, Pascanu, Razvan, Rabinowitz, Neil, Veness, Joel, Desjardins, Guillaume, Rusu, Andrei A, Milan, Kieran, Quan, John, Ramalho, Tiago, Grabska-Barwinska, Agnieszka and others [bib]
- PathNet: Evolution Channels Gradient Descent in Super Neural Networks , (2017) by Chrisantha Fernando and Dylan Banarse and Charles Blundell and Yori Zwols and David Ha and Andrei A. Rusu and Alexander Pritzel and Daan Wierstra [bib]
## Task-Agnostic Continual Learning

- Task-agnostic Continual Learning with Hybrid Probabilistic Models , (2021) by Polina Kirichenko, Mehrdad Farajtabar, Dushyant Rao, Balaji Lakshminarayanan, Nir Levine, Ang Li, Huiyi Hu, Andrew Gordon Wilson and Razvan Pascanu [bib]
- Learning where to learn: Gradient sparsity in meta and continual learning , (2021) by Von Oswald, Johannes, Zhao, Dominic, Kobayashi, Seijin, Schug, Simon, Caccia, Massimo, Zucchet, Nicolas and Sacramento, João [bib]
- Uncertainty-guided Continual Learning with Bayesian Neural Networks , (ICLR 2020) by Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell and Marcus Rohrbach [bib]
Uses Bayes by Backprop for variational Continual Learning.
- Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning , (2020) by Caccia, Massimo, Rodriguez, Pau, Ostapenko, Oleksiy, Normandin, Fabrice, Lin, Min, Caccia, Lucas, Laradji, Issam, Rish, Irina, Lacoste, Alexandre, Vazquez, David and Charlin, Laurent [bib]
Proposes a new approach to CL evaluation more aligned with real-life applications, bringing CL closer to Online Learning and Open-World learning
- iTAML: An Incremental Task-Agnostic Meta-learning Approach, (CVPR 2020) by Rajasegaran, Jathushan, Khan, Salman, Hayat, Munawar, Khan, Fahad Shahbaz and Shah, Mubarak [bib]
- Continual Unsupervised Representation Learning , (2019) by Dushyant Rao, Francesco Visin, Andrei A. Rusu, Yee Whye Teh, Razvan Pascanu and Raia Hadsell [bib]
Introduces unsupervised continual learning (no task label and no task boundaries)
- A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning , (ICLR 2019) by Lee, Soochan, Ha, Junsoo, Zhang, Dongsu and Kim, Gunhee [bib]
Introduces an expansion-based approach for task-free continual learning
- Task Agnostic Continual Learning Using Online Variational Bayes , (2018) by Chen Zeno, Itay Golan, Elad Hoffer and Daniel Soudry [bib]
Introduces an optimizer for CL that relies on closed-form updates of the mean and variance of a Bayesian neural network; also introduces the 'label trick' for single-head class-incremental learning (sketched below). Caveat: the method is not fully task-agnostic.
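A minimal sketch of the 'label trick' mentioned above: restrict the softmax to the classes present in the current batch, so that logits of absent classes are not pushed down. Variable names are illustrative, not the paper's code:

```python
import torch
import torch.nn.functional as F

def label_trick_loss(logits, y):
    """Cross-entropy computed only over the classes present in the batch."""
    classes = torch.unique(y)                          # classes seen in this batch
    remap = {int(c): i for i, c in enumerate(classes)}
    y_local = torch.tensor([remap[int(t)] for t in y], device=y.device)
    return F.cross_entropy(logits[:, classes], y_local)
```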
## Regularization Methods

- Continual Learning in Deep Networks: an Analysis of the Last Layer , (arXiv 2021) by Lesort, Timothée, George, Thomas and Rish, Irina [bib]
- Continual Learning with Bayesian Neural Networks for Non-Stationary Data , (ICLR 2020) by Richard Kurle, Botond Cseke, Alexej Klushyn, Patrick van der Smagt and Stephan Günnemann [bib]
Continual learning for non-stationary data using Bayesian neural networks and memory-based online variational Bayes
- Improving and Understanding Variational Continual Learning , (2019) by Siddharth Swaroop, Cuong V. Nguyen, Thang D. Bui and Richard E. Turner [bib]
Improved results and interpretation of VCL.
- Uncertainty-based Continual Learning with Adaptive Regularization , (NeurIPS 2019) by Ahn, Hongjoon, Cha, Sungmin, Lee, Donggyu and Moon, Taesup [bib]
Introduces VCL with uncertainty measured for neurons instead of weights.
- Functional Regularisation for Continual Learning with Gaussian Processes , (ICLR 2019) by Titsias, Michalis K, Schwarz, Jonathan, Matthews, Alexander G de G, Pascanu, Razvan and Teh, Yee Whye [bib]
Functional regularisation for continual learning: avoids forgetting a previous task by constructing and memorising an approximate posterior belief over the underlying task-specific function
- Task Agnostic Continual Learning Using Online Variational Bayes , (2018) by Chen Zeno, Itay Golan, Elad Hoffer and Daniel Soudry [bib]
Introduces an optimizer for CL that relies on closed-form updates of the mean and variance of a Bayesian neural network; also introduces the 'label trick' for single-head class-incremental learning. Caveat: the method is not fully task-agnostic.
- Overcoming Catastrophic Interference using Conceptor-Aided Backpropagation , (ICLR 2018) by Xu He and Herbert Jaeger [bib]
Conceptor-Aided Backprop (CAB): gradients are shielded by conceptors against degradation of previously learned tasks
- Overcoming Catastrophic Forgetting with Hard Attention to the Task , (ICML 2018) by Serra, Joan, Suris, Didac, Miron, Marius and Karatzoglou, Alexandros [bib]
Introduces a hard-attention mechanism with binary masks
- Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence , (ECCV 2018) by Chaudhry, Arslan, Dokania, Puneet K, Ajanthan, Thalaiyasingam and Torr, Philip HS [bib]
Formalizes the shortcomings of multi-head evaluation, as well as the importance of replay in the single-head setup; presents an improved version of EWC.
- Variational Continual Learning , (ICLR 2018) by Cuong V. Nguyen, Yingzhen Li, Thang D. Bui and Richard E. Turner [bib]
- Progress & compress: A scalable framework for continual learning , (ICML 2018) by Schwarz, Jonathan, Luketina, Jelena, Czarnecki, Wojciech M, Grabska-Barwinska, Agnieszka, Teh, Yee Whye, Pascanu, Razvan and Hadsell, Raia [bib]
A new Progress & Compress (P&C) architecture: online EWC preserves knowledge of previous tasks, while distillation consolidates knowledge of the current task (multi-head setting, RL)
- Online structured laplace approximations for overcoming catastrophic forgetting, (NeurIPS 2018) by Ritter, Hippolyt, Botev, Aleksandar and Barber, David [bib]
- Facilitating Bayesian Continual Learning by Natural Gradients and Stein Gradients , (NeurIPS Workshop 2018) by Chen, Yu, Diethe, Tom and Lawrence, Neil [bib]
Improves on VCL
- Overcoming catastrophic forgetting in neural networks , (PNAS 2017) by Kirkpatrick, James, Pascanu, Razvan, Rabinowitz, Neil, Veness, Joel, Desjardins, Guillaume, Rusu, Andrei A, Milan, Kieran, Quan, John, Ramalho, Tiago, Grabska-Barwinska, Agnieszka and others [bib]
- Memory Aware Synapses: Learning what (not) to forget , (2017) by Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach and Tinne Tuytelaars [bib]
Parameter importance is measured by each parameter's contribution to changes in the learned prediction function
- Continual Learning Through Synaptic Intelligence , (ICML 2017) by Zenke, Friedemann, Poole, Ben and Ganguli, Surya [bib]
Synaptic Intelligence (SI): parameter importance is measured by each parameter's contribution to the change in the loss (the quadratic-penalty template shared by EWC, SI and MAS is sketched at the end of this section)
- Overcoming catastrophic forgetting by incremental moment matching, (NeurIPS 2017) by Lee, Sang-Woo, Kim, Jin-Hwa, Jun, Jaehyun, Ha, Jung-Woo and Zhang, Byoung-Tak [bib]
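EWC, SI and MAS above share a quadratic-penalty template: after each task, store the parameters θ* and a per-parameter importance Ω, then regularize later training. A minimal PyTorch sketch of that shared template (the importance estimate Ω is exactly what each method defines differently; this is not any single paper's code):

```python
import torch

def quadratic_penalty(model, old_params, importance, lam=1.0):
    """lam/2 * sum_i Omega_i * (theta_i - theta*_i)^2.
    old_params / importance: name -> tensor dicts stored after the previous task
    (Omega: Fisher diagonal in EWC, path integral in SI, output sensitivity in MAS)."""
    reg = sum((importance[n] * (p - old_params[n]).pow(2)).sum()
              for n, p in model.named_parameters())
    return 0.5 * lam * reg

# During training on the new task:
#   loss = task_loss + quadratic_penalty(model, old_params, importance, lam)
```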
## Distillation Methods

- Class-Incremental Continual Learning into the eXtended DER-verse , (TPAMI 2022) by Boschini, Matteo, Bonicelli, Lorenzo, Buzzega, Pietro, Porrello, Angelo and Calderara, Simone [bib]
- Transfer without Forgetting , (ECCV 2022) by Boschini, Matteo, Bonicelli, Lorenzo, Porrello, Angelo, Bellitto, Giovanni, Pennisi, Matteo, Palazzo, Simone, Spampinato, Concetto and Calderara, Simone [bib]
- Self-Supervised Models are Continual Learners , (CVPR 2022) by Fini, Enrico, da Costa, Victor G Turrisi, Alameda-Pineda, Xavier, Ricci, Elisa, Alahari, Karteek and Mairal, Julien [bib]
Explores Continual Self-Supervised Learning and proposes a simple and effective feature distillation method
- Uncertainty-aware Contrastive Distillation for Incremental Semantic Segmentation , (TPAMI 2022) by Yang, Guanglei, Fini, Enrico, Xu, Dan, Rota, Paolo, Ding, Mingli, Nabi, Moin, Alameda-Pineda, Xavier and Ricci, Elisa [bib]
- Continual Attentive Fusion for Incremental Learning in Semantic Segmentation , (TMM 2022) by Yang, Guanglei, Fini, Enrico, Xu, Dan, Rota, Paolo, Ding, Mingli, Hao, Tang, Alameda-Pineda, Xavier and Ricci, Elisa [bib]
- Dark Experience for General Continual Learning: a Strong, Simple Baseline , (NeurIPS 2020) by Buzzega, Pietro, Boschini, Matteo, Porrello, Angelo, Abati, Davide and Calderara, Simone [bib]
- Online Continual Learning under Extreme Memory Constraints , (ECCV 2020) by Fini, Enrico, Lathuilière, Stèphane, Sangineto, Enver, Nabi, Moin and Ricci, Elisa [bib]
Introduces Memory-Constrained Online Continual Learning, a setting where no information can be transferred between tasks, and proposes a distillation-based solution (Batch-level Distillation)
- PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning , (ECCV 2020) by Douillard, Arthur, Cord, Matthieu, Ollion, Charles, Robert, Thomas and Valle, Eduardo [bib]
Novel knowledge distillation that efficiently trades off rigidity and plasticity to learn a large number of small tasks
- Overcoming Catastrophic Forgetting With Unlabeled Data in the Wild , (ICCV 2019) by Lee, Kibok, Lee, Kimin, Shin, Jinwoo and Lee, Honglak [bib]
Introduces a global distillation loss and balanced fine-tuning; leverages unlabeled data in the open-world setting (single-head setting)
- Large scale incremental learning , (CVPR 2019) by Wu, Yue, Chen, Yinpeng, Wang, Lijuan, Ye, Yuancheng, Liu, Zicheng, Guo, Yandong and Fu, Yun [bib]
Introduces bias parameters in the last fully connected layer to resolve the data-imbalance issue (single-head setting)
- Continual Reinforcement Learning deployed in Real-life using Policy Distillation and Sim2Real Transfer, (ICML Workshop 2019) by Kalifou, René Traoré, Caselles-Dupré, Hugo, Lesort, Timothée, Sun, Te, Diaz-Rodriguez, Natalia and Filliat, David [bib]
- Lifelong learning via progressive distillation and retrospection , (ECCV 2018) by Hou, Saihui, Pan, Xinyu, Change Loy, Chen, Wang, Zilei and Lin, Dahua [bib]
Introduces an expert of the current task into the knowledge-distillation method (multi-head setting)
- End-to-end incremental learning , (ECCV 2018) by Castro, Francisco M, Marin-Jimenez, Manuel J, Guil, Nicolas, Schmid, Cordelia and Alahari, Karteek [bib]
Fine-tunes the last fully connected layer on a balanced dataset to resolve the data-imbalance issue (single-head setting)
- Learning without forgetting , (TPAMI 2017) by Li, Zhizhong and Hoiem, Derek [bib]
Functional regularization through distillation: keeps the output of the updated network on the new data close to the output of the old network on the same data (see the sketch at the end of this section)
- iCaRL: Incremental classifier and representation learning , (CVPR 2017) by Rebuffi, Sylvestre-Alvise, Kolesnikov, Alexander, Sperl, Georg and Lampert, Christoph H [bib]
Binary cross-entropy loss for representation learning & exemplar memory (or coreset) for replay (single-head setting)
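The distillation losses used throughout this section follow the same pattern as LwF: match the updated network's temperature-softened outputs on new data to the recorded outputs of the old network. A minimal sketch (not any specific paper's code; the temperature T=2 is a common but arbitrary choice):

```python
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T=2.0):
    """Keep the new model's predictions close to the old model's on the same inputs."""
    p_old = F.softmax(old_logits / T, dim=1)          # soft targets from the frozen old net
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    return -(p_old * log_p_new).sum(dim=1).mean() * (T * T)
```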
## Rehearsal Methods

- On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning , (NeurIPS 2022) by Bonicelli, Lorenzo, Boschini, Matteo, Porrello, Angelo, Spampinato, Concetto and Calderara, Simone [bib]
- Class-Incremental Continual Learning into the eXtended DER-verse , (TPAMI 2022) by Boschini, Matteo, Bonicelli, Lorenzo, Buzzega, Pietro, Porrello, Angelo and Calderara, Simone [bib]
- Continual semi-supervised learning through contrastive interpolation consistency , (PRL 2022) by Boschini, Matteo, Buzzega, Pietro, Bonicelli, Lorenzo, Porrello, Angelo and Calderara, Simone [bib]
- Transfer without Forgetting , (ECCV 2022) by Boschini, Matteo, Bonicelli, Lorenzo, Porrello, Angelo, Bellitto, Giovanni, Pennisi, Matteo, Palazzo, Simone, Spampinato, Concetto and Calderara, Simone [bib]
- Effects of Auxiliary Knowledge on Continual Learning , (ICPR 2022) by Bellitto, Giovanni, Pennisi, Matteo, Palazzo, Simone, Bonicelli, Lorenzo, Boschini, Matteo, Calderara, Simone and Spampinato, Concetto [bib]
- Rethinking Experience Replay: a Bag of Tricks for Continual Learning , (ICPR 2021) by Buzzega, Pietro, Boschini, Matteo, Porrello, Angelo and Calderara, Simone [bib]
- Graph-Based Continual Learning , (ICLR 2021) by Binh Tang and David S. Matteson [bib]
Uses graphs to link saved samples and improve memory quality.
- Online Class-Incremental Continual Learning with Adversarial Shapley Value , (2021) by Dongsub Shim, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim and Jongseong Jang [bib]
Uses Shapley values adversarially to select which samples to replay
- Dark Experience for General Continual Learning: a Strong, Simple Baseline , (NeurIPS 2020) by Buzzega, Pietro, Boschini, Matteo, Porrello, Angelo, Abati, Davide and Calderara, Simone [bib]
- GDumb: A simple approach that questions our progress in continual learning, (ECCV 2020) by Prabhu, Ameya, Torr, Philip HS and Dokania, Puneet K [bib]
Introduces a very simple baseline that outperforms almost all methods on common CL benchmarks, suggesting that we need new and better benchmarks
- Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes , (2020) by Timothée Lesort [bib]
- Imbalanced Continual Learning with Partitioning Reservoir Sampling , (ECCV 2020) by Kim, Chris Dongjoo, Jeong, Jinseo and Kim, Gunhee [bib]
- PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning , (ECCV 2020) by Douillard, Arthur, Cord, Matthieu, Ollion, Charles, Robert, Thomas and Valle, Eduardo [bib]
Novel knowledge distillation that efficiently trades off rigidity and plasticity to learn a large number of small tasks
- REMIND Your Neural Network to Prevent Catastrophic Forgetting , (ECCV 2020) by Hayes, Tyler L., Kafle, Kushal, Shrestha, Robik, Acharya, Manoj and Kanan, Christopher [bib]
- Efficient Lifelong Learning with A-GEM , (ICLR 2019) by Chaudhry, Arslan, Ranzato, Marc’Aurelio, Rohrbach, Marcus and Elhoseiny, Mohamed [bib]
A more efficient version of GEM; introduces online continual learning
- Orthogonal Gradient Descent for Continual Learning , (2019) by Mehrdad Farajtabar, Navid Azizan, Alex Mott and Ang Li [bib]
Projects gradients from new tasks onto a subspace in which the network's outputs on previous tasks do not change, while the projected gradient remains a useful direction for learning the new task
- Gradient based sample selection for online continual learning , (NeurIPS 2019) by Aljundi, Rahaf, Lin, Min, Goujaud, Baptiste and Bengio, Yoshua [bib]
Formulates sample selection as a constraint-reduction problem, based on the constrained-optimization view of continual learning (for contrast, the default reservoir-sampling buffer is sketched at the end of this section)
- Online Continual Learning with Maximal Interfered Retrieval , (NeurIPS 2019) by Aljundi, Rahaf, Caccia, Lucas, Belilovsky, Eugene, Caccia, Massimo, Lin, Min, Charlin, Laurent and Tuytelaars, Tinne [bib]
Controlled sampling of memories for replay to automatically rehearse on tasks currently undergoing the most forgetting
- Online Learned Continual Compression with Adaptive Quantization Modules , (arXiv 2019) by Caccia, Lucas, Belilovsky, Eugene, Caccia, Massimo and Pineau, Joelle [bib]
Uses stacks of VQ-VAE modules to progressively compress the data stream, enabling better rehearsal
- Large scale incremental learning , (CVPR 2019) by Wu, Yue, Chen, Yinpeng, Wang, Lijuan, Ye, Yuancheng, Liu, Zicheng, Guo, Yandong and Fu, Yun [bib]
Introduces bias parameters in the last fully connected layer to resolve the data-imbalance issue (single-head setting)
- Learning a Unified Classifier Incrementally via Rebalancing, (CVPR 2019) by Hou, Saihui, Pan, Xinyu, Loy, Chen Change, Wang, Zilei and Lin, Dahua [bib]
- Continual Reinforcement Learning deployed in Real-life using Policy Distillation and Sim2Real Transfer, (ICML Workshop 2019) by Kalifou, René Traoré, Caselles-Dupré, Hugo, Lesort, Timothée, Sun, Te, Diaz-Rodriguez, Natalia and Filliat, David [bib]
- Experience replay for continual learning , (NeurIPS 2019) by Rolnick, David, Ahuja, Arun, Schwarz, Jonathan, Lillicrap, Timothy and Wayne, Gregory [bib]
- Gradient Episodic Memory for Continual Learning , (NeurIPS 2017) by Lopez-Paz, David and Ranzato, Marc-Aurelio [bib]
A model that alleviates CF via constrained optimization
- iCaRL: Incremental classifier and representation learning , (CVPR 2017) by Rebuffi, Sylvestre-Alvise, Kolesnikov, Alexander, Sperl, Georg and Lampert, Christoph H [bib]
Binary cross-entropy loss for representation learning & exemplar memory (or coreset) for replay (single-head setting)
- Catastrophic Forgetting, Rehearsal and Pseudorehearsal , (1995) by Anthony Robins [bib]
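Most rehearsal methods above maintain a small memory of past samples; the default, task-boundary-free way to fill it is reservoir sampling, which keeps a uniform random sample of the stream under a fixed budget. A minimal sketch (illustrative names, not any paper's code):

```python
import random

class ReservoirBuffer:
    """Fixed-capacity memory holding a uniform random sample of the stream."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.n_seen)   # keep with prob capacity / n_seen
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))
```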
## Generative Replay Methods

- Brain-Like Replay For Continual Learning With Artificial Neural Networks , (2020) by van de Ven, Gido M, Siegelmann, Hava T and Tolias, Andreas S [bib]
- Learning to remember: A synaptic plasticity driven framework for continual learning , (CVPR 2019) by Ostapenko, Oleksiy, Puscas, Mihai, Klein, Tassilo, Jahnichen, Patrick and Nabi, Moin [bib]
Introduces Dynamic Generative Memory (DGM), which relies on conditional generative adversarial networks with learnable connection plasticity realized through neural masking
- Generative Models from the perspective of Continual Learning , (IJCNN 2019) by Lesort, Timothée, Caselles-Dupré, Hugo, Garcia-Ortiz, Michael, Goudou, Jean-François and Filliat, David [bib]
Extensive evaluation of CL methods for generative modeling
- Closed-loop Memory GAN for Continual Learning , (IJCAI 2019) by Rios, Amanda and Itti, Laurent [bib]
- Marginal replay vs conditional replay for continual learning , (ICANN 2019) by Lesort, Timothée, Gepperth, Alexander, Stoian, Andrei and Filliat, David [bib]
Extensive evaluation of generative replay methods
- Generative replay with feedback connections as a general strategy for continual learning , (2018) by Gido M. van de Ven and Andreas S. Tolias [bib]
A smarter version of generative replay
- Continual learning with deep generative replay , (NeurIPS 2017) by Shin, Hanul, Lee, Jung Kwon, Kim, Jaehong and Kim, Jiwon [bib]
Introduces generative replay, in which a learned generator produces pseudo-samples of past tasks (sketched below)
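Schematically, the methods in this section build training batches as follows: frozen copies of the generator and solver from before the current task produce and label pseudo-samples of past data, which are mixed with the new task's data. A sketch under those assumptions (all names illustrative, not any paper's exact code):

```python
import torch

def replay_augmented_batch(x_new, y_new, old_generator, old_solver, n_replay):
    """Mix new-task data with generated pseudo-samples of past tasks.
    old_generator: callable n -> inputs; old_solver: callable inputs -> logits;
    both are frozen copies saved before starting the current task."""
    with torch.no_grad():
        x_old = old_generator(n_replay)
        y_old = old_solver(x_old).argmax(dim=1)   # the old solver provides the labels
    return torch.cat([x_new, x_old]), torch.cat([y_new, y_old])
```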
## Dynamic Architectures or Routing Methods

- Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments , (2022) by Iyer, Abhiram, Grewal, Karan, Velu, Akash, Souza, Lucas Oliveira, Forest, Jeremy and Ahmad, Subutai [bib]
A bio-inspired method that dynamically restricts and routes information in a context-specific manner
- DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion , (arXiv 2021) by Douillard, Arthur, Ramé, Alexandre, Couairon, Guillaume and Cord, Matthieu [bib]
- Supermasks in superposition , (2020) by Wortsman, Mitchell, Ramanujan, Vivek, Liu, Rosanne, Kembhavi, Aniruddha, Rastegari, Mohammad, Yosinski, Jason and Farhadi, Ali [bib]
A binary mask over the network is inferred from the input, and only the masked part of the network is used for training and inference (a toy task-conditioned mask layer is sketched at the end of this section)
- ORACLE: Order Robust Adaptive Continual Learning , (2019) by Jaehong Yoon and Saehoon Kim and Eunho Yang and Sung Ju Hwang [bib]
- Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting , (2019) by Xilai Li and Yingbo Zhou and Tianfu Wu and Richard Socher and Caiming Xiong [bib]
- Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization , (2018) by Masse, Nicolas Y, Grant, Gregory D and Freedman, David J [bib]
A network trained to do CL where selected subnetworks are used to learn each task; these subnetworks are chosen a priori
- Incremental Learning through Deep Adaptation , (2018) by Amir Rosenfeld and John K. Tsotsos [bib]
- Packnet: Adding multiple tasks to a single network by iterative pruning, (CVPR 2018) by Mallya, Arun and Lazebnik, Svetlana [bib]
- Piggyback: Adapting a single network to multiple tasks by learning to mask weights, (ECCV 2018) by Mallya, Arun, Davis, Dillon and Lazebnik, Svetlana [bib]
- Continual Learning in Practice , (NeurIPS Workshop 2018) by Diethe, Tom, Borchert, Tom, Thereska, Eno, Pigem, Borja de Balle and Lawrence, Neil [bib]
Proposes a reference architecture for a continual learning system
- Growing a brain: Fine-tuning by increasing model capacity, (CVPR 2017) by Wang, Yu-Xiong, Ramanan, Deva and Hebert, Martial [bib]
- PathNet: Evolution Channels Gradient Descent in Super Neural Networks , (2017) by Chrisantha Fernando and Dylan Banarse and Charles Blundell and Yori Zwols and David Ha and Andrei A. Rusu and Alexander Pritzel and Daan Wierstra [bib]
- Lifelong learning with dynamically expandable networks, (arXiv 2017) by Yoon, Jaehong, Yang, Eunho, Lee, Jeongtae and Hwang, Sung Ju [bib]
- Progressive Neural Networks , (2016) by Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R. and Hadsell, R. [bib]
Each task has a specific model (column) connected to the previous ones
- Continual learning with hypernetworks , (ICLR 2020) by Johannes von Oswald, Christian Henning, João Sacramento and Benjamin F. Grewe [bib]
Learns task-conditioned hypernetworks for continual learning, as well as task embeddings; hypernetworks offer good model compression.
- Compacting, Picking and Growing for Unforgetting Continual Learning , (NeurIPS 2019) by Hung, Ching-Yi, Tu, Cheng-Hao, Wu, Cheng-En, Chen, Chien-Hung, Chan, Yi-Ming and Chen, Chu-Song [bib]
The approach leverages the principles of deep model compression, critical-weight selection and progressive network expansion, all enforced in an iterative manner
- A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning , (ICLR 2019) by Lee, Soochan, Ha, Junsoo, Zhang, Dongsu and Kim, Gunhee [bib]
Introduces an expansion-based approach for task-free continual learning
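A toy illustration of the parameter-isolation idea shared by PackNet, Piggyback, HAT and Supermasks above: a per-task binary mask gates which weights are active. Here the masks are fixed and random purely for illustration; the papers above learn or prune them:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Linear):
    """Linear layer whose weights are gated by a per-task binary mask."""
    def __init__(self, in_features, out_features, n_tasks):
        super().__init__(in_features, out_features)
        # One fixed random mask per task (real methods learn or prune these).
        masks = (torch.rand(n_tasks, out_features, in_features) > 0.5).float()
        self.register_buffer("masks", masks)

    def forward(self, x, task_id):
        return F.linear(x, self.weight * self.masks[task_id], self.bias)
```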
## Hybrid Methods

- Learning where to learn: Gradient sparsity in meta and continual learning , (2021) by Von Oswald, Johannes, Zhao, Dominic, Kobayashi, Seijin, Schug, Simon, Caccia, Massimo, Zucchet, Nicolas and Sacramento, João [bib]
## Continual Few-Shot Learning

- Wandering Within a World: Online Contextualized Few-Shot Learning , (2020) by Mengye Ren, Michael L. Iuzzolino, Michael C. Mozer and Richard S. Zemel [bib]
Proposes a new continual few-shot setting where spatial and temporal context can be leveraged and unseen classes need to be predicted
- Defining Benchmarks for Continual Few-Shot Learning , (arXiv 2020) by Antoniou, Antreas, Patacchiola, Massimiliano, Ochal, Mateusz and Storkey, Amos [bib]
(title is a good enough summary)
- Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning , (2020) by Caccia, Massimo, Rodriguez, Pau, Ostapenko, Oleksiy, Normandin, Fabrice, Lin, Min, Caccia, Lucas, Laradji, Issam, Rish, Irina, Lacoste, Alexandre, Vazquez, David and Charlin, Laurent [bib]
Proposes a new approach to CL evaluation more aligned with real-life applications, bringing CL closer to Online Learning and Open-World learning
## Meta-Continual Learning

- Learning from the Past: Continual Meta-Learning via Bayesian Graph Modeling , (2019) by Yadan Luo, Zi Huang, Zheng Zhang, Ziwei Wang, Mahsa Baktashmotlagh and Yang Yang [bib]
- Online Meta-Learning , (ICML 2019) by Finn, Chelsea, Rajeswaran, Aravind, Kakade, Sham and Levine, Sergey [bib]
Defines online meta-learning; proposes Follow the Meta Leader (FTML), roughly an online version of MAML
- Reconciling meta-learning and continual learning with online mixtures of tasks , (NeurIPS 2019) by Jerfel, Ghassen, Grant, Erin, Griffiths, Tom and Heller, Katherine A [bib]
Meta-learns a tasks structure; continual adaptation via non-parametric prior
- Deep Online Learning Via Meta-Learning: Continual Adaptation for Model-Based RL , (ICLR 2019) by Anusha Nagabandi, Chelsea Finn and Sergey Levine [bib]
Formulates an online learning procedure that uses SGD to update model parameters and an EM algorithm with a Chinese restaurant process prior to develop and maintain a mixture of models that handles non-stationary task distributions
- Task Agnostic Continual Learning via Meta Learning , (2019) by Xu He, Jakub Sygnowski, Alexandre Galashov, Andrei A. Rusu, Yee Whye Teh and Razvan Pascanu [bib]
Introduces the What & How framework; enables task-agnostic CL with meta-learned task inference
- Learning where to learn: Gradient sparsity in meta and continual learning , (2021) by Von Oswald, Johannes, Zhao, Dominic, Kobayashi, Seijin, Schug, Simon, Caccia, Massimo, Zucchet, Nicolas and Sacramento, João [bib]
- La-MAML: Look-ahead Meta Learning for Continual Learning , (2020) by Gunshi Gupta, Karmesh Yadav and Liam Paull [bib]
Proposes an online replay-based meta-continual learning algorithm with learning-rate modulation to mitigate catastrophic forgetting
- Learning to Continually Learn , (arXiv 2020) by Beaulieu, Shawn, Frati, Lapo, Miconi, Thomas, Lehman, Joel, Stanley, Kenneth O, Clune, Jeff and Cheney, Nick [bib]
Follow-up of OML. Meta-learns an activation-gating function instead.
- Meta-Learning Representations for Continual Learning , (NeurIPS 2019) by Javed, Khurram and White, Martha [bib]
Introduces OML (learning how to continually learn), i.e., learns how to do online updates without forgetting (the generic inner/outer meta-update behind these methods is sketched at the end of this section).
- Meta-learnt priors slow down catastrophic forgetting in neural networks , (arXiv 2019) by Spigler, Giacomo [bib]
Learning MAML in a meta-continual-learning fashion slows down forgetting
- Learning to Learn without Forgetting By Maximizing Transfer and Minimizing Interference , (ICLR 2019) by Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu and and Gerald Tesauro [bib]
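The methods in this section share an inner loop (adapt to a task or stream) and an outer loop (update the initialization or representation so that adaptation forgets less). A first-order, Reptile-style sketch of that structure, assuming a classification model; OML, La-MAML, MER, etc. differ in what the outer loop optimizes:

```python
import copy
import torch
import torch.nn.functional as F

def meta_step(model, task_batches, inner_lr=0.01, outer_lr=0.1):
    """Inner loop: adapt a copy of the model to one task. Outer loop: move the
    shared initialization toward the adapted weights (first-order meta-update)."""
    inner = copy.deepcopy(model)
    opt = torch.optim.SGD(inner.parameters(), lr=inner_lr)
    for x, y in task_batches:
        opt.zero_grad()
        F.cross_entropy(inner(x), y).backward()
        opt.step()
    with torch.no_grad():
        for p, q in zip(model.parameters(), inner.parameters()):
            p += outer_lr * (q - p)
```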
## Lifelong Reinforcement Learning

- A Study of Continual Learning Methods for Q-Learning , (arXiv 2022) by Bagus, Benedikt and Gepperth, Alexander [bib]
Studies Q-learning methods in continual-RL environments; when there is no task interference, (A-)GEM can outperform experience replay
- CoMPS: Continual Meta Policy Search , (ICLR 2022) by Glen Berseth, Zhiwei Zhang, Grace Zhang, Chelsea Finn and Sergey Levine [bib]
CoMPS is a novel meta-policy search algorithm for task-agnostic continual RL
- Task-Agnostic Continual Reinforcement Learning: In Praise of a Simple Baseline , (arXiv 2022) by Caccia, Massimo, Mueller, Jonas, Kim, Taesup, Charlin, Laurent and Fakoor, Rasool [bib]
Combines replay and an RNN to set a simple baseline for TACRL; shows that the baseline matches and surpasses previously presumed upper bounds
- Modular Lifelong Reinforcement Learning via Neural Composition , (ICLR 2022) by Jorge A Mendez, Harm van Seijen and ERIC EATON [bib]
- Reactive Exploration to Cope with Non-Stationarity in Lifelong Reinforcement Learning , (2022) by Steinparz, Christian, Schmied, Thomas, Paischer, Fabian, Dinu, Marius-Constantin, Patil, Vihang, Bitto-Nemling, Angela, Eghbal-zadeh, Hamid and Hochreiter, Sepp [bib]
Detects changes and explores when and where they happen to recover from non-stationarity.
- Same State, Different Task: Continual Reinforcement Learning without Interference , (2021) by Samuel Kessler, Jack Parker-Holder, Philip J. Ball, Stefan Zohren and Stephen J. Roberts [bib]
Learns multiple policies and casts policy retrieval as a multi-armed bandit problem
- Reset-Free Lifelong Learning with Skill-Space Planning , (ICLR 2021) by Kevin Lu, Aditya Grover, Pieter Abbeel and Igor Mordatch [bib]
- Task-Agnostic Online Reinforcement Learning with an Infinite Mixture of Gaussian Processes , (2020) by Mengdi Xu, Wenhao Ding, Jiacheng Zhu, Zuxin Liu, Baiming Chen and Ding Zhao [bib]
Uses an infinite mixture of Gaussian processes to learn a task-agnostic policy
- Lifelong Policy Gradient Learning of Factored Policies for Faster Training Without Forgetting , (2020) by Jorge A. Mendez, Boyu Wang and Eric Eaton [bib]
- Towards Continual Reinforcement Learning: A Review and Perspectives, (2020) by Khimya Khetarpal, Matthew Riemer, Irina Rish and Doina Precup [bib]
A review on continual reinforcement learning
- Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges , (Information Fusion 2020) by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez [bib]
- Deep Online Learning Via Meta-Learning: Continual Adaptation for Model-Based RL , (ICLR 2019) by Anusha Nagabandi, Chelsea Finn and Sergey Levine [bib]
Formulates an online learning procedure that uses SGD to update model parameters and an EM algorithm with a Chinese restaurant process prior to develop and maintain a mixture of models that handles non-stationary task distributions
- Continual Reinforcement Learning deployed in Real-life using Policy Distillation and Sim2Real Transfer, (ICML Workshop 2019) by Kalifou, René Traoré, Caselles-Dupré, Hugo, Lesort, Timothée, Sun, Te, Diaz-Rodriguez, Natalia and Filliat, David [bib]
- Experience replay for continual learning , (NeurIPS 2019) by Rolnick, David, Ahuja, Arun, Schwarz, Jonathan, Lillicrap, Timothy and Wayne, Gregory [bib]
- PathNet: Evolution Channels Gradient Descent in Super Neural Networks , (2017) by Chrisantha Fernando and Dylan Banarse and Charles Blundell and Yori Zwols and David Ha and Andrei A. Rusu and Alexander Pritzel and Daan Wierstra [bib]
- Incremental robot learning of new objects with fixed update time, (2017) by R. Camoriano, G. Pasquale, C. Ciliberto, L. Natale, L. Rosasco and G. Metta [bib]
## Task-Agnostic Lifelong Reinforcement Learning

- CoMPS: Continual Meta Policy Search , (ICLR 2022) by Glen Berseth, Zhiwei Zhang, Grace Zhang, Chelsea Finn and Sergey Levine [bib]
CoMPS is a novel meta-policy search algorithm for task-agnostic continual RL
- Task-Agnostic Continual Reinforcement Learning: In Praise of a Simple Baseline , (arXiv 2022) by Caccia, Massimo, Mueller, Jonas, Kim, Taesup, Charlin, Laurent and Fakoor, Rasool [bib]
Combines replay and an RNN to set a simple baseline for TACRL; shows that the baseline matches and surpasses previously presumed upper bounds
- Reactive Exploration to Cope with Non-Stationarity in Lifelong Reinforcement Learning , (2022) by Steinparz, Christian, Schmied, Thomas, Paischer, Fabian, Dinu, Marius-Constantin, Patil, Vihang, Bitto-Nemling, Angela, Eghbal-zadeh, Hamid and Hochreiter, Sepp [bib]
Detects changes and explores when and where they happen to recover from non-stationarity.
- Same State, Different Task: Continual Reinforcement Learning without Interference , (2021) by Samuel Kessler, Jack Parker-Holder, Philip J. Ball, Stefan Zohren and Stephen J. Roberts [bib]
Learns multiple policies and casts policy retrieval as a multi-armed bandit problem
- Task-Agnostic Online Reinforcement Learning with an Infinite Mixture of Gaussian Processes , (2020) by Mengdi Xu, Wenhao Ding, Jiacheng Zhu, Zuxin Liu, Baiming Chen and Ding Zhao [bib]
Uses an infinite mixture of Gaussian processes to learn a task-agnostic policy
- Deep Online Learning Via Meta-Learning: Continual Adaptation for Model-Based RL , (ICLR 2019) by Anusha Nagabandi, Chelsea Finn and Sergey Levine [bib]
Formulates an online learning procedure that uses SGD to update model parameters and an EM algorithm with a Chinese restaurant process prior to develop and maintain a mixture of models that handles non-stationary task distributions
## Continual Generative Modeling

- Continual Unsupervised Representation Learning , (2019) by Dushyant Rao, Francesco Visin, Andrei A. Rusu, Yee Whye Teh, Razvan Pascanu and Raia Hadsell [bib]
Introduces unsupervised continual learning (no task label and no task boundaries)
- Generative Models from the perspective of Continual Learning , (IJCNN 2019) by Lesort, Timothée, Caselles-Dupré, Hugo, Garcia-Ortiz, Michael, Goudou, Jean-François and Filliat, David [bib]
Extensive evaluation of CL methods for generative modeling
- Closed-loop Memory GAN for Continual Learning , (IJCAI 2019) by Rios, Amanda and Itti, Laurent [bib]
- Lifelong Generative Modeling , (arXiv 2017) by Ramapuram, Jason, Gregorova, Magda and Kalousis, Alexandros [bib]
## Biologically-Inspired

- Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments , (2022) by Iyer, Abhiram, Grewal, Karan, Velu, Akash, Souza, Lucas Oliveira, Forest, Jeremy and Ahmad, Subutai [bib]
A bio-inspired method that dynamically restricts and routes information in a context-specific manner
- A rapid and efficient learning rule for biological neural circuits , (2021) by Eren Sezener, Agnieszka Grabska-Barwinska, Dimitar Kostadinov, Maxime Beau, Sanjukta Krishnagopal, David Budden, Marcus Hutter, Joel Veness, Matthew M. Botvinick, Claudia Clopath, Michael Häusser and Peter E. Latham [bib]
- Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization , (2018) by Masse, Nicolas Y, Grant, Gregory D and Freedman, David J [bib]
A network trained to do CL where selected subnetworks are used to learn each task; these subnetworks are chosen a priori
## Miscellaneous

- Learning causal models online , (arXiv 2020) by Javed, Khurram, White, Martha and Bengio, Yoshua [bib]
## Applications

- On the Limitations of Continual Learning for Malware Classification , (2022) by Rahman, Mohammad Saidur, Coull, Scott E and Wright, Matthew [bib]
Investigates overcoming catastrophic forgetting for malware classification
- CLOPS: Continual Learning of Physiological Signals , (arXiv 2020) by Kiyasseh, Dani, Zhu, Tingting and Clifton, David A [bib]
A healthcare-specific replay-based method to mitigate destructive interference during continual learning
- LAMAL: LAnguage Modeling Is All You Need for Lifelong Language Learning , (ICLR 2020) by Fan-Keng Sun, Cheng-Hao Ho and Hung-Yi Lee [bib]
- Compositional Language Continual Learning , (ICLR 2020) by Yuanpeng Li, Liang Zhao, Kenneth Church and Mohamed Elhoseiny [bib]
Method for compositional continual learning of sequence-to-sequence models
- Incremental Lifelong Deep Learning for Autonomous Vehicles , (2018) by Pierre, John M. [bib]
- Unsupervised real-time anomaly detection for streaming data , (2017) by Ahmad, Subutai, Lavin, Alexander, Purdy, Scott and Agha, Zuha [bib]
HTM applied to a real-world anomaly detection problem
- Continuous online sequence learning with an unsupervised neural network model , (2016) by Cui, Yuwei, Ahmad, Subutai and Hawkins, Jeff [bib]
HTM applied to the prediction of taxi passenger demand
## Thesis

- Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes , (2020) by Timothée Lesort [bib]
- Continual Learning with Deep Architectures , (2019) by Vincenzo Lomonaco [bib]
- Continual Learning in Neural Networks , (arXiv 2019) by Aljundi, Rahaf [bib]
- Continual learning in reinforcement environments , (1994) by Ring, Mark Bishop [bib]
## Libraries

- Renate: a library for real-world continual learning , (2022) [bib]
A library for real-world continual learning with integrated hyperparameter tuning.
- Sequoia - Towards a Systematic Organization of Continual Learning Research , (2021) by Fabrice Normandin, Florian Golemo, Oleksiy Ostapenko, Matthew Riemer, Pau Rodriguez, Julio Hurtado, Khimya Khetarpal, Timothée Lesort, Laurent Charlin, Irina Rish and Massimo Caccia [bib]
A library that unifies Continual Supervised and Continual Reinforcement Learning research
- Avalanche: an End-to-End Library for Continual Learning , (2021) by Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Gabriele Graffieti and Antonio Carta [bib]
A library for Continual Supervised Learning
- Continuous Coordination As a Realistic Scenario for Lifelong Learning , (2021) by Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron Courville and Sarath Chandar [bib]
A multi-agent lifelong learning testbed that supports both zero-shot and few-shot settings.
- River: machine learning for streaming data in Python, (2020) by Jacob Montiel, Max Halford, Saulo Martiello Mastelini, Geoffrey Bolmier, Raphael Sourty, Robin Vaysse, Adil Zouitine, Heitor Murilo Gomes, Jesse Read, Talel Abdessalem and Albert Bifet [bib]
A library for online learning.
- Continuum, Data Loaders for Continual Learning, (2020) by Douillard, Arthur and Lesort, Timothée [bib]
A library proposing continual learning scenarios and metrics.
- Framework for Analysis of Class-Incremental Learning , (arXiv 2020) by Masana, Marc, Liu, Xialei, Twardowski, Bartlomiej, Menta, Mikel, Bagdanov, Andrew D and van de Weijer, Joost [bib]
A library for Continual Class-Incremental Learning
## Workshops

- Workshop on Continual Learning at ICML 2020 , (2020) by Rahaf Aljundi, Haytham Fayek, Eugene Belilovsky, David Lopez-Paz, Arslan Chaudhry, Marc Pickett, Puneet Dokania, Jonathan Schwarz and Sayna Ebrahimi [bib]
- 4th Lifelong Machine Learning Workshop at ICML 2020 , (2020) by Shagun Sodhani, Sarath Chandar, Balaraman Ravindran and Doina Precup [bib]
- CVPR 2020 Continual Learning in Computer Vision Competition: Approaches, Results, Current Challenges and Future Directions, (arXiv 2020) by Lomonaco, Vincenzo, Pellegrini, Lorenzo, Rodriguez, Pau, Caccia, Massimo, She, Qi, Chen, Yu, Jodelet, Quentin, Wang, Ruiping, Mai, Zheda, Vazquez, David and others [bib]
Surveys the results of the first CL competition at CVPR
- 1st Lifelong Learning for Machine Translation Shared Task at WMT20 (EMNLP 2020) , (2020) by Loïc Barrault, Magdalena Biesialska, Marta R. Costa-jussà, Fethi Bougares and Olivier Galibert [bib]