Stacking Quantization blocks for efficient lifelong online compression
Code for reproducing all the results in our paper, which can be found here.
You can find a quick demo on Google Colab here.
- Python 3.7
- PyTorch 1.4.0
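To sanity-check that your environment matches these versions, here is a minimal check (assuming a standard Python / PyTorch install):

```python
import sys
import torch

# The paper's experiments assume Python 3.7 and PyTorch 1.4.0.
print(sys.version.split()[0])   # e.g. 3.7.x
print(torch.__version__)        # e.g. 1.4.0
```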
```
├── Common
│   ├── modular.py       # Module (QLayer) and stacked modules (QStack). Includes most key ops, such as the adaptive buffer
│   ├── quantize.py      # Discretization ops (Gumbel-Softmax, Vector/Tensor Quantization and Argmax Quantization)
│   ├── model.py         # Encoder, Decoder and Classifier blocks
│   └── config           # .yaml files specifying the different AQM architectures and hyperparameters used in the paper
├── Lidar
│   └── ....             # files to run the LiDAR experiments
├── Utils
│   ├── args.py          # Command-line arguments
│   ├── buffer.py        # Basic buffer implementation. Handles raw and compressed representations
│   ├── data.py          # CL datasets and dataloaders
│   └── utils.py         # Logging, saving/loading of models and args, point cloud processing
├── gen_main.py          # Entry point for the offline classification (e.g. ImageNet) experiments
├── eval.py              # Evaluation loops for drift, test accuracy / MSE, and LiDAR
├── cls_main.py          # Entry point for the online classification (e.g. CIFAR) experiments
└── reproduce.txt        # All commands and information needed to reproduce the results in the paper
```
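To give a feel for what the discretization ops in `Common/quantize.py` do, below is a minimal, self-contained sketch of a straight-through vector quantizer in PyTorch. It is illustrative only: the class and argument names are hypothetical and do not match the repo's actual API (see `quantize.py` and the config files for the real implementation and hyperparameters).

```python
# Illustrative sketch only: a minimal straight-through vector quantizer, similar in
# spirit to the vector-quantization op used by the quantization modules.
# Names below are hypothetical and do NOT match this repo's classes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizerSketch(nn.Module):
    def __init__(self, num_codes=256, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1. / num_codes, 1. / num_codes)
        self.beta = beta  # commitment cost

    def forward(self, z_e):
        # z_e: (B, C, H, W) continuous encoder output, with C == code_dim
        B, C, H, W = z_e.shape
        flat = z_e.permute(0, 2, 3, 1).reshape(-1, C)           # (B*H*W, C)

        # squared L2 distance to every codebook entry
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        idx = dist.argmin(dim=1)                                 # discrete codes
        z_q = self.codebook(idx).view(B, H, W, C).permute(0, 3, 1, 2)

        # codebook + commitment losses (VQ-VAE style objective)
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())

        # straight-through estimator: copy gradients from z_q back to z_e
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx.view(B, H, W), loss

if __name__ == "__main__":
    vq = VectorQuantizerSketch(num_codes=256, code_dim=64)
    z_e = torch.randn(8, 64, 16, 16)          # fake encoder output
    z_q, codes, vq_loss = vq(z_e)
    print(z_q.shape, codes.shape, vq_loss.item())
```

Conceptually, storing only the integer code indices (plus the shared codebook) instead of raw inputs is what lets the replay buffer hold many more samples for the same memory budget, which is the role the quantization modules play in AQM.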
We would like to thank the authors of the following repositories (from which we borrowed code) for making their code public.
- Gradient Episodic Memory
- VQ-VAE
- VQ-VAE-2
- MIR
For any questions / comments / concerns, feel free to open an issue on GitHub, or to send me an email at lucas.page-caccia@mail.mcgill.ca.
We strongly believe in fully reproducible research. To that end, if you find any discrepancy between our code and the paper, please let us know, and we will make sure to address it.
Happy streaming compression :)
If you find this code useful, please cite us in your work.
```
@article{caccia2019online,
  title={Online Learned Continual Compression with Adaptive Quantization Modules},
  author={Caccia, Lucas and Belilovsky, Eugene and Caccia, Massimo and Pineau, Joelle},
  journal={Proceedings of the 37th International Conference on Machine Learning},
  year={2020}
}
```