
PRE-DFKD

Built with Python 3.6 and PyTorch 1.4.

Official implementation of the paper "Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay" (AAAI'22)

Authors: Kuluhan Binici, Shivam Aggarwal, Pham Nam Trung, Tulika Mitra, Karianto Leman

Project Structure

The project is structured as follows:

  • networks/: PyTorch neural network definitions for the teacher and student models.
  • utils/: Collection of auxiliary scripts that help the main functionality.
  • distillation.py: Core distillation functions used by the main script (a hedged sketch of a typical distillation step follows this list).
  • main.py: Main PRE-DFKD script
  • student-eval.py: Script to evaluate student models.
  • README.md: This instructions file.
  • LICENSE: CC BY 4.0 License file.
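
For orientation, the sketch below shows what a typical data-free distillation step looks like: the student is trained to match the teacher's outputs on generator-synthesized pseudo samples via a temperature-scaled KL divergence. The function name and arguments (kd_step, batch_size, nz, T) are illustrative assumptions and do not necessarily mirror the interfaces in distillation.py.

import torch
import torch.nn.functional as F

def kd_step(teacher, student, generator, optimizer, batch_size=256, nz=256, T=4.0, device="cuda"):
    # One distillation step on generator-produced pseudo samples (illustrative only).
    teacher.eval()
    student.train()

    # Sample latent noise and synthesize a batch of pseudo images.
    z = torch.randn(batch_size, nz, device=device)
    with torch.no_grad():
        fake = generator(z)
        t_logits = teacher(fake)

    s_logits = student(fake)

    # Temperature-scaled KL divergence between student and teacher predictions.
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()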

Running the code

First, create a cache directory in the parent directory of this project with two sub-folders, [data/] and [models/]:

+-- cache
    +-- data
    +-- models
+-- PRE-DFKD
    +-- ...
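
If you prefer to create this layout programmatically, a minimal Python sketch (assuming it is run from inside the PRE-DFKD directory) is:

from pathlib import Path

# Create ../cache/data and ../cache/models next to the project folder.
for sub in ("data", "models"):
    Path("..", "cache", sub).mkdir(parents=True, exist_ok=True)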

The datasets will be downloaded to the [cache/data] folder.

Then, obtain the teacher models to be distilled and save them in the [cache/models] folder.

Download pre-trained teachers

Download link: https://drive.google.com/drive/folders/1fTUl8Igs5gEbWrdrwZd_22YEhl9ZO7rP?usp=sharing
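Once a checkpoint is in place, it can be loaded before distillation. A minimal sketch is shown below; the file name, the module path networks.resnet, and the num_classes argument are assumptions for illustration, so adjust them to match the downloaded files and the definitions under networks/.

import torch
from networks.resnet import ResNet34  # assumed location of the teacher definition

teacher = ResNet34(num_classes=100)  # CIFAR100 teacher (assumed constructor signature)
state = torch.load("../cache/models/cifar100_resnet34.pt", map_location="cpu")  # hypothetical file name
teacher.load_state_dict(state)
teacher.eval()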

Run PRE-DFKD (dataset: CIFAR100, teacher: ResNet34, student: ResNet18)

python main.py
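
main.py runs the full PRE-DFKD pipeline on CIFAR100 with a ResNet34 teacher and a ResNet18 student. As a rough illustration of the generative pseudo replay idea named in the title, the sketch below shows one way a small VAE could memorize the distribution of previously synthesized pseudo samples so they can be replayed to the student in later epochs. The class, method names, and loss here are illustrative assumptions and are not taken from main.py.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryVAE(nn.Module):
    # A tiny fully-connected VAE over flattened 32x32 RGB pseudo images (illustrative).
    def __init__(self, img_dim=3 * 32 * 32, z_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, z_dim)
        self.logvar = nn.Linear(512, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 512), nn.ReLU(),
                                 nn.Linear(512, img_dim), nn.Tanh())
        self.z_dim = z_dim

    def forward(self, x):
        h = self.enc(x.flatten(1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

    def replay(self, n, device="cpu"):
        # Sample "remembered" pseudo images from the prior.
        with torch.no_grad():
            return self.dec(torch.randn(n, self.z_dim, device=device)).view(n, 3, 32, 32)

def vae_loss(recon, x, mu, logvar):
    # Standard VAE objective: reconstruction term plus KL regularizer.
    rec = F.mse_loss(recon, x.flatten(1), reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

During distillation, the student would then see a mix of freshly generated samples and VAE-replayed samples, while the VAE itself is updated on the fresh samples so that its memory tracks the generator's evolving output distribution.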

Citation

Please consider citing our work if you make use of it:

@article{binici2022robust,
  title={Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay},
  author={Binici, Kuluhan and Aggarwal, Shivam and Pham, Nam Trung and Leman, Karianto and Mitra, Tulika},
  journal={arXiv preprint arXiv:2201.03019},
  year={2022}
}