Implementation of "Knowledge Transfer Graph for Deep Collaborative Learning" (arXiv)
- Python 3.6.9
- IPython 7.8.0
- JupyterLab 0.35.3
- PyTorch 1.2.0
- torchvision 0.4.0
- Optuna 3.0.6
- easydict 1.9
- graphviz 0.10.1
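The dependencies above can be pinned in a requirements file, for example (assumption: these exact pins resolve together in your environment; Python itself must be provided separately):

```
ipython==7.8.0
jupyterlab==0.35.3
torch==1.2.0
torchvision==0.4.0
optuna==3.0.6
easydict==1.9
graphviz==0.10.1
```

Install with `pip install -r requirements.txt`.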
- Train the pre-trained models.
bash -c "./docker/run.sh ipython ./pre-train.py -- --target_model=ResNet32 --dataset=CIFAR100 --gpu_id=0 --save_dir=./pre-train/ResNet32/"
bash -c "./docker/run.sh ipython ./pre-train.py -- --target_model=ResNet110 --dataset=CIFAR100 --gpu_id=0 --save_dir=./pre-train/ResNet110/"
bash -c "./docker/run.sh ipython ./pre-train.py -- --target_model=WRN28_2 --dataset=CIFAR100 --gpu_id=0 --save_dir=./pre-train/WRN28_2/"
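The three pre-training commands above differ only in the model name, so they can be generated in a loop. This sketch prints the commands rather than executing them (it assumes the repository's `docker/run.sh` and `pre-train.py` layout shown above):

```shell
# Build one pre-training command per model (ResNet32, ResNet110, WRN28_2).
# The commands are printed; pipe the output to `sh` to actually run them.
cmds=$(for model in ResNet32 ResNet110 WRN28_2; do
  printf '%s\n' "./docker/run.sh ipython ./pre-train.py -- --target_model=${model} --dataset=CIFAR100 --gpu_id=0 --save_dir=./pre-train/${model}/"
done)
printf '%s\n' "$cmds"
```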
- Optimize the knowledge transfer graph in a parallel distributed environment by running train.py multiple times.
bash -c "./docker/run.sh ipython ./train.py -- --num_nodes=3 --target_model=ResNet32 --dataset=CIFAR100 --gpu_id=0 --num_trial=1500 --optuna_dir=./result/"
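One way to run the optimization in parallel is to start one `train.py` worker per GPU, all pointing at the same `--optuna_dir` so they contribute trials to the same study (assumption: the workers coordinate through shared Optuna storage under that directory; GPU ids 0-3 are illustrative). This sketch prints the per-GPU commands:

```shell
# Print one worker command per GPU; run each printed command (e.g. in
# separate terminals, or appended with '&') to launch workers in parallel
# against the shared ./result/ study.
cmds=$(for gpu in 0 1 2 3; do
  printf '%s\n' "./docker/run.sh ipython ./train.py -- --num_nodes=3 --target_model=ResNet32 --dataset=CIFAR100 --gpu_id=${gpu} --num_trial=1500 --optuna_dir=./result/"
done)
printf '%s\n' "$cmds"
```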
- View the results.
Open watch.ipynb in JupyterLab and run all cells.