Yet another Few-Shot ViT training framework.
Our code is mainly based on MetaBaseline; SUN-F (based on FEAT) and SUN-D (based on DeepEMD) are built on the corresponding codebases. We sincerely thank the authors for their contributions.
- PyTorch (>= 1.9)
- TorchVision
- timm (latest)
- einops
- tqdm
- numpy
- scikit-learn
- scipy
- argparse
- tensorboardx
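Assuming a fresh Python environment, the dependency list above could be captured in a requirements.txt along these lines (the PyTorch floor comes from the list above; all other entries are left unpinned, and argparse is omitted because it ships with Python's standard library):

```
torch>=1.9
torchvision
timm
einops
tqdm
numpy
scikit-learn
scipy
tensorboardX
```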
June 4th, 2022: we uploaded the meta-training and meta-tuning phases of SUN-D.
June 4th, 2022: we uploaded the teacher training code for the meta-training phase of SUN.
June 3rd, 2022: we uploaded the meta-tuning phase of SUN-M.
Currently we provide SUN-M (Visformer backbone) checkpoints trained on miniImageNet (5-way 1-shot and 5-way 5-shot); see Google Drive for details.
More pretrained checkpoints coming soon.
For example, to test on miniImageNet:
cd test_phase
Download the miniImageNet dataset from miniImageNet (courtesy of Spyros Gidaris), then unzip the package into materials/mini-imagenet so that materials/mini-imagenet contains the pickle files.
Download the corresponding checkpoints from Google Drive and store them in the test_phase/ directory.
cd test_phase
python test_few_shot.py --config configs/test_1_shot.yaml --shot 1 --gpu 1 # for 1-shot
python test_few_shot.py --config configs/test_5_shot.yaml --shot 5 --gpu 1 # for 5-shot
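For reference, an N-way K-shot test episode samples N classes, with K labeled support images and a batch of query images per class. The repo's actual sampler lives in the codebase above; the following is only an illustrative, self-contained sketch (all names here are hypothetical):

```python
import random

def sample_episode(data_by_class, n_way=5, k_shot=1, q_query=15, seed=None):
    """Sample one N-way K-shot episode.

    data_by_class maps a class label to a list of items (e.g. image paths).
    Returns (support, query), each a list of (item, episode_label) pairs,
    where episode_label is the class index within this episode (0..n_way-1).
    """
    rng = random.Random(seed)
    # Pick n_way distinct classes for this episode.
    classes = rng.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for ep_label, cls in enumerate(classes):
        # Draw k_shot support items plus q_query query items without overlap.
        items = rng.sample(data_by_class[cls], k_shot + q_query)
        support += [(x, ep_label) for x in items[:k_shot]]
        query += [(x, ep_label) for x in items[k_shot:]]
    return support, query
```

For a 5-way 1-shot episode with 15 queries per class, this yields 5 support and 75 query examples; test accuracy is then averaged over many such episodes.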
For 1-shot, you should obtain: test epoch 1: acc=67.80 +- 0.45 (%)
For 5-shot, you should obtain: test epoch 1: acc=83.25 +- 0.28 (%)
Test accuracy may vary slightly with different PyTorch/CUDA versions or different hardware.
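The "acc=67.80 +- 0.45 (%)" format above reports the mean episode accuracy with an interval over the sampled test episodes; a minimal sketch, assuming the ± value is the standard 95% confidence interval z * std / sqrt(n):

```python
import math

def mean_confidence_interval(accs, z=1.96):
    """Mean accuracy and (assumed) 95% confidence interval over episodes.

    accs is a list of per-episode accuracies in percent. z=1.96 corresponds
    to a 95% confidence level under a normal approximation.
    """
    n = len(accs)
    mean = sum(accs) / n
    var = sum((a - mean) ** 2 for a in accs) / n  # population variance
    ci = z * math.sqrt(var) / math.sqrt(n)
    return mean, ci
```

Because the interval shrinks with sqrt(n), small discrepancies against the reference numbers are expected when testing with a different number of episodes.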
@inproceedings{dong2022self,
title={Self-Promoted Supervision for Few-Shot Transformer},
author={Dong, Bowen and Zhou, Pan and Yan, Shuicheng and Zuo, Wangmeng},
booktitle={European Conference on Computer Vision (ECCV)},
year={2022}
}
- more checkpoints