Paper link: https://openreview.net/forum?id=JWOiYxMG92s
To install requirements:
pip install -r requirements.txt
Download the dataset and create base/val/novel splits:
CUB:
- Change directory to filelists/CUB/
- Run 'source ./download_CUB.sh'
To train the feature extractor used in the paper, run:
python train.py --dataset [miniImagenet/CUB]
To extract and save the features:
- Create an empty 'checkpoints' directory.
- Run: python save_plk.py --dataset [miniImagenet/CUB]
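The saved feature file can then be loaded for downstream use. A minimal sketch, assuming the file is a pickled dict mapping each class label to a list of feature vectors (the exact layout is defined by save_plk.py, so check it before relying on this):

```python
import pickle

import numpy as np

def load_features(path):
    """Load a pickled feature file into {label: (n_images, dim) array}.

    Assumes the file stores a dict of label -> list of feature vectors;
    save_plk.py is the authoritative definition of the format.
    """
    with open(path, "rb") as f:
        data = pickle.load(f)
    return {label: np.asarray(feats, dtype=np.float64)
            for label, feats in data.items()}
```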
The extracted features can be downloaded here: https://drive.google.com/drive/folders/1IjqOYLRH0OwkMZo8Tp4EG02ltDppi61n?usp=sharing
Our algorithm is built entirely on the extracted features: we perform data augmentation in the feature space, so the training procedure and the pretrained backbone are independent of our method. The pretrained model and the extracted features we use are the same as those of the reference work 'S2M2' (project page: https://github.com/nupurkmr9/S2M2_fewshot). You can reproduce our results simply by running evaluate_DC.py on the provided features, or apply our method on top of your own model.
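As a rough illustration of the feature-space augmentation mentioned above, the sketch below calibrates a novel class's statistics with its nearest base classes and samples new features from the calibrated Gaussian. This is a sketch only; the parameter names `k`, `alpha`, and the Tukey exponent `beta` are illustrative, and evaluate_DC.py is the authoritative implementation:

```python
import numpy as np

def calibrate_and_sample(support, base_means, base_covs,
                         k=2, alpha=0.21, beta=0.5, n_total=750,
                         rng=None):
    """Sample augmented features for one novel class.

    support:    (n_shot, d) support features (assumed non-negative,
                e.g. post-ReLU, so the Tukey power transform is valid)
    base_means: (n_base, d) per-base-class feature means
    base_covs:  (n_base, d, d) per-base-class feature covariances
    """
    rng = np.random.default_rng() if rng is None else rng
    support = np.power(support, beta)  # Tukey transform to reduce skew
    per_shot = n_total // len(support)
    samples = []
    for x in support:
        # transfer statistics from the k nearest base classes
        dists = np.linalg.norm(base_means - x, axis=1)
        nearest = np.argsort(dists)[:k]
        mean = (base_means[nearest].sum(axis=0) + x) / (k + 1)
        cov = base_covs[nearest].mean(axis=0) + alpha
        samples.append(rng.multivariate_normal(mean, cov, per_shot))
    return np.concatenate(samples)
```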
After downloading the extracted features, adjust the file paths in the code to point to them.
To evaluate our distribution calibration method, run:
python evaluate_DC.py
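At a high level, each evaluation episode trains a simple classifier on the support features plus the sampled calibrated features and scores it on the query set. The stand-in below uses a nearest-class-mean classifier in plain NumPy to keep the sketch dependency-free; it is not the classifier used by evaluate_DC.py:

```python
import numpy as np

def nearest_mean_accuracy(train_x, train_y, query_x, query_y):
    """Hypothetical stand-in classifier: label each query feature with
    the class whose mean (over support + sampled features) is closest."""
    classes = np.unique(train_y)
    means = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(query_x[:, None, :] - means[None, :, :], axis=2)
    preds = classes[np.argmin(dists, axis=1)]
    return float((preds == query_y).mean())
```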
Reference: 'Charting the Right Manifold: Manifold Mixup for Few-shot Learning' (the S2M2 paper).