This repository contains the code for our paper "Conditional Prototype Rectification Prompt Learning". To run the code, you need to install Dassl and a basic PyTorch environment.
We suggest putting all datasets under the same folder (say `$DATA`) to ease management, and following the instructions below to organize the datasets so that the source code does not need to be modified. The file structure looks like
```
$DATA/
|–– imagenet/
|–– caltech-101/
|–– oxford_pets/
|–– stanford_cars/
```
If you already have some datasets installed elsewhere, you can create symbolic links in `$DATA/dataset_name` that point to the original data to avoid duplicate downloads.
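As a convenience, the symlink step above can be scripted. The sketch below is a minimal helper, assuming you pass in your own paths; `link_dataset` and both path arguments are hypothetical names, not part of this repo.

```python
import os

def link_dataset(original_path: str, data_root: str, name: str) -> str:
    """Create $DATA/<name> as a symlink to an existing dataset copy.

    `original_path` (where the data already lives) and `data_root`
    (your $DATA folder) are placeholders; substitute your own locations.
    """
    target = os.path.join(data_root, name)
    if not os.path.exists(target):
        # Link to the absolute path so the symlink survives cwd changes.
        os.symlink(os.path.abspath(original_path), target)
    return target
```

For example, `link_dataset("/mnt/storage/imagenet", os.environ["DATA"], "imagenet")` would expose an existing ImageNet copy under `$DATA/imagenet` without re-downloading it.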
The instructions to prepare each dataset are detailed below. To ensure reproducibility and fair comparison for future work, we use CoOp-style train/val/test splits for all datasets except ImageNet, where the validation set is used as the test set.
### ImageNet

- Create a folder named `imagenet/` under `$DATA`.
- Create `images/` under `imagenet/`.
- Download the dataset from the official website and extract the training and validation sets to `$DATA/imagenet/images`. The directory structure should look like

```
imagenet/
|–– images/
|   |–– train/ # contains 1,000 folders like n01440764, n01443537, etc.
|   |–– val/
```

- If you had downloaded the ImageNet dataset before, you can create symbolic links to map the training and validation sets to `$DATA/imagenet/images`.
- Download `classnames.txt` to `$DATA/imagenet/` from this link. The class names are copied from CLIP.
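The official validation tarball extracts to a flat folder of JPEGs rather than the per-class `val/` layout shown above. The sketch below sorts such a flat folder into per-wnid subfolders; it assumes you supply your own filename-to-wnid mapping (e.g. built from the devkit's ground-truth file) — `organize_val` and `wnid_of` are hypothetical names, not shipped with this repo.

```python
import os
import shutil

def organize_val(val_dir: str, wnid_of: dict) -> None:
    """Sort a flat val/ directory into per-class wnid folders.

    `wnid_of` maps each image filename to its wnid; building this
    mapping is left to you (it is an assumption of this sketch).
    """
    for fname, wnid in wnid_of.items():
        class_dir = os.path.join(val_dir, wnid)
        os.makedirs(class_dir, exist_ok=True)
        # Move each image into its class folder, creating it on demand.
        shutil.move(os.path.join(val_dir, fname),
                    os.path.join(class_dir, fname))
```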
### Caltech-101

- Create a folder named `caltech-101/` under `$DATA`.
- Download `101_ObjectCategories.tar.gz` from http://www.vision.caltech.edu/Image_Datasets/Caltech101/101_ObjectCategories.tar.gz and extract the file under `$DATA/caltech-101`.
- Download `split_zhou_Caltech101.json` from this link and put it under `$DATA/caltech-101`.

The directory structure should look like

```
caltech-101/
|–– 101_ObjectCategories/
|–– split_zhou_Caltech101.json
```
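Since every dataset below follows the same "the directory structure should look like" pattern, a small sanity check can save a failed training run. This is a minimal sketch; `check_layout` is a hypothetical helper, not part of the repo.

```python
import os

def check_layout(root: str, expected: list) -> list:
    """Return the expected entries that are missing under `root`.

    `expected` lists names relative to the dataset folder, e.g. for
    caltech-101: ["101_ObjectCategories", "split_zhou_Caltech101.json"].
    An empty return value means the layout matches.
    """
    return [e for e in expected
            if not os.path.exists(os.path.join(root, e))]
```

For example, `check_layout(os.path.join(data, "caltech-101"), ["101_ObjectCategories", "split_zhou_Caltech101.json"])` returns `[]` when the section above was followed correctly.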
### OxfordPets

- Create a folder named `oxford_pets/` under `$DATA`.
- Download the images from https://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz.
- Download the annotations from https://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz.
- Download `split_zhou_OxfordPets.json` from this link.

The directory structure should look like

```
oxford_pets/
|–– images/
|–– annotations/
|–– split_zhou_OxfordPets.json
```
### StanfordCars

- Create a folder named `stanford_cars/` under `$DATA`.
- Download the train images from http://ai.stanford.edu/~jkrause/car196/cars_train.tgz.
- Download the test images from http://ai.stanford.edu/~jkrause/car196/cars_test.tgz.
- Download the train labels from https://ai.stanford.edu/~jkrause/cars/car_devkit.tgz.
- Download the test labels from http://ai.stanford.edu/~jkrause/car196/cars_test_annos_withlabels.mat.
- Download `split_zhou_StanfordCars.json` from this link.

The directory structure should look like

```
stanford_cars/
|–– cars_test/
|–– cars_test_annos_withlabels.mat
|–– cars_train/
|–– devkit/
|–– split_zhou_StanfordCars.json
```
### OxfordFlowers

- Create a folder named `oxford_flowers/` under `$DATA`.
- Download the images and labels from https://www.robots.ox.ac.uk/~vgg/data/flowers/102/102flowers.tgz and https://www.robots.ox.ac.uk/~vgg/data/flowers/102/imagelabels.mat respectively.
- Download `cat_to_name.json` from here.
- Download `split_zhou_OxfordFlowers.json` from here.

The directory structure should look like

```
oxford_flowers/
|–– cat_to_name.json
|–– imagelabels.mat
|–– jpg/
|–– split_zhou_OxfordFlowers.json
```
### Food101

- Download the dataset from https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/ and extract the file `food-101.tar.gz` under `$DATA`, resulting in a folder named `$DATA/food-101/`.
- Download `split_zhou_Food101.json` from here.

The directory structure should look like

```
food-101/
|–– images/
|–– license_agreement.txt
|–– meta/
|–– README.txt
|–– split_zhou_Food101.json
```
### FGVCAircraft

- Download the data from https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/archives/fgvc-aircraft-2013b.tar.gz.
- Extract `fgvc-aircraft-2013b.tar.gz` and keep only `data/`.
- Move `data/` to `$DATA` and rename the folder to `fgvc_aircraft/`.

The directory structure should look like

```
fgvc_aircraft/
|–– images/
|–– ... # a bunch of .txt files
```
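The move-and-rename step above can be done in one call. A minimal sketch, assuming `extracted_root` is wherever you unpacked the tarball; `install_fgvc` is a hypothetical helper name.

```python
import os
import shutil

def install_fgvc(extracted_root: str, data_root: str) -> str:
    """Move the extracted fgvc-aircraft-2013b/data/ folder to
    $DATA/fgvc_aircraft/ in a single step. Both paths are
    placeholders for your own locations.
    """
    src = os.path.join(extracted_root, "data")
    dst = os.path.join(data_root, "fgvc_aircraft")
    # shutil.move both relocates and renames the folder.
    shutil.move(src, dst)
    return dst
```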
### SUN397

- Create a folder named `sun397/` under `$DATA`.
- Download the images from http://vision.princeton.edu/projects/2010/SUN/SUN397.tar.gz.
- Download the partitions from https://vision.princeton.edu/projects/2010/SUN/download/Partitions.zip.
- Extract these files under `$DATA/sun397/`.
- Download `split_zhou_SUN397.json` from this link.

The directory structure should look like

```
sun397/
|–– SUN397/
|–– split_zhou_SUN397.json
|–– ... # a bunch of .txt files
```
### DTD

- Download the dataset from https://www.robots.ox.ac.uk/~vgg/data/dtd/download/dtd-r1.0.1.tar.gz and extract it to `$DATA`. This should lead to `$DATA/dtd/`.
- Download `split_zhou_DescribableTextures.json` from this link.

The directory structure should look like

```
dtd/
|–– images/
|–– imdb/
|–– labels/
|–– split_zhou_DescribableTextures.json
```
### EuroSAT

- Create a folder named `eurosat/` under `$DATA`.
- Download the dataset from http://madm.dfki.de/files/sentinel/EuroSAT.zip and extract it to `$DATA/eurosat/`.
- Download `split_zhou_EuroSAT.json` from here.

The directory structure should look like

```
eurosat/
|–– 2750/
|–– split_zhou_EuroSAT.json
```
### UCF101

- Create a folder named `ucf101/` under `$DATA`.
- Download the zip file `UCF-101-midframes.zip` from here and extract it to `$DATA/ucf101/`. This zip file contains the extracted middle video frames.
- Download `split_zhou_UCF101.json` from this link.

The directory structure should look like

```
ucf101/
|–– UCF-101-midframes/
|–– split_zhou_UCF101.json
```
For few-shot learning tasks, set `base2new` to `False` in `main.py` and set the backbone to `RN50` in the yaml config, then run a command of the following form:

```
python main.py --config ./configs/fgvc.yaml --shots 1 --model CPR --subsample all
```

For base-to-new generalization tasks, set `base2new` to `True` in `main.py` and set the backbone to `ViT` in the yaml config, then run:

```
python main.py --config ./configs/fgvc.yaml --shots 16 --model CPR --subsample base
```
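Few-shot results are typically reported across several shot counts. The sketch below only builds and prints the command lines for such a sweep, so you can inspect them before executing (e.g. via `subprocess.run`); `make_cmd` is a hypothetical helper, not part of the repo.

```python
def make_cmd(shots, config="./configs/fgvc.yaml", subsample="all"):
    """Build one training command as an argument list.

    Defaults mirror the few-shot example above; pass subsample="base"
    for the base-to-new setting.
    """
    return ["python", "main.py", "--config", config,
            "--shots", str(shots), "--model", "CPR",
            "--subsample", subsample]

# Print the full few-shot sweep for inspection.
for s in [1, 2, 4, 8, 16]:
    print(" ".join(make_cmd(s)))
```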