LPE

Code for the WACV 2023 paper: "Semantic Guided Latent Parts Embedding for Few-Shot Learning"


Semantic Guided Latent Parts Embedding for Few-Shot Learning
(WACV 2023)

Fengyuan Yang, Ruiping Wang, Xilin Chen

[Paper link], [Supp link]

1. Requirements

  • Python 3.7
  • PyTorch 1.9.0

2. Datasets

  • Original datasets

    • All 4 datasets are the same as in previous works (e.g., DeepEMD, renet) and can be downloaded from their links: miniImagenet, tieredImageNet, CIFAR-FS, CUB-FS.
    • Download and extract them into a folder of your choice, say /data/FSLDatasets/LPE_dataset, then remember to set args.data_dir to this folder when running the code later.
  • Semantic embeddings

    • The additional semantic embeddings of these 4 datasets used by our method can be downloaded here.
    • Download and put them in the corresponding dataset folder (e.g., put miniimagenet/wnid2CLIPemb_zscore.npy at /data/FSLDatasets/LPE_dataset/miniimagenet/wnid2CLIPemb_zscore.npy), then remember to set args.semantic_path to the location of this file and args.sem_dim accordingly when running the code later.
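After placing the embedding file, you can sanity-check it with NumPy. The exact structure of wnid2CLIPemb_zscore.npy is an assumption here (a pickled dict mapping WordNet IDs to z-scored CLIP embedding vectors, as the filename suggests); the dictionary below is a stand-in created purely for illustration.

```python
import numpy as np

# Stand-in for the real file: a dict mapping WordNet IDs to z-scored
# CLIP embeddings (the assumed structure, judging by the filename).
fake = {
    "n01532829": np.random.randn(512).astype(np.float32),
    "n01558993": np.random.randn(512).astype(np.float32),
}
np.save("wnid2CLIPemb_zscore.npy", fake)

# .npy files holding a dict are pickled, so allow_pickle=True is required,
# and .item() unwraps the 0-d object array back into the dict.
emb = np.load("wnid2CLIPemb_zscore.npy", allow_pickle=True).item()
sem_dim = next(iter(emb.values())).shape[0]
print(len(emb), sem_dim)  # class count, and the value to pass as args.sem_dim
```

The embedding dimensionality recovered this way is what args.sem_dim should be set to.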

3. Usage

Our training and testing scripts are all in scripts/train.sh, and the corresponding output logs can be found in that folder as well.
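The sections above refer to args.data_dir, args.semantic_path, and args.sem_dim; the real argument parser lives in the repository's training code, but a minimal sketch of how these flags fit together might look like this (flag names and the sem_dim default are assumptions, not the repository's exact interface):

```python
import argparse

def build_parser():
    # Hypothetical parser mirroring the arguments mentioned in this README;
    # the actual definitions are in the repository's training code.
    p = argparse.ArgumentParser(description="LPE few-shot training (sketch)")
    p.add_argument("--data_dir", default="/data/FSLDatasets/LPE_dataset",
                   help="root folder holding the extracted datasets")
    p.add_argument("--semantic_path",
                   default="/data/FSLDatasets/LPE_dataset/miniimagenet/"
                           "wnid2CLIPemb_zscore.npy",
                   help="path to the semantic embedding file")
    p.add_argument("--sem_dim", type=int, default=512,
                   help="dimensionality of the semantic embeddings")
    return p

args = build_parser().parse_args([])  # parse defaults for illustration
print(args.data_dir, args.sem_dim)
```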

4. Results

The 1-shot and 5-shot classification results can be found in the corresponding output logs.

Citation

If you find our paper or codes useful, please consider citing our paper:

@InProceedings{Yang_2023_WACV,
    author    = {Yang, Fengyuan and Wang, Ruiping and Chen, Xilin},
    title     = {Semantic Guided Latent Parts Embedding for Few-Shot Learning},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {5447-5457}
}

Acknowledgments

Our code is based on renet and DeepEMD, and we really appreciate their work.

Further

If you have any questions, feel free to contact me at fengyuan.yang@vipl.ict.ac.cn.