ViM: Out-Of-Distribution with Virtual-logit Matching

Official code for the CVPR 2022 paper ViM: Out-Of-Distribution with Virtual-logit Matching.

🌊 Project Page | 🦢 Paper

Presentation video: ViM.Presentation.mp4

Datasets

Dataset sources can be downloaded here.

  • ImageNet. The ILSVRC 2012 dataset serves as the in-distribution (ID) dataset. The training subset we use is this file.
  • OpenImage-O. The OpenImage-O dataset is a subset of the OpenImage-V3 testing set. The file list is here. Please refer to the ViM paper for details of the dataset construction.
  • Texture. We exclude four classes that coincide with ImageNet classes. The file list used in the paper is here.
  • iNaturalist. Follow the instructions in the link to prepare the iNaturalist OOD dataset.
  • ImageNet-O. Follow the guide to download the ImageNet-O OOD dataset.
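
After downloading, link the datasets into a local data/ directory so that the commands below can find them: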
mkdir data
cd data
ln -s /path/to/imagenet imagenet
ln -s /path/to/openimage_o openimage_o
ln -s /path/to/texture texture
ln -s /path/to/inaturalist inaturalist
ln -s /path/to/imagenet_o imagenet_o
cd ..

Pretrained Model Preparation

ViT

  1. install mmpretrain
  2. download checkpoint
    mkdir checkpoints
    cd checkpoints
    wget https://download.openmmlab.com/mmclassification/v0/vit/finetune/vit-base-p16_in21k-pre-3rdparty_ft-64xb64_in1k-384_20210928-98e8652b.pth
    cd ..
  3. extract features
    ./extract_feature_vit.py data/imagenet outputs/vit_imagenet_val.pkl --img_list datalists/imagenet2012_val_list.txt
    ./extract_feature_vit.py data/imagenet outputs/vit_train_200k.pkl --img_list datalists/imagenet2012_train_random_200k.txt
    ./extract_feature_vit.py data/openimage_o outputs/vit_openimage_o.pkl --img_list datalists/openimage_o.txt
    ./extract_feature_vit.py data/texture outputs/vit_texture.pkl --img_list datalists/texture.txt
    ./extract_feature_vit.py data/inaturalist outputs/vit_inaturalist.pkl
    ./extract_feature_vit.py data/imagenet_o outputs/vit_imagenet_o.pkl
    ./extract_feature_vit.py data/cifar10 outputs/vit_cifar10_train.pkl --img_list datalists/cifar10_train.txt
    ./extract_feature_vit.py data/cifar10 outputs/vit_cifar10_test.pkl --img_list datalists/cifar10_test.txt
  4. extract the weight w and bias b of the fc layer (the two positional arguments a b in the command below are placeholders)
    ./extract_feature_vit.py a b --fc_save_path outputs/vit_fc.pkl
  5. evaluation
    ./benchmark.py outputs/vit_fc.pkl outputs/vit_train_200k.pkl outputs/vit_imagenet_val.pkl outputs/vit_openimage_o.pkl outputs/vit_texture.pkl outputs/vit_inaturalist.pkl outputs/vit_imagenet_o.pkl
    ./benchmark.py outputs/vit_fc.pkl outputs/vit_cifar10_train.pkl outputs/vit_cifar10_test.pkl outputs/vit_openimage_o.pkl outputs/vit_texture.pkl outputs/vit_inaturalist.pkl outputs/vit_imagenet_o.pkl
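
For reference, the ViM score that benchmark.py reports can be sketched in a few lines of NumPy. This is a simplified illustration rather than this repo's implementation: it assumes each features .pkl stores an (N, D) float array and vit_fc.pkl stores the (w, b) pair, and the principal dimension below is only an illustrative value.

# Minimal ViM scoring sketch; serialization assumptions as noted above.
import pickle
import numpy as np
from scipy.special import logsumexp

def load(path):
    with open(path, 'rb') as f:
        return pickle.load(f)

w, b = load('outputs/vit_fc.pkl')                 # fc weight (C, D) and bias (C,)
feat_train = load('outputs/vit_train_200k.pkl')   # ID training features (N, D)
feat_test = load('outputs/vit_imagenet_val.pkl')  # features to score (M, D)

u = -np.linalg.pinv(w) @ b                 # origin shift used by the paper
centered = feat_train - u
cov = centered.T @ centered / len(centered)
eig_vals, eig_vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
d_principal = 512                          # illustrative value; must be < D
R = eig_vecs[:, :-d_principal]             # residual subspace (smallest variance)

# alpha matches the scale of the virtual logit to that of the real logits
logit_train = feat_train @ w.T + b
alpha = logit_train.max(axis=-1).sum() / np.linalg.norm(centered @ R, axis=-1).sum()

logit_test = feat_test @ w.T + b
vlogit = alpha * np.linalg.norm((feat_test - u) @ R, axis=-1)
score = logsumexp(logit_test, axis=-1) - vlogit   # higher means more in-distribution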

BiT

  1. download the checkpoint (a sketch for inspecting the .npz file follows this list)
    mkdir checkpoints
    cd checkpoints
    wget https://storage.googleapis.com/bit_models/BiT-S-R101x1.npz
    cd ..
  2. extract features
    ./extract_feature_bit.py data/imagenet outputs/bit_imagenet_val.pkl --img_list datalists/imagenet2012_val_list.txt
    ./extract_feature_bit.py data/imagenet outputs/bit_train_200k.pkl --img_list datalists/imagenet2012_train_random_200k.txt
    ./extract_feature_bit.py data/openimage_o outputs/bit_openimage_o.pkl --img_list datalists/openimage_o.txt
    ./extract_feature_bit.py data/texture outputs/bit_texture.pkl --img_list datalists/texture.txt
    ./extract_feature_bit.py data/inaturalist outputs/bit_inaturalist.pkl
    ./extract_feature_bit.py data/imagenet_o outputs/bit_imagenet_o.pkl
  3. extract the weight w and bias b of the fc layer
    ./extract_feature_bit.py a b --fc_save_path outputs/bit_fc.pkl
  4. evaluation
    ./benchmark.py outputs/bit_fc.pkl outputs/bit_train_200k.pkl outputs/bit_imagenet_val.pkl outputs/bit_openimage_o.pkl outputs/bit_texture.pkl outputs/bit_inaturalist.pkl outputs/bit_imagenet_o.pkl
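
The downloaded BiT checkpoint is a plain NumPy .npz archive of named weight arrays, so it can be inspected without the model code. A minimal sketch (it simply lists whatever tensors the archive contains):

# Print the first few tensors stored in the BiT-S R101x1 checkpoint.
import numpy as np

ckpt = np.load('checkpoints/BiT-S-R101x1.npz')
for name in sorted(ckpt.files)[:10]:
    print(name, ckpt[name].shape)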

RepVGG, Res50d, Swin, DeiT

  1. extract features, using one of repvgg_b3, resnet50d, swin, or deit as the model
    # choose one of them
    export MODEL=repvgg_b3 && export NAME=repvgg
    export MODEL=resnet50d && export NAME=resnet50d
    export MODEL=swin_base_patch4_window7_224 && export NAME=swin
    export MODEL=deit_base_patch16_224 && export NAME=deit
    
    ./extract_feature_timm.py data/imagenet outputs/${NAME}_imagenet_val.pkl ${MODEL} --img_list datalists/imagenet2012_val_list.txt
    ./extract_feature_timm.py data/imagenet outputs/${NAME}_train_200k.pkl ${MODEL} --img_list datalists/imagenet2012_train_random_200k.txt
    ./extract_feature_timm.py data/openimage_o outputs/${NAME}_openimage_o.pkl ${MODEL} --img_list datalists/openimage_o.txt
    ./extract_feature_timm.py data/texture outputs/${NAME}_texture.pkl ${MODEL} --img_list datalists/texture.txt
    ./extract_feature_timm.py data/inaturalist outputs/${NAME}_inaturalist.pkl ${MODEL}
    ./extract_feature_timm.py data/imagenet_o outputs/${NAME}_imagenet_o.pkl ${MODEL}
  2. extract the weight w and bias b of the fc layer
    ./extract_feature_timm.py a b ${MODEL} --fc_save_path outputs/${NAME}_fc.pkl
  3. evaluation
    ./benchmark.py outputs/${NAME}_fc.pkl outputs/${NAME}_train_200k.pkl outputs/${NAME}_imagenet_val.pkl outputs/${NAME}_openimage_o.pkl outputs/${NAME}_texture.pkl outputs/${NAME}_inaturalist.pkl outputs/${NAME}_imagenet_o.pkl
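
Before running benchmark.py, it can help to sanity-check that the extracted pickles line up with the fc parameters. A small sketch under the same serialization assumptions as above:

# Check that feature dimensions match the fc layer (assumes (w, b) and
# (N, D) array pickles; adjust if your pickles store dicts instead).
import pickle
import numpy as np

def load(path):
    with open(path, 'rb') as f:
        return pickle.load(f)

name = 'swin'                                 # or repvgg, resnet50d, deit
w, b = load(f'outputs/{name}_fc.pkl')
feats = np.asarray(load(f'outputs/{name}_imagenet_val.pkl'))

assert w.shape[0] == b.shape[0] == 1000, 'expected 1000 ImageNet classes'
assert feats.shape[1] == w.shape[1], 'feature dim must match fc input dim'
print(f'{name}: {feats.shape[0]} features of dim {feats.shape[1]} look consistent')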

Note: To reproduce the ODIN baseline, please refer to this repo.

Citation

@inproceedings{haoqi2022vim,
title = {ViM: Out-Of-Distribution with Virtual-logit Matching},
author = {Wang, Haoqi and Li, Zhizhong and Feng, Litong and Zhang, Wayne},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2022}
}

Acknowledgement

Part of the code is modified from the MOS repo.

Related Project

Get the Best of Both Worlds: Improving Accuracy and Transferability by Grassmann Class Representation (ICCV 2023)