
SEED: Self-supervised Distillation for Visual Representation

This is an unofficial PyTorch implementation of SEED (ICLR 2021).

We implement SEED on top of the official MoCo codebase.

Implementation Results

The teacher is MoCo-v2 (top-1: 67.6) pretrained on ImageNet-1k with a ResNet-50 backbone; we distill it into a ResNet-18 student. The results below show that our implementation matches, and slightly exceeds, the officially reported numbers.

| SEED | Top-1 acc (%) | Top-5 acc (%) |
| --- | --- | --- |
| Official results (hidden_dim=512) | 57.60 | 81.80 |
| **Ours** (hidden_dim=512) | 58.03 | 82.44 |
| **Ours** (hidden_dim=2048) | 60.32 | 83.50 |
| **Ours** (symmetry, hidden_dim=2048) | 61.27 | 84.06 |
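
For reference, the sketch below outlines the SEED distillation objective as we understand it from the paper: the teacher and student embed the same image, both embeddings are scored against a maintained queue of teacher features (plus the teacher's own embedding for that image), and the student minimizes the cross-entropy between the two softmax distributions. All names here (`seed_distillation_loss`, `queue`, `temp_s`, `temp_t`) are our own illustration, not identifiers from this repository.

```python
import torch
import torch.nn.functional as F

def seed_distillation_loss(student_feat, teacher_feat, queue, temp_s=0.2, temp_t=0.07):
    """Minimal sketch of the SEED objective (illustrative names, not this repo's API).

    student_feat: (B, D) L2-normalized student embeddings
    teacher_feat: (B, D) L2-normalized teacher embeddings
    queue:        (K, D) L2-normalized teacher features kept in a memory queue
    """
    # Similarities of each embedding to the teacher's own feature (the "positive")
    # and to every feature stored in the queue.
    pos_s = (student_feat * teacher_feat).sum(dim=1, keepdim=True)   # (B, 1)
    pos_t = (teacher_feat * teacher_feat).sum(dim=1, keepdim=True)   # (B, 1), equals 1 after L2-norm
    logits_s = torch.cat([pos_s, student_feat @ queue.t()], dim=1) / temp_s  # (B, K+1)
    logits_t = torch.cat([pos_t, teacher_feat @ queue.t()], dim=1) / temp_t  # (B, K+1)

    # The student matches the teacher's softened similarity distribution (cross-entropy).
    p_t = F.softmax(logits_t, dim=1)
    log_p_s = F.log_softmax(logits_s, dim=1)
    return -(p_t * log_p_s).sum(dim=1).mean()
```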

The hidden dimension (`hidden_dim`) of the projection head can be modified via:

```python
self.encoder_q.fc = nn.Sequential(nn.Linear(dim_smlp, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, dim))
```
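
For context, in a MoCo-v2-style builder this replacement sits where the backbone's final fc layer is turned into an MLP projection head. The sketch below shows one plausible surrounding setup; apart from the `nn.Sequential(...)` line itself, the variable names and values here are assumptions for illustration.

```python
import torch.nn as nn
import torchvision.models as models

dim = 128          # output embedding dimension (MoCo-v2 default; an assumption here)
hidden_dim = 2048  # the hidden width varied in the results table above

# Hypothetical student encoder; this repo distills into ResNet-18.
encoder_q = models.resnet18(num_classes=dim)
dim_mlp = encoder_q.fc.weight.shape[1]  # input width of the original fc layer (512 for ResNet-18)

# Replace the fc layer with a two-layer MLP head; enlarging hidden_dim is what
# produces the hidden_dim=2048 rows in the results table.
encoder_q.fc = nn.Sequential(nn.Linear(dim_mlp, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, dim))
```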

Start Training

Install PyTorch and prepare the ImageNet dataset following the official PyTorch ImageNet training example.

This repo aims to keep modifications to MoCo minimal. Start training with:

```sh
sh train.sh
```
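
After training, the distilled ResNet-18 student can be loaded for downstream linear evaluation in the same way MoCo checkpoints are loaded. The snippet below is a hypothetical sketch following the conventions of MoCo's official `main_lincls.py`; the checkpoint filename and the `module.encoder_q.` key prefix are assumptions about this repo, not documented facts.

```python
import torch
import torchvision.models as models

model = models.resnet18()

# Hypothetical checkpoint path; the state-dict layout assumed here follows MoCo's conventions.
checkpoint = torch.load('checkpoint_distilled_r18.pth.tar', map_location='cpu')
state_dict = checkpoint['state_dict']

# Keep only the query-encoder backbone weights and drop the projection head.
for k in list(state_dict.keys()):
    if k.startswith('module.encoder_q.') and not k.startswith('module.encoder_q.fc'):
        state_dict[k[len('module.encoder_q.'):]] = state_dict[k]
    del state_dict[k]

msg = model.load_state_dict(state_dict, strict=False)
print(msg.missing_keys)  # ideally only 'fc.weight' and 'fc.bias' remain missing
```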

To Do

More student architectures.

Citation

```
@inproceedings{fang2021seed,
  author    = {Zhiyuan Fang and Jianfeng Wang and Lijuan Wang and Lei Zhang and Yezhou Yang and Zicheng Liu},
  title     = {SEED: Self-supervised Distillation for Visual Representation},
  booktitle = {ICLR},
  year      = {2021},
}
```