SparK

[ICLR'23 Spotlight] The first successful BERT/MAE-style pre-training on any convolutional network; PyTorch implementation of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling".

This is the official implementation of the ICLR paper Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling. We've tried our best to keep the codebase clean, short, easy to read, and state-of-the-art, relying only on minimal dependencies.

🔥 News

📺 Video demo

[Video: spark-demo.mp4]

What's new here?

🔥 On ResNets, generative pre-training surpasses contrastive learning for the first time.

🔥 ConvNeXt gains more from pre-training than Swin Transformer, by up to +3.5 points.

🔥 Larger models benefit more from SparK pre-training, showing scaling behavior.

🔥 The pre-trained model can make reasonable predictions.

See our paper for more analysis, discussions, and evaluations.

Catalog

  • Pre-training code
  • Fine-tuning code
  • Colab visualization playground
  • Weights & visualization playground on Huggingface
  • Weights in timm

ImageNet-1k results and pre-trained network weights

Note: for network definitions, we directly use timm.models.ResNet and the official ConvNeXt implementation.

| arch.      | acc@1 | #params | FLOPs | model |
|:-----------|:-----:|:-------:|:-----:|:-----:|
| ResNet50   | 80.6  | 26M     | 4.1G  | drive |
| ResNet101  | 82.2  | 45M     | 7.9G  | drive |
| ResNet152  | 82.7  | 60M     | 11.6G | drive |
| ResNet200  | 83.1  | 65M     | 15.1G | drive |
| ConvNeXt-S | 84.1  | 50M     | 8.7G  | drive |
| ConvNeXt-B | 84.8  | 89M     | 15.4G | drive |
| ConvNeXt-L | 85.4  | 198M    | 34.4G | drive |
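
The weights above can be loaded into a standard timm backbone for evaluation or fine-tuning. Below is a minimal sketch, not the official loading code: the checkpoint filename and the state-dict layout are assumptions, so check the repository's fine-tuning code for the exact logic.

```python
# Minimal sketch: load a SparK pre-trained checkpoint (downloaded from the
# links above) into a timm ResNet-50. The filename and the state-dict layout
# here are assumptions; see the fine-tuning code for the exact loading logic.
import torch
import timm

model = timm.create_model('resnet50', num_classes=1000)
ckpt = torch.load('resnet50_spark_pretrained.pth', map_location='cpu')
state_dict = ckpt.get('module', ckpt)  # unwrap a possible container key (assumption)
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print('missing keys:', missing)        # e.g., a classification head trained from scratch
print('unexpected keys:', unexpected)
```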

Installation

For pre-training and fine-tuning on ImageNet-1k, we highly recommend torch==1.10.0, torchvision==0.11.1, and timm==0.5.4.

Check INSTALL.md to install all dependencies for pre-training and ImageNet fine-tuning.
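
To double-check your environment, you can print the installed versions (a convenience sketch, not part of the official scripts):

```python
# Verify that the recommended library versions are installed
# (a convenience sketch, not part of the official repository scripts).
import torch
import torchvision
import timm

print(torch.__version__)        # recommended: 1.10.0
print(torchvision.__version__)  # recommended: 0.11.1
print(timm.__version__)         # recommended: 0.5.4
```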

Pre-training

See pretrain/ to pre-train models on ImageNet-1k.
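
For intuition, the sketch below illustrates BERT/MAE-style patch masking on images. It is heavily simplified: the official code masks hierarchically and uses sparse convolutions to skip masked regions, and the patch size and mask ratio used here are illustrative assumptions.

```python
# Toy illustration of BERT/MAE-style patch masking (NOT the official
# implementation, which masks hierarchically with sparse convolutions).
import torch

def random_patch_mask(images: torch.Tensor, patch_size: int = 32, mask_ratio: float = 0.6):
    """Zero out a random subset of non-overlapping patches in each image."""
    B, C, H, W = images.shape
    gh, gw = H // patch_size, W // patch_size
    num_patches = gh * gw
    num_keep = int(num_patches * (1.0 - mask_ratio))
    noise = torch.rand(B, num_patches)
    ranks = noise.argsort(dim=1).argsort(dim=1)  # per-image rank of each patch
    keep = (ranks < num_keep).float()            # 1 = visible, 0 = masked
    mask = keep.view(B, 1, gh, gw)
    mask = mask.repeat_interleave(patch_size, dim=2).repeat_interleave(patch_size, dim=3)
    return images * mask, mask

imgs = torch.randn(2, 3, 224, 224)
masked_imgs, mask = random_patch_mask(imgs)
print(masked_imgs.shape, mask.mean().item())  # mean = fraction of visible pixels
```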

Fine-tuning

Acknowledgement

We referred to several useful codebases while building this project.

License

This project is under the MIT license. See LICENSE for more details.

Citation

If you find this project useful, please give us a star ⭐ or cite us in your work 📖:

@Article{tian2023designing,
  author  = {Keyu Tian and Yi Jiang and Qishuai Diao and Chen Lin and Liwei Wang and Zehuan Yuan},
  title   = {Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling},
  journal = {arXiv:2301.03580},
  year    = {2023},
}