This is the official implementation of the ICLR paper Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling. We have tried our best to make the codebase clean, short, easy to read, state-of-the-art, and reliant on only minimal dependencies.
- Our talk on TechBeat (将门创投) is scheduled for Mar. 16th (UTC+0 12am) as well! [📹 Recorded Video]
- We are honored to be invited by Synced ("机器之心机动组 视频号" on WeChat) to give a talk about SparK on Feb. 27th (UTC+0 11am, UTC+8 7pm); welcome! [📹 Recorded Video]
- This work was accepted to ICLR 2023 as a Spotlight (notable-top-25%).
- Other articles: [Synced] [DeepAI] [TheGradient] [Bytedance] [CVers] [QbitAI (量子位)] [BAAI (智源)] [机器之心机动组] [极市平台] [ReadPaper笔记]
Demo video: spark-demo.mp4
See our paper for more analysis, discussions, and evaluations.
- Pre-training code
- Fine-tuning code
- Colab visualization playground
- Weights & visualization playground on Hugging Face
- Weights in timm
Note: for network definitions, we directly use `timm.models.ResNet` and the official ConvNeXt implementation.
| arch. | acc@1 | #params | FLOPs | model |
|:---:|:---:|:---:|:---:|:---:|
| ResNet50 | 80.6 | 26M | 4.1G | drive |
| ResNet101 | 82.2 | 45M | 7.9G | drive |
| ResNet152 | 82.7 | 60M | 11.6G | drive |
| ResNet200 | 83.1 | 65M | 15.1G | drive |
| ConvNeXt-S | 84.1 | 50M | 8.7G | drive |
| ConvNeXt-B | 84.8 | 89M | 15.4G | drive |
| ConvNeXt-L | 85.4 | 198M | 34.4G | drive |
For pre-training and fine-tuning on ImageNet-1k, we highly recommend using torch==1.10.0, torchvision==0.11.1, and timm==0.5.4.
Check INSTALL.md to install all dependencies for pre-training and ImageNet fine-tuning.
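For reference, those version pins could be installed in one line as below; this is only a sketch of a CPU-only setup, and INSTALL.md remains the authoritative guide (CUDA builds need the matching CUDA-specific wheels for your system).

```shell
pip install torch==1.10.0 torchvision==0.11.1 timm==0.5.4
```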
See pretrain/ to pre-train models on ImageNet-1k.
- All models on ImageNet: check downstream_imagenet/ for subsequent instructions.
- ResNets on COCO: see downstream_d2/ for details.
- ConvNeXts on COCO: see downstream_mmdet/ for details.
We referred to these useful codebases:
This project is under the MIT license. See LICENSE for more details.
If you find this project useful, please kindly give us a star ⭐, or cite us in your work 📖:
```bibtex
@Article{tian2023designing,
  author  = {Keyu Tian and Yi Jiang and Qishuai Diao and Chen Lin and Liwei Wang and Zehuan Yuan},
  title   = {Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling},
  journal = {arXiv:2301.03580},
  year    = {2023},
}
```