Searching for BurgerFormer with Micro-Meso-Macro Space Design (ICML 2022)

This is the official PyTorch implementation of "Searching for BurgerFormer with Micro-Meso-Macro Space Design".

Requirements

  • PyTorch 1.8.0
  • timm 0.4.12
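A minimal install of the versions listed above (the exact torch build, e.g. CUDA vs. CPU, is an assumption and depends on your environment):

pip install torch==1.8.0 timm==0.4.12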

BurgerFormer Models

Pre-trained checkpoints are released on Google Drive / BaiduYun. Place them in the .checkpoints/ folder.

Note: the access code for BaiduYun is gvfl.
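For example, after downloading a checkpoint (the filename below is hypothetical), it can be placed like this:

mkdir -p .checkpoints
mv burgerformer_base.pth .checkpoints/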

Validation

To evaluate a pre-trained BurgerFormer model on ImageNet, run:

bash script/test.sh
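The dataset and checkpoint paths are presumably configured inside script/test.sh; adjust them there. To pin evaluation to a particular GPU, the script can be prefixed with the standard CUDA_VISIBLE_DEVICES variable, for example:

CUDA_VISIBLE_DEVICES=0 bash script/test.sh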

Train

To retrain a BurgerFormer model on ImageNet, run:

bash script/train.sh

TODO

  • Searching Code

Citation

Please cite our paper if you find it helpful.

@InProceedings{yang2022burgerformer,
  title={Searching for BurgerFormer with Micro-Meso-Macro Space Design},
  author={Yang, Longxing and Hu, Yu and Lu, Shun and Sun, Zihao and Mei, Jilin and Han, Yinhe and Li, Xiaowei},
  booktitle={ICML},
  year={2022}
}

Acknowledgment

This code is heavily based on poolformer, ViT-ResNAS, pytorch-image-models, and mmdetection. Many thanks for their contributions.