
Make RepVGG Greater Again: A Quantization-aware Approach (AAAI 2024)


📸 Release

  • ⏳ QARepVGG training code. Note that the implementation is already provided in YOLOv6 and used in YOLO-NAS, both of which are well-known object detectors.

🦙 Model Zoo

Available models (more will be added over time)

| Model | Checkpoint | Log |
| :--- | :---: | :---: |
| QARepVGG-B0 | TBD | B0_log |

🔔 Usage and License Notices: This project uses datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of those licenses. The project itself is released under the permissive MIT license and imposes no additional constraints.

🛠️ Install

  1. Clone this repository and navigate to the QARepVGG folder

    git clone https://github.com/cxxgtxy/QARepVGG.git
    cd QARepVGG
  2. Install the package

    conda create -n QARepVGG python=3.10 -y
    conda activate QARepVGG
    pip install --upgrade pip
    pip install -r requirements.txt

🗝️ Quick Start

QARepVGGBlockV2 is the default implementation; we also provide other variants (for ablation only, not recommended for general use).
We use B0 as an example, which is trained for 120 epochs on the ImageNet-1k dataset.
```Shell
sh train_QAV2_B0.sh
```
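For context, the snippet below is a minimal PyTorch sketch of the quantization-friendly block design described in the paper. It is our own illustration, not the code in this repository, and the class name `QARepVGGBlockSketch` is hypothetical: the 3x3 branch keeps its BatchNorm, the 1x1 and identity branches are BatchNorm-free, and a single BatchNorm is applied after the branches are summed.

```python
import torch
import torch.nn as nn

class QARepVGGBlockSketch(nn.Module):
    """Illustrative sketch of the quantization-aware RepVGG-style block
    (not the repository's implementation)."""

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # 3x3 conv + BN branch (kept as in RepVGG)
        self.rbr_dense = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )
        # 1x1 conv branch without its own BN (quantization-friendly change)
        self.rbr_1x1 = nn.Conv2d(in_channels, out_channels, 1, stride, bias=False)
        # BN-free identity branch, only usable when shapes match
        self.has_identity = (in_channels == out_channels) and (stride == 1)
        # Single BN applied after the branch sum
        self.post_bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.rbr_dense(x) + self.rbr_1x1(x)
        if self.has_identity:
            out = out + x
        return self.act(self.post_bn(out))
```

As in RepVGG, the branches can be merged into a single 3x3 convolution for deployment; see the code in this repository (or the YOLOv6 implementation) for the actual fusion logic.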

🤝 Acknowledgments

  • RepVGG: the codebase we built upon. Thanks for their wonderful work! 👏
  • mmsegmentation: the great open-source framework for segmentation! 👏

✏️ Reference

If you find QARepVGG useful in your research or applications, please consider giving it a star ⭐ and citing it with the following BibTeX:

@inproceedings{chu2023make,
  title={Make RepVGG Greater Again: A Quantization-aware Approach},
  author={Chu, Xiangxiang and Li, Liang and Zhang, Bo},
  booktitle={AAAI},
  year={2024}
}