Introduction
YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities. For more details, please refer to our report on Arxiv.
Updates!!
- 【2021/07/20】 We have released our technical report on Arxiv.
Coming soon
- YOLOX-P6 and larger models.
- Objects365 pretraining.
- Transformer modules.
- More requested features.
Benchmark
Standard Models.
Model | size | mAPtest 0.5:0.95 | Speed V100 (ms) | Params (M) | FLOPs (G) | weights |
---|---|---|---|---|---|---|
YOLOX-s | 640 | 39.6 | 9.8 | 9.0 | 26.8 | Download |
YOLOX-m | 640 | 46.4 | 12.3 | 25.3 | 73.8 | Download |
YOLOX-l | 640 | 50.0 | 14.5 | 54.2 | 155.6 | Download |
YOLOX-x | 640 | 51.2 | 17.3 | 99.1 | 281.9 | Download |
YOLOX-Darknet53 | 640 | 47.4 | 11.1 | 63.7 | 185.3 | Download |
Light Models.
Model | size | mAPval 0.5:0.95 | Params (M) | FLOPs (G) | weights |
---|---|---|---|---|---|
YOLOX-Nano | 416 | 25.3 | 0.91 | 1.08 | Download |
YOLOX-Tiny | 416 | 31.7 | 5.06 | 6.45 | Download |
Quick Start
Installation
Step1. Install YOLOX.
git clone git@github.com:Megvii-BaseDetection/YOLOX.git
cd YOLOX
pip3 install -U pip && pip3 install -r requirements.txt
pip3 install -v -e . # or python3 setup.py develop
Step2. Install apex.
git clone https://github.com/NVIDIA/apex
cd apex
pip3 install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
Step3. Install pycocotools.
pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
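After these steps, a quick import check can confirm the install. This is a minimal sketch; it assumes the editable install above succeeded and that PyTorch was installed separately:
import torch            # YOLOX depends on a separately installed PyTorch
import yolox            # importable after `pip3 install -v -e .`

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("yolox imported from:", yolox.__file__)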
Demo
Step1. Download a pretrained model from the benchmark table.
Step2. Use either -n or -f to specify your detector's config. For example:
python tools/demo.py image -n yolox-s -c /path/to/your/yolox_s.pth.tar --path assets/dog.jpg --conf 0.3 --nms 0.65 --tsize 640 --save_result
or
python tools/demo.py image -f exps/default/yolox_s.py -c /path/to/your/yolox_s.pth.tar --path assets/dog.jpg --conf 0.3 --nms 0.65 --tsize 640 --save_result
Demo for video:
python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pth.tar --path /path/to/your/video --conf 0.3 --nms 0.65 --tsize 640 --save_result
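For scripted use, the demo can be approximated in Python. The sketch below is illustrative only: the get_exp helper and the checkpoint's "model" key are assumptions taken from how tools/demo.py and the training scripts behave, and image preprocessing/postprocessing are omitted.
import torch
from yolox.exp import get_exp  # exp factory (assumed import path, as used by tools/demo.py)

# build the yolox-s experiment description and its model
exp = get_exp(None, "yolox-s")
model = exp.get_model()
model.eval()

# load a downloaded checkpoint; the "model" key is an assumption from the training scripts
ckpt = torch.load("/path/to/your/yolox_s.pth.tar", map_location="cpu")
model.load_state_dict(ckpt["model"])

# dummy forward pass at the experiment's test resolution
with torch.no_grad():
    dummy = torch.zeros(1, 3, *exp.test_size)
    outputs = model(dummy)
print(type(outputs))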
Reproduce our results on COCO
Step1. Prepare COCO dataset
cd <YOLOX_HOME>
ln -s /path/to/your/COCO ./datasets/COCO
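The symlinked directory is expected to follow the standard COCO 2017 layout (annotation JSONs plus train2017/val2017 image folders); the exact subpaths below are an assumption based on that convention. A small check script:
import os

# expected layout under ./datasets/COCO (assumed standard COCO 2017 structure)
expected = [
    "annotations/instances_train2017.json",
    "annotations/instances_val2017.json",
    "train2017",
    "val2017",
]
for rel in expected:
    path = os.path.join("datasets/COCO", rel)
    print(("ok      " if os.path.exists(path) else "MISSING ") + path)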
Step2. Reproduce our results on COCO by specifying -n:
python tools/train.py -n yolox-s -d 8 -b 64 --fp16 -o
yolox-m
yolox-l
yolox-x
- -d: number of gpu devices
- -b: total batch size; the recommended value is num-gpu * 8
- --fp16: mixed precision training
When using -f, the above commands are equivalent to:
python tools/train.py -f exps/default/yolox_s.py -d 8 -b 64 --fp16 -o
exps/default/yolox_m.py
exps/default/yolox_l.py
exps/default/yolox_x.py
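The file passed to -f is a small Python module that subclasses the base Exp class and overrides a few attributes. As a hedged sketch modeled on exps/default/yolox_s.py (the attribute names depth, width, and num_classes are assumptions), a custom config might look like:
import os
from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        # scale factors that distinguish the s/m/l/x variants
        self.depth = 0.33
        self.width = 0.50
        # override for a custom dataset (COCO default is 80 classes)
        self.num_classes = 80
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]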
Evaluation
We support batch testing for fast evaluation:
python tools/eval.py -n yolox-s -c yolox_s.pth.tar -b 64 -d 8 --conf 0.001 [--fp16] [--fuse]
yolox-m
yolox-l
yolox-x
- --fuse: fuse conv and bn
- -d: number of GPUs used for evaluation. Default: all available GPUs are used.
- -b: total batch size across all GPUs
To reproduce the speed test, we use the following command:
python tools/eval.py -n yolox-s -c yolox_s.pth.tar -b 1 -d 1 --conf 0.001 --fp16 --fuse
yolox-m
yolox-l
yolox-x
Tutorials
Deployment
- ONNX export and ONNXRuntime
- TensorRT in C++ and Python
- ncnn in C++ and Java
- OpenVINO in C++ and Python
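As a rough illustration of the ONNXRuntime path (the file name yolox_s.onnx, the 640x640 input size, and the lack of preprocessing/postprocessing are assumptions; see the deployment demos for the full pipeline):
import numpy as np
import onnxruntime as ort

# load an exported model (file name is a placeholder)
session = ort.InferenceSession("yolox_s.onnx")

# feed a dummy 640x640 input; real usage needs the demo's preprocessing and postprocessing
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])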
Cite YOLOX
If you use YOLOX in your research, please cite our work by using the following BibTeX entry:
@article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}