Cheng Han, Qichao Zhao, Shuyi Zhang, Yinzi Chen, Zhenlin Zhang, Jinwei Yuan
- August 30, 2022: We've released the inference code and trained model, and published a web demo. Just enjoy it!
- August 24, 2022: We've released the tech report for YOLOPv2. This work is still in progress and code/models are coming soon. Please stay tuned! ☕️
😁 We present an excellent multi-task network based on YOLOP 💙, called YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception. The advantages of YOLOPv2 can be summarized as follows:
- Better 👏: we propose an end-to-end perception network with a better feature-extraction backbone, together with a better bag of freebies for the training process.
- Faster ✈️: we employ more efficient ELAN structures to achieve reasonable memory allocation for our model.
- Stronger 💪: the proposed model has a stable network design and strong robustness for adapting to various scenarios.
We use BDD100K as our dataset, and experiments are run on an NVIDIA TESLA V100.
- Integrated into Huggingface Spaces 🤗 using Gradio. Try out the Web Demo!
Models are trained on the BDD100K dataset and tested on the T3CAIC camera.
| Model | Size | Params | Speed (fps) |
|---|---|---|---|
| `YOLOP` | 640 | 7.9M | 49 |
| `HybridNets` | 640 | 12.8M | 28 |
| `YOLOPv2` | 640 | 38.9M | 91 (+42) ⏫ |
*(Result / visualization example images)*
You can get the model from here.
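A minimal loading sketch, assuming the released checkpoint is an exported TorchScript model named `yolopv2.pt` (the exact filename, format, and output structure depend on the release):

```python
import torch

# Assumption: the released checkpoint is a TorchScript export named "yolopv2.pt".
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torch.jit.load('yolopv2.pt', map_location=device)
model.eval()

# Dummy 640x640 RGB input in NCHW layout, values normalized to [0, 1].
img = torch.rand(1, 3, 640, 640, device=device)

with torch.no_grad():
    # Assumption: the outputs include the detection results plus the
    # drivable-area and lane-line segmentation maps described in the report.
    outputs = model(img)
```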
We provide two testing methods: you can run inference on either an image or a video and store the result.
```shell
python demo.py --source data/example.jpg
```
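For video input, the same `--source` flag can presumably point at a video file; the path below is only an illustration, not a file shipped with the repo:

```shell
# Assumption: demo.py also accepts a video path for --source; data/example.mp4 is hypothetical.
python demo.py --source data/example.mp4
```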
- YOLOPv2 NCNN C++ Demo: YOLOPv2-ncnn from FeiGeChuanShu
- YOLOPv2 ONNX and OpenCV DNN Demo: yolopv2-opencv-onnxrun-cpp-py from hpc203
YOLOPv2 is released under the MIT License.