GhostNet: More Features from Cheap Operations - 75.7% top-1 (better than MobileNetV3)
AlexeyAB opened this issue · 17 comments
model: ghostnet.cfg.txt
GPU GeForce RTX 2070 - Darknet framework (GPU=1 CUDNN=1 CUDNN_HALF=1)
CPU Intel Core i7 6700k - Darknet framework (OPENMP=1 AVX=1)
- darknet.cfg - GPU 360 FPS - CPU 63 FPS - 0.400 BFlops - 7.3M params - 61.1% Top1
- darknet19.cfg - GPU 179 FPS - CPU 14 FPS - 2.793 BFlops - 20.8M params - 72.9% Top1
- GhostNet-1.0 - GPU 61 FPS - CPU 12 FPS - 0.117 BFlops - 5.0M params - xx.x% Top1
- MixNet-M-GPU - GPU 82 FPS - CPU 4.6 FPS - 0.533 BFlops - 11.9M params - 71.5% Top1
- EfficientNetB0 - GPU 110 FPS - CPU 6.3 FPS - 0.450 BFlops - 4.9M params - 71.3% Top1
- darknet53.cfg - GPU 85 FPS - CPU 4.8 FPS - 9.285 BFlops - 41.6M params - 77.2% Top1
- GhostNet-1.0 - 5.0M params - 0.117 BFlops - xx.x% Top1 - xx.x% Top5 - MY URL
- GhostNet-1.0 - 5.2M params - 0.141 BFlops - 73.9% Top1 - 91.4% Top5 - Official
- MobileNetV3 - 5.4M params - 0.219 BFlops - 75.2% Top1 - --- Top5
- GhostNet-1.3 - 7.3M params - 0.226 BFlops - 75.7% Top1 - 92.7% Top5 - Official
- EfficientNetB0 - 4.9M params - 0.450 BFlops - 76.3% (71.3%) Top1 - 93.2% (90.4%) Top5 - MY URL
- MixNet-M - 5.0M params - 0.360 BFlops - 77.0% (71.5%) Top1 - 93.3% (90.5%) Top5 - #4203
Comparison table: #4203 (comment)
maybe better than mobilenetv3, efficientnet, mixnet..., etc. huawei-noah/Efficient-AI-Backbones#1
We measure the actual inference speed on an ARM-based mobile phone using the TFLite tool, in single-threaded mode with batch size 1:
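For anyone trying to reproduce that measurement, the TFLite benchmark tool supports exactly this setup; a sketch (the model file name is a placeholder):

```
benchmark_model --graph=ghostnet.tflite --num_threads=1
```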
@WongKinYiu Hi,
I added ghostnet.cfg.txt
so you can try training it for 600,000 iterations with batch_size=192 (mini_batch_size=96).
Training should take about 2 weeks on a GeForce RTX 2070.
It may also be fast on CPUs/neurochips (OpenCV-dnn).
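The batch settings above map onto the `[net]` section of the cfg roughly like this (a sketch; in Darknet, mini_batch = batch / subdivisions):

```
[net]
batch=192
subdivisions=2        # mini_batch = 192 / 2 = 96
max_batches=600000
```

Classifier training is then launched with the usual Darknet command, something like `./darknet classifier train cfg/imagenet1k.data ghostnet.cfg` (data-file name assumes the standard repo layout).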
@AlexeyAB thank you!
@WongKinYiu I just added dropout after avg-pooling. So if you already started training, you can download new cfg-file and continue training.
I tested darknet-19 on CPU and it takes 1.3 seconds per image (image size 500*374). What could be going wrong?
Do you build darknet with (OPENMP=1 AVX=1)?
And which CPU do you use?
Thanks, I will update my code and try again.
top-1 1.5%, top-5 5.6%.
@WongKinYiu Thanks!
ghostnet.cfg.txt - 1.5% Top1? Is that near ~0?
Can you share cfg/weights file?
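For scale: a classifier guessing uniformly at random over the 1000 ImageNet classes scores about 0.1% top-1 and 0.5% top-5, so 1.5% top-1 is barely above chance:

```python
# Expected accuracy of uniform random guessing over ImageNet-1k
num_classes = 1000
random_top1 = 1 / num_classes   # one guess must hit the one true class
random_top5 = 5 / num_classes   # any of five distinct guesses may hit it
print(f"top-1: {random_top1:.1%}, top-5: {random_top5:.1%}")
# prints: top-1: 0.1%, top-5: 0.5%
```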
@WongKinYiu This repo may help: https://github.com/d-li14/ghostnet.pytorch
@iamhankai thank you very much.
@WongKinYiu I have tested with your cfg/weights. The result is almost the same as yours.
I started training a few days ago with almost the same cfg as yours, except for batch and subdivisions.
My result is top-1 30%, top-5 64% (300,000 iterations, still training). This is strange given the nearly identical cfg.
@WongKinYiu Thanks!
@rsek147 Can you attach your cfg-file?
@AlexeyAB @WongKinYiu I think ghostnet.cfg.txt is wrong; it can be checked against https://github.com/d-li14/ghostnet.pytorch/blob/master/ghostnet.py . I used the Ghost module in MobileNetV3-Small and got 20% top-1 after 20,000 iterations with batch size 256.
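The Ghost module from that reference is easy to sanity-check against a cfg: a primary convolution produces a few intrinsic feature maps, then a cheap depthwise operation generates "ghost" maps from them, and the two are concatenated. A minimal NumPy sketch (not the official implementation; ratio fixed at 2, function and variable names are mine):

```python
import numpy as np

def ghost_module(x, w_primary, w_cheap):
    """Forward pass of a Ghost module with ratio 2.

    x         : input feature map, shape (C_in, H, W)
    w_primary : 1x1 primary conv weights, shape (C_init, C_in)
    w_cheap   : 3x3 depthwise 'cheap' conv weights, shape (C_init, 3, 3),
                one filter per intrinsic map
    Returns shape (2*C_init, H, W): intrinsic maps + their ghost maps.
    """
    c_init = w_primary.shape[0]
    _, h, w = x.shape
    # Primary 1x1 convolution: a per-pixel linear map over input channels.
    intrinsic = np.einsum('oc,chw->ohw', w_primary, x)
    # Cheap operation: 3x3 depthwise conv on each intrinsic map, 'same' padding.
    padded = np.pad(intrinsic, ((0, 0), (1, 1), (1, 1)))
    ghost = np.zeros_like(intrinsic)
    for o in range(c_init):
        for i in range(h):
            for j in range(w):
                ghost[o, i, j] = np.sum(padded[o, i:i + 3, j:j + 3] * w_cheap[o])
    # Output = intrinsic maps followed by their cheap ghost copies.
    return np.concatenate([intrinsic, ghost], axis=0)

x = np.random.randn(8, 16, 16)      # 8 input channels, 16x16 spatial
w_p = np.random.randn(4, 8)         # 4 intrinsic maps from 8 channels
w_c = np.random.randn(4, 3, 3)      # one cheap 3x3 filter per intrinsic map
print(ghost_module(x, w_p, w_c).shape)  # (8, 16, 16): 4 intrinsic + 4 ghost
```

The FLOP saving comes from the cheap half: the ghost maps cost only a depthwise 3x3 each instead of a full convolution over all input channels.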
@AlexeyAB hi!
When I use GhostNet for training, the loss is -nan. How can I solve this?
I am training a fruit classifier (classes=1) using the ghostnet.cfg file above. Which parts should I modify?
Thanks!
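If this is Darknet classification, the usual changes (a sketch, assuming the standard classifier tail of conv → avgpool → softmax; check your actual cfg) are to set classes=1 in the .data file and make the last convolutional layer's filters match the class count:

```
[convolutional]
filters=1          # = number of classes
size=1
stride=1
activation=linear

[avgpool]

[softmax]
```

Note that a single-class softmax always outputs 1.0, so you likely want at least two classes (e.g. fruit vs. background) for the training to be meaningful.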
@WongKinYiu Hi, have you retrained GhostNet since then? If so, can you share the .cfg and .weights files?
I did not get good result.