FeiGeChuanShu/ncnn-android-yolox

Can you please add a tutorial on how to add a custom-trained YOLOX (tiny/nano) model to the app?

liminghu opened this issue · 13 comments

Can you please add a tutorial on how to add a custom-trained YOLOX (tiny/nano) model to the app? Thanks.

@FeiGeChuanShu @hylrh2008 Are the steps at:
https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/ncnn/cpp
enough?

Yes, if you follow those steps you can convert your YOLOX model to an ncnn model. Then replace the model file and adjust the postprocessing in this Android demo code.
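
As a hedged sketch of the "replace the model file" step, assuming the converted files are copied into the app's assets (the asset names yolox_custom.param / yolox_custom.bin and the function name are placeholders, not the demo's exact code; only ncnn's public Net API is assumed):

#include <android/asset_manager.h>
#include "net.h" // ncnn

static ncnn::Net yolox;

// Sketch only: copy your converted files into app/src/main/assets and load
// them here; the demo's real load routine may differ in detail.
int load_custom_yolox(AAssetManager* mgr, bool use_gpu)
{
    yolox.opt.use_vulkan_compute = use_gpu;

    // NOTE: the exported .param uses a custom Focus op, so the demo registers
    // it via register_custom_layer() before load_param(); keep that call in place.

    if (yolox.load_param(mgr, "yolox_custom.param") != 0)
        return -1;
    if (yolox.load_model(mgr, "yolox_custom.bin") != 0)
        return -1;
    return 0;
}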

@FeiGeChuanShu Thanks a lot. So if the custom-trained YOLOX has a different number of classes and class names, do we just need to replace line 409 of:
https://github.com/FeiGeChuanShu/ncnn-android-yolox/blob/f0acf18f23899bf58b113a162530a24ef72011ef/app/src/main/jni/yolox.cpp

with the corresponding new class names?

Yes.
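
For reference, a minimal sketch of that change, assuming line 409 is the static class_names table in yolox.cpp used when drawing detections (the labels below are placeholders for your own classes; their order must match the class indices used in training):

// yolox.cpp -- replace the 80 COCO labels with your own
static const char* class_names[] = {
    "my_class_0", "my_class_1", "my_class_2" // placeholder labels
};

If the demo derives the class count from the output blob width in the postprocess, nothing else has to change for detection itself; only this label table used for display.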

Can we get the app link?

I tried it, but I got an error while converting the *.pth model to *.onnx:
[error screenshots]

The issue is with the onnxsim step:

import onnx
from onnxsim import simplify

# use onnx-simplifier to remove redundant ops from the exported model
onnx_model = onnx.load(args.output_name)
model_simp, check = simplify(onnx_model)
assert check, "Simplified ONNX model could not be validated"
onnx.save(model_simp, args.output_name)

If we add the --no-onnxsim flag, then there is no issue at all.

One more question: if the *.param is:
7767517
310 346
Input images 0 1 images
Split splitncnn_input0 1 4 images images_splitncnn_0 images_splitncnn_1 images_splitncnn_2 images_splitncnn_3
MemoryData 1156 0 1 1156 0=1
MemoryData 1164 0 1 1164 0=1
MemoryData 1172 0 1 1172 0=1
Crop Slice_4 1 1 images_splitncnn_3 647 -23309=1,0 -23310=1,2147483647 -23311=1,1
Crop Slice_9 1 1 647 652 -23309=1,0 -23310=1,2147483647 -23311=1,2
Crop Slice_14 1 1 images_splitncnn_2 657 -23309=1,0 -23310=1,2147483647 -23311=1,1
Crop Slice_19 1 1 657 662 -23309=1,1 -23310=1,2147483647 -23311=1,2
Crop Slice_24 1 1 images_splitncnn_1 667 -23309=1,1 -23310=1,2147483647 -23311=1,1
Crop Slice_29 1 1 667 672 -23309=1,0 -23310=1,2147483647 -23311=1,2
Crop Slice_34 1 1 images_splitncnn_0 677 -23309=1,1 -23310=1,2147483647 -23311=1,1
Crop Slice_39 1 1 677 682 -23309=1,1 -23310=1,2147483647 -23311=1,2
Concat Concat_40 4 1 652 672 662 682 683 0=0
Convolution Conv_41 1 1 683 1177 0=16 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=1728
Swish Mul_43 1 1 1177 687

how should I modify it manually?
Thanks a lot.

Refer to this article: https://zhuanlan.zhihu.com/p/391788686
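
For context on why the Focus edit described there works: the exported Slice/Concat chain is replaced by a single YoloV5Focus op, which is a custom ncnn layer that the demo code defines and registers itself. A rough sketch of that layer, modelled on the public ncnn YOLOX/YOLOv5 examples (treat it as illustrative, not necessarily the exact code in this repo):

#include "layer.h"
#include "net.h"

// Custom Focus op: rearranges a (w, h, c) input into (w/2, h/2, 4c) by taking
// every other pixel, which is what the exported Slice/Concat chain computed.
class YoloV5Focus : public ncnn::Layer
{
public:
    YoloV5Focus() { one_blob_only = true; }

    virtual int forward(const ncnn::Mat& bottom_blob, ncnn::Mat& top_blob, const ncnn::Option& opt) const
    {
        int w = bottom_blob.w;
        int h = bottom_blob.h;
        int channels = bottom_blob.c;

        int outw = w / 2;
        int outh = h / 2;
        int outc = channels * 4;

        top_blob.create(outw, outh, outc, 4u, 1, opt.blob_allocator);
        if (top_blob.empty())
            return -100;

        #pragma omp parallel for num_threads(opt.num_threads)
        for (int p = 0; p < outc; p++)
        {
            // pick the source channel, row parity and column offset for this output channel
            const float* ptr = bottom_blob.channel(p % channels).row((p / channels) % 2) + ((p / channels) / 2);
            float* outptr = top_blob.channel(p);

            for (int i = 0; i < outh; i++)
            {
                for (int j = 0; j < outw; j++)
                {
                    *outptr = *ptr;
                    outptr += 1;
                    ptr += 2;
                }
                ptr += w;
            }
        }

        return 0;
    }
};

DEFINE_LAYER_CREATOR(YoloV5Focus)

// Register the layer before loading the edited .param, otherwise ncnn reports
// "layer YoloV5Focus not exists or registered".
static void register_focus(ncnn::Net& net)
{
    net.register_custom_layer("YoloV5Focus", YoloV5Focus_layer_creator);
}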

@FeiGeChuanShu Thanks a lot.
So for my *.param above, I should modify it as follows:
7767517
301 346
Input images 0 1 images
MemoryData 1156 0 1 1156 0=1
MemoryData 1164 0 1 1164 0=1
MemoryData 1172 0 1 1172 0=1
YoloV5Focus focus 1 1 images 683
Convolution Conv_41 1 1 683 1177 0=16 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=1728
Swish Mul_43 1 1 1177 687

Should I keep the three 'MemoryData' lines?

I don't know why your model has these ops. Could you show me your onnx?

@FeiGeChuanShu I am trying to attach it, but the system does not allow it.
According to: https://zhuanlan.zhihu.com/p/391788686

A question: the param I converted has these three extra MemoryData lines:
Input images 0 1 images
Split splitncnn_input0 1 4 images images_splitncnn_0 images_splitncnn_1 images_splitncnn_2 images_splitncnn_3
MemoryData 1156 0 1 1156 0=1
MemoryData 1164 0 1 1164 0=1
MemoryData 1172 0 1 1172 0=1

After deleting the Crop layers as described, optimizing the model also went smoothly and I can't see anything wrong. But when I run the ncnn demo code, it reports "layer Shape not exists or registered". What is going on?

You need to use onnx-simplifier to remove those layers first, and then change the Focus part.

When I use export_onnx.py to generate the *.onnx, I disable simplification by passing:
--no-onnxsim

When I enable onnxsim, I get the error message shown before.
I also tried the standalone tool at:
https://github.com/daquexian/onnx-simplifier

and got the exact same error message.

Thanks a lot.

I figured it out: it is a version issue. We have to use the versions shown below; otherwise it will not work.
[screenshot of the required package versions]