PaddlePaddle/FastDeploy

Error when converting PP-OCRv3 to an RKNN model

Closed this issue · 17 comments

python3.8 FastDeploy/tools/rknpu2/export.py --config_path FastDeploy/tools/rknpu2/config/ppocrv3_det.yaml --target_platform rk3588
{'mean': [[123.675, 116.28, 103.53]], 'std': [[58.395, 57.12, 57.375]], 'model_path': './ch_PP-OCRv3_det_infer/ch_PP-OCRv3_det_infer.onnx', 'outputs_nodes': None, 'do_quantization': False, 'dataset': None, 'output_folder': './ch_PP-OCRv3_det_infer'}
Traceback (most recent call last):
File "FastDeploy/tools/rknpu2/export.py", line 35, in <module>
model = RKNN(config.verbose)
File "/home/wuli/.local/lib/python3.8/site-packages/rknn/api/rknn.py", line 56, in __init__
self.rknn_base = RKNNBase(cur_path, verbose)
File "rknn/api/rknn_base.py", line 75, in rknn.api.rknn_base.RKNNBase.__init__
File "/home/wuli/.local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 514, in get_distribution
dist = get_provider(dist)
File "/home/wuli/.local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 386, in get_provider
return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
File "/home/wuli/.local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 683, in find
if dist is not None and dist not in req:
File "/home/wuli/.local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3135, in __contains__
return self.specifier.contains(item, prereleases=True)
File "/home/wuli/.local/lib/python3.8/site-packages/pkg_resources/_vendor/packaging/specifiers.py", line 902, in contains
item = Version(item)
File "/home/wuli/.local/lib/python3.8/site-packages/pkg_resources/_vendor/packaging/version.py", line 197, in __init__
raise InvalidVersion(f"Invalid version: '{version}'")
pkg_resources.extern.packaging.version.InvalidVersion: Invalid version: '1.4.0-22dcfef4'
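The root cause is that newer setuptools/pkg_resources validates package versions strictly against PEP 440, and the installed rknn-toolkit2 build reports `1.4.0-22dcfef4`, which is not a valid PEP 440 version (the `-22dcfef4` suffix would have to be a `+local` label). A minimal stdlib sketch illustrating the distinction (the regex below is a simplified subset of the PEP 440 grammar, not the actual pattern the `packaging` library uses):

```python
import re

# Simplified subset of the PEP 440 version grammar: a release segment, an
# optional pre-release like "b3", and an optional local label like "+0bdd72ff".
# Arbitrary dash suffixes such as "-22dcfef4" are NOT allowed, which mirrors
# why the strict parser in the traceback above raises InvalidVersion.
PEP440_SUBSET = re.compile(r"^\d+(\.\d+)*((a|b|rc)\d+)?(\+[0-9a-zA-Z.]+)?$")

def is_pep440_like(version: str) -> bool:
    return PEP440_SUBSET.match(version) is not None

print(is_pep440_like("1.4.2b3"))          # True  -> accepted
print(is_pep440_like("1.4.2b3+0bdd72ff")) # True  -> "+..." is a valid local label
print(is_pep440_like("1.4.0-22dcfef4"))   # False -> InvalidVersion in strict parsers
```

This also explains the advice in the next comment: installing a toolkit build whose metadata version parses cleanly (such as `1.4.2b3`) avoids the crash. A commonly reported alternative workaround is pinning an older setuptools that still tolerates legacy version strings, though that is a stopgap rather than a fix.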

Try installing the beta version of the rknn package.

You can also download all packages, docker image, examples, docs and platform-tools from baidu cloud: RK_NPU_SDK, fetch code: rknn

Which one exactly? There are a lot of packages in there.

The latest one is fine. We are using 1.4.2b3.

[screenshot of the cloud drive folder]
The latest version on the cloud drive is 1.4.0.
Could you share version 1.4.2b3?

Which one exactly? There are a lot of packages in there.

It is in the develop folder under 1.4.0; isn't it right there in your screenshot?

python3.6 FastDeploy/tools/rknpu2/export.py --config_path FastDeploy/tools/rknpu2/config/ppocrv3_det.yaml --target_platform rk3588
{'mean': [[123.675, 116.28, 103.53]], 'std': [[58.395, 57.12, 57.375]], 'model_path': './ch_PP-OCRv3_det_infer/ch_PP-OCRv3_det_infer.onnx', 'outputs_nodes': None, 'do_quantization': False, 'dataset': None, 'output_folder': './ch_PP-OCRv3_det_infer'}
W init: rknn-toolkit2 version: 1.4.2b3+0bdd72ff
W load_onnx: It is recommended onnx opset 12, but your onnx model opset is 11!
I base_optimize ...
I base_optimize done.
I
I fold_constant ...
E build: Catch exception when building RKNN model!
E build: Traceback (most recent call last):
E build: File "rknn/api/rknn_base.py", line 1595, in rknn.api.rknn_base.RKNNBase.build
E build: File "rknn/api/graph_optimizer.py", line 696, in rknn.api.graph_optimizer.GraphOptimizer.fold_constant
E build: File "rknn/api/load_checker.py", line 34, in rknn.api.load_checker.create_random_data
E build: File "/home/wuli/.local/lib/python3.6/site-packages/cv2/__init__.py", line 9, in <module>
E build: from .cv2 import _registerMatType
E build: ImportError: cannot import name '_registerMatType'
W If you can't handle this error, please try updating to the latest version of the toolkit2 and runtime from:
https://eyun.baidu.com/s/3eTDMk6Y (Pwd: rknn) Path: RK_NPU_SDK / RK_NPU_SDK_1.X.0 / develop /
If the error still exists in the latest version, please collect the corresponding error logs and the model,
convert script, and input data that can reproduce the problem, and then submit an issue on:
https://redmine.rock-chips.com (Please consult our sales or FAE for the redmine account)
Traceback (most recent call last):
File "FastDeploy/tools/rknpu2/export.py", line 58, in <module>
assert ret == 0, "Build model failed!"
AssertionError: Build model failed!
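`_registerMatType` was added to the cv2 loader in opencv-python 4.5.5; this ImportError typically means two differently versioned OpenCV wheels (e.g. opencv-python and opencv-python-headless) are installed side by side, or that the installed wheel is older than what another dependency expects. A small stdlib check to see which OpenCV wheels this interpreter can see (package names are the usual PyPI ones; on Python 3.6 use the `importlib-metadata` backport, since `importlib.metadata` requires 3.8+):

```python
from importlib.metadata import version, PackageNotFoundError

# List every OpenCV wheel visible to this interpreter. More than one wheel,
# or mismatched versions between them, is the usual trigger for the
# "_registerMatType" ImportError above.
for pkg in ("opencv-python", "opencv-python-headless", "opencv-contrib-python"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```

If more than one wheel shows up, uninstalling all of them and reinstalling a single matching version is the commonly reported fix.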

Do you have conda installed? Try setting up a conda environment. Other users have converted this OCR model successfully, so this step does not usually fail.

infer_static_shape_demo ./ch_PP-OCRv3_det_infer/ch_PP-OCRv3_det_infer_rk3588_unquantized.rknn ./ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v20_cls_infer_rk3588_unquantized.rknn ./ch_PP-OCRv3_rec_infer/ch_PP-OCRv3_rec_infer_rk3588_unquantized.rknn ./ppocr_keys_v1.txt ./12.jpg 1
[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(57)::GetSDKAndDeviceVersion rknn_api/rknnrt version: 1.4.0 (a10f100eb@2022-09-09T09:07:14), driver version: 0.8.0
index=0, name=x, n_dims=4, dims=[1, 960, 960, 3], n_elems=2764800, size=5529600, fmt=NHWC, type=FP16, qnt_type=AFFINE, zp=0, scale=1.000000, pass_through=0
index=0, name=sigmoid_0.tmp_0, n_dims=4, dims=[1, 1, 960, 960], n_elems=921600, size=1843200, fmt=NCHW, type=FP16, qnt_type=AFFINE, zp=0, scale=1.000000, pass_through=0
[INFO] fastdeploy/runtime/runtime.cc(334)::CreateRKNPU2Backend Runtime initialized with Backend::RKNPU2 in Device::RKNPU.
[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(57)::GetSDKAndDeviceVersion rknn_api/rknnrt version: 1.4.0 (a10f100eb@2022-09-09T09:07:14), driver version: 0.8.0
index=0, name=x, n_dims=4, dims=[1, 48, 192, 3], n_elems=27648, size=55296, fmt=NHWC, type=FP16, qnt_type=AFFINE, zp=0, scale=1.000000, pass_through=0
index=0, name=softmax_0.tmp_0, n_dims=2, dims=[1, 2, 0, 0], n_elems=2, size=4, fmt=UNDEFINED, type=FP16, qnt_type=AFFINE, zp=0, scale=1.000000, pass_through=0
[INFO] fastdeploy/runtime/runtime.cc(334)::CreateRKNPU2Backend Runtime initialized with Backend::RKNPU2 in Device::RKNPU.
[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(57)::GetSDKAndDeviceVersion rknn_api/rknnrt version: 1.4.0 (a10f100eb@2022-09-09T09:07:14), driver version: 0.8.0
index=0, name=x, n_dims=4, dims=[1, 48, 320, 3], n_elems=46080, size=92160, fmt=NHWC, type=FP16, qnt_type=AFFINE, zp=0, scale=1.000000, pass_through=0
index=0, name=softmax_5.tmp_0, n_dims=4, dims=[1, 40, 6625, 1], n_elems=265000, size=530000, fmt=NCHW, type=FP16, qnt_type=AFFINE, zp=0, scale=1.000000, pass_through=0
[INFO] fastdeploy/runtime/runtime.cc(334)::CreateRKNPU2Backend Runtime initialized with Backend::RKNPU2 in Device::RKNPU.
[WARNING] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(326)::Infer The input tensor type != model's inputs type.The input_type need FP16,but inputs[0].type is UINT8
[WARNING] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(326)::Infer The input tensor type != model's inputs type.The input_type need FP16,but inputs[0].type is UINT8
[WARNING] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(326)::Infer The input tensor type != model's inputs type.The input_type need FP16,but inputs[0].type is UINT8
E RKNN: [21:00:54.124] failed to submit!, op id: 77, op name: exLayerNorm:p2o.ReduceMean.0_2layer_norm, flags: 0x5, task start: 152, task number: 1520, run task counter: 0, int status: 0
[ERROR] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(400)::Infer rknn run error! ret=-1
[ERROR] fastdeploy/vision/ocr/ppocr/recognizer.cc(121)::BatchPredict Failed to inference by runtime.
[ERROR] fastdeploy/vision/ocr/ppocr/ppocr_v2.cc(169)::BatchPredict There's error while recognizing image in PPOCR.
Failed to predict.

The conversion succeeds, but inference fails.

The driver you are using does not match FastDeploy's runtime; why is it still 1.4.0? Pull the latest develop-branch FastDeploy code.

If you are already on the latest, follow docs/cn/build/rknpu2.md to set up the build environment, then run again:

source PathToFastDeploySDK/fastdeploy_init.sh
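`fastdeploy_init.sh` mainly puts the SDK's bundled libraries (including `librknnrt.so`) on `LD_LIBRARY_PATH`; if an older runtime elsewhere on the system wins, you get version-mismatch failures like the one above. A quick diagnostic sketch (assumes a Linux target; the `librknnrt` library name is the usual rknpu2 convention):

```shell
# Show every librknnrt.so the dynamic loader knows about; the one actually
# loaded should match the runtime version FastDeploy reports at startup.
ldconfig -p | grep librknnrt || echo "librknnrt.so not in the linker cache"
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-<unset>}"
```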

Is it working now?

It works now, thanks for the help!

@leokwu How did you solve this? My conversion also succeeds, but inference fails.

neardi@LPA3588:~/ruida/work/FastDeploy/examples/vision/detection/paddledetection/rknpu2/cpp/build$ ./infer_ppyoloe_demo ./model smoke_11.jpg 1
[INFO] fastdeploy/vision/common/processors/transform.cc(45)::FuseNormalizeCast  Normalize and Cast are fused to Normalize in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(93)::FuseNormalizeHWC2CHW       Normalize and HWC2CHW are fused to NormalizeAndPermute  in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(159)::FuseNormalizeColorConvert BGR2RGB and NormalizeAndPermute are fused to NormalizeAndPermute with swap_rb=1
[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(81)::GetSDKAndDeviceVersion rknpu2 runtime version: 1.4.2b0 (c5d79ccf9@2023-02-14T17:55:39)
[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(82)::GetSDKAndDeviceVersion rknpu2 driver version: 0.8.2
index=0, name=image, n_dims=4, dims=[1, 640, 640, 3], n_elems=1228800, size=1228800, fmt=NHWC, type=INT8, qnt_type=AFFINE, zp=-128, scale=0.003922, pass_through=0
index=0, name=p2o.Mul.224, n_dims=4, dims=[1, 8400, 4, 1], n_elems=33600, size=33600, fmt=NCHW, type=FP32, qnt_type=AFFINE, zp=-68, scale=4.838850, pass_through=0
index=1, name=p2o.Concat.29, n_dims=4, dims=[1, 1, 8400, 1], n_elems=8400, size=8400, fmt=NCHW, type=FP32, qnt_type=AFFINE, zp=-128, scale=0.003733, pass_through=0
[INFO] fastdeploy/runtime/runtime.cc(367)::CreateRKNPU2Backend  Runtime initialized with Backend::RKNPU2 in Device::RKNPU.
[INFO] fastdeploy/vision/common/processors/transform.cc(159)::FuseNormalizeColorConvert BGR2RGB and Normalize are fused to Normalize with swap_rb=1
[WARNING] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(420)::InitRKNNTensorMemory       The input tensor type != model's inputs type.The input_type need INT8,but inputs[0].type is UINT8
E RKNN: [03:26:36.509] failed to submit!, op id: 4, op name: Conv:p2o.Conv.3, flags: 0x5, task start: 61, task number: 3, run task counter: 0, int status: 0
[ERROR] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(506)::Infer        rknn run error! ret=-1
[ERROR] fastdeploy/vision/detection/ppdet/base.cc(73)::BatchPredict     Failed to inference by runtime.
Failed to predict.

@leokwu How did you solve this? My conversion also succeeds, but inference fails.

Solved. Download the latest runtime and replace the old one, and it works.

Try installing the beta version of the rknn package.

You can also download all packages, docker image, examples, docs and platform-tools from baidu cloud: RK_NPU_SDK, fetch code: rknn

Hi, the link has expired. Could you share it again?