NVIDIA-AI-IOT/tf_to_trt_image_classification

Unsupported operation _FusedBatchNormV3, "Failed to parse UFF" when converting a frozen graph to a plan on Jetson Nano

flycat0101 opened this issue · 5 comments

ENV:
Jetson Nano board with JetPack4.2.1
cuda 10.0
cuDNN 7.5
TensorFlow with GPU 1.13.1

Following the README,

  1. Clone the git repository, check out the trt_4plus branch, and build uff_to_plan.
  2. Run "source scripts/download_models.sh" to download the models, for example inception_v1.
  3. Run "python scripts/models_to_frozen_graphs.py" to convert the models to frozen graphs, for example inception_v1.pb.
  4. Run "python3 scripts/convert_plan.py data/frozen_graphs/inception_v1.pb data/plans/inception_v1.plan input 224 224 InceptionV1/Logits/SpatialSqueeze 1 0 float".
    This step fails with the warnings and error message below.
    How can I fix this issue?

nano@nano-2:~/work/tf_to_trt_image_classification$ python3 scripts/convert_plan.py data/frozen_graphs/inception_v1.pb data/plans/inception_v1.plan input 224 224 InceptionV1/Logits/SpatialSqueeze 1 0 float
......

Using output node InceptionV1/Logits/SpatialSqueeze
Converting to UFF graph
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV1/InceptionV1/Mixed_5c/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV1/InceptionV1/Mixed_5b/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV1/InceptionV1/Mixed_4f/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV1/InceptionV1/Mixed_4e/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV1/InceptionV1/Mixed_4d/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
...
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV1/InceptionV1/Mixed_5c/Branch_0/Conv2d_0a_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
No. nodes: 486
UFF Output written to data/tmp.uff
UffParser: Validator error: InceptionV1/InceptionV1/Mixed_5c/Branch_0/Conv2d_0a_1x1/BatchNorm/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
Failed to parse UFF

import tensorrt as trt
trt.__version__
'5.1.6.1'
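One workaround that is often suggested for this error is to rewrite the unsupported FusedBatchNormV3 nodes (emitted by newer TensorFlow releases) into the older FusedBatchNorm op, which the UFF converter does handle, before running the UFF conversion. In practice this is done on the frozen .pb with NVIDIA's graphsurgeon; the sketch below uses plain dicts in place of TensorFlow NodeDefs purely to illustrate the rewrite logic, and the node names and attributes are illustrative, not taken from the real graph:

```python
# Sketch of the op-rewrite workaround: downgrade FusedBatchNormV3 nodes to
# FusedBatchNorm before UFF conversion. In a real graph you would apply the
# same idea with NVIDIA's graphsurgeon over the frozen .pb; here plain dicts
# stand in for TensorFlow NodeDefs to show the transformation only.

def downgrade_batchnorm_ops(nodes):
    """Rename FusedBatchNormV3 ops in place; return how many were changed."""
    changed = 0
    for node in nodes:
        if node["op"] == "FusedBatchNormV3":
            node["op"] = "FusedBatchNorm"
            # V3 adds a U attribute (mean/variance dtype) that the older op
            # does not have, so drop it if present.
            node.get("attr", {}).pop("U", None)
            changed += 1
    return changed

graph = [
    {"op": "Conv2D", "name": "Conv2d_0a_1x1/Conv2D", "attr": {}},
    {"op": "FusedBatchNormV3",
     "name": "Conv2d_0a_1x1/BatchNorm/FusedBatchNormV3",
     "attr": {"U": "DT_FLOAT"}},
]
print(downgrade_batchnorm_ops(graph))  # 1
print(graph[1]["op"])                  # FusedBatchNorm
```

Whether the renamed op is numerically equivalent depends on your graph, so verify the converted plan's outputs against the original model.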

I got the same problem

I got the same problem

Hello:
I ran into a similar issue when converting a .pb to a TensorRT plan. Has anyone found a solution?
Thanks

Warning: No conversion function registered for layer: Fill yet.
Converting batch_normalization_88/ones_like as custom op: Fill
Warning: No conversion function registered for layer: Fill yet.
Converting batch_normalization_86/ones_like as custom op: Fill
DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['sequential_1/dense_2/Sigmoid'] as outputs
No. nodes: 981
UFF Output written to data/tmp.uff
UffParser: Validator error: batch_normalization_94/ones_like: Unsupported operation _Fill
Failed to parse UFF


You can try converting the model via TF -> ONNX -> TensorRT instead of UFF.
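For reference, the TF -> ONNX -> TensorRT route typically looks like the command sequence below. The paths and node names are taken from this thread; the exact flags depend on your tf2onnx and TensorRT versions, so treat this as a sketch and check the tool help output on your device:

```shell
# 1. Frozen GraphDef -> ONNX with tf2onnx (pip3 install tf2onnx)
python3 -m tf2onnx.convert \
    --graphdef data/frozen_graphs/inception_v1.pb \
    --inputs input:0 \
    --outputs InceptionV1/Logits/SpatialSqueeze:0 \
    --output data/inception_v1.onnx

# 2. ONNX -> TensorRT engine with trtexec (ships with TensorRT on JetPack)
/usr/src/tensorrt/bin/trtexec \
    --onnx=data/inception_v1.onnx \
    --saveEngine=data/plans/inception_v1.plan
```

The ONNX parser supports FusedBatchNormV3 graphs that the UFF parser rejects, which is why this route sidesteps the error above.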