kan-bayashi/ParallelWaveGAN

Converting PWGAN from a PyTorch model to ONNX to TensorRT fails

Tian14267 opened this issue · 1 comment

Hello, thank you for your great work. I get an error when I use it to convert a model (PyTorch → ONNX → TensorRT).
I get this error when converting ONNX to TRT:

Loading ONNX file from path /data/vocoder_24k_3.onnx...
Beginning ONNX file parsing
[05/19/2022-19:11:54] [TRT] [W] onnx2trt_utils.cpp:365: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Completed parsing of ONNX file
Building an engine from file /data/vocoder_24k_3.onnx; this may take a while...
[05/19/2022-19:11:54] [TRT] [E] 4: [network.cpp::validate::2726] Error Code 4: Internal Error (Network must have at least one output)
Completed creating Engine
Traceback (most recent call last):
  File "onnx2trt_torch.py", line 111, in <module>
    engine = ONNX_build_engine(onnx_file_path,trt_model_out, write_engine)
  File "onnx2trt_torch.py", line 54, in ONNX_build_engine
    f.write(engine.serialize())
AttributeError: 'NoneType' object has no attribute 'serialize'
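The `AttributeError` at the end is a secondary symptom: `build_engine` returned `None` because the network validation failed ("Network must have at least one output"), and the script then called `.serialize()` on that `None`. A small guard makes the build script fail with a useful message instead. This is a sketch; `serialize_engine` is a hypothetical helper name, not part of `onnx2trt_torch.py`:

```python
def serialize_engine(engine, path):
    """Write a built TensorRT engine to disk, failing loudly if the
    build returned None (hypothetical helper, not from onnx2trt_torch.py)."""
    if engine is None:
        # build_engine returns None when validation fails, e.g. when the
        # network has no marked outputs; surface that instead of crashing
        # later with "'NoneType' object has no attribute 'serialize'".
        raise RuntimeError(
            "build_engine returned None; check the ONNX parser errors and "
            "that the network has at least one output before serializing.")
    with open(path, "wb") as f:
        f.write(engine.serialize())
```

In the script above, replacing the bare `f.write(engine.serialize())` with a call to such a helper would turn the opaque `AttributeError` into an actionable error message.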

TensorRT Version: 8.4.0.6
CUDA Version: 11.2

The related issue and the ONNX model are at this link:
https://github.com/onnx/onnx-tensorrt/issues/846

I think the problem is caused by the torch code, but I don't know how to solve it.

Your question is out of the scope of this repository.
Please solve it by yourself.