vits inference bug
unparalleled-ysj opened this issue · 1 comment
@Masao-Someki I encountered an error when running VITS inference with the latest version:
```
Traceback (most recent call last):
  File "export2onnx.py", line 21, in <module>
    output_dict = tts("hello how are you")
  File "/work/espnet_onnx/espnet_onnx/tts/tts_model.py", line 86, in __call__
    output_dict = self.tts_model(text, **options)
  File "/work/espnet_onnx/espnet_onnx/tts/model/tts_models/vits.py", line 58, in __call__
    wav, att_w, dur = self.model.run(output_names, input_dict)
  File "/root/anaconda3/envs/espnet/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 188, in run
    raise ValueError("Model requires {} inputs. Input Feed contains {}".format(num_required_inputs, num_inputs))
ValueError: Model requires 2 inputs. Input Feed contains 1
```
Sorry, I hadn't re-exported the VITS model with the new version; the bug is related to the removal of the `text_length` parameter.
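For context, the error comes from onnxruntime's input validation: `session.run` compares the number of inputs the exported graph declares against the keys in the input feed. A minimal self-contained sketch of that check (the helper name `validate_input_feed` and the input names are illustrative, not actual onnxruntime or espnet_onnx code):

```python
def validate_input_feed(required_inputs, input_feed):
    """Simplified sketch of onnxruntime's input-count check.

    `required_inputs`: input names declared by the exported graph.
    `input_feed`: the dict passed to session.run.
    """
    if len(input_feed) != len(required_inputs):
        raise ValueError(
            "Model requires {} inputs. Input Feed contains {}".format(
                len(required_inputs), len(input_feed)
            )
        )


# An older VITS export declared two graph inputs (e.g. text and its
# length), while the newer espnet_onnx code feeds only the text, so the
# count check fails exactly as in the traceback above.
old_export_inputs = ["text", "text_length"]  # hypothetical old signature
new_feed = {"text": [0, 1, 2]}
try:
    validate_input_feed(old_export_inputs, new_feed)
except ValueError as e:
    print(e)  # Model requires 2 inputs. Input Feed contains 1
```

Since the mismatch is baked into the exported graph's input signature, re-exporting the model with the current version (so the ONNX graph no longer declares `text_length`) resolves it.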