xcmyz/FastSpeech

Convert FastSpeech model to TorchScript for C++ inference

alokprasad opened this issue · 4 comments

Any idea how to convert the FastSpeech model to TorchScript, and what the relevant C++ code would be for loading it and running inference to produce a mel.pt file?
In def synthesis I did:

ts_model = torch.jit.trace(model, (sequence, src_pos))
ts_model.save("traced_fastspeech_model.pt")

It saved successfully, but I don't know how to proceed with the C++ code or what the input tensors should be.
(Note: ONNX conversion of the model failed for me.)
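
For reference, here is a minimal sketch of the tracing step, assuming the same two inputs that the repo's synthesis() builds: a [1, L] int64 tensor of phoneme IDs and a [1, L] int64 tensor of positions 1..L. The length L and the ID range below are placeholders. Keep in mind that torch.jit.trace only records the control-flow path taken for the example input, so data-dependent behavior such as the length regulator's may not generalize to other input lengths.

import torch

# `model` is the FastSpeech network with its checkpoint already loaded,
# as in the repo's synthesis script (assumed to be in scope here).
model.eval()

L = 20  # placeholder phoneme-sequence length
sequence = torch.randint(1, 100, (1, L), dtype=torch.long)        # fake phoneme IDs
src_pos = torch.arange(1, L + 1, dtype=torch.long).unsqueeze(0)   # positions 1..L

with torch.no_grad():
    ts_model = torch.jit.trace(model, (sequence, src_pos))
ts_model.save("traced_fastspeech_model.pt")

# Sanity check: reload the traced module and run it the same way C++ will.
reloaded = torch.jit.load("traced_fastspeech_model.pt")
with torch.no_grad():
    mel = reloaded(sequence, src_pos)

On the C++ side, the same file is loaded with libtorch's torch::jit::load("traced_fastspeech_model.pt"), and inference runs via module.forward({sequence, src_pos}) with two kLong tensors of the shapes above.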

Can you attach the code for your conversion?

@alokprasad I met the same problem. Did you solve it?

@alokprasad When I convert this to ONNX, I get this error:
Inferred elem type differs from existing elem type: (DOUBLE) vs (FLOAT)
How can I solve it?

@alokprasad
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from fastwave.onnx failed:Type Error: Type parameter (T) bound to different types (tensor(double) and tensor(float) in node ().
Hello, when I convert this to ONNX I hit this error. How can I solve it? Thank you.
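
A note on this error: it usually means a float64 tensor (often a constant or buffer created through NumPy, whose default dtype is float64) is mixed with float32 tensors in the exported graph. One common fix is to cast every parameter and buffer to float32 before export and to give the dummy inputs explicit dtypes. A minimal sketch, with placeholder names, a placeholder output filename, and an illustrative opset:

import torch

# `model` is the FastSpeech network with its checkpoint loaded (assumed).
model = model.float()   # cast any float64 parameters/buffers down to float32
model.eval()

L = 20  # placeholder sequence length
sequence = torch.randint(1, 100, (1, L), dtype=torch.long)        # fake phoneme IDs
src_pos = torch.arange(1, L + 1, dtype=torch.long).unsqueeze(0)   # positions 1..L

torch.onnx.export(
    model,
    (sequence, src_pos),
    "fastspeech.onnx",            # output filename is a placeholder
    input_names=["sequence", "src_pos"],
    output_names=["mel"],
    opset_version=11,             # illustrative choice
)

If the error persists, look through the model code for tensors created from NumPy arrays or Python floats and give them an explicit dtype=torch.float32.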