mravanelli/pytorch-kaldi

Before switching to SpeechBrain, how to use a trained model in PyTorch

sun-peach opened this issue · 0 comments

Hi, I know we have the new framework SpeechBrain now (which is fantastic), but I still have an old model trained with pytorch-kaldi, and I would like to use it in plain PyTorch. However, when I verify the results, the outputs from my PyTorch code and from pytorch-kaldi are very different.

My PyTorch wrapper around the pytorch-kaldi models is as follows:

import configparser

import torch
from torch.nn import Module

# GRU and MLP are pytorch-kaldi's own classes from neural_networks.py
from neural_networks import GRU, MLP


class BasePhModel(Module):
    def __init__(self, options):
        super(BasePhModel, self).__init__()
        cfg_file = options["cfg"]        # the same cfg file I used for training
        config = configparser.ConfigParser()
        config.read(cfg_file)
        config["architecture1"]["to_do"] = "forward"
        config["architecture1"]["use_cuda"] = "False"
        config["architecture2"]["to_do"] = "forward"
        config["architecture2"]["use_cuda"] = "False"
        model1_file = options["architecture1_file"]
        model2_file = options["architecture2_file"]
        self.model1 = GRU(config["architecture1"], 16)    # I use GRU + MLP in my .conf file
        self.model2 = MLP(config["architecture2"], 1024)
        # map_location so checkpoints saved on GPU also load on a CPU-only machine
        self.model1.load_state_dict(torch.load(model1_file, map_location="cpu")["model_par"])
        self.model2.load_state_dict(torch.load(model2_file, map_location="cpu")["model_par"])
        # switch off dropout for inference
        self.model1.eval()
        self.model2.eval()

    def forward(self, x):
        intermediate = self.model1(x)
        y = self.model2(intermediate)
        return y
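
For completeness, this is how I call the model on one utterance. A minimal sketch: I am assuming here that pytorch-kaldi's recurrent nets expect input shaped (time_steps, batch, features), and feats_utt is a hypothetical NumPy matrix holding the features of a single utterance, prepared exactly as during training (same normalization, context window, etc.):

import numpy as np
import torch

# feats_utt: hypothetical (time_steps, feat_dim) matrix for one utterance;
# the .npy path is just a placeholder for however I actually load features
feats_utt = np.load("utt0001_feats.npy")

model = BasePhModel(options)  # options dict as in the class above
with torch.no_grad():
    # insert a batch dimension of 1 in the middle: (time_steps, 1, features)
    x = torch.from_numpy(feats_utt).float().unsqueeze(1)
    out = model(x).squeeze(1)  # (time_steps, num_outputs)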

When I feed the same input features to this model, the output is very different from the one stored in "forward_*_decode.ark". Is there anything wrong with my code?
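
One thing I am not sure my code accounts for: if normalize_posteriors is enabled in the [forward] section of the cfg, then (as far as I can tell) pytorch-kaldi subtracts the log-priors, computed from the counts file given in normalize_with_counts_from, before writing the decoding ark. This is the sketch I use for the comparison, with the kaldi_io package that pytorch-kaldi itself depends on; the counts path, the ark name, and the feats dict are placeholders for my actual setup:

import numpy as np
import torch
import kaldi_io

# hypothetical path: the file referenced by normalize_with_counts_from in the cfg
with open("label_counts.txt") as f:
    counts = np.array(f.read().strip().strip("[]").split(), dtype=float)
log_priors = np.log(counts / np.sum(counts))

model = BasePhModel(options)  # options dict as above
for utt_id, ark_mat in kaldi_io.read_mat_ark("forward_decode.ark"):  # placeholder name
    with torch.no_grad():
        x = torch.from_numpy(feats[utt_id]).float().unsqueeze(1)  # feats: my feature dict
        out = model(x).squeeze(1).numpy()
    out = out - log_priors  # mimic pytorch-kaldi's prior normalization
    print(utt_id, np.abs(out - ark_mat).max())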

Thank you very much!