nlpyang/PreSumm

Use pretrained model: train_from

connie-n opened this issue · 9 comments

Hi, I want to use a trained model, so I tried passing its path with `-train_from ../models`,
but it fails with the KeyError below.

I found the same issue on this GitHub and followed the suggested solution:
I changed `optim = checkpoint['optim'][0]` to `optim = checkpoint['optim']`,
but I still get the same error. How can I fix it?

[screenshot: KeyError traceback]
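For reference, this is what the key mismatch looks like; a minimal simulation with a plain dict (the key names `optims`/`optim` are taken from this thread, not verified against the current PreSumm source):

```python
# Simulated abstractive (BertAbs) checkpoint: per this thread it stores
# its optimizer states in a list under 'optims', not under 'optim'.
abs_checkpoint = {"model": {}, "optims": [{"lr": 0.002}], "opt": {}}

try:
    optim = abs_checkpoint["optim"]      # what the ext-style code looks up
except KeyError as e:
    print(f"KeyError: {e}")              # prints: KeyError: 'optim'

optim = abs_checkpoint["optims"][0]      # the key layout this checkpoint has
print(optim)                             # {'lr': 0.002}
```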

In the case of the BertExt model it runs, but it just trains one file and finishes...
There is no checkpoint file in model_path; only an event file is created.
Could anyone help?

[Screenshot 2022-07-07 3:06 PM]

I have the same problem, but I want to continue training the ext model, so I changed the code from `optim = checkpoint['optim'][0]` to `optim = checkpoint['optim']` and set `-train_from ../models/model_step_4000.pt`. But it still trains from the starting point.

@SabrinaZhuangxx

I resolved the issue.
The trained models have already been trained up to a specific number of steps,
so I had to set `-train_steps` higher than that number. In my case, I used the trained BertExt model, which was trained for 18,400 steps, so I set `train_steps` to 20,000 and then it worked.

In addition, if you want to train the abs model, you should change the code to `optim = checkpoint['optims'][0]`.
If you want to train the ext model, it should be `optim = checkpoint['optim']`.
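The two cases above can be wrapped in one helper; a sketch assuming the key layout described in this thread (an `optims` list in abs checkpoints, a single `optim` in ext checkpoints). `load_optim_state` is a hypothetical name, not a PreSumm function:

```python
def load_optim_state(checkpoint, task):
    """Return the saved optimizer state for a checkpoint dict.

    Assumes (per this thread) that abs checkpoints store a list under
    'optims' and ext checkpoints store a single state under 'optim'.
    """
    if task == "abs":
        return checkpoint["optims"][0]
    elif task == "ext":
        return checkpoint["optim"]
    raise ValueError(f"unknown task: {task}")

# Simulated checkpoints (plain dicts instead of torch.load results):
abs_ckpt = {"optims": [{"step": 18400}]}
ext_ckpt = {"optim": {"step": 18400}}
print(load_optim_state(abs_ckpt, "abs"))  # {'step': 18400}
print(load_optim_state(ext_ckpt, "ext"))  # {'step': 18400}
```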

@SabrinaZhuangxx

I resolved the issue. The trained models have already been trained up to a specific number of steps, so I had to set `-train_steps` higher than that number. In my case, I used the trained BertExt model, which was trained for 18,400 steps, so I set `train_steps` to 20,000 and then it worked.

In addition, if you want to train the abs model, you should change the code to `optim = checkpoint['optims'][0]`. If you want to train the ext model, it should be `optim = checkpoint['optim']`.

Thank you for your reply.
And I still have a question: when I continued to train the ext model, I found that the model still trains from the first dataset, and the order in which the datasets are used during training does not change, i.e., 123, 91, 39, 6...
[QQ screenshot 2022-07-13]

When I set the parameter -train_from, I found that on datasets that have been trained on before, xent drops effectively. But once training reaches a dataset that has not been seen before, xent suddenly jumps back to a very high value. Is this caused by the code? And this code runs once and only covers one epoch of training, right?
[screenshot]

@connie-n
Sorry, I found out it is a question about the seed setting, right?
My bad.

@connie-n

In the case of the BertExt model it runs, but it just trains one file and finishes... There is no checkpoint file in model_path; only an event file is created. Could anyone help?

[Screenshot 2022-07-07 3:06 PM]

Hello, I am facing the same issue. Only event files are created, but no checkpoint files. Can you please tell me how to fix this?

@SabrinaZhuangxx @connie-n

Can you guys please share the trained BertExt model along with its checkpoints?

@keerthilogesh

Hi,
In my case, it was caused by `train_steps` being set lower than the step count of the pretrained model.
Can you try setting `-train_steps` to a very large number, for example 180,000? Maybe it will work.
You can download the trained BertExt model from this repository's README.md.
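One quick sanity check before relaunching: compare the step number encoded in the checkpoint filename against the `-train_steps` value you plan to pass. A small sketch; the `model_step_<N>.pt` pattern matches the paths in this thread, but the helper itself is hypothetical:

```python
import re

def checkpoint_step(path):
    """Extract N from a 'model_step_N.pt' filename, or None if absent."""
    m = re.search(r"model_step_(\d+)\.pt$", path)
    return int(m.group(1)) if m else None

ckpt = "../models/model_step_18400.pt"
step = checkpoint_step(ckpt)
train_steps = 20000

print(step)  # 18400
# -train_steps must exceed the checkpoint's step count, otherwise training
# finishes immediately without saving any new checkpoint.
assert train_steps > step, f"raise -train_steps above {step}"
```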

@SabrinaZhuangxx

I resolved the issue. The trained models have already been trained up to a specific number of steps, so I had to set `-train_steps` higher than that number. In my case, I used the trained BertExt model, which was trained for 18,400 steps, so I set `train_steps` to 20,000 and then it worked.

In addition, if you want to train the abs model, you should change the code to `optim = checkpoint['optims'][0]`. If you want to train the ext model, it should be `optim = checkpoint['optim']`.

Thank you so much! I have been puzzled by it for two days.