georgian-io/Multimodal-Toolkit

[Question] How to Save, Load and Inference from the trained model with Multimodal-Toolkit?

Closed this issue · 3 comments

Hi Developers,

Thank you for building this amazing library, it has benefitted me tremendously.

So far, I have trained my model using the Multimodal-Toolkit; the example on Google Colab has been helpful.

However, I’m having trouble effectively storing the model, loading it back (for example, in a different script from the training one), and making inferences with the trained model on my test dataset(s).

Currently, I have tried using the Hugging Face trainer.save_model() to save my model.
Saving the model:

[screenshot: the trainer.save_model() call]

trainer.save_model() saves just three files to the given path, like:
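As a minimal sketch of the save step (the helper name, placeholder path, and the explicit tokenizer save are my additions, not from the original post):

```python
# Sketch: save a trained HuggingFace Trainer model so it can be reloaded
# in a separate script. "./saved_model" is a placeholder path.
def save_trained_model(trainer, save_path="./saved_model"):
    """Save model weights and config to save_path via the Trainer API."""
    # Writes the model weights, config.json, and training_args.bin
    # into save_path -- the three files mentioned above.
    trainer.save_model(save_path)
    # The tokenizer is not necessarily written by save_model(); saving it
    # explicitly makes the directory self-contained for reloading.
    # (Assumes `trainer.tokenizer` was set when the Trainer was created.)
    if getattr(trainer, "tokenizer", None) is not None:
        trainer.tokenizer.save_pretrained(save_path)
    return save_path
```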

[screenshot: the three saved files]

Also, using trainer.save_state() and setting TrainingArguments() parameters like load_best_model_at_end and save_strategy saves checkpoints to the output_dir path.

The following files are saved in the output_dir path:

[screenshot: files saved in output_dir]
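For reference, a checkpointing setup along these lines might look like the following sketch; the output_dir, strategy values, and metric are placeholder assumptions, not taken from the original post:

```python
# Sketch of TrainingArguments settings that enable checkpointing.
# These values are illustrative placeholders.
checkpoint_args = dict(
    output_dir="./output_dir",         # checkpoints land here as checkpoint-<step>/
    save_strategy="epoch",             # write a checkpoint at the end of each epoch
    evaluation_strategy="epoch",       # must match save_strategy when tracking the best model
    load_best_model_at_end=True,       # reload the best checkpoint when training ends
    metric_for_best_model="eval_loss", # metric used to pick the "best" checkpoint
)
# Passed as transformers.TrainingArguments(**checkpoint_args) when
# constructing the Trainer.
```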

Currently, to make inferences within the same training script, I’m using either trainer.evaluate(eval_dataset=test_dataset) or trainer.predict() when I need the predicted output labels.

I want to achieve the same steps in an independent script: load the test dataset(s), load the saved model, and make predictions/inferences on it in much the same way (i.e., using trainer.evaluate() and trainer.predict()).
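The inference half of this can be sketched as follows; `model` and `test_dataset` are assumed to be an already-reloaded model and a test dataset built the same way as in the training script (the helper name is mine):

```python
# Sketch: run predictions on a test set with a bare Trainer recreated
# purely for inference, in a script separate from training.
def predict_on_test(model, test_dataset):
    from transformers import Trainer

    # A Trainer constructed with only a model is enough for
    # evaluate()/predict(); no optimizer or custom TrainingArguments
    # are needed at inference time.
    trainer = Trainer(model=model)
    output = trainer.predict(test_dataset)
    # output.predictions: raw model outputs (logits);
    # output.label_ids: ground-truth labels; output.metrics: eval metrics.
    return output
```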

I would be grateful if you could guide me.

Thanks a lot for your time.

Hi @anirbandey303,

To load the model, repeat the same steps you used to create it, with one small change: when calling model = AutoModelWithTabular.from_pretrained(...), set the first argument, pretrained_model_name_or_path, to the path where you saved your model. Once you've loaded the model, you can recreate the trainer following the same steps from the notebook.
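This recipe can be sketched as a small helper. It assumes the tokenizer was saved alongside the model and that `tabular_config` is rebuilt exactly as in the training script; the helper name and placeholder path are mine:

```python
# Sketch: reload a saved Multimodal-Toolkit model in a separate
# inference script. save_path is wherever trainer.save_model() wrote to.
def load_for_inference(save_path, tabular_config):
    """Reload a saved multimodal model for inference.

    tabular_config must be the same TabularConfig used at training time.
    """
    from transformers import AutoConfig, AutoTokenizer
    from multimodal_transformers.model import AutoModelWithTabular

    tokenizer = AutoTokenizer.from_pretrained(save_path)
    config = AutoConfig.from_pretrained(save_path)
    # Repeat the training-time step of attaching the tabular config.
    config.tabular_config = tabular_config
    # The one change from the training script: point
    # pretrained_model_name_or_path at the saved directory instead of a
    # hub model name.
    model = AutoModelWithTabular.from_pretrained(save_path, config=config)
    return model, tokenizer
```

With the model and tokenizer reloaded, the trainer can be recreated as in the notebook and trainer.evaluate() / trainer.predict() used as before.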

That worked. 🥳 Thank you so much @akashsaravanan-georgian

Happy to help Anir! Closing this issue but feel free to open a new one if you have any other questions.