georgian-io/Multimodal-Toolkit

Help with inference

Closed this issue · 2 comments

First, I would like to say thank you for this amazing library!

I managed to train a model for my purposes, but I stumbled upon issues with using the trained model for further inference.
I'm fairly new to Hugging Face's Transformers framework and I could not find any helpful resources on how to use a customized model (such as these multimodal models) for inference within it. I tried to bisect the script in main.py to extract only the parts of the code that are needed for inference, but that only got me so far.

I also saw that others are having issues with using the output models from this library for further prediction, so I think it would benefit the whole community of this library if some guidelines were provided on this matter. If that takes more than your current capacity can afford, I would be more than grateful if you could give me some pointers on where to start so I can proceed with my work.

Thank you!

Hi @levente-murgas, apologies for the delay. We'll look into adding some information on this in our next release.

In the meantime, I can try to help you get set up with this. Could you walk me through the steps you've taken in terms of inference so far?
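For anyone else landing on this issue: the usual PyTorch inference pattern applies regardless of how the model was trained. Below is a minimal sketch of that pattern using a dummy stand-in module so it runs on its own — you would replace `DummyClassifier` with your trained multimodal model (loaded via the toolkit's `from_pretrained`-style loading; check the repo's README/notebook for the exact class and input names in your version, since the feature tensors here are placeholders, not the toolkit's actual signature).

```python
import torch
import torch.nn.functional as F

# Stand-in for the trained model: any torch.nn.Module whose forward
# returns logits. Swap this out for your own trained multimodal model.
class DummyClassifier(torch.nn.Module):
    def __init__(self, in_dim=8, num_labels=2):
        super().__init__()
        self.head = torch.nn.Linear(in_dim, num_labels)

    def forward(self, features):
        return self.head(features)

model = DummyClassifier()
model.eval()  # switch dropout/batchnorm to inference behavior

features = torch.randn(4, 8)  # batch of 4 examples (placeholder inputs)
with torch.no_grad():         # no gradient tracking needed at inference
    logits = model(features)
    probs = F.softmax(logits, dim=-1)   # class probabilities
    preds = probs.argmax(dim=-1)        # predicted class per example

print(preds.shape)  # torch.Size([4])
```

The key points are `model.eval()` and `torch.no_grad()`; everything else is just preparing the same input tensors the training script fed into the model's forward pass.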

Closing due to lack of activity. Information on inference will be added to the example notebook in the next release (~1 week).