iusztinpaul/hands-on-llms

Using fine-tuned model for inference

dvquy13 opened this issue · 2 comments

Hi @iusztinpaul,

Love the course so far!

I have a question: shouldn't we use our own fine-tuned model for inference instead of Paul's PEFT model here?

id: iusztinpaul/fin-falcon-7b-lora:1.0.5

If yes then how should we publish our model from experiment to Comet Model Registry? Is it done manually via the Register Model button in the Comet experiment console view?

Thanks!

Hello,

Happy to hear that!

Yes, you got that right! I added my own version to help you test things out, but ideally, you should use your own version.

You should pick the fine-tuned model and register it manually, as you suggested in the screenshot.
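If you'd rather script this than click the Register Model button in the UI, Comet's Python SDK can also register a model from an experiment. A minimal sketch, assuming the model was already logged during fine-tuning; the workspace, project, experiment key, and model name are all placeholders, and `APIExperiment.register_model` should be checked against your installed `comet_ml` version:

```python
def model_registry_id(workspace: str, model_name: str, version: str) -> str:
    """Build the 'workspace/model-name:version' id used in the YAML config."""
    return f"{workspace}/{model_name}:{version}"


def register_fine_tuned_model() -> None:
    """Register a model logged in an experiment into the Comet Model Registry.

    Requires COMET_API_KEY in the environment; all names below are placeholders.
    """
    from comet_ml import API  # imported lazily so the helper above has no dependencies

    api = API()
    experiment = api.get_experiment(
        workspace="yourname",
        project_name="your-project",
        experiment="your-experiment-key",
    )
    # Registers the model that was logged (e.g. via experiment.log_model)
    # during the fine-tuning run.
    experiment.register_model("your-model-name", version="1.0.0")
```

After registration, `model_registry_id("yourname", "your-model-name", "1.0.0")` gives you the exact string to drop into the YAML file.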

Just be careful to choose a name and version that make sense, and update them in the YAML file. When using your own fine-tuned model, the id will look like:
yourname/your-model-name:your-version
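Concretely, the change in the YAML file would look something like this, based on the `id:` key quoted above:

```yaml
# Before (Paul's model, provided so you can test things out of the box):
# id: iusztinpaul/fin-falcon-7b-lora:1.0.5

# After (your own registered model):
id: yourname/your-model-name:your-version
```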

Got it working.