Luodian/Otter

How to perform inference after successfully configuring the environment?

Bin-ze opened this issue · 2 comments

Bin-ze commented

I want to deploy Otter locally and try using it to play mahjong.

I am currently running into the following issues:

  1. The documentation seems unclear on this point: I read it carefully but could not find a description of how to run inference locally.

  2. I tried running:

    python pipeline/serve/deploy/otterhd_endpoint.py

But it requires entering a model ID, and I could not find a description of what this ID should be. It then throws a connection error, which appears to be caused by a network failure.

  3. I read in the README that local deployment requires 16 GB of video memory, but https://github.com/Luodian/Otter/blob/main/pipeline/demo does not work (the link appears to be invalid).
    Then I saw at https://github.com/Luodian/Otter/blob/main/docs/huggingface_compatible.md that deployment requires more memory than that.
  4. I would like to know which Otter model was fine-tuned with reinforcement learning on a mahjong dataset, and what its win rate is; I have not found any description of this so far.

Sorry to bother you; I look forward to your reply.

Hi, you can find the inference scripts here:

https://github.com/Luodian/Otter/tree/main/pipeline/demos/interactive

Inference scripts for both the image model and the video model are there.

And sorry, we don't have an RL-finetuned version for games.
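For reference, the conversation template that the interactive demo scripts build before calling the model looks roughly like the sketch below. This is a minimal illustration, not the definitive implementation: the exact template string is an assumption based on the repo's demo code, so verify it against the version you are actually running.

```python
# Hypothetical sketch of the prompt-formatting step used by Otter's
# interactive demos (pipeline/demos/interactive). The template below is
# an assumption; check the demo script in your checkout for the real one.

def get_formatted_prompt(question: str) -> str:
    """Wrap a user question in the conversation format the model expects.

    The "<image>" token marks where the vision features are injected,
    and "<answer>" cues the model to begin generating its response.
    """
    return f"<image>User: {question} GPT:<answer>"

if __name__ == "__main__":
    # Example: formatting a single-turn visual question.
    print(get_formatted_prompt("What is in this picture?"))
```

Passing a string formatted this way (together with the preprocessed image tensor) to the model's generate call is the pattern the demo scripts follow; the scripts themselves remain the authoritative reference.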

Bin-ze commented

Thank you! Closing this issue.