microsoft/JARVIS

No available models, inference_mode: huggingface

canokcan opened this issue · 1 comment

On a fresh Ubuntu 22.04 installation, I took the following steps:
I cloned the repository with git clone https://github.com/microsoft/JARVIS , set openai.key and huggingface.token in server/configs/config.default.yaml (and likewise for the lite config), and installed Python 3.8 via Miniconda.

Afterwards I followed the "For Setup" part of the README.md and ran the following block without any error:

  • cd server
  • conda create -n jarvis python=3.8
  • conda activate jarvis
  • conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
  • pip install -r requirements.txt

I skipped the model-download and server-run steps that come after this block, since my aim was to run the Minimum (Lite) version.
Following that, I ran the python awesome_chat.py --config configs/config.lite.yaml command.
Below is the CLI-mode output I got:

(jarvis) pc@pcpc:~/JARVIS/server$ python awesome_chat.py --config configs/config.lite.yaml
Welcome to Jarvis! A collaborative system that consists of an LLM as the controller and numerous expert models as collaborative executors. Jarvis can plan tasks, schedule Hugging Face models, generate friendly responses based on your requests, and help you with many things. Please enter your request (exit to exit).
[ User ]: Please answer all the named entities in the sentence: Iron Man is a superhero appearing in American comic books published by Marvel Comics. The character was co-created by writer and editor Stan Lee, developed by scripter Larry Lieber, and designed by artists Don Heck and Jack Kirby.
[ Jarvis ]: I understand your request. After carefully considering the inference results, I can answer your request. The named entities in the sentence are Iron Man, Stan Lee, Larry Lieber, Don Heck, and Jack Kirby.
My workflow for your request is as follows: I first used a token-classification model to identify the named entities in the sentence. However, there were no available models on this task, so I proceeded to use a question-answering model to answer your request. Again, there were no available models on this task, so I was unable to answer your request.
I apologize for not being able to answer your request. If you have any other questions, please let me know

I also got similar results when I used this mode for text-to-image and object detection. ChatGPT selects the correct models for the task, just like in the example above, but then something seems to go wrong on the Hugging Face side and I cannot get it to work.
What could the problem be here? Am I missing something in the setup?

Hi @canokcan. In huggingface mode, Jarvis just calls Hugging Face's hosted inference endpoints. We check the load status of a particular model via a status URL such as https://api-inference.huggingface.co/status/dslim/bert-base-NER. If the remote endpoint is not available, the 'No available models' message is displayed.
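The check described above can be sketched like this. This is an assumption about the shape of the status response (a JSON object with a boolean "loaded" field), not Jarvis's exact code, and check_model is a hypothetical helper name:

```python
import json
import urllib.request

# Hosted Inference API status endpoint, as in the URL above.
STATUS_URL = "https://api-inference.huggingface.co/status/{model_id}"

def model_is_loaded(status_payload: dict) -> bool:
    # Assumed response shape: a JSON object with a boolean "loaded"
    # field; anything else counts as unavailable.
    return bool(status_payload.get("loaded", False))

def check_model(model_id: str, token: str) -> bool:
    req = urllib.request.Request(
        STATUS_URL.format(model_id=model_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return model_is_loaded(json.load(resp))
    except Exception:
        # Network errors, bad tokens, or unknown models all look like
        # "no available models" from Jarvis's point of view.
        return False
```

For example, check_model("dslim/bert-base-NER", your_hf_token) returning False would explain the output you saw, even though the task planning itself worked.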

One solution is to deploy a local inference endpoint, i.e., run with inference_mode=hybrid.
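Assuming your config follows the shape of server/configs/config.default.yaml, the change is roughly:

```yaml
# Sketch: fall back to locally deployed models when the hosted
# Hugging Face endpoint is unavailable.
inference_mode: hybrid
```

Note that hybrid mode relies on locally deployed models, so the model-download and local-server steps that the Lite setup skips would need to be completed first.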