With use_local_llm, the local deployment service still downloads model files from Hugging Face
Command: kagentsys --query="Who is Andy Lau's wife?" --llm_name="kagentlms_qwen_7b_mat" --use_local_llm --local_llm_host="https://127.0.0.1" --local_llm_port=80 --lang="zh"
With --use_local_llm, I get this error:
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like kwaikeg/kagentlms_qwen_7b_mat is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
Will the local deployment service also download model files from Hugging Face?
Nope, but the tokenizer will still be loaded from Hugging Face. Can you set a proxy?
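If direct access to huggingface.co is blocked, one option is to route the download through a proxy. Here is a minimal sketch; the proxy address (http://127.0.0.1:7890) and the trust_remote_code flag are assumptions, not something stated in this thread:

```python
import os

# huggingface_hub and transformers download via `requests`, which honors
# the standard proxy environment variables.
os.environ["HTTP_PROXY"] = "http://127.0.0.1:7890"   # assumed proxy address
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:7890"  # adjust to your setup

from transformers import AutoTokenizer

# The first run fetches and caches the tokenizer; later runs hit the cache.
tokenizer = AutoTokenizer.from_pretrained(
    "kwaikeg/kagentlms_qwen_7b_mat",
    trust_remote_code=True,  # Qwen-based repos ship custom tokenizer code
)
```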
This issue arises because the system implements a prompt truncation strategy, which requires initializing the corresponding tokenizer. If you are using a local model, you can replace model_name with the local model path in KwaiAgents/kwaiagents/agents/kagent.py, eliminating the need to re-download it from Hugging Face.
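For reference, a minimal sketch of what that edit could look like, assuming the tokenizer in kagent.py is built with transformers' AutoTokenizer (the exact variable names in the file and the /path/to/... directory below are hypothetical):

```python
from transformers import AutoTokenizer

# Before: resolves the repo id online and fails without access to huggingface.co.
# tokenizer = AutoTokenizer.from_pretrained("kwaikeg/kagentlms_qwen_7b_mat", trust_remote_code=True)

# After: point at the directory that already holds config.json and the
# tokenizer files ("/path/to/kagentlms_qwen_7b_mat" is a placeholder).
tokenizer = AutoTokenizer.from_pretrained(
    "/path/to/kagentlms_qwen_7b_mat",
    trust_remote_code=True,
)
```

Passing local_files_only=True as well would make any accidental network access fail fast instead of timing out.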