[BUG] Draft model section not being read in config.yml
atisharma opened this issue · 2 comments
OS
Linux
GPU Library
CUDA 12.x
Python version
3.11
Describe the bug
Cannot load a draft model for Mistral-Large. It seems like the draft model directory is not being recognised. This is different from #177. There is also an ambiguity in the documentation.
Reproduction steps
My config.yml (comments removed):
```yaml
network:
  host: 0.0.0.0
  port: 5001
  disable_auth: false
  api_servers: ["OAI"]

model:
  model_dir: /srv/tabby-api/models
  model_name: Mistral-Large-Instruct-2407-4.0bpw-h6-exl2
  max_seq_len: 65536
  gpu_split_auto: false
  gpu_split: [0, 40, 40]
  fasttensors: true
  tensor_parallel: true

draft_model:
  draft_model_dir: /srv/tabby-api/models
  draft_model_name: Mistral-7B-Instruct-v0.3-exl2-4.25
```
This follows the config_sample.yml format, where draft_model is its own top-level section. I've also tried it as per the docs, which say it is "a sub-block of models".
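For reference, the nested variant per the docs would look roughly like this (the exact placement is my interpretation of the docs, not a confirmed format):

```yaml
model:
  model_dir: /srv/tabby-api/models
  model_name: Mistral-Large-Instruct-2407-4.0bpw-h6-exl2
  # ...other model settings as above...
  draft_model:
    draft_model_dir: /srv/tabby-api/models
    draft_model_name: Mistral-7B-Instruct-v0.3-exl2-4.25
```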
Looking at the code, this was all changed in tabbyAPI/common/tabby_config.py (line 73 in fb903ec), which is about the time this broke. I also tried a draft subsection.
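That attempt looked roughly like this (reconstructed from memory; the exact key name may have differed):

```yaml
model:
  # ...other model settings as above...
  draft:
    draft_model_dir: /srv/tabby-api/models
    draft_model_name: Mistral-7B-Instruct-v0.3-exl2-4.25
```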
Expected behavior
I'd expect it to load the draft model (as it used to).
Logs
```
INFO: ExllamaV2 version: 0.2.2
INFO: Your API key is: XXXXX
INFO: Your admin key is: XXXXX
INFO:
INFO: If these keys get compromised, make sure to delete api_tokens.yml and restart the server. Have fun!
INFO: Generation logging is disabled
WARNING: Draft model is disabled because a model name wasn't provided. Please check your config.yml!
WARNING: The given cache_size (65536) is less than 2 * max_seq_len and may be too small for requests using CFG.
WARNING: Ignore this warning if you do not plan on using CFG.
INFO: Attempting to load a prompt template if present.
INFO: Using template "from_tokenizer_config" for chat completions.
INFO: Loading model: /srv/models/Panchovix/Mistral-Large-Instruct-2407-4.0bpw-h6-exl2
INFO: Loading with tensor parallel
Loading model modules ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 179/179 0:00:00
INFO: Model successfully loaded.
INFO: Developer documentation: http://0.0.0.0:5001/redoc
INFO: Starting OAI API
INFO: Completions: http://0.0.0.0:5001/v1/completions
INFO: Chat completions: http://0.0.0.0:5001/v1/chat/completions
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:5001 (Press CTRL+C to quit)
```
Additional context
No response
Acknowledgements
- I have looked for similar issues before submitting this one.
- I have read the disclaimer, and this issue is related to a code bug. If I have a question, I will use the Discord server.
- I understand that the developers have lives and my issue will be answered when possible.
- I understand the developers of this program are human, and I will ask my questions politely.
The config snippet here looks correct. I tested something similar and the draft model is recognized and loaded.
The docs are conflicting because they haven't been updated for the new config changes due to time constraints.
There was a commit for the config migration that directly relates to this issue: 754fb15
Please check that you're on the latest commit, and if there are still problems, I'd advise asking in Discord, since this is probably not a code bug.