TabbyML/vim-tabby

Output sometimes weird with Ollama backend

Closed this issue · 2 comments

When using deepseek-coder:1.3b via Ollama, the completion sometimes includes "<|end▁of▁sentence|><|begin▁of▁sentence|>".

All in all, using Ollama as the backend changes the behaviour compared to running Tabby with --model DeepseekCoder-1.3B.
Some of this might be expected since some settings differ, and it might be that I messed something up, but I just followed the https://tabby.tabbyml.com/docs/references/models-http-api/ollama/ guide to set it up.

It would be really nice if someone could help me get things working properly, since I need my VRAM and don't want to start/stop Tabby all the time.

This is my config.toml:

[model.completion.http]
kind = "ollama/completion"
model_name = "deepseek-coder:1.3b"
api_endpoint = "http://localhost:11434"
prompt_template = "<|fim▁begin|>{prefix}<|fim▁hole|>{suffix}<|fim▁end|>"

[model.embedding.http]
kind = "ollama/embedding"
model_name = "nomic-embed-text"
api_endpoint = "http://localhost:11434"
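For what it's worth, here is roughly how I understand the completion request that ends up going to Ollama's /api/generate endpoint, using the same prompt template as in my config. The prefix/suffix strings and the "stop" option are just my own sketch of a possible workaround for the stray special tokens, not something taken from the guide:

import requests

# Sketch of a fill-in-the-middle request against Ollama's /api/generate.
# The prefix/suffix values are made up; the "stop" option is an assumed
# workaround to keep the special tokens out of the returned completion.
prefix = "def add(a, b):\n    return "
suffix = "\n"
prompt = f"<|fim▁begin|>{prefix}<|fim▁hole|>{suffix}<|fim▁end|>"

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-coder:1.3b",
        "prompt": prompt,
        "stream": False,
        "options": {"stop": ["<|end▁of▁sentence|>", "<|begin▁of▁sentence|>"]},
    },
)
print(resp.json()["response"])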

Hi, @ruffi123456789.
This repo is for releasing vim-tabby. Please open this issue in the main Tabby repo: https://github.com/TabbyML/tabby.

Sorry for keeping this open for so long! I accidentally posted it here.