gsuuon/model.nvim

llama.cpp without neovim nightly build? (0.10)

slenderq opened this issue · 4 comments

Is there any way to use Neovim 0.9+ for the llama.cpp feature?

Neovim 0.10 does not seem to have been released yet (it's currently nightly-only). https://github.com/neovim/neovim/releases

As cool as it might be to get a nightly Neovim working, it's not worth disrupting my whole setup for a single plugin.

I could just wait for the Neovim 0.10 release, but seeing how the rest of the plugin already supports 0.8, this is a surprise roadblock.

Is vim.system critical for getting llama.cpp to work?

PS: this is a really cool plugin. :) I'd love to take a stab at it myself if I actually knew any Lua.

As a stopgap, I'm thinking I could host an API via https://github.com/abetlen/llama-cpp-python and have the provider talk to its completion endpoint. I wanted to get some thoughts before jumping into a custom solution, though.

gsuuon commented

Hi! Yeah, it's totally possible: vim.system doesn't provide anything that can't be done without it. I originally implemented curl fetching with uv; vim.system just provides most of what I hand-wrote out of the box, and is probably a bit more robust. You're right, though, that requiring nightly for one feature of one plugin is a lot, so I'll look at factoring out vim.system. I'd also welcome a PR - you could even try using this plugin to get an LLM to build it :)

For now, I think the easiest way to get this working is to use the OpenAI provider with an OpenAI-compatible server, overriding the url option per-prompt.
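As a rough sketch of that approach, something like the following could point a prompt at a local llama-cpp-python server. Note this is a hypothetical illustration: the prompt name, the server address, and the exact option/builder shapes are assumptions, so check them against the plugin's actual docs.

```lua
-- Hypothetical sketch: route a model.nvim prompt through the OpenAI
-- provider to a local OpenAI-compatible server (e.g. one started with
-- `python -m llama_cpp.server --model ./model.gguf`).
local openai = require('model.providers.openai')

require('model').setup({
  prompts = {
    -- 'llamacpp-local' is an arbitrary prompt name for this example
    ['llamacpp-local'] = {
      provider = openai,
      options = {
        url = 'http://localhost:8000/v1/', -- assumed local server address
      },
      builder = function(input)
        -- build a chat-completions style request body from the input text
        return {
          messages = { { role = 'user', content = input } },
        }
      end,
    },
  },
})
```

The idea is just that the provider only cares about speaking the OpenAI wire format, so any compatible server works as the backend.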

And thanks for checking out the plugin!

gsuuon commented

@slenderq I've removed the vim.system requirement. Just a note: for practical use I'd still go with the llama.cpp REST API, since spin-up time for the llama.cpp CLI can be really long. The CLI is still useful for playing around with options the REST API doesn't expose, though.