gsuuon/model.nvim
Neovim plugin for interacting with LLMs and building editor-integrated prompts.
Lua · MIT license
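For orientation, below is a minimal configuration sketch. It assumes the plugin's `require('model').setup` entry point and its bundled OpenAI provider module; the `summarize` prompt name and its builder are illustrative, not canonical.

```lua
-- Minimal sketch of configuring model.nvim. Assumes the setup() entry
-- point and the bundled openai provider module; the 'summarize' prompt
-- and its builder are illustrative examples, not part of the plugin.
local openai = require('model.providers.openai')

require('model').setup({
  prompts = {
    -- Invoked with :M summarize (e.g. over a visual selection)
    summarize = {
      provider = openai,
      builder = function(input)
        -- Build the request parameters from the selected text
        return {
          messages = {
            { role = 'user', content = 'Summarize:\n' .. input },
          },
        }
      end,
    },
  },
})
```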
Issues
- Replace to last line (#66, opened by gsuuon, 4 comments)
- Improve README (#28, opened by gsuuon, 7 comments)
- Ollama provider returns no results when ollama is running behind an https proxy on a remote server (#64, opened by FlippingBinary, 3 comments)
- [feature request] Add support for templates. (#63, opened by codetalker7, 1 comment)
- Claude support (#60, opened by pwntester, 2 comments)
- `:Mcancel` does not work for chats (#58, opened by GordianDziwis, 4 comments)
- Use markdown for mchat filetype (#54, opened by GordianDziwis, 1 comment)
- [feature request] - Integration with TGI (#55, opened by CrossNox, 10 comments)
- Integration with github copilot (#36, opened by Andrej-Marsic, 1 comment)
- switch default openai to gpt4 (#43, opened by blankenshipz, 4 comments)
- Failing to connect/start the llama.cpp server (#34, opened by mutlusun, 0 comments)
- Notify on empty response (#35, opened by gsuuon, 7 comments)
- [Feature request] proxy support for curl? (#31, opened by kohane27, 5 comments)
- how to add codeium? (#29, opened by WillEhrendreich, 0 comments)
- Autostart the llama.cpp server example (#22, opened by gsuuon, 6 comments)
- Setting up llamacpp (#27, opened by rhsimplex, 1 comment)
- Buffer mode options (#20, opened by Andrej-Marsic, 4 comments)
- llamacpp provider change to use local server instead of local binary breaks existing prompts using llamacpp (#17, opened by helmling, 3 comments)
- [PaLM]: filters.reason = "OTHER" (#19, opened by orhnk, 14 comments)
- llama.cpp usage (#13, opened by Vesyrak, 3 comments)
- `Question`: Change Default provider? (#10, opened by NormTurtle, 4 comments)
- LLM error ['stop'] (#6, opened by gdnaesver)