Issues
- ollama completion does not work at all (#15, opened by OneOfOne, 2 comments)
- Huggingface Customization (#33, opened by Freaksed, 3 comments)
- Feature Request: Support for Claude (#28, opened by Ernesto905, 0 comments)
- cmp config is overridden post installing (#27, opened by TheNaman047, 3 comments)
- Feature: Support more Models out of the box (#26, opened by milafrerichs, 3 comments)
- [Feature Request] Settings for own LLM server (#23, opened by Alpensin, 1 comment)
- Working cmp-ai configuration for NVChad? (#20, opened by awonglk, 1 comment)
- Support for AWS codewhisperer (#12, opened by mateimicu, 1 comment)
- feature request: Tabbyml support (#21, opened by Kamilcuk, 1 comment)
- HF logic issue (#18, opened by devvit, 0 comments)
- How to connect with a remote ollama server? (#10, opened by captainko, 2 comments)
- Add delay when firing auto-complete (#9, opened by JoseConseco, 0 comments)
- cannot change ollama model (#8, opened by JoseConseco, 4 comments)
- Llama support (#5, opened by JoseConseco, 1 comment)
- OpenAI model not working (#6, opened by henryoliver, 3 comments)