Issues
- Documentation on how to use? (#14, opened by sherodtaylor, 0 comments)
- Open-WebUI compatibility (#53, opened by CalvesGEH, 0 comments)
- base model issue (#52, opened by ANIBHAVc0de, 1 comment)
- Adding more models in enum package.json (#47, opened by GermanAizek, 5 comments)
- llama-coder on JetBrains IDE (#28, opened by Nonolanlan1007, 0 comments)
- Larger models just seem to return metadata (#49, opened by oc013, 2 comments)
- Can't use custom model (#44, opened by HelmholtzW, 0 comments)
- Extension Literally Does Nothing (#42, opened by PaulSinghDev, 0 comments)
- As a user I would like to be able to disable autocomplete when running on battery (#40, opened by bkyle, 0 comments)
- Astro files not getting completions (#32, opened by S1monlol, 3 comments)
- Does it work on Windows (#31, opened by PierrunoYT, 20 comments)
- Unsupported document: vscode-vfs (#25, opened by AurelioOsorio, 2 comments)
- More flexibility with remote hosts (#27, opened by corinfinite, 2 comments)
- Need clarification: Ollama and codellama-70b running. Will Llama Coder use this? (#30, opened by ewebgh33, 0 comments)
- Unable to switch models (#38, opened by carlca, 2 comments)
- Ignoring download reset (#16, opened by oderwat, 1 comment)
- Add setting to reduce autocomplete suggestions (#19, opened by quikssb, 1 comment)
- add ipynb support (jupyter notebook) (#20, opened by BornSaint, 5 comments)
- Error during inference: fetch failed (#29, opened by cadeff01, 32 comments)
- How to run it in VS Code? (#3, opened by ppogorze, 2 comments)
- Unknown model undefined (#23, opened by pywacket1, 0 comments)
- Ollama support and decouple codellama (#22, opened by shouryan01, 2 comments)
- Move model to external harddrive and symlink it (#18, opened by rowild, 1 comment)
- Add support for Deepseek Coder - Instruct Models (#10, opened by etlacker, 5 comments)
- SUGGESTION: allow using arbitrary model (#11, opened by Gusarich, 5 comments)
- Keyboard Shortcut to Pause autocomplete? (#13, opened by peanutyost, 0 comments)
- Unable to install extension 'ex3ndr.llama-coder' as it is not compatible with VSCodium '1.83.1'. (#12, opened by gerroon, 2 comments)
- publish to OpenVSX registry (#7, opened by khimaros, 7 comments)
- Please add Deepseek-coder models (#2, opened by ragaman555, 0 comments)