smallcloudai/refact
WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
JavaScript · BSD-3-Clause
Issues
Support Qwen2.5-Coder
#457 opened by alexkramer98 - 0
EPIC: Add context-awareness on entire codebase
#189 opened by klink - 10
VSCode plugin broken by "Cannot reach the server:..."
#379 opened by st01cs - 1
GPU filtering not working for Codellama/7b
#423 opened by hazratisulton - 3
How to finetune with codeLlama-7B
#453 opened by deepforest7 - 1
ADD ability to use any model from any source
#451 opened by allanlaal - 1
ADD version number (and release date) to footer
#450 opened by allanlaal - 1
Supporting the code model Codestral-22B-v0.1
#435 opened by 596192804 - 10
VRAM memory leak for Refact.AI 1.6B
#332 opened by tawek - 0
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
#410 opened by koenig-arthur - 0
Add stablelm models
#401 opened by valaises - 2
Maybe hide popup when stats is empty?
#389 opened by valaises - 0
How to add a local model to the mapping in a docker-compose mount
#397 opened by linpan - 2
Llama2 chat model times out
#376 opened by jcntrl - 1
stats problem
#365 opened by valaises - 1
run without database oss
#366 opened by valaises - 1
error running docker on wsl with cuda
#368 opened by josersleal - 2
LoRA metabug for v1.2
#200 opened by olegklimov - 1
Latest lora checkpoints for deepseek-coder/5.7b/mqa-base only generate 1 token to some requests
#307 opened by hazratisulton - 2
LoRA's "catastrophic forgetting" problem
#311 opened by shatealaboxiaowang - 12
Finetune Problem
#235 opened by ChinnYu - 10
Self-hosted v1.4.0 model always times out at /infengine-v1/completions-wait-batch
#300 opened by yourchanges - 0
Support for HTTP proxies
#207 opened by olegklimov - 1
Simple API key for OSS version, so people can expose docker port via reverse proxy
#232 opened by olegklimov - 7
Finetune of deepseek-coder fails
#262 opened by ryancu7 - 2
Finetune improvement for better performance
#246 opened by hazratisulton - 4
More files to process than processes
#275 opened by assinchu - 0
[ui] Finetune progress bar
#202 opened by mitya52 - 0
Fix <th> number of columns in finetune
#204 opened by mitya52 - 0
[CICL] cache flash_attn
#223 opened by reymondzzzz - 0
Host Model for Embeddings for RAG
#239 opened by valaises - 2
docker image fails to start on mac m3
#257 opened by domdorn - 2
Database not starting?
#261 opened by m0ngr31 - 0
Finetune failed with "No train files provided"
#242 opened by olegklimov - 0
GPU Filtering improvement
#244 opened by JegernOUTT - 0
Self Hosted Chat Times Out VSCode
#250 opened by stratus-ss - 4
Add DeepSeek Coder models
#216 opened by klink - 4
Add Code LLaMA
#217 opened by klink - 0
Refactoring of the finetuning script
#219 opened by klink - 0
Could not scan repo with Refact/1.6B selected
#218 opened by worldemar - 6
EPIC: Run Self-hosted version on CPU
#191 opened by klink - 0
Easy bug: query_nvidia_smi doesn't handle missing temp_celsius — ValueError: invalid literal for int() with base 10, "[N/A]"
#203 opened by mitya52 - 1
Always "latest/best" now, instead of the clicked LoRA
#186 opened by mitya52