Issues
Install
#77 opened by Bobbylargykisjo - 1
./play-rocm.sh GPTQ error on Fedora 39
#76 opened by EtereosDawn - 0
Attempting to pass model params to ExLlama on startup causes an AttributeError
#59 opened by InconsolableCellist - 0
[Regression] Can't participate in the horde with the `exllama` branch; stopping sharing breaks processing
#73 opened by InconsolableCellist - 0
Support for MythoMax-L2-13B-GPTQ
#72 opened by yukkun - 0
How to load a model across multiple graphics cards
#71 opened by qweronly - 2
Exllama in KoboldAI emits a spurious space at the beginning of generations that end with a stop token.
#61 opened by pi6am - 0
Slow speed for some models.
#33 opened by BadisG - 0
"expected scalar type BFloat16 but found Half"
#58 opened by j2l - 1
I keep getting a merge conflict when trying to git pull from the newly updated 4bit-plugin dev branch
#57 opened by 0xYc0d0ne - 1
Can't load 4bit models
#55 opened by scavru - 4
Can't load 4bit models on ROCm
#52 opened by Infection321 - 0
WinError 127 on nvfuser_codegen.dll
#54 opened by racinmat - 1
Request for T5 gptq model support.
#53 opened by sigmareaver - 0
1 token generation in story mode
#49 opened by Hotohori - 1
I cannot load any AI models and keep getting this error no matter what I do; it started after running "git pull" on this repository
#50 opened by 0xYc0d0ne - 4
Not sure what's wrong, but it automatically deletes a lot of output at the end of each generation.
#46 opened by anyezhixie - 1
anaconda3/lib/python3.9/runpy.py:127: RuntimeWarning: 'gptq.bigcode' found in sys.modules after import of package 'gptq', but prior to execution of 'gptq.bigcode'; this may result in unpredictable behaviour
#48 opened by sigmareaver - 0
Interface not loading... WSL/Windows
#45 opened by bbecausereasonss - 4
ModuleNotFoundError when starting "play.bat"
#44 opened by anyezhixie - 1
How can I uninstall?
#43 opened by gandolfi974 - 1
install_requirements error: libmamba
#42 opened by TFlame82 - 20
Cannot find the path specified & No module named 'hf_bleeding_edge' when trying to start.
#41 opened by TheFairyMan - 3
Failed to load 4bit-128g WizardLM 7B
#36 opened by lee-b - 5
Loading a model via the command line (--model) does not work on the 0cc4m branch
#27 opened by RandomBanana122132 - 6
AMD install out of date?
#29 opened by jthree2001 - 2
Error using previously good model.
#30 opened by HeroMines - 2
What is the best way to update?
#24 opened by silvestron - 1
ImportError when running "play.sh"
#22 opened by brendan-donohoe - 2
No 4-bit toggle
#23 opened by Magnaderra - 13
Can't Generate With 4bit Quantized Model
#19 opened by chigkim - 1
I got another error
#14 opened by akdcelcopr77 - 2
Error on start
#21 opened by conradcn - 2
pt not found
#16 opened by olamedia - 3
Can't Find 4Bit Model
#17 opened by chigkim - 1
Flask Error
#15 opened by xSparksx - 2
ERROR: quant_cuda-0.0.0-cp38-cp38-win_amd64.whl is not a supported wheel on this platform.
#13 opened by akdcelcopr77 - 1