bitsandbytes-foundation/bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
Python · MIT
Issues
- Asking for CUDA 124 when only CUDA 126 exists (#1470, 6 comments)
- [BUG] AMD GPU compilation from source (#1468, 0 comments)
- Deleting StableEmbedding still occupies GPU RAM (#1462, 0 comments)
- [ROCm] Allowing for more flexibility in matching ROCm-specific PyTorch wheels to the installed ROCm version (#1461, 1 comment)
- bitsandbytes for macOS M1/M2/M3 chips (#1460, 0 comments)
- Link to code for reproducing table found in Multi-backend support (non-CUDA backends) documentation? (#1456, 2 comments)
- LoRA + DeepSpeed ZeRO-3 finetuning using 8-bit quantization of base weights results in increased loss (#1451, 2 comments)
- CUDA Setup failed despite GPU being available (#1449, 3 comments)
- Lookahead in forward (#1447, 1 comment)
- CUDA Setup failed despite GPU being available (#1446, 1 comment)
- CUDA Setup failed despite GPU being available (#1443)