Issues
Does this support llama2 as well?
#21 opened by YaoJiayi (0 comments)
Producing NaN tensors
#20 opened by Bryan-Lavender (0 comments)
CUDA out of memory
#19 opened by fengyh3 (4 comments)
Getting error on generation in Windows
#12 opened by elephantpanda (3 comments)
65B on multiple GPUs: CUDA out of memory with 4 x RTX A5000 (24 GB each, 96 GB total)
#18 opened by scampion (1 comment)
LLaMA 13B works on a single RTX 4080 16 GB
#17 opened by kcchu (1 comment)
bitsandbytes: NameError: name 'cuda_setup' is not defined. Did you mean: 'CUDASetup'?
#15 opened by kskim-phd (3 comments)
Tracking issue for Mac support
#4 opened by pannous (0 comments)
Can 65B run on 4 x 32 GB GPUs?
#11 opened by zhongtao93 (0 comments)
Is it possible to save the smaller weights so it doesn't have to convert them each time?
#10 opened by spullara (3 comments)
RTX 4090: CUDA out of memory
#7 opened by WuNein (0 comments)