TiptopFunk/GPTQ-for-LLaMa-ROCm
4-bit quantization of LLaMA using GPTQ, ported to HIP for use on AMD GPUs.
Python · Apache-2.0
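To give a sense of what 4-bit weight quantization means in practice, here is a minimal sketch of round-to-nearest group-wise quantization to the int4 range. This is an illustration only, not this repository's implementation: GPTQ itself minimizes layer output error with second-order (Hessian-based) corrections rather than plain rounding, and the group size and scaling scheme here are assumptions.

```python
import numpy as np

def quantize_4bit(w, group_size=128):
    # Round-to-nearest 4-bit quantization per group of weights.
    # Illustrative only; GPTQ uses error-corrected quantization, not RTN.
    w = w.reshape(-1, group_size)
    # Symmetric scale so the largest magnitude in each group maps to +/-7.
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # int4 range [-8, 7]
    return q, scale

def dequantize_4bit(q, scale):
    # Recover approximate float weights from int4 values and per-group scales.
    return q.astype(np.float32) * scale

np.random.seed(0)
w = np.random.randn(2, 128).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
# Maximum reconstruction error is bounded by half a quantization step per group.
err = np.abs(w.reshape(-1, 128) - w_hat).max()
```

Each 4-bit value occupies half a byte, so a 7B-parameter model's weights shrink from ~13 GB in fp16 to roughly 3.5 GB, which is what makes single-GPU inference practical.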