TiptopFunk/GPTQ-for-LLaMa-ROCm
4-bit quantization of LLaMA weights using GPTQ, ported to HIP to run on AMD GPUs.
Python · Apache-2.0
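GPTQ itself quantizes weights column-by-column using second-order (Hessian) information to minimize layer output error; the sketch below is not GPTQ, but a minimal round-to-nearest illustration of the 4-bit storage format such repositories target: each group of weights is mapped to integers in 0..15 plus a per-group scale and zero point. Function names and the group size of 128 are illustrative assumptions, not this repository's API.

```python
import numpy as np

def quantize_4bit(w, group_size=128):
    """Per-group asymmetric 4-bit round-to-nearest quantization (illustrative, not GPTQ)."""
    w = w.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / 15.0  # 4 bits -> 16 levels, indices 0..15
    q = np.clip(np.round((w - wmin) / scale), 0, 15).astype(np.uint8)
    return q, scale, wmin

def dequantize_4bit(q, scale, wmin):
    """Recover approximate float weights from 4-bit indices, scales, and zero points."""
    return q.astype(np.float32) * scale + wmin

# Round-trip a random weight vector and check the reconstruction error
# stays within half a quantization step per group.
rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale, wmin = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale, wmin).reshape(-1)
err = np.abs(w - w_hat).max()
print(err)
```

In real GPTQ kernels the 4-bit indices are additionally packed eight-to-a-`uint32` and unpacked inside the HIP/CUDA matmul kernel, which is the part this port translates to AMD hardware.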