TiptopFunk/GPTQ-for-LLaMa-ROCm
4-bit quantization of LLaMA using GPTQ, ported to HIP for use on AMD GPUs.
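For orientation, the sketch below shows what 4-bit weight quantization with per-group scales looks like in plain NumPy. This is simple round-to-nearest, not GPTQ itself (GPTQ additionally minimizes layer output error using second-order information), and the function names and group size are illustrative, not taken from this repository.

```python
import numpy as np

def quantize_4bit(w, group_size=128):
    """Round-to-nearest 4-bit quantization with one scale per group.

    Illustrative only: GPTQ proper corrects each rounding step using
    Hessian information, which this sketch omits.
    """
    w = w.reshape(-1, group_size)
    # Signed int4 range is -8..7; scale so the max magnitude maps near 7.
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero groups
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    # Reconstruct approximate float weights from int4 codes and scales.
    return (q * scale).astype(np.float32)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s).reshape(-1)
print("max abs error:", float(np.abs(w - w_hat).max()))
```

In practice the int4 codes are packed two-per-byte and dequantized inside the matmul kernel (HIP on AMD GPUs in this port), which is where the memory savings come from.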
Python · Apache-2.0