exllama

A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
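Below is a minimal usage sketch in the spirit of the repo's basic example scripts: load a 4-bit quantized Llama model, build a cache, and generate text. The class and method names (ExLlama, ExLlamaConfig, ExLlamaCache, ExLlamaTokenizer, ExLlamaGenerator, generate_simple) follow the project's examples but may differ between versions, and the model directory path is a placeholder.

```python
import os, glob

# Imports assume the script lives in (or has on its path) the exllama repo root.
from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

# Placeholder path to a directory holding a GPTQ-quantized Llama model.
model_directory = "/path/to/llama-13b-4bit-128g/"

tokenizer_path = os.path.join(model_directory, "tokenizer.model")
model_config_path = os.path.join(model_directory, "config.json")
model_path = glob.glob(os.path.join(model_directory, "*.safetensors"))[0]

config = ExLlamaConfig(model_config_path)   # read the model's config.json
config.model_path = model_path              # point at the quantized weights

model = ExLlama(config)                     # load the weights onto the GPU
tokenizer = ExLlamaTokenizer(tokenizer_path)
cache = ExLlamaCache(model)                 # key/value cache for inference
generator = ExLlamaGenerator(model, tokenizer, cache)

prompt = "Once upon a time,"
output = generator.generate_simple(prompt, max_new_tokens=128)
print(output)
```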

Primary language: Python. License: MIT.
