A more memory-efficient rewrite of the Hugging Face Transformers implementation of Llama, intended for use with quantized weights.
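The repository's actual code is not shown here, but the general idea behind memory-efficient quantized weights can be sketched in plain Python: store each weight row as int8 values plus a single float scale, and dequantize on the fly during the forward pass instead of keeping a full-precision copy. The function names below (`quantize_rows`, `linear_forward`) are illustrative, not part of this repo or of Transformers.

```python
def quantize_rows(weight):
    """Per-row absmax int8 quantization.

    Returns (int8 rows, per-row float scales). This is a common scheme,
    assumed here for illustration; the repo may use a different one.
    """
    q_rows, scales = [], []
    for row in weight:
        # Map the largest magnitude in the row to 127; guard all-zero rows.
        scale = max(abs(v) for v in row) / 127 or 1.0
        q_rows.append([round(v / scale) for v in row])
        scales.append(scale)
    return q_rows, scales


def linear_forward(q_rows, scales, x):
    """Compute y = W @ x, dequantizing each int8 row just-in-time."""
    return [scale * sum(q * xi for q, xi in zip(row, x))
            for row, scale in zip(q_rows, scales)]
```

Storing int8 values instead of fp16/fp32 roughly halves or quarters weight memory, at the cost of a small quantization error and the per-forward dequantization work.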
Note: this repository is no longer actively maintained.