warricksothr/exllama
A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
Python · MIT License
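As a quick orientation, a minimal generation script in the style of the upstream exllama example scripts might look like the sketch below. The model directory path is a placeholder, and the class and attribute names (ExLlamaConfig, ExLlama, ExLlamaCache, ExLlamaTokenizer, ExLlamaGenerator, generate_simple) are assumed to follow the upstream exllama API; check the repository's own example files for the authoritative usage.

```python
# Minimal sketch of loading a 4-bit quantized Llama model and generating text,
# assuming the upstream exllama module layout (model.py, tokenizer.py, generator.py).
import os, glob

from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

# Placeholder: directory containing config.json, tokenizer.model and *.safetensors weights
model_directory = "/path/to/llama-13b-4bit-128g/"

tokenizer_path = os.path.join(model_directory, "tokenizer.model")
model_config_path = os.path.join(model_directory, "config.json")
model_path = glob.glob(os.path.join(model_directory, "*.safetensors"))[0]

config = ExLlamaConfig(model_config_path)   # read the HF-style config.json
config.model_path = model_path              # point at the quantized weights file

model = ExLlama(config)                     # load the quantized model
tokenizer = ExLlamaTokenizer(tokenizer_path)
cache = ExLlamaCache(model)                 # key/value cache for incremental decoding
generator = ExLlamaGenerator(model, tokenizer, cache)

prompt = "Once upon a time,"
output = generator.generate_simple(prompt, max_new_tokens=200)
print(output)
```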