Aspartame-e951's Stars
kaiokendev/cutoff-len-is-context-len
Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length extends the model's context limit
turboderp/exllama
A more memory-efficient rewrite of the HF Transformers implementation of Llama, for use with quantized weights.
ggerganov/llama.cpp
LLM inference in C/C++