tomaarsen/attention_sinks
Extend existing LLMs way beyond the original training length with constant memory usage, without retraining
Python · Apache-2.0 license
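A minimal usage sketch, under the assumption that `attention_sinks` exposes drop-in replacements for the Hugging Face `transformers` auto classes and accepts `attention_sink_size` / `attention_sink_window_size` keyword arguments (those names and the example values are assumptions here, not confirmed by this page):

```python
# Sketch: load a decoder-only LLM with attention sinks enabled so the KV cache
# stays bounded while generating past the original training length.
import torch
from transformers import AutoTokenizer
from attention_sinks import AutoModelForCausalLM  # assumed drop-in replacement

model_name = "mistralai/Mistral-7B-v0.1"  # any decoder-only model on the Hub
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
    # attention_sinks-specific arguments (assumed names):
    attention_sink_size=4,            # initial "sink" tokens that are never evicted
    attention_sink_window_size=1020,  # sliding window of most recent tokens
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Memory stays constant: the cache holds at most
# attention_sink_size + attention_sink_window_size tokens.
inputs = tokenizer("Attention sinks let models generate fluently for very long sequences.",
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```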