/marlin

FP16×INT4 LLM inference kernel that can achieve near-ideal ~4× speedups up to medium batch sizes of 16–32 tokens.
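To illustrate what "FP16×INT4" means, here is a minimal NumPy sketch of symmetric per-group 4-bit weight quantization followed by a dequantize-and-matmul. This is a conceptual illustration only, not Marlin's API: the group size, layout, and all variable names below are assumptions, and Marlin fuses the dequantization into the GPU matmul rather than materializing FP16 weights.

```python
import numpy as np

rng = np.random.default_rng(0)
in_features, out_features, group_size = 128, 64, 32  # illustrative sizes

# FP16 activations and weights
x = rng.standard_normal((4, in_features)).astype(np.float16)
w = rng.standard_normal((in_features, out_features)).astype(np.float16)

# Symmetric per-group INT4 quantization: integer values in [-8, 7],
# one FP scale per (group of 32 input rows, output column)
w_groups = w.astype(np.float32).reshape(-1, group_size, out_features)
scales = np.abs(w_groups).max(axis=1, keepdims=True) / 7.0
q = np.clip(np.round(w_groups / scales), -8, 7).astype(np.int8)

# Dequantize back to FP16 and run the matmul
# (Marlin performs this fused on-GPU instead of materializing w_deq)
w_deq = (q * scales).reshape(in_features, out_features).astype(np.float16)
y_ref = x.astype(np.float32) @ w.astype(np.float32)
y_q = x.astype(np.float32) @ w_deq.astype(np.float32)

rel_err = np.abs(y_q - y_ref).mean() / np.abs(y_ref).mean()
```

Storing weights as INT4 plus per-group scales cuts weight memory traffic roughly 4× versus FP16, which is where the speedup at small-to-medium batch sizes comes from: the matmul is memory-bound there, so it scales with bytes read.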

Primary language: Python · License: Apache-2.0
