itsliupeng/marlin
FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups at medium batch sizes of up to 16-32 tokens.
Python · Apache-2.0
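To make the description concrete, the sketch below shows what an FP16xINT4 GEMM computes: 4-bit weights are stored packed two per byte with per-group FP16 scales, dequantized at matmul time, and multiplied against FP16 activations. This is a plain PyTorch reference of the math only, not the Marlin kernel or its API; all helper names (`pack_int4`, `unpack_int4`, `fp16_x_int4_matmul`) are hypothetical and chosen for illustration. The fused CUDA kernel avoids materializing the FP16 weights, which is where the speedup comes from.

```python
# Reference sketch of FP16xINT4 matmul semantics (not the Marlin kernel).
import torch

def pack_int4(w_int4: torch.Tensor) -> torch.Tensor:
    """Pack integers in [0, 15] two nibbles per uint8 byte along the last dim."""
    assert w_int4.shape[-1] % 2 == 0
    lo = w_int4[..., 0::2]
    hi = w_int4[..., 1::2]
    return (lo | (hi << 4)).to(torch.uint8)

def unpack_int4(packed: torch.Tensor) -> torch.Tensor:
    """Inverse of pack_int4: recover the int4 values as int16."""
    lo = (packed & 0x0F).to(torch.int16)
    hi = (packed >> 4).to(torch.int16)
    return torch.stack((lo, hi), dim=-1).flatten(-2)

def fp16_x_int4_matmul(a_fp16, packed_w, scales, group_size=128):
    """Dequantize per-group int4 weights to FP16, then matmul with FP16 activations."""
    in_features = packed_w.shape[0]
    out_features = packed_w.shape[1] * 2
    w_int = unpack_int4(packed_w)                       # (in_features, out_features)
    w_fp16 = (w_int - 8).to(torch.float16)              # symmetric int4 with zero point 8
    w_fp16 = w_fp16.reshape(in_features // group_size, group_size, out_features)
    w_fp16 = (w_fp16 * scales.unsqueeze(1)).reshape(in_features, out_features)
    # Accumulate in FP32, return FP16 (mirrors typical mixed-precision GEMM behavior).
    return (a_fp16.float() @ w_fp16.float()).to(torch.float16)

# Tiny usage example with random data.
in_features, out_features, batch, group_size = 256, 128, 16, 128
w_int4 = torch.randint(0, 16, (in_features, out_features))
scales = torch.rand(in_features // group_size, out_features, dtype=torch.float16) * 0.1
a = torch.randn(batch, in_features, dtype=torch.float16)
c = fp16_x_int4_matmul(a, pack_int4(w_int4), scales, group_size)
print(c.shape)  # torch.Size([16, 128])
```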