fp4
There are 3 repositories under the fp4 topic.
intel/neural-compressor
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime (a 4-bit quantization sketch follows this listing)
intel/neural-speed
An innovative library for efficient LLM inference via low-bit quantization
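As a rough illustration of the 4-bit (FP4/NF4-style) weight-only quantization the descriptions above refer to, the sketch below block-quantizes a weight tensor against a 16-entry codebook. This is not the neural-compressor or neural-speed API; the function names, the uniform CODEBOOK, and the BLOCK size are illustrative assumptions (real FP4/NF4 formats use non-uniform level spacings).

```python
# Illustrative sketch only: block-wise 4-bit weight quantization with a
# 16-entry codebook, in the spirit of FP4/NF4 weight-only quantization.
# Not the neural-compressor / neural-speed API; all names here are hypothetical.
import torch

# Hypothetical 16-level codebook on [-1, 1]; real FP4/NF4 use non-uniform levels.
CODEBOOK = torch.linspace(-1.0, 1.0, 16)
BLOCK = 64  # per-block scaling group size (assumed for illustration)

def quantize_4bit(w: torch.Tensor):
    """Quantize a 1-D weight tensor to 4-bit codes plus per-block scales."""
    w = w.reshape(-1, BLOCK)
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-12)  # absmax per block
    normed = w / scale                                           # now in [-1, 1]
    # Nearest codebook entry for every element -> 4-bit index (stored as uint8).
    idx = (normed.unsqueeze(-1) - CODEBOOK).abs().argmin(dim=-1)
    return idx.to(torch.uint8), scale

def dequantize_4bit(idx: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Reconstruct approximate weights from 4-bit codes and per-block scales."""
    return (CODEBOOK[idx.long()] * scale).reshape(-1)

w = torch.randn(4096)
codes, scales = quantize_4bit(w)
w_hat = dequantize_4bit(codes, scales)
print("mean abs reconstruction error:", (w - w_hat).abs().mean().item())
```

Per-block absmax scaling limits how far a single outlier can stretch the quantization range, which is the usual motivation for group-wise low-bit weight schemes.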