LairChen/bigdl-llm-tutorial
Accelerate LLM with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using bigdl-llm
Jupyter Notebook · Apache-2.0