FMInference/FlexLLMGen

Please do not abandon this project!


Earlier this year I was impressed with FlexGen's offloading performance, and I wonder how it would compare with what llama.cpp currently provides for Llama and Llama-2 models in a CPU offloading scenario.

Any chance Llama support could be added to FlexGen @Ying1123 @keroro824?
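For anyone who wants to try that comparison in the meantime, here is a minimal timing sketch. The FlexGen invocation follows the OPT example from the project README (Llama is not supported yet, which is exactly what this issue asks about), and the llama.cpp binary name, model path, prompt, and flags are assumptions that depend on your local build and quantized model file.

```python
import subprocess
import time

# Rough wall-clock comparison of two CPU-offloading / CPU-only runs.
# Both command lines are illustrative: the FlexGen one mirrors the OPT
# example from the FlexGen README (Llama is not supported at the time of
# writing), and the llama.cpp one assumes a locally built `main` binary
# plus a GGUF model path on disk.
COMMANDS = {
    "flexgen-opt-weights-on-cpu": [
        "python", "-m", "flexgen.flex_opt",
        "--model", "facebook/opt-6.7b",
        # --percent: roughly weights GPU/CPU, KV cache GPU/CPU,
        # activations GPU/CPU (percentages); here weights live on the CPU.
        "--percent", "0", "100", "100", "0", "100", "0",
    ],
    "llamacpp-cpu-only": [
        "./main",
        "-m", "models/llama-2-7b.Q4_K_M.gguf",  # example model path (assumption)
        "-n", "128",                             # number of tokens to generate
        "-ngl", "0",                             # 0 GPU layers -> pure CPU run
        "-p", "Hello, my name is",
    ],
}


def time_command(name: str, cmd: list[str]) -> None:
    """Run one generation command and report its wall-clock time."""
    start = time.perf_counter()
    result = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.perf_counter() - start
    status = "ok" if result.returncode == 0 else f"exit {result.returncode}"
    print(f"{name}: {elapsed:.1f}s ({status})")


if __name__ == "__main__":
    for name, cmd in COMMANDS.items():
        time_command(name, cmd)
```

Note that this times the whole process, so model load time is included; for a fairer tokens-per-second comparison you would parse each tool's own throughput output instead.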

We are pushing a refactoring of the current implementation to support most HF models. We will release it soon under a fork of this repo and will keep you informed.

That's exciting news @BinhangYuan! I look forward to testing the new release and incorporating it into my text-generation-webui project. Cheers :)

Is there any news about this fork?