dvmazur/mixtral-offloading

Can this be used for Jamba inference?

freQuensy23-coder opened this issue · 1 comment

Can I use this solution for inference of https://huggingface.co/ai21labs/Jamba-v0.1/discussions with offloading of the Mamba/MoE layers?

Jamba is a SOTA open-source long-context model, and supporting it would be very useful for this library.

Hey, @freQuensy23-coder! The code in this repo is quite Transformer-MoE specific. I'm not too familiar with Mamba-like architectures, but AFAIK @lavawolfiee has plans to adapt Jamba to work with our offloading strategy.
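
In the meantime, a rough baseline that does not use this repo's expert-offloading code is Accelerate's generic layer-level offloading via `device_map="auto"` in plain `transformers`. The sketch below is only an assumption about how one might run Jamba that way (the model ID comes from the link above; the `max_memory` figures are placeholders to adjust for your hardware). It offloads whole decoder layers rather than individual experts, so it won't get the per-expert caching speedups this repo implements for Mixtral:

```python
# Hedged sketch, NOT this repo's offloading strategy: run Jamba with
# Accelerate's generic GPU/CPU layer offloading. Assumes a recent
# `transformers` release with native Jamba support and `accelerate` installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/Jamba-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",                        # split layers across GPU and CPU
    max_memory={0: "16GiB", "cpu": "96GiB"},  # placeholder limits, tune to your setup
)

# Inputs go to the GPU that holds the embedding layer.
inputs = tokenizer("Mamba is a state-space model that", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This works, but expect it to be much slower than a Jamba-aware strategy that keeps the attention/Mamba blocks resident and only swaps MoE experts on demand, which is what adapting this repo would aim for.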