bigscience-workshop/petals

text to video generation models ?

scenaristeur opened this issue · 2 comments

I've seen that you don't want to host StableDiffusion models at #519

We are currently developing a chat game based on chat LLMs, https://scenaristeur.github.io/numerai/, which uses "The Horde" for now for chat/text generation and image generation, but we could potentially be interested in video generation. Do you think something like https://huggingface.co/damo-vilab/text-to-video-ms-1.7b could be hosted?

Hi @scenaristeur,

How much GPU memory does this model take in total? At first sight, it seems that this model requires < 8-10 GB and fits many consumer GPUs, so there's not much sense in using Petals for it. I mean that you can technically do this, but AI Horde may be a better fit (optionally, you can use the horde for different model components separately).
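For a rough sense of why the model fits on consumer GPUs, a back-of-the-envelope sketch (the 1.7B parameter count comes from the model name; fp16 storage is an assumption, and actual inference adds activations and auxiliary components on top of the weights):

```python
# Rough estimate of weight memory for text-to-video-ms-1.7b.
# Assumes fp16 storage (2 bytes per parameter); real inference also
# needs memory for activations, the text encoder, and the VAE,
# which is why the total lands in the < 8-10 GB range.
PARAMS = 1.7e9              # 1.7B parameters (from the model name)
BYTES_PER_PARAM_FP16 = 2    # fp16 = 2 bytes per parameter

weights_gib = PARAMS * BYTES_PER_PARAM_FP16 / 2**30
print(f"~{weights_gib:.1f} GiB for weights alone")  # ~3.2 GiB
```

Weights alone are only a few GiB, so even with inference overhead the model stays within a single consumer GPU, unlike the multi-tens-of-GiB models Petals is designed to shard.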

In fact, I don't have a GPU myself, and my app is a webapp that runs in the browser.
My goal is to connect this webapp to decentralized LLMs and to image or video generation, from the browser or from a mobile app.
The point is to access decentralized models from a CPU-only machine, a browser, or a mobile app.