Picsart-AI-Research/Text2Video-Zero

Using multiple GPUs?

nadermx opened this issue · 3 comments

Incredible project. I was working on something similar for video editing and had a local version working, but you released first. Anyway, is it possible to use more than one GPU to speed up the processing? E.g., in your example,

import torch
from model import Model

model = Model(device = "cuda", dtype = torch.float16)

prompt = "A horse galloping on a street"
params = {"t0": 44, "t1": 47, "motion_field_strength_x": 12, "motion_field_strength_y": 12, "video_length": 8}

out_path, fps = f"./text2video_{prompt.replace(' ','_')}.mp4", 4
model.process_text2video(prompt, fps = fps, path = out_path, **params)
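Multi-GPU is not a feature of the repo, but since each prompt is processed independently, throughput can be increased by running one `Model` instance per GPU in separate processes. The sketch below assumes only the `Model` API shown above; `assign_devices`, `_worker`, and `run_on_multiple_gpus` are hypothetical helpers, not part of Text2Video-Zero.

```python
import multiprocessing as mp

def assign_devices(prompts, devices):
    """Round-robin each prompt onto one of the available CUDA devices."""
    return [(prompt, devices[i % len(devices)])
            for i, prompt in enumerate(prompts)]

def _worker(prompt, device):
    # Imports from the Text2Video-Zero repo, done inside the worker so that
    # each process builds its own Model on its own GPU.
    import torch
    from model import Model

    model = Model(device=device, dtype=torch.float16)
    params = {"t0": 44, "t1": 47, "motion_field_strength_x": 12,
              "motion_field_strength_y": 12, "video_length": 8}
    out_path = f"./text2video_{prompt.replace(' ', '_')}.mp4"
    model.process_text2video(prompt, fps=4, path=out_path, **params)

def run_on_multiple_gpus(prompts, devices):
    """Launch one process per prompt, spread over the given devices."""
    procs = [mp.Process(target=_worker, args=job)
             for job in assign_devices(prompts, devices)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Usage would look like `run_on_multiple_gpus(["A horse galloping on a street", "A panda surfing"], ["cuda:0", "cuda:1"])`. Note this parallelizes across prompts; it does not split a single video's generation over two GPUs.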

Possibly offer device = ["cuda:0", "cuda:1"]?

And if this isn't trivial, could you point me to where I should look to try implementing it myself? I'd hope to submit a pull request with it.

Hi,
I needed multiple GPUs to generate video at higher resolution. I tried device_map="sequential", a built-in feature of diffusers for sharding a model across multiple GPUs.
With this I was able to place the model on multiple GPUs, but I get an error while loading:

.cache/huggingface/hub/models--dreamlike-art--dreamlike-photoreal-2.0/snapshots/d9e27ac81cfa72def39d74ca673219c349f0a0d5/vae/diffusion_pytorch_model.safetensors

The error is:

UnpicklingError: invalid load key, '\xdc'.

Any idea how to load and run the model successfully on multiple GPUs?

Note: I looked online and people suggested the weight file may be corrupt. I re-downloaded it, but got the same error.

Has anyone found any solution?

Hi, multi-GPU is currently not supported. One way to speed up inference is to integrate xFormers into the CrossFrameAttnProcessor:

class CrossFrameAttnProcessor:
    ...