MCG-NKU/E2FGVI

Resolution or video length leads to CUDA out-of-memory error

davidzang0930 opened this issue · 8 comments

I use the e2fgvi_hq model.

  1. If the input resolution is raised from 432×240 to 648×360 or higher, I get a CUDA out-of-memory error.
  2. With the resolution fixed at 432×240, increasing the input video length to 10 seconds or more also causes a CUDA out-of-memory error.

Is there really no way to process a higher resolution or a longer video, or am I using it wrong?

Bro. The model is hungry for your GPU.

@davidzang0930 did you find an answer for this?


Guys, what are you trying to do? The model is expensive to run.

I was wondering if there were any settings to adjust that would use less memory at the cost of inference time.

I found this repo:

https://github.com/Teravus/Chunk_E2FGVI

which allows for more frames at a time, so I am using that one for now.
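For reference, the core idea behind chunked inference is to split a long video into overlapping windows so that only one window's frames occupy GPU memory at a time. A minimal sketch of the windowing logic (the chunk and overlap sizes here are illustrative, not the values Chunk_E2FGVI actually uses):

```python
def chunk_ranges(num_frames, chunk_size=40, overlap=10):
    """Yield (start, end) index pairs covering all frames, with overlap
    between consecutive chunks so inpainted regions can blend at the seams."""
    if num_frames <= chunk_size:
        yield (0, num_frames)
        return
    start = 0
    while start < num_frames:
        end = min(start + chunk_size, num_frames)
        yield (start, end)
        if end == num_frames:
            break
        start = end - overlap  # step back so chunks share `overlap` frames
```

Each (start, end) window is then loaded, inpainted, and written out before the next one is moved to the GPU, which caps peak memory at one chunk regardless of total video length.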

@antithing I used the .half() function so it can fit more frames, and I move selected_imgs and selected_masks to the GPU only after selecting them. This way it can process longer videos.

@davidzang0930 thanks! Are you able to share your code changes here?

@antithing
I'm still adjusting it to see if it can be optimized further, but the changes above already reduce the memory demand. You can modify test.py.
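A hypothetical sketch of the two edits described above, assuming a PyTorch inference loop like the one in test.py (`model`, `selected_imgs`, and `selected_masks` are stand-ins for the script's actual variables, not its real signatures):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

def run_window(model, frames, masks):
    # 1) Keep the full video on the CPU; move only the selected window
    #    to the GPU after selection, not the whole clip up front.
    selected_imgs = frames.to(device)
    selected_masks = masks.to(device)
    # 2) Half precision roughly halves activation memory.
    selected_imgs = selected_imgs.half()
    selected_masks = selected_masks.half()
    with torch.no_grad():  # inference only: skip autograd buffers
        out = model(selected_imgs, selected_masks)
    # Bring the result back to CPU/float32 so GPU memory is freed
    # before the next window is processed.
    return out.float().cpu()
```

Note that the model's weights would also need converting with `model.half()`, and some ops may lose accuracy or be unsupported in fp16, so results should be spot-checked against full precision.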

Hi @davidzang0930 did you find any other ways to optimize it? If so can you please share?