styler00dollar/VSGAN-tensorrt-docker

[SUGGESTION] per-scene processing

MarcoRavich opened this issue · 2 comments

Hi there, this project is awesome, so thanks for your voluntary work!

Since GAN-based processing is a computationally heavy task, it could be very useful to split it into multiple "segments" to allow parallel/scalable/collaborative/resumable instances.

We suggest you check @master-of-zen's Av1an framework, which implements this.

Hope that inspires.

If you really want parallel inference, you can just run multiple vsgan docker instances. Usually VRAM will be the bottleneck, and in most cases you won't be able to run more than one. Resumable processing could be interesting, but it sounds a bit painful to implement: how to handle it with VapourSynth, how to split the video, and how to handle filenames, for example.
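
To illustrate the splitting part, one chunk per instance could look roughly like this in a VapourSynth script. This is only a sketch: the ffms2 source and the chunk bounds are placeholders, and real bounds would have to come from scene detection or fixed-size splitting.

import vapoursynth as vs

core = vs.core

# assumed source filter; any VapourSynth source plugin works here
clip = core.ffms2.Source("input.mkv")

# hypothetical chunk bounds; real ones would come from scene
# detection or fixed-size splitting
chunk_start, chunk_end = 0, 1000
clip = clip[chunk_start:chunk_end]

clip.set_output()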

I am quite time-constrained lately and have a lot on my todo list, like AI-based scene detection, fixes, and code improvements, so you won't really see me working on this any time soon. Not closing it since I am unsure yet.

I honestly most likely won't bother working on it. If you want parallel inference, just do something like:

import multiprocessing

if __name__ == "__main__":
    # process two files at a time, one worker process per file
    with multiprocessing.Pool(2) as pool:
        pool.map(process_file, files)
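
As a rough sketch of what a process_file could do, it might shell out to the usual vspipe-into-ffmpeg pipeline per file; the --arg passing and the output naming here are just placeholders, not something this repo prescribes.

import subprocess

def process_file(path):
    # hypothetical helper: pipe one file through the VapourSynth
    # script and encode the result with ffmpeg
    subprocess.run(
        f'vspipe -c y4m --arg "source={path}" inference.py - '
        f'| ffmpeg -i - "{path}.upscaled.mkv"',
        shell=True, check=True)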

Resumable does sound interesting, but speeds have gotten very fast with stuff like cugan and rife trt. They reach double or even triple digit fps, so I do not see the point of focusing on that, since I have more important tasks which I want to do.
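
If someone really wants resumable behavior, a minimal per-chunk approach would be to skip chunks whose output already exists; process_chunk and the vspipe/ffmpeg command below are only illustrations under that assumption, not something that exists in this repo.

import subprocess
from pathlib import Path

def process_chunk(idx, start, end):
    # skip chunks that already finished in a previous run
    out = Path(f"chunk_{idx:05d}.mkv")
    if out.exists():
        return
    tmp = out.with_suffix(".part")
    # hypothetical render command for frames start..end (inclusive)
    subprocess.run(
        f'vspipe -c y4m -s {start} -e {end} inference.py - '
        f'| ffmpeg -y -i - "{tmp}"',
        shell=True, check=True)
    tmp.rename(out)  # only completed chunks keep the final name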

I did add compiled av1an to my docker, which may be of some use for this, since you mentioned that av1an implements it. I added a custom version which takes two inputs to avoid processing the video twice.

av1an -i inference.py -I inference_orig.py -o test.mkv