lagmoellertim/unsilence

Hardware Acceleration

oschwartz10612 opened this issue · 10 comments

Describe the solution you'd like
It would be great if hardware acceleration could be added to the ffmpeg usage. I assume it would really speed up the video manipulation aspect.

Additional context
This seems to be a good starting point: https://trac.ffmpeg.org/wiki/HWAccelIntro
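For illustration, a hardware-accelerated invocation on an Nvidia GPU might look roughly like this (just a sketch; it assumes an ffmpeg build compiled with nvenc/cuda support, and the file names are placeholders):

import subprocess

# Sketch: decode on the GPU and encode with nvenc instead of the libx264 software encoder.
command = [
    "ffmpeg",
    "-hwaccel", "cuda",    # use the GPU for decoding
    "-i", "input.mp4",
    "-c:v", "h264_nvenc",  # Nvidia hardware H.264 encoder
    "-c:a", "copy",
    "output.mp4",
]
subprocess.run(command, check=True)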

If you point me in the right direction as to how you would like this done I can get working on a PR.

Love this project BTW. Been using it for school all year and it really is a time saver!

So I started playing around with this, and I appear to have successfully enabled hardware acceleration in the "Rendering Intervals" part. But after it starts combining, I get the following error:

OSError: [Errno 18] Invalid cross-device link: '/content/.tmp/d837f272-3fbf-47a4-8806-6265e3d69d65/out_final.mp4' -> '/content/drive/MyDrive/unsilence/unsilence_File_2.mp4'

During handling of the above exception, another exception occurred:

FileNotFoundError: [Errno 2] No such file or directory: '/content/.tmp/d837f272-3fbf-47a4-8806-6265e3d69d65/out_final.mp4'

Here is what I have done so far (not much): master...oschwartz10612:master

Any idea what I should do?

Hey, thanks for working on it!

This error
OSError: [Errno 18] Invalid cross-device link: '/content/.tmp/d837f272-3fbf-47a4-8806-6265e3d69d65/out_final.mp4' -> '/content/drive/MyDrive/unsilence/unsilence_File_2.mp4'
is related to your changes.

This exception
FileNotFoundError: [Errno 2] No such file or directory: '/content/.tmp/d837f272-3fbf-47a4-8806-6265e3d69d65/out_final.mp4' happens when unsilence tries to copy the final video from the temp folder to the real final output location. Since this error occurred, the generation of out_final.mp4 must have failed (which seems to be the case, because you got the first error).

Also, it would be great if you could look into hardware acceleration that adapts to the available hardware (AMD, Nvidia, Intel GPU) and operating system (Windows, Linux, macOS), because the current h264_nvenc only works on Nvidia.
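As a rough sketch of what I mean (the encoder names are standard ffmpeg ones, but this selection logic is only an illustration, not something in unsilence today):

import platform

def pick_video_encoder(gpu_vendor: str) -> str:
    """Hypothetical helper: choose an ffmpeg H.264 encoder based on OS and GPU vendor."""
    system = platform.system()
    if system == "Darwin":
        return "h264_videotoolbox"  # VideoToolbox, works with Apple/AMD GPUs on macOS
    if gpu_vendor == "nvidia":
        return "h264_nvenc"         # needs an ffmpeg build with nvenc support
    if gpu_vendor == "amd" and system == "Windows":
        return "h264_amf"           # AMD AMF encoder on Windows
    if gpu_vendor == "intel":
        return "h264_qsv"           # Intel Quick Sync
    return "libx264"                # software fallback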

I also came to the same conclusion after more messing around. I can't quite figure out why ffmpeg is not writing the output file.

I removed the acceleration from MediaRenderer.py and still get the error so I am of the opinion that there is some issue in RenderIntervalThread.py. I apologize if I speak ignorantly about the source; I am still exploring.

I would be happy to look into the other platforms once I get it working on nvidia. It seemed that was a good jumping off point.

Is there a way to retrieve the ffmpeg stdout so I could debug it?

BTW I have been testing in Colab because I don't have an Nvidia GPU to test on. Here is the link if you are interested: https://colab.research.google.com/drive/1q2wPHRYKfqYPu9WynADoTjedg7ab4HAi?usp=sharing

I think the cause of the error is that os.rename can only rename files on the same drive. Since you use Colab, the tmp folder and the Google Drive folder are seen as different drives, so os.rename fails. We could use shutil.move (https://stackoverflow.com/a/43967659/10666894) instead, which should fix the problem.
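Roughly, the change would look something like this (just a sketch; the paths below are only placeholders taken from your traceback):

import shutil

# Placeholder paths for illustration; in unsilence these come from the temp folder
# and the output location requested by the user.
temp_output_path = "/content/.tmp/d837f272-3fbf-47a4-8806-6265e3d69d65/out_final.mp4"
final_output_path = "/content/drive/MyDrive/unsilence/unsilence_File_2.mp4"

# os.rename() fails with "[Errno 18] Invalid cross-device link" when source and target
# live on different filesystems; shutil.move() falls back to copy + delete in that case.
shutil.move(temp_output_path, final_output_path)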

Currently, there is no simple way to output the ffmpeg output, so you would need to add your own print statements at the appropriate points.
For most console outputs, I have a loop (for line in console_output:) where you can output each line. Otherwise, you could look up the subprocess.run method and see how to get stdout from it.
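For quick debugging, something along these lines works with subprocess.run (a minimal sketch, not how unsilence is structured internally; the command is just an example):

import subprocess

# Capture ffmpeg's output so it can be printed for debugging.
# Note: ffmpeg writes its progress and error messages to stderr, not stdout.
result = subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-c:v", "libx264", "output.mp4"],
    capture_output=True,
    text=True,
)
print(result.returncode)
print(result.stderr)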

That was a good thought that I would never have arrived at, but it seems to not be the issue. I copied my test video file to the local runtime and tested again with the same error.

I will patch in a print function and see what I can come up with tomorrow. It might have to do with the frame buffer being stored on the GPU, but it should still write to the file...

Turns out the problem was that ffmpeg was not built with nvenc or cuda support, so the command was not working. It also turns out that building ffmpeg with nvenc and cuda support is quite a hassle, and I gave up after one or two tries. Snap also does not work on Colab, so I gave up on getting it to work there.

I am going to try using AMD hwaccel on my local machine and I will let you know how it goes.

After going down this rabbit hole, it seems that hardware acceleration has its limitations.

On macOS you can use videotoolbox with AMD cards using a stock ffmpeg brew install. I tried this, and it worked, but it is not significantly faster than multithreaded software encoding.
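For reference, the kind of command I tested looked roughly like this (a sketch; the file names are placeholders):

import subprocess

# Hardware H.264 encode via VideoToolbox; works with the stock brew ffmpeg on macOS.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-c:v", "h264_videotoolbox", "-c:a", "copy", "output.mp4"],
    check=True,
)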

On Windows you can use CUDA, but it requires that ffmpeg be compiled with support, which is typically done by cross-compiling, and that is a whole can of worms in itself.

On Linux the ffmpeg snap package has CUDA support built in, but in trying to use it I ran into a myriad of permission issues. Without the snap package, ffmpeg must be compiled with CUDA support.

Therefore, given the effort it takes to get ffmpeg to even work nicely with the hardware, I think this is not realistic to pursue.

Thank you for your work on that topic. If you are looking for alternative ways to help speed things up with unsilence, there is another user over at #55 working on improvements using other, more efficient ffmpeg commands.

Hello,

I'm using an M1 Max MacBook and I'd like to know if there's a way to use videotoolbox. I've tried to re-encode videos with HandBrake using videotoolbox in the past, and it's twice as fast and consumes half the power.

Is there a simple way to include this as an option and pass it directly to ffmpeg as done here?

Thank you :)