AugmendTech/CrabGrab

Low framerate when capturing video

Closed this issue · 3 comments

Hello 👋,

Thank you for this library!

I'm trying to save a video file of the screen capture (currently testing on macOS). I'm piping each frame to an ffmpeg process: https://github.com/samrat/crabgrab_video/blob/main/src/main.rs

However, frames seem to be captured at only ~1 frame per second (ffmpeg logs the fps it is processing at, but I've also added some println!s to check). As a result, the output video is jumpy; it looks like frames simply aren't being captured, rather than the processing being slow.

ScreenCaptureKit seems to have a setting for the frame rate (https://developer.apple.com/documentation/screencapturekit/scstreamconfiguration/3928174-minimumframeinterval), but I wasn't able to find a corresponding way to configure this in CrabGrab. Besides, the default behaviour seems to be to capture at the highest possible framerate, so maybe that isn't the issue.

I'm fairly new to Rust (and especially async Rust/Tokio), so it's possible I'm doing something wrong :)

Hey there!

I took a look at your code. Essentially, you're being limited by the number of buffers you have in flight at a time and by the speed at which ffmpeg can encode video with libx264.

If you watch the program output when it starts up, it will initially send out three frames very quickly, then wait a moment (roughly one second, corresponding to the startup delay you have) before continuing on. After that, frames are received and processed only as fast as ffmpeg consumes them.

The reason CrabGrab stops giving you frames at full speed is that you've already got as many frames in flight (i.e., sitting in your mpsc channel) as the underlying capture library (in this case, ScreenCaptureKit) is configured to allow.

There are a few relevant config options here, such as the number of capture buffers the stream is allowed to keep in flight. None of them will totally solve the problem you're facing, though: once you've used up all the available buffers in the stream, you need to drop() one before a new frame can be delivered.
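For reference, bumping the buffer count looks roughly like this; fair warning that I'm writing the builder method name from memory, so treat it as approximate and check CaptureConfig's docs:

```rust
// Sketch from memory: `with_buffer_count` may not be the exact builder name;
// see the CaptureConfig docs. More buffers means more frames allowed in
// flight before the stream stalls waiting for one to be dropped.
let config = CaptureConfig::with_window(window, pixel_format)
    .expect("couldn't build capture config")
    .with_buffer_count(8);
```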

In your application, the easiest way to release buffers promptly is probably to convert frames to bitmaps before putting them in your mpsc channel. With a CPU encoder in ffmpeg, though, this will consume a lot of memory pretty quickly: bitmaps will queue up faster than they're encoded, so you'd need to encode frames in real time to keep it from eating all your RAM within a few minutes. An alternative would be to record the raw bitmap frames to a file and then process them separately, but that's obviously quite a bit more complicated.
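Roughly, and eliding the token/config setup and ffmpeg plumbing you already have in your main.rs, that looks something like this (untested sketch; the channel name `tx` and the consumer loop are mine, not CrabGrab's):

```rust
use crabgrab::feature::bitmap::VideoFrameBitmap; // provides frame.get_bitmap()
use crabgrab::prelude::*;

// ...request access, pick a window, and build `token`/`config` as in your main.rs...

// Unbounded so the (non-async) capture callback never blocks on send; the
// flip side is the RAM growth described above if ffmpeg can't keep up.
let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel();

let stream = CaptureStream::new(token, config, move |stream_event| {
    if let Ok(StreamEvent::Video(frame)) = stream_event {
        // Copy to a CPU-side bitmap immediately and send only the bitmap.
        if let Ok(bitmap) = frame.get_bitmap() {
            let _ = tx.send(bitmap);
        }
        // `frame` drops at the end of this block, handing its buffer back
        // to the capture stream so the next frame can be delivered.
    }
}).expect("failed to start capture stream");

// Consumer: same as before, but it now receives plain bitmaps.
while let Some(bitmap) = rx.recv().await {
    // ...write the bitmap's pixel data to ffmpeg's stdin...
}
```

The unbounded channel is exactly the trade described above: it swaps buffer starvation for RAM growth, so the consumer has to keep up.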

I'm going to close this issue, as the behavior you're seeing is by design and expected, but I'll open an issue to better document it. I'll also leave this thread open for comments if you have any questions.

Thanks for the explanation @OutOfTheVoid

I tried making both changes, but unfortunately I'm still getting the same results in terms of speed: samrat/crabgrab_video@f734167 (I haven't yet looked into how the changes affect memory usage)

It's probably due to the latency of the VRAM->RAM copy in get_bitmap(). It's not a fast operation, but each frame can be handled independently, so you could copy each frame on a separate thread. Since you're using tokio, you could use tokio::task::spawn_blocking(..) to run the get_bitmap(..) call on the blocking thread pool, and send the resulting future through the mpsc queue.
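Sketched out (untested, and assuming the same token/config setup and ffmpeg plumbing from your repo), that looks something like this; note I go through a runtime Handle because the capture callback doesn't run on a tokio thread:

```rust
use crabgrab::feature::bitmap::VideoFrameBitmap; // provides frame.get_bitmap()
use crabgrab::prelude::*;

// The capture callback runs on CrabGrab's own thread, outside any tokio
// runtime context, so grab a handle up front and spawn tasks through it.
let runtime = tokio::runtime::Handle::current();
let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel();

let stream = CaptureStream::new(token, config, move |stream_event| {
    if let Ok(StreamEvent::Video(frame)) = stream_event {
        // Start the VRAM->RAM copy on a blocking thread and queue the task
        // handle. `frame` moves into the task and drops when the copy is
        // done, releasing its capture buffer without waiting on the consumer.
        let copy_task = runtime.spawn_blocking(move || frame.get_bitmap());
        let _ = tx.send(copy_task);
    }
}).expect("failed to start capture stream");

// Awaiting the handles in channel order keeps frames in capture order while
// the copies themselves overlap across blocking threads.
while let Some(copy_task) = rx.recv().await {
    match copy_task.await.expect("bitmap copy task panicked") {
        Ok(bitmap) => { /* ...write bitmap's pixel data to ffmpeg's stdin... */ }
        Err(e) => eprintln!("get_bitmap() failed: {e:?}"),
    }
}
```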

You could also limit the capture resolution, which will speed everything up.
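Something like this, with the usual caveat that the method and Size type here are my guess at the API rather than a checked example:

```rust
// Hypothetical: scale the capture down so every copy and encode touches
// fewer pixels. Check CaptureConfig's docs for the real method name.
let config = config.with_output_size(Size { width: 1280.0, height: 720.0 });
```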

It may be possible to speed up get_bitmap() using memory pooling, but that's something I'd need to plan out and implement carefully in a future release of CrabGrab, and it would likely require a breaking change to the bitmap API. It might also make sense to expose this in a more async-friendly way.