alskipp/ASScreenRecorder

Possible to include video from AVPlayer?


I've been trying to solve the video screen capture for a while, and this is by far the best implementation I've seen. It's really great!

Unfortunately, it suffers from the same issue as every renderInContext/drawViewHierarchy implementation I've seen: it won't capture video.

I know it's possible to get frames from AVAssetImageGenerator, so I was wondering if there's some way to feed those into the blank views.

It's potentially possible, but complicated! There's a delegate method you can implement, but then you're entirely responsible for writing the video / live stream / openGL content yourself.
However, I'm not sure AVPlayer will give you access to the video data, so it might not be possible in your current use case?

// If your view contains an AVCaptureVideoPreviewLayer or an openGL view
// you'll need to write that data into the CGContextRef yourself.
// In the view controller responsible for the AVCaptureVideoPreviewLayer / openGL view
// set yourself as the delegate for ASScreenRecorder:
// [ASScreenRecorder sharedInstance].delegate = self;
// Then implement 'writeBackgroundFrameInContext:(CGContextRef*)contextRef' and
// use 'CGContextDrawImage' to draw your view into the provided CGContextRef
@protocol ASScreenRecorderDelegate <NSObject>
- (void)writeBackgroundFrameInContext:(CGContextRef*)contextRef;
@end

I can use an AVAssetReader, which should at least give me the bitmap data from the same video the AVPlayer is displaying. Time syncing might be a little difficult. Will the recorder use this method for an AVPlayerLayer automatically? How does it know to fall back to this delegate method?
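A minimal sketch of that AVAssetReader setup, assuming access to the AVAsset the player is using (_assetReader and _trackOutput are placeholder ivars; syncing to the player's clock isn't handled here):

#import <AVFoundation/AVFoundation.h>

- (BOOL)startReadingAsset:(AVAsset *)asset
{
    NSError *error = nil;
    _assetReader = [AVAssetReader assetReaderWithAsset:asset error:&error];
    if (!_assetReader) return NO;

    // decode to BGRA so the frames can later be drawn into a CGContextRef
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    NSDictionary *settings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
    _trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                                              outputSettings:settings];
    [_assetReader addOutput:_trackOutput];
    return [_assetReader startReading];
}

// pull the next decoded frame; the caller is responsible for releasing it
- (CMSampleBufferRef)copyNextVideoSampleBuffer
{
    return [_trackOutput copyNextSampleBuffer];
}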

The recorder won't detect an AVPlayerLayer by itself; in your view controller you need to set yourself as the delegate: [ASScreenRecorder sharedInstance].delegate = self;. Once set, the delegate method will be called every time a new frame is required for the video. You only need to write the video data (your UI will be rendered on top automatically).
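Concretely, that looks something like this (MyViewController is a stand-in name):

// declare conformance to the delegate protocol, e.g. in a class extension
@interface MyViewController () <ASScreenRecorderDelegate>
@end

// then, before starting the recorder (e.g. in viewDidLoad):
[ASScreenRecorder sharedInstance].delegate = self;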

The extra complication is that you need to be ready with the data, otherwise the frame rate will plummet.

Here is the implementation I use to record an AVCaptureVideoPreviewLayer in my app.

- (void)writeBackgroundFrameInContext:(CGContextRef*)contextRef
{
    dispatch_sync(_imageQueue, ^{
        if (_capturedImage) { // this is a CGImageRef
            CGContextSaveGState(*contextRef);
            // swap the x/y axes to compensate for the camera buffer's orientation
            CGAffineTransform flipRotate = CGAffineTransformMake(0.0, 1.0, 1.0, 0.0, 0.0, 0.0);
            CGContextConcatCTM(*contextRef, flipRotate);

            // width/height are swapped because the transform above exchanges the axes
            CGContextDrawImage(*contextRef, CGRectMake(0, 0, CGRectGetHeight(_cameraView.bounds), CGRectGetWidth(_cameraView.bounds)), _capturedImage);

            CGContextRestoreGState(*contextRef);

            _needsNewImage = YES;
        }
    });
}

I create the CGImageRef when my AVCaptureVideoDataOutputSampleBufferDelegate method gets called:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    dispatch_sync(_imageQueue, ^{
        if (_needsNewImage) { // only create an image when the recorder has consumed the last one
            CGImageRelease(_capturedImage);
            _capturedImage = [self createCGImageFromSampleBuffer:sampleBuffer];
            _needsNewImage = NO;
        }
    });
}
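createCGImageFromSampleBuffer: isn't shown above. One common way to implement it, assuming the AVCaptureVideoDataOutput is configured for kCVPixelFormatType_32BGRA (a sketch, not necessarily the author's exact code):

- (CGImageRef)createCGImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    // wrap the BGRA pixel data in a bitmap context and snapshot it
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                 CVPixelBufferGetWidth(pixelBuffer),
                                                 CVPixelBufferGetHeight(pixelBuffer),
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef image = CGBitmapContextCreateImage(context);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    return image; // owned by the caller; released via CGImageRelease in the delegate above
}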

It's a bit of a palaver as you need to keep track of additional state to get everything to work. Here are the extra properties/ivars I'm using to tie everything together.

@property (strong, nonatomic) dispatch_queue_t imageQueue; // serial queue
// the ivars below must only be mutated on the serial dispatch queue above

CGImageRef _capturedImage;
BOOL _needsNewImage;
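The serial queue itself has to exist before any of the blocks above run; a minimal setup sketch (the queue label is arbitrary):

// e.g. in -viewDidLoad, before the recorder is started
_imageQueue = dispatch_queue_create("com.example.captured-image-queue", DISPATCH_QUEUE_SERIAL);
_needsNewImage = YES; // request the first frame from the capture callback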

With any luck, that'll give you some idea of what's involved. It's not pretty :(

@domhofmann Were you able to figure out how to record AVPlayer content?

endel commented

I've managed to record the preview layer by rendering each frame into a CALayer. endel/react-native-camera@e739b50
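The core of that approach (a sketch with hypothetical names; the linked commit has the real implementation) is assigning each captured frame to the layer's contents on the main thread:

// sketch: display a captured CGImageRef in a CALayer
dispatch_async(dispatch_get_main_queue(), ^{
    _frameLayer.contents = (__bridge id)cgImage; // _frameLayer, cgImage: hypothetical names
});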