tungs/timecut

Can't record a canvas that consists of objects that react to an audio file

frizurd opened this issue · 8 comments

Hey there,
I really love the plugin and thank you very much for sharing it with us 🙏

I'm trying to record a local webpage that consists of multiple HTML canvases and an HTML audio element. The canvases react and move based on the audio file; I'm hoping to record the movement and then merge the MP3 file with the video after it has been created.

In the preparePage function I trigger the page to play the audio element, which triggers the canvases to animate, and that all works fine. But the resulting video is not in real time or aligned with the audio file; it skips a lot of frames in between. It feels like it's only recording at 1 FPS.

Is there some way of making this awesome plugin work for my use case or am I misunderstanding something?

Thank you in advance.

tungs commented

Hi, thanks for filing this! When I wrote the video handling code, I wondered whether anyone would have this use case, so thanks for an actual real-world case!

Currently the audio element isn't supported (though theoretically, it should be pretty easy to support by editing media-time-handler.js in timesnap).

You may have some luck just changing the audio element to a video element-- I believe audio files can work as video files, and there is some support for video elements. In a future version, I'll try to add audio support.
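As a rough sketch of that workaround (the file path and container id here are assumptions, not from the thread):

// Hypothetical: load the MP3 through a <video> element instead of an
// <audio> element, so timesnap's existing video handling can drive it.
let media = document.createElement('video');
media.src = '/audio/track.mp3';
media.controls = true;
media.loop = true;
document.getElementById('player').appendChild(media);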

frizurd commented

> Currently the audio element isn't supported (though theoretically, it should be pretty easy to support by editing media-time-handler.js in timesnap).

Thanks a lot for your time!

I tried this, and it works. I adjusted the node-name checks to match 'audio' instead of 'video'.

I drew an MP3 player, and it gets played and rendered correctly. But for some reason the audio visualization (connected via an AudioContext/AnalyserNode) doesn't show up. I draw the animation on a canvas in a requestAnimationFrame callback, using data from the analyser. It works perfectly fine if I open the page in the browser. I'm trying to figure out what the cause can be -- do you have any suggestions?

Once again, thanks a lot for your time!

> You may have some luck just changing the audio element to a video element-- I believe audio files can work as video files, and there is some support for video elements.

I tried to do this first, but it gives me the same problem.

Before I edited the media-time-handler.js file, while I was still using the audio element, timecut recorded both the audio visualization and the MP3 player correctly, even though the result was sped up or stuck at 1 FPS.

tungs commented

timecut and its underlying library timesnap work by implementing a custom requestAnimationFrame function that can be called manually on demand, essentially creating a virtual timeline. In your case, I suspect the audio is playing in real time, while the function passed to requestAnimationFrame is either chunking or missing data from that real-time player.
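To illustrate the virtual-timeline idea, here is a minimal sketch (not timesnap's actual implementation):

// Replace requestAnimationFrame so queued callbacks run only when the
// capture loop explicitly advances the virtual clock.
let virtualTime = 0;
let pending = [];

window.requestAnimationFrame = function (callback) {
    pending.push(callback);
    return pending.length;
};

// Called once per captured frame, e.g. with msPerFrame = 1000 / fps.
function advanceFrame(msPerFrame) {
    virtualTime += msPerFrame;
    const callbacks = pending;
    pending = [];
    callbacks.forEach((cb) => cb(virtualTime));
}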

I'm not very familiar with how AnalyserNodes work, but I suspect it'll be tricky to incorporate real-time elements (from the AnalyserNode) with the virtual-time elements (from timecut/timesnap). It might be possible to move everything to virtual time via a custom modification of AudioContext and/or AnalyserNode, but that would require some effort to look into. Do you have a sample project you can post here?

tungs commented

I should also add that videos handled via timesnap aren't really "played"; rather, the video is paused and then seeked to the appropriate time for each frame. This approach won't work for audio elements that need to be playing for AnalyserNodes to be able to receive data. It might be possible to manually collect and send the data in virtual time, but even if that is possible, it would take a significant amount of effort to implement.
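For illustration, the pause-and-seek idea might look roughly like this (a sketch only; the real logic lives in timesnap's media-time-handler.js):

// Instead of letting the media play, jump it to each frame's virtual
// timestamp and wait for the 'seeked' event before capturing.
const media = document.querySelector('video');
media.pause();

function showFrame(virtualTimeMs) {
    return new Promise((resolve) => {
        media.addEventListener('seeked', resolve, { once: true });
        media.currentTime = virtualTimeMs / 1000;
    });
}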

frizurd commented

> Do you have a sample project you can post here?

Yes, this is a very simple example of what I'm trying to record.

let audio = new Audio();
audio.src = '/audio/track.mp3';
audio.controls = true;
audio.loop = true;
audio.autoplay = false;


// Establish all variables that your Analyser will use
let canvas, ctx, source, context, analyser, fbc_array, bars, bar_x, bar_width, bar_height;

function initMp3Player() {
    document.getElementById('audio').appendChild(audio);

    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    context = new AudioContext();

    analyser = context.createAnalyser(); 
    canvas = document.getElementById('visualizer');
    ctx = canvas.getContext('2d');
    source = context.createMediaElementSource(audio);
    source.connect(analyser);
    analyser.connect(context.destination);
    frameLooper();
}

function frameLooper() {
    window.requestAnimationFrame(frameLooper);
    // Read the current frequency spectrum from the analyser.
    fbc_array = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(fbc_array);
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = '#00CCFF';
    bars = 100;
    for (let i = 0; i < bars; i++) {
        bar_x = i * 3;
        bar_width = 2;
        // Negative height draws each bar upward from the bottom edge.
        bar_height = -(fbc_array[i] / 2);
        ctx.fillRect(bar_x, canvas.height, bar_width, bar_height);
    }
}

frizurd commented

It took me a minute, but I found a way to do it:

1. Preprocess the audio using an OfflineAudioContext.
2. Add an onseeking event listener to the audio element and draw on the canvas whenever it fires; how many times the canvas is drawn is controlled via the timecut fps option.
3. On every seek event, get the frequency data for the given time from the OfflineAudioContext.

A sketch of this approach is below.
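A minimal sketch of steps 1 and 3, assuming the track from the earlier example; precomputeFrequencyData and drawBars are hypothetical names, so treat this as an illustration of the technique rather than frizurd's actual code:

async function precomputeFrequencyData(url, fps) {
    // Step 1: decode the MP3 and render it through an OfflineAudioContext.
    const encoded = await (await fetch(url)).arrayBuffer();
    const decoded = await new AudioContext().decodeAudioData(encoded);
    const offline = new OfflineAudioContext(
        decoded.numberOfChannels, decoded.length, decoded.sampleRate);
    const source = offline.createBufferSource();
    source.buffer = decoded;
    const analyser = offline.createAnalyser();
    source.connect(analyser);
    analyser.connect(offline.destination);

    // Schedule a suspension at every frame boundary; while the offline
    // context is suspended, the analyser can be sampled at that offset.
    const frames = [];
    for (let t = 0; t < decoded.duration; t += 1 / fps) {
        offline.suspend(t).then(() => {
            const bins = new Uint8Array(analyser.frequencyBinCount);
            analyser.getByteFrequencyData(bins);
            frames.push(bins);
            offline.resume();
        });
    }
    source.start(0);
    await offline.startRendering();
    return frames; // one frequency snapshot per video frame
}

// Step 3: look up the precomputed snapshot on every seek (step 2).
// audio.onseeking = () =>
//     drawBars(frames[Math.floor(audio.currentTime * fps)]);

Here fps would match the value passed to timecut, so each seek lands on exactly one precomputed snapshot.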

tungs commented

Awesome! Glad to hear that you got it working. If you eventually want to share the end result and the code, I'm interested in seeing it.

> It took me a minute, but I found a way to do it.

I'd be interested to see, too, @frizurd!