A library that lets you add visual filters to your OpenTok Publisher.
- Chrome 51+
- Firefox 49+
These filters require the Canvas captureStream API which works in Chrome 51+ and Firefox 43+. Adding audio to the stream only started working in Firefox 49+.
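Because support hinges on the Canvas captureStream API, you may want to feature-detect it before enabling filters. A minimal sketch (`supportsCanvasCaptureStream` is a hypothetical helper name, not part of this library):

```javascript
// Returns true when the browser exposes canvas.captureStream(),
// which these filters rely on to produce a filtered video track.
function supportsCanvasCaptureStream() {
  return typeof HTMLCanvasElement !== 'undefined' &&
    typeof HTMLCanvasElement.prototype.captureStream === 'function';
}
```

In unsupported browsers you could fall back to publishing the unfiltered camera stream.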
See the demo's source code for an example of how to use this library.
Note: Make sure you include the opentok-camera-filters code before you include opentok.js.
Include the filters and then initialise with the filter you want to use.
const filters = require('opentok-camera-filters/src/filters.js');
const filter = require('opentok-camera-filters')(filters.none);
Then, once you have a Publisher, pass it to the filter, e.g.
const publisher = OT.initPublisher();
filter.setPublisher(publisher);
If you want to change the filter, use the change method, e.g.
filter.change(filters.red);
Many of the filters were adapted from tracking.js.
- Gives the video a red tint.
- Gives the video a green tint.
- Gives the video a blue tint.
- Converts each frame from RGB colour to a grayscale representation of its luminance.
- Applies a Gaussian blur (also known as Gaussian smoothing), blurring the image with a Gaussian function.
- Computes the vertical and horizontal gradients of the image and combines them to find edges.
- Inverts the colour of every pixel in the video.
- Detects faces using clmtrackr and draws an image on top of each face.
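To make the grayscale conversion concrete, the luminance of a pixel is typically a weighted sum of its red, green, and blue channels. This is a sketch of the idea using the common ITU-R BT.601 weights, not necessarily the exact coefficients the library's filter uses:

```javascript
// Weighted luminance of an RGB pixel (BT.601 weights sum to 1.0,
// so white maps to 255 and black to 0).
function luminance(r, g, b) {
  return Math.round(0.299 * r + 0.587 * g + 0.114 * b);
}
```

A grayscale filter then writes this value back into all three colour channels of each pixel.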
If you want to create your own custom filter, create a function that looks like one of the functions in the filters.js file. These functions accept a videoElement and a canvas parameter; they take the data out of the videoElement, which renders the unfiltered video from the camera, apply a filter, and draw the result onto the canvas. The function should return an object with a stop method which, when called, stops the filter from processing. For example, a simple filter which draws a new random colour every second would look something like:
const randomColour = () => {
  return Math.round(Math.random() * 255);
};

filter.change((videoElement, canvas) => {
  const interval = setInterval(() => {
    const ctx = canvas.getContext('2d');
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = `rgb(${randomColour()}, ${randomColour()}, ${randomColour()})`;
    ctx.fillRect(0, 0, canvas.width, canvas.height);
  }, 1000);
  return {
    stop: () => {
      clearInterval(interval);
    }
  };
});
You can also use filterTask, which handles pulling image data out of the videoElement for you; you just pass it a filter function that takes ImageData, transforms it, and returns new ImageData. The invert function is a good example of a simple filter that uses this.
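As a sketch of the kind of transform function you would hand to filterTask, here is a simple black-and-white threshold. It works on anything shaped like ImageData (an object with an RGBA data array); the `threshold` name and the exact way filterTask invokes it are assumptions, so check the library source for the precise signature:

```javascript
// Maps each pixel to pure black or white based on its average brightness,
// mutating the RGBA array in place and returning the same ImageData.
const threshold = (imageData) => {
  const data = imageData.data; // flat RGBA byte array
  for (let i = 0; i < data.length; i += 4) {
    const avg = (data[i] + data[i + 1] + data[i + 2]) / 3;
    const v = avg > 127 ? 255 : 0;
    data[i] = data[i + 1] = data[i + 2] = v; // alpha (i + 3) is untouched
  }
  return imageData;
};
```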
If you want access to the face tracking data from clmtrackr, you can use the face() filter and pass in your own renderer function like so:
filter.change((videoElement, canvas) => {
  return filters.face(videoElement, canvas, positions => {
    // Do something with the positions and draw something on the canvas
  });
});
The positions are the response from clmtrackr's getCurrentPosition() method. The glasses filter is an example of a face filter.
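As a sketch of what a custom renderer might do with those positions (assumed here to be an array of [x, y] pairs, as clmtrackr returns), the hypothetical helper below computes the bounding box of the tracked points, which you could use to size and place an overlay image before drawing it on the canvas:

```javascript
// Bounding box of an array of [x, y] points, e.g. clmtrackr face positions.
function boundingBox(positions) {
  const xs = positions.map(p => p[0]);
  const ys = positions.map(p => p[1]);
  const minX = Math.min(...xs);
  const minY = Math.min(...ys);
  return {
    x: minX,
    y: minY,
    width: Math.max(...xs) - minX,
    height: Math.max(...ys) - minY
  };
}
```

Inside your renderer you might then call something like `ctx.drawImage(overlay, box.x, box.y, box.width, box.height)`.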