This JavaScript/WebGL library detects whether the user is looking at the screen or not. It is very robust to all lighting conditions and lightweight (only 150KB gzipped for the main script and the neural network JSON model). It can be used, for example, to play a video only while the user is watching it.
You can test it with these demos (included in this repo):
- Simple integration demo: live demo, source code
- Youtube integration demo: live demo, source code
- Old and ugly integration demo: live demo, source code
- Camera auto exposure adjuster: live demo, source code
This is a video screenshot of the Youtube integration demo:
This repository is composed of the following paths:
- `/dist/`: main library script and neural network model,
- `/demos/`: integration demonstrations,
- `/libs/`: third party libraries.
In the HTML page, you first need to include the main script between the `<head>` and `</head>` tags:
```html
<script src="dist/jeelizGlanceTracker.js"></script>
```
Then you should include a `<canvas>` HTML element in the DOM, between the `<body>` and `</body>` tags:
```html
<canvas id='glanceTrackerCanvas'></canvas>
```
This canvas will be used by WebGL both for the computation and for displaying the camera video with the face detection frame. It can be hidden using CSS rules. As soon as the page is loaded, or whenever you want to enable the glance tracking feature, call this function:
```javascript
JEELIZGLANCETRACKER.init({
  // MANDATORY:
  // callback launched when:
  //   * the user is watching (isWatching === true),
  //   * or when they stop watching (isWatching === false).
  // It can be used to play/pause a video:
  callbackTrack: function(isWatching){
    if (isWatching){
      console.log('Hey, you are watching bro');
    } else {
      console.log('You are not watching anymore :(');
    }
  },

  // OPTIONAL (default: none):
  // callback launched when the Jeeliz Glance Tracker is ready,
  // or if an error happened.
  // spec is an object with these attributes:
  //   * <video> video: the video element,
  //   * <WebGLContext> GL: the WebGL context,
  //   * <WebGLTexture> videoTexture: WebGL texture storing the camera video,
  //   * <WebGLTexture> videoTextureCut: WebGL texture storing the cropped face.
  callbackReady: function(error, spec){
    if (error){
      console.log('An error happened:', error);
      return;
    }
    console.log('All is well :)');
  },

  // OPTIONAL (default: true):
  // true to display the user's video, with the face detection area,
  // on the <canvas> element:
  isDisplayVideo: true,

  // MANDATORY:
  // id of the <canvas> HTML element:
  canvasId: 'glanceTrackerCanvas',

  // OPTIONAL (default: internal):
  // sensitivity to the rotation of the head around its vertical axis,
  // a float between 0 and 1:
  //   * if 0, very sensitive: the user is considered as not watching
  //     if they slightly turn their head,
  //   * if 1, not very sensitive: the user has to turn their head a lot
  //     to lose the detection.
  sensibility: 0.5,

  // OPTIONAL (default: current directory):
  // path of the neural network model, given without the NNC.json
  // filename and ending with a / (for example '../../'):
  NNCPath: '/path/of/'
});
```
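For example, the `callbackTrack` handler can be wired to an HTML5 video so that playback follows the user's attention. A minimal sketch, assuming a hypothetical `<video id='myVideo'>` element in the page:
```javascript
// Sketch: play a video only while the user is watching the screen.
const video = document.getElementById('myVideo'); // hypothetical video element

JEELIZGLANCETRACKER.init({
  canvasId: 'glanceTrackerCanvas',
  callbackTrack: function(isWatching){
    if (isWatching){
      video.play();  // resume playback when the user looks back
    } else {
      video.pause(); // pause as soon as the user looks away
    }
  }
});
```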
After the initialization, these methods are available:
- `JEELIZGLANCETRACKER.set_sensibility(<float> sensibility)`: adjusts the sensitivity (between 0 and 1),
- `JEELIZGLANCETRACKER.toggle_pause(<boolean> isPause, <boolean> shutCamera)`: pauses/resumes the face tracking. If `shutCamera` is set to `true`, it will also turn off the camera light. It returns a promise (see the sketch below),
- `JEELIZGLANCETRACKER.toggle_display(<boolean> isDisplay)`: toggles the display of the video with the face detection area on the HTML `<canvas>` element. It is better to disable the display if the canvas element is hidden (using CSS for example), as it will save some GPU resources,
- `JEELIZGLANCETRACKER.destroy()`: cleans both the graphic memory and the JavaScript memory and deinitializes the library. After that you need to initialize the library again.
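For instance, a sketch combining `toggle_pause` and `toggle_display` with the Page Visibility API, so that the camera and GPU are released while the browser tab is hidden (to be run once the tracker is ready, see below):
```javascript
// Sketch: pause the tracking and turn off the camera while the tab is hidden.
document.addEventListener('visibilitychange', function(){
  const isHidden = document.hidden;
  JEELIZGLANCETRACKER.toggle_pause(isHidden, true).then(function(){
    // also skip rendering to the <canvas> while hidden, to save GPU resources:
    JEELIZGLANCETRACKER.toggle_display(!isHidden);
  });
});
```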
You should use these methods only after initialization, i.e.:
- either after the `callbackReady` function provided as an initialization argument has been called (better),
- or when the boolean property `JEELIZGLANCETRACKER.ready` switches to `true`.
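A sketch of the second approach, polling the `ready` flag before calling a method:
```javascript
// Sketch: wait for JEELIZGLANCETRACKER.ready before using the API.
const readyPoll = setInterval(function(){
  if (JEELIZGLANCETRACKER.ready){
    clearInterval(readyPoll);
    JEELIZGLANCETRACKER.set_sensibility(0.7); // now safe to call
  }
}, 100); // check every 100ms
```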
The tracker requires the user's camera video feed, acquired through the `MediaStream API`, so your application should be hosted by an HTTPS server (even with a self-signed certificate). It won't work at all over insecure HTTP, even locally with some web browsers.
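For local development, one option is a minimal Node.js HTTPS static server. A sketch, assuming a self-signed certificate has been generated as `key.pem` and `cert.pem` (placeholder filenames):
```javascript
// Sketch: serve the current directory over HTTPS for local testing.
// Generate a self-signed certificate first, for example with:
//   openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem
const https = require('https');
const fs = require('fs');
const path = require('path');

const MIME = {'.html': 'text/html', '.js': 'text/javascript', '.json': 'application/json'};

https.createServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
}, function(req, res){
  const filePath = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
  fs.readFile(filePath, function(err, data){
    if (err){ res.writeHead(404); res.end('Not found'); return; }
    const type = MIME[path.extname(filePath)] || 'application/octet-stream';
    res.writeHead(200, {'Content-Type': type});
    res.end(data);
  });
}).listen(8443, function(){ console.log('Serving on https://localhost:8443'); });
```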
You can use our hosted, up-to-date version of the library, available here:
https://appstatic.jeeliz.com/glanceTracker/jeelizGlanceTracker.js
It is hosted on a content delivery network (CDN) using gzip compression.
If you host the scripts yourself, be careful to enable gzip HTTP/HTTPS compression for .JSON and .JS files. Indeed, the neural network JSON file, `dist/NNC.json`, is quite heavy, but compresses very well with gzip. You can check the gzip compression of your server here.
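For example, with Node.js and Express, gzip compression can be enabled through the `compression` middleware (one possible setup among many):
```javascript
// Sketch: serve the library files with gzip compression enabled.
// npm install express compression
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression());             // gzip-compresses responses, including .js and .json files
app.use(express.static(__dirname)); // serve dist/, demos/, etc.
app.listen(8080, function(){
  console.log('Serving on http://localhost:8080 (add HTTPS for camera access, see above)');
});
```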
This API uses Jeeliz WebGL Deep Learning technology to detect and track the user's face using a neural network. All is done client-side.
- If `WebGL2` is available, it uses `WebGL2` and no specific extension is required,
- If `WebGL2` is not available but `WebGL1` is, we require either the `OES_TEXTURE_FLOAT` extension or the `OES_TEXTURE_HALF_FLOAT` extension,
- If `WebGL2` is not available, and if `WebGL1` is not available either, or if neither `OES_TEXTURE_FLOAT` nor `OES_TEXTURE_HALF_FLOAT` is implemented, the user's browser is not compatible.
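These requirements can be probed from JavaScript before initializing the library. A sketch that approximates the criteria above (not the library's internal test):
```javascript
// Sketch: approximate GPU compatibility check, following the rules above.
function isGPUCompatible(){
  const canvas = document.createElement('canvas');
  if (canvas.getContext('webgl2')){
    return true; // WebGL2 available: no specific extension required
  }
  const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  if (!gl){
    return false; // no WebGL at all
  }
  // WebGL1 only: a float or half-float texture extension is required:
  return Boolean(gl.getExtension('OES_texture_float') ||
                 gl.getExtension('OES_texture_half_float'));
}
```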
In all cases, WebRTC should be implemented in the web browser, otherwise the API will not be able to get the camera video feed. Here are the compatibility tables from caniuse.com: WebGL1, WebGL2, WebRTC.
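A quick runtime check that the camera capture API is exposed (simplified):
```javascript
// Sketch: check that the browser can provide a camera video feed (MediaStream API).
if (!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia)){
  console.log('This browser cannot provide the camera video feed.');
}
```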
If a compatibility error is triggered, please post an issue on this repository. If the problem concerns camera access, please first retry after closing all applications which could be using your camera (Skype, Messenger, other browser tabs and windows, ...). Please include:
- a screenshot of webglreport.com - WebGL1 (about your `WebGL1` implementation),
- a screenshot of webglreport.com - WebGL2 (about your `WebGL2` implementation),
- the log from the web console,
- the steps to reproduce the bug, and screenshots.
Apache 2.0. This application is free for both commercial and non-commercial use.