A lightweight and robust JavaScript/WebGL face tracking library designed for augmented reality face filters
This JavaScript library detects and tracks the face in real time from the camera video feed captured with WebRTC. It is then possible to overlay 3D content for augmented reality applications. We provide various demonstrations using the main WebGL 3D engines. We have included in this repository the release versions of the 3D engines so that the demos work with a specific version (they are in /libs/<name of the engine>/).
This library is lightweight and does not include any 3D engine or third party library. We want to keep it framework agnostic, so the outputs of the library are raw: whether a face is detected or not, the position and scale of the detected face, and its rotation Euler angles. But thanks to the featured helpers, examples and boilerplates, you can quickly deal with a higher level context (head motion tracking, face filters, face replacement...). We continuously add new demonstrations, so stay tuned!
Table of contents
- Features
- Architecture
- Demonstrations and apps
- Specifications
- Integration
- Hosting
- About the tech
- Articles and tutorials
Features
Here are the main features of the library:
- face detection,
- face tracking,
- face rotation detection,
- mouth opening detection,
- multiple face detection and tracking,
- very robust to all lighting conditions,
- video acquisition with HD video ability,
- mobile friendly,
- interfaced with 3D engines like THREE.JS, BABYLON.JS, A-FRAME,
- interfaced with more accessible APIs like CANVAS, CSS3D.
Architecture
- `/demos/`: source code of the demonstrations, sorted by the 2D/3D engine used,
- `/dist/`: core scripts of the library:
  - `jeelizFaceFilter.js`: main minified script,
  - `jeelizFaceFilter.module.js`: main minified script for use as a module (with `import` or `require`),
- `/neuralNets/`: trained neural network models:
  - `NN_DEFAULT.json`: file storing the neural network parameters, loaded by the main script,
  - `NN_<xxx>.json`: alternative neural network models,
- `/helpers/`: scripts which can help you use this library in some specific use cases,
- `/libs/`: third party libraries and 3D engines used in the demos,
- `/reactThreeFiberDemo/`: NPM/React/Webpack/Three-Fiber boilerplate.
Demonstrations and apps
Included in this repository
These demonstrations are included in this repository, so they are released under the FaceFilter licence. You will probably find among them the perfect starting point to build your own face-based augmented reality application:
- REACT/THREE FIBER boilerplate: `/reactThreeFiberDemo`
- BABYLON.JS based demos:
  - Boilerplate (displays a cube on the user's head): live demo, source code
- THREE.JS based demos (see the specific README about THREE.js based demo problems):
  - Boilerplates:
    - Boilerplate (displays a cube on the user's head): live demo, source code
    - Boilerplate with only one `<canvas>` element: live demo, source code
    - Same boilerplate but using `neuralNets/NN_4EXPR_1.json` as neural network, displaying 4 expressions: live demo, source code
    - Multiple face tracking: live demo, source code
    - GLTF fullscreen demo with HD video: live demo, source code
  - AR 3D demos:
    - Werewolf (turn yourself into a werewolf): live demo, source code
    - Angel/Demon (discover whether the angel or the demon will win in this animated scene): live demo, source code
    - Anonymous mask and video effect: live demo, source code
    - Rupy Motorcycle Helmet VTO: live demo, source code
    - Dog: live demo, source code
    - Butterflies animation: live demo, source code
    - Clouds above the head: live demo, source code
    - Casa-de-Papel mask: live demo, source code
    - Miel Pops glasses and bees: live demo, source code
    - Football makeup: live demo, source code
    - Tiger face filter with mouth opening detection (strong WTF effect): live demo, source code
    - Fireworks - particles: live demo, source code
  - Face painting or deformation:
    - Face deformation: live demo, source code
    - Face cel shading: live demo, source code
  - Demos linked with tutorials:
    - Luffy's Hat: live demo, source code part 1, tutorial part 1, source code part 2, tutorial part 2
    - Statue Of Liberty: live demo, source code, interactive tutorial
    - Matrix: live demo, source code, tutorial in French, tutorial in English
  - Misc:
    - Head controlled navigation: live demo, source code
    - Glasses virtual try-on: live demo, source code
- A-FRAME based demos:
  - Boilerplate (displays a cube on the user's head): live demo, source code
- CSS3D based demos:
  - Boilerplate (displays a `<DIV>` element on the user's head): live demo, source code
  - Comedy glasses demo: live demo, source code
- Canvas2D based demos:
  - Draw on the face with the mouse: live demo, source code
  - 2D face detection and tracking - only 30 lines of code!: live demo, source code, JSfiddle
  - 2D face detection and tracking from a video file instead of the camera: live demo, source code
  - 2D face detection and tracking simultaneously from a video file and from the camera (multiple trackers example): live demo, source code
- CESIUM.JS based demos:
  - 3D view of the Earth with head controlled navigation: live demo, source code, article about the demo
- Face replacement demos:
  - Insert your face into portrait art paintings or film posters: live demo, source code
  - Insert your face into an animated GIF: live demo, specific README, source code
  - The traditional face swap, fullscreen and with color correction: live demo, source code
- Head motion control:
  - PACMAN game with head controlled navigation: live demo, source code
  - Head controlled mouse cursor: live demo, source code
Some demo videos are available on YouTube. You can also subscribe to the Jeeliz YouTube channel or to the @WebARRocks Twitter account to keep up with our latest developments.
Third party
These amazing applications rely on this library for face detection and tracking:
- Applications made by Movable Ink
- VRMjidori: Replace your head with a manga style character provided in .VRM file format. This demo has been developed by けしごむ/Nono
- FaceVoice: Control the mouse pointer with your head and by saying "Click". Discussion on Reddit
- Halloween masks: Amazing Halloween masks experience made by Thorsten Bux. The code is published on Github here: ThorstenBux/halloween-masks
- GazeFilter: Library to accurately track the pupil positions. There is a nice eye-tracking demo, including a debug view of the FaceFilter output, here
- SnapChat Clone: Great work from Towhid Kashem, who wrapped this library to build a Snapchat clone. Check out the Github source code repository, try the live demo or read the Reddit thread
- Facepaint: Draw your own face filters with this creative web application developed by Patricia Arnedo - Medium article on the demo
- Virtual Fighter: Find the Virtual Fighter (SEGA video game) who looks like you. The first part of this experiment relies on face-api.js to detect your face and landmarks. Then click on PUSH and a 3D face filter of a virtual fighter will be applied to your face using this library and Three.js
- Are you a true wizard? Try on an amazing wizard hat in this demo made by Level 30 Wizards
- Vertebrae VTO: Vertebrae relies on this library for face detection and tracking for some of its virtual try-on products. You can check it out on:
  - Moscot: click on the VIRTUAL TRY-ON button on the top-left of the product picture,
  - Goodr: click on the VIRTUAL TRY-ON button on the top-left of the product picture,
  - Tenth Street: click on the Try it on button.
If you have developed an application or a fun demo using this library, we would love to see it and insert a link here! Just contact us on Twitter @WebARRocks or LinkedIn.
Specifications
Here we describe how to use this library. Although we plan to add new features, we will keep it backward compatible.
Get started
On your HTML page, you first need to include the main script between the `<head>` and `</head>` tags:
<script src="dist/jeelizFaceFilter.js"></script>
Then you should include a `<canvas>` HTML element in the DOM, between the `<body>` and `</body>` tags. The `width` and `height` properties of the `<canvas>` element should be set. They define the resolution of the canvas, and the final rendering will be computed at this resolution. Be careful not to enlarge the canvas too much with its CSS properties without increasing its resolution, otherwise it may look blurry or pixelated. We advise you to fix the resolution to the actual canvas size. Do not forget to call `JEELIZFACEFILTER.resize()` if you resize the canvas after the initialization step. We strongly encourage you to use our helper `/helpers/JeelizResizer.js` to set the width and height of the canvas (see the Optimization/Canvas and video resolutions section).
<canvas width="600" height="600" id='jeeFaceFilterCanvas'></canvas>
This canvas will be used by WebGL both for the computation and the 3D rendering. When your page is loaded you should launch this function:
JEELIZFACEFILTER.init({
canvasId: 'jeeFaceFilterCanvas',
NNCPath: '../../../neuralNets/', // path to JSON neural network model (NN_DEFAULT.json by default)
callbackReady: function(errCode, spec){
if (errCode){
console.log('AN ERROR HAPPENS. ERROR CODE =', errCode);
return;
}
// [init scene with spec...]
console.log('INFO: JEELIZFACEFILTER IS READY');
}, //end callbackReady()
// called at each render iteration (drawing loop)
callbackTrack: function(detectState){
// Render your scene here
// [... do something with detectState]
} //end callbackTrack()
});
Optional init arguments
- `<boolean> followZRot`: Allow full rotation around the depth axis. Default value: `false`. See Issue 42 for more details,
- `<integer> maxFacesDetected`: Only for multiple face detection - maximum number of faces which can be detected and tracked. Should be between `1` (no multiple detection) and `8`,
- `<integer> animateDelay`: Used only in normal rendering mode (not in slow rendering mode). With this setting you can accurately set the number of milliseconds the browser waits at the end of the rendering loop before starting another detection. If you use the canvas of this library as a secondary element (for example in the PACMAN or EARTH NAVIGATION demos) you should set a small `animateDelay` value (for example 2 milliseconds) in order to avoid rendering lags,
- `<function> onWebcamAsk`: Function launched just before asking the user for permission to access their camera,
- `<function> onWebcamGet`: Function launched just after the user has accepted to share their video. It is called with the video element as argument,
- `<dict> videoSettings`: Override the WebRTC video settings, which are by default:
{
'videoElement' // not set by default. <video> element used
// WARN: If you specify this parameter,
// 1. all other settings will be useless
// 2. it means that you fully handle the video aspect
// 3. in case of using web-camera device make sure that
// initialization goes after `loadeddata` event of the `videoElement`,
// otherwise face detector will yield very low `detectState.detected` values
// (to be more sure also await first `timeupdate` event)
'deviceId' // not set by default
'facingMode': 'user', // to use the rear camera, set to 'environment'
'idealWidth': 800, // ideal video width in pixels
'idealHeight': 600, // ideal video height in pixels
'minWidth': 480, // min video width in pixels
'maxWidth': 1920, // max video width in pixels
'minHeight': 480, // min video height in pixels
'maxHeight': 1920, // max video height in pixels,
'rotate': 0, // rotation in degrees possible values: 0,90,-90,180
'flipX': false // if we should flip horizontally the video. Default: false
},
If the user has a mobile device in portrait display mode, the width and height of these parameters are automatically swapped for the first camera request. If this request does not succeed, we swap the width and height back and try again.
- `<dict> scanSettings`: Override the face scan settings - see the `set_scanSettings(...)` method for more information,
- `<dict> stabilizationSettings`: Override the tracking stabilization settings - see the `set_stabilizationSettings(...)` method for more information,
- `<boolean> isKeepRunningOnWinFocusLost`: Whether the detection loop should keep running even if the user switches the browser tab or minimizes the browser window. Default value is `false`. This option is useful for a videoconferencing app, where a face mask should still be computed even if the FaceFilter window is not the active window. Even with this option enabled, face tracking is still slowed down when the FaceFilter window is not active.
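For instance, here is a hedged sketch combining several of these optional arguments in a single init call (the canvas id and neural network path are the same placeholders as in the snippet above):

```javascript
JEELIZFACEFILTER.init({
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/',    // path to the JSON neural network model
  followZRot: true,                   // allow full rotation around the depth axis
  maxFacesDetected: 2,                // track up to 2 faces
  animateDelay: 2,                    // wait 2 ms between 2 detection loops
  videoSettings: {
    idealWidth: 1280,                 // ask for an HD video stream if possible
    idealHeight: 720,
    facingMode: 'user'                // selfie camera
  },
  callbackReady: function(errCode, spec){
    if (errCode) return console.log('Error:', errCode);
    console.log('INFO: JEELIZFACEFILTER IS READY');
  },
  callbackTrack: function(detectState){
    // render your scene here
  }
});
```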
Error codes
The initialization function (`callbackReady` in the code snippet) will be called with an error code (`errCode`). It can have these values:
- `false`: no error occurred,
- `"GL_INCOMPATIBLE"`: WebGL is not available, or this WebGL configuration is not sufficient (there is no WebGL2, or there is WebGL1 without the OES_TEXTURE_FLOAT or OES_TEXTURE_HALF_FLOAT extension),
- `"ALREADY_INITIALIZED"`: the library has already been initialized,
- `"NO_CANVASID"`: no canvas or canvas ID was specified,
- `"INVALID_CANVASID"`: the `<canvas>` element cannot be found in the DOM,
- `"INVALID_CANVASDIMENSIONS"`: the `width` and `height` dimensions of the canvas are not specified,
- `"WEBCAM_UNAVAILABLE"`: cannot get access to the camera (the user has no camera, or has not accepted to share the device, or the camera is already busy),
- `"GLCONTEXT_LOST"`: the WebGL context was lost. If the context is lost after the initialization, the `callbackReady` function will be launched a second time with this value as error code,
- `"MAXFACES_TOOHIGH"`: the maximum number of detected and tracked faces, specified by the optional init argument `maxFacesDetected`, is too high.
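As an illustration, here is a hedged sketch of a `callbackReady` function branching on some of these error codes (the messages shown to the user are only placeholders):

```javascript
JEELIZFACEFILTER.init({
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/',
  callbackReady: function(errCode, spec){
    switch (errCode){
      case false:
        console.log('INFO: JEELIZFACEFILTER IS READY');
        break;
      case 'GL_INCOMPATIBLE':
        alert('Your browser or device does not support the required WebGL features.');
        break;
      case 'WEBCAM_UNAVAILABLE':
        alert('Cannot access the camera. Close other applications using it and retry.');
        break;
      default:
        console.log('Initialization error:', errCode);
    }
  },
  callbackTrack: function(detectState){ /* ... */ }
});
```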
The returned objects
We detail here the arguments of the callback functions like `callbackReady` or `callbackTrack`. The references of these objects do not change, for memory optimization purposes. So you should copy their property values if you want to keep them unchanged outside the scope of the callback functions.
The initialization returned object
The initialization callback function (`callbackReady` in the code snippet) is called with a second argument, `spec`, if there is no error. `spec` is a dictionary with these properties:
- `<WebGLRenderingContext> GL`: the WebGL context. The 3D rendering engine should use this WebGL context,
- `<canvas> canvasElement`: the `<canvas>` element,
- `<WebGLTexture> videoTexture`: a WebGL texture displaying the camera video. It has the same resolution as the camera video,
- `[<float>, <float>, <float>, <float>] videoTransformMat2`: flattened 2x2 matrix encoding a scaling and a rotation. We should apply this matrix to the viewport coordinates to render `videoTexture` in the viewport,
- `<HTMLVideoElement> videoElement`: the video used as source for the WebGL texture `videoTexture`,
- `<int> maxFacesDetected`: the maximum number of detected faces.
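A minimal sketch of a `callbackReady` function that keeps references to these properties for later rendering (nothing here is specific to a 3D engine):

```javascript
JEELIZFACEFILTER.init({
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/',
  callbackReady: function(errCode, spec){
    if (errCode) return;
    const gl = spec.GL;                       // WebGL context, to be shared with the 3D engine
    const canvas = spec.canvasElement;        // the <canvas> element used by FaceFilter
    const videoTexture = spec.videoTexture;   // WebGL texture holding the camera video
    const videoTransformMat2 = spec.videoTransformMat2; // flattened 2x2 matrix mapping the video to the viewport
    console.log('Max tracked faces:', spec.maxFacesDetected);
    // hand gl, canvas and videoTexture over to your rendering code here
  },
  callbackTrack: function(detectState){ /* render here */ }
});
```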
The detection state
At each render iteration a callback function is executed (`callbackTrack` in the code snippet). It has one argument (`detectState`), which is a dictionary with these properties:
- `<float> detected`: the face detection probability, between `0` and `1`,
- `<float> x`, `<float> y`: the 2D coordinates of the center of the detection frame in the viewport (each between -1 and 1, `x` from left to right and `y` from bottom to top),
- `<float> s`: the scale along the horizontal axis of the detection frame, between 0 and 1 (1 for the full width). The detection frame is always square,
- `<float> rx`, `<float> ry`, `<float> rz`: the Euler angles of the head rotation in radians,
- `<Float32Array> expressions`: array listing the facial expression coefficients:
  - `expressions[0]`: mouth opening coefficient (`0` → mouth closed, `1` → mouth fully opened).
In multiface detection mode, `detectState` is an array. Its size is equal to the maximum number of detected faces, and each element of this array has the format described just before.
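For example, a hedged sketch of a tracking callback in single face mode; the 0.8 confidence threshold is an arbitrary assumption, not a library constant:

```javascript
function onTrack(detectState){
  if (detectState.detected > 0.8){ // arbitrary confidence threshold
    const x = detectState.x, y = detectState.y;  // viewport coordinates, between -1 and 1
    const scale = detectState.s;                 // relative size of the detection frame
    const rx = detectState.rx, ry = detectState.ry, rz = detectState.rz; // head rotation (radians)
    const mouthOpening = detectState.expressions[0]; // 0 = closed, 1 = fully opened
    // position and rotate your 3D content from these values here
  }
}

JEELIZFACEFILTER.init({
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/',
  callbackTrack: onTrack
});
```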
Miscellaneous methods
After the initialization (i.e. after `callbackReady` is launched), these methods are available:
- `JEELIZFACEFILTER.resize()`: should be called after resizing the `<canvas>` element to adapt the cropping of the video. It should also be called if the device orientation is changed, to take the new video dimensions into account,
- `JEELIZFACEFILTER.toggle_pause(<boolean> isPause, <boolean> isShutOffVideo)`: pause/resume. This method will completely stop the rendering/detection loop. If `isShutOffVideo` is set to `true`, the media stream track will be stopped and the camera light will turn off. It returns a `Promise` object,
- `JEELIZFACEFILTER.toggle_slow(<boolean> isSlow)`: toggle the slow rendering mode: because this library can consume a lot of GPU resources, it may slow down other elements of the application. If the user opens a CSS menu for example, the CSS transitions and the DOM updates can be slow. With this function you can slow down the rendering in order to relieve the GPU. Unfortunately the tracking and the 3D rendering will also be slower, but this is not a problem if the user is focusing on other elements of the application. We encourage you to enable the slow mode as soon as the user's attention is focused on something other than the canvas,
- `JEELIZFACEFILTER.set_animateDelay(<integer> delay)`: change the `animateDelay` (see `init()` arguments),
- `JEELIZFACEFILTER.set_inputTexture(<WebGLTexture> tex, <integer> width, <integer> height)`: change the video input to a WebGL texture instance. The dimensions of the texture, in pixels, should be provided,
- `JEELIZFACEFILTER.reset_inputTexture()`: come back to the user's video as input texture,
- `JEELIZFACEFILTER.get_videoDevices(<function> callback)`: should be called before the `init` method. 2 arguments are provided to the callback function:
  - `<array> mediaDevices`: an array with all the devices found. Each device is a JavaScript object having a `deviceId` string attribute. This value can be provided to the `init` method to use a specific camera. If an error happens, this value is set to `false`,
  - `<string> errorLabel`: if an error happens, the label of the error. It can be: `NOTSUPPORTED`, `NODEVICESFOUND` or `PROMISEREJECTED`.
- `JEELIZFACEFILTER.set_scanSettings(<object> scanSettings)`: override the scan settings. `scanSettings` is a dictionary with the following properties:
  - `<float> scale0Factor`: relative width (`1` → full width) of the searching window at the largest scale level. Default value is `0.8`,
  - `<int> nScaleLevels`: number of scale levels. Default is `3`,
  - `[<float>, <float>, <float>] overlapFactors`: relative overlap along the X, Y and scale axes between 2 searching window positions. Higher values make the scan faster but it may miss some positions. Set to `[1, 1, 1]` for no overlap. Default value is `[2, 2, 3]`,
  - `<int> nDetectsPerLoop`: number of detections per drawing loop. `-1` for an adaptive value. Default: `-1`,
  - `<boolean> enableAsyncReadPixels`: enable asynchronous GPU reading. Default is `false`. It will free a lot of CPU resources but it may add latency on some devices.
- `JEELIZFACEFILTER.set_stabilizationSettings(<object> stabilizationSettings)`: override the detection stabilization settings. The output of the neural network is always noisy, so we need to stabilize it using a floating average to avoid shaking artifacts. The internal algorithm first computes a stabilization factor `k` between `0` and `1`. If `k==0.0`, the detection is bad and we favor responsiveness over stabilization. This happens when the user is moving quickly, rotating the head, or when the detection is poor. On the contrary, if `k` is close to `1`, the detection is good and the user does not move a lot, so we can stabilize a lot. `stabilizationSettings` is a dictionary with the following properties:
  - `[<float> minValue, <float> maxValue] translationFactorRange`: multiply `k` by a factor `kTranslation` depending on the translation speed of the head (relative to the viewport). `kTranslation=0` if `translationSpeed<minValue` and `kTranslation=1` if `translationSpeed>maxValue`. The regression is linear. Default value: `[0.002, 0.005]`,
  - `[<float> minValue, <float> maxValue] rotationFactorRange`: analogous to `translationFactorRange` but for the rotation speed. Default value: `[0.015, 0.1]`,
  - `[<float> minValue, <float> maxValue] qualityFactorRange`: analogous to `translationFactorRange` but for the head detection coefficient. Default value: `[0.9, 0.98]`,
  - `[<float> minValue, <float> maxValue] alphaRange`: specifies how to apply `k`. Between 2 successive detections, we blend the previous `detectState` values with the current detection values using a mixing factor `alpha`. `alpha=<minValue>` if `k<0.0` and `alpha=<maxValue>` if `k>1.0`. Between the 2 values, the variation is quadratic. Default value: `[0.05, 1.0]`.
- `JEELIZFACEFILTER.update_videoElement(<video> vid, <function|False> callback)`: change the video element used for the face detection (which can be provided via `VIDEOSETTINGS.videoElement`) to another video element. A callback function can be called when it is done,
- `JEELIZFACEFILTER.update_videoSettings(<object> videoSettings)`: dynamically change the video settings (see Optional init arguments for the properties of `videoSettings`). It is useful to switch from the selfie camera (user) to the back (environment) camera, as shown in the sketch after this list. A `Promise` is returned,
- `JEELIZFACEFILTER.set_videoOrientation(<integer> angle, <boolean> flipX)`: dynamically change `videoSettings.rotate` and `videoSettings.flipX`. This method should be called after initialization. The default values are `0` and `false`. The angle should be chosen among these values: `0, 90, 180, -90`,
- `JEELIZFACEFILTER.destroy()`: clean both the graphic memory and the JavaScript memory, and uninitialize the library. After that you need to initialize the library again. A `Promise` is returned,
- `JEELIZFACEFILTER.reset_GLState()`: reset the WebGL context,
- `JEELIZFACEFILTER.render_video()`: render the video on the `<canvas>` element.
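As an example, a hedged sketch of switching to the rear camera at runtime with `update_videoSettings` (it assumes the library is already initialized):

```javascript
// switch from the selfie (user) camera to the rear (environment) camera:
JEELIZFACEFILTER.update_videoSettings({
  facingMode: 'environment'
}).then(function(){
  console.log('Camera switched to the rear camera');
}).catch(function(err){
  console.log('Cannot switch the camera:', err);
});
```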
Optimization
1 or 2 Canvas?
You can either:
- use 1 `<canvas>` with 1 WebGL context, shared by FaceFilter and THREE.js (or another 3D engine),
- use 2 separate `<canvas>` elements, aligned using CSS: 1 canvas for the AR content, and the second one to display the video and to run this library.

The first option is often more efficient, but the newest versions of THREE.js are not well suited to sharing the WebGL context, and some weird bugs can occur. So we strongly advise using 2 separate canvases.
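For instance, a minimal sketch of the 2-canvas layout (the element ids and sizes are placeholders, not names required by the library):

```html
<!-- the FaceFilter canvas displays the video, the AR canvas is overlaid on top of it -->
<div style="position: relative; width: 600px; height: 600px">
  <canvas id="jeeFaceFilterCanvas" width="600" height="600"
          style="position: absolute; top: 0; left: 0"></canvas>
  <canvas id="arCanvas" width="600" height="600"
          style="position: absolute; top: 0; left: 0"></canvas>
</div>
```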
Canvas and video resolutions
We strongly recommend using the `JeelizResizer` helper to size the canvas to the display size, in order to not compute more pixels than required. This helper also computes the best camera resolution, which is the closest to the actual canvas size. If the camera resolution is too high compared to the canvas resolution, your application will be unnecessarily slowed down because it is quite costly to refresh the WebGL texture for each video frame. And if the video resolution is too low compared to the canvas resolution, the image will be blurry. You can take a look at the THREE.js boilerplate to see how it is used. To use the helper, you first need to include it in the HTML code:
<script src="https://appstatic.jeeliz.com/faceFilter/JeelizResizer.js"></script>
Then in your main script, before initializing Jeeliz FaceFilter, you should call it to size the canvas to the best resolution and to find the optimal video resolution:
JeelizResizer.size_canvas({
canvasId: 'jeeFaceFilterCanvas',
callback: function(isError, bestVideoSettings){
JEELIZFACEFILTER.init({
videoSettings: bestVideoSettings,
// ...
// ...
});
}
});
Take a look at the source code of this helper (in helpers/JeelizResizer.js) to get more information.
Misc
A few tips:
- In terms of optimization, the WebGL based demos are more optimized than the Canvas2D demos, which are themselves more optimized than the CSS3D demos.
- Try to use resources that are as light as possible. Each texture image should have the lowest possible resolution; use mipmapping for texture minification filtering.
- The more effects you use, the slower it will be. Add the 3D effects gradually to check that they do not penalize the frame rate too much.
- Use low polygon meshes.
Multiple faces
It is possible to detect and track several faces at the same time. To enable this feature, you only have to specify the optional init parameter `maxFacesDetected`. Its maximum value is `8`. Indeed, if you are tracking, for example, 8 faces at the same time, the detection will be slower because there is 8 times less computing power per tracked face. If you have set this value to `8` but only `1` face is detected, it should not slow down much compared to single face tracking.
If multiple face tracking is enabled, the `callbackTrack` function is called with an array of detection states (instead of a single detection state). The detection state format is still the same.
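For instance, a hedged sketch of a multiple face tracking setup (the 0.8 threshold is again an arbitrary choice):

```javascript
JEELIZFACEFILTER.init({
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/',
  maxFacesDetected: 4, // enable multiple face tracking
  callbackTrack: function(detectStates){
    detectStates.forEach(function(detectState, faceIndex){
      if (detectState.detected > 0.8){
        // the face with index faceIndex is tracked:
        // use detectState.x, .y, .s, .rx, .ry, .rz as in single face mode
      }
    });
  }
});
```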
You can use our Three.js multiple faces detection helper, `helpers/JeelizThreeHelper.js`, to get started and test this example. The main script has only 60 lines of code!
Multiple videos
To create a new `JEELIZFACEFILTER` instance, you need to call:
const JEELIZFACEFILTER2 = JEELIZFACEFILTER.create_new();
Be aware that:
- Each instance uses a new WebGL context. Depending on the configuration, the number of WebGL contexts is limited. We advise not to use more than 16 contexts simultaneously,
- The computing power will be shared between the contexts. Using multiple instances may increase the latency.
Check out this demo to see an example of how it works: source code, live demo
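A short hedged sketch of how a second instance could be initialized on its own canvas; the second canvas id and the `<video>` element below are hypothetical:

```javascript
// first instance, tracking the camera video:
JEELIZFACEFILTER.init({
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/'
  // ... callbacks ...
});

// second, independent instance, tracking a video file on another canvas:
const JEELIZFACEFILTER2 = JEELIZFACEFILTER.create_new();
JEELIZFACEFILTER2.init({
  canvasId: 'secondFaceFilterCanvas',   // hypothetical second canvas
  NNCPath: '../../../neuralNets/',
  videoSettings: {
    videoElement: document.getElementById('myVideoFile') // hypothetical <video> element
  }
  // ... callbacks ...
});
```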
Changing the 3D engine
It is possible to use another 3D engine than BABYLON.JS or THREE.JS. If you have accomplished this work, we would be interested to add your demonstration in this repository (or link to your code). Just open a pull request.
The 3D engine can either share the WebGL context and the canvas with FaceFilter, or use a second canvas overlaid on the FaceFilter canvas (the FaceFilter canvas is just used to render the video). In the first case, the WebGL context is created by Jeeliz Face Filter. We strongly encourage the second approach, even if the first one may be a bit more optimized.
Changing the neural network
Since July 2018 it is possible to change the neural network. When calling `JEELIZFACEFILTER.init({...})`, set the `NNCPath` value (by default the path of `NN_DEFAULT.json`) to a specific neural network file:
JEELIZFACEFILTER.init({
NNCPath: '../../neuralNets/NN_LIGHT_1.json'
// ...
})
It is also possible to provide the neural network model JSON file content directly by using the `NNC` property instead of `NNCPath`.
We provide several neural network models:
- `neuralNets/NN_DEFAULT.json`: the default neural network. Good tradeoff between size and performance,
- `neuralNets/NN_WIDEANGLES_<X>.json`: this neural network is better at detecting wide head angles (but is less accurate for small angles),
- `neuralNets/NN_LIGHT_<X>.json`: a light version of the neural network. The file is twice as light and it runs faster, but it is less accurate for large head rotation angles,
- `neuralNets/NNC_VERYLIGHT_<X>.json`: even lighter than the previous version: 250 KB, and very fast. But it is not very accurate and not robust to all lighting conditions,
- `neuralNets/NN_VIEWTOP_<X>.json`: this neural network is perfect if the camera has a bird's eye view (if you use this library for a kiosk setup, for example),
- `neuralNets/NN_INTEL1536.json`: neural network working with Intel 1536 Iris GPUs (there is a graphic driver bug, see #85),
- `neuralNets/NN_4EXPR_<X>.json`: this neural network also detects 4 facial expressions (mouth opening, smile, frown eyebrows, raised eyebrows).
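As an illustration, a hedged sketch of reading the expression coefficients with this neural network; the mapping of indices 1 to 3 to smile, frown eyebrows and raised eyebrows follows the order listed above and is an assumption, not a documented guarantee:

```javascript
JEELIZFACEFILTER.init({
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../neuralNets/NN_4EXPR_1.json', // expression-enabled neural network
  callbackTrack: function(detectState){
    if (detectState.detected > 0.8){ // arbitrary confidence threshold
      const expr = detectState.expressions;
      console.log('mouth opening:', expr[0]);
      // assumed index order for the remaining coefficients (see note above):
      console.log('smile:', expr[1], 'frown eyebrows:', expr[2], 'raised eyebrows:', expr[3]);
    }
  }
  // ... callbackReady, etc.
});
```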
Using module
`/dist/jeelizFaceFilter.module.js` is exactly the same as `/dist/jeelizFaceFilter.js`, except that it works as a JavaScript module, so you can import it directly using:
import 'dist/jeelizFaceFilter.module.js'
or using `require` (see issue #72):
const faceFilter = require('./lib/jeelizFaceFilter.module.js').JEELIZFACEFILTER;
faceFilter.init({
// you can also provide the canvas directly
// using the canvas property instead of canvasId:
canvasId: 'jeeFaceFilterCanvas',
NNCPath: '../../../neuralNets/', // path to JSON neural network model (NN_DEFAULT.json by default)
callbackReady: function(errCode, spec){
if (errCode){
console.log('AN ERROR HAPPENS. ERROR CODE =', errCode);
return;
}
// [init scene with spec...]
console.log('INFO: JEELIZFACEFILTER IS READY');
}, //end callbackReady()
// called at each render iteration (drawing loop)
callbackTrack: function(detectState){
// Render your scene here
// [... do something with detectState]
} //end callbackTrack()
});
Integration
With a bundler
If you use this library with a bundler (typically Webpack or Parcel), first you should use the module version.
Then, with the standard library, we load the neural network model (specified by the `NNCPath` initialization parameter) using AJAX, for the following reasons:
- if the user does not accept to share their camera, or if WebGL is not enabled, we don't have to load the neural network model,
- we suppose that the library is deployed using a static HTTPS server.

With a bundler, it is a bit more complicated. It is easier to load the neural network model using a classic `import` or `require` call and to provide it using the `NNC` init parameter:
const faceFilter = require('./lib/jeelizFaceFilter.module.js').JEELIZFACEFILTER
const neuralNetworkModel = require('./neuralNets/NN_DEFAULT.json')
faceFilter.init({
NNC: neuralNetworkModel, // instead of NNCPath
// ... other init parameters
});
You can check out the amazing work of @jackbilestech, jackbilestech/jeelizFaceFilter, if you are interested in using this library in an NPM / ES6 / Webpack environment.
With JavaScript frontend frameworks
With REACT and THREE Fiber
Since October 2020, there is a React / THREE Fiber / Webpack boilerplate in the /reactThreeFiberDemo path.
See also
We don't officially cover integration with mainstream JavaScript frontend frameworks (React, Vue, Angular) here. Feel free to submit a pull request to add a boilerplate or a demo for a specific framework. Here are some submitted issues dealing with React integration:
- React integration: #74 and #122
- is it possible to use this library in react native project
- Having difficulty using JeelizThreeHelper in ReactApp
You can also take a look at these Github code repositories:
- ikebastuz/jeelizTest: React demo of a CSS3D FaceFilter. It is based on Create React App
- CloffWrangler/facevoice: Another demo based on Create React App
- nickydev100/FFMpeg-Angular-Face-Filter: Angular boilerplate
Native
It is possible to execute a JavaScript application using this library in a WebView for a native app integration.
For iOS, camera access is disabled inside the `WKWebView` component before iOS 14.3. If you want your application to run on devices running iOS <= 14.2, you have to implement a hack to stream the camera video into the WebView using websockets.
This hack has not been implemented in this repository but in a similar Jeeliz library, Jeeliz Weboji. Here are the links:
- Apache Cordova IOS demo (it should also work on Android)
- Youtube video of the demo
- Github submitted issue
- Linkedin post detailing pros and cons
But it is still a dirty hack introducing a bottleneck. It still runs pretty well on a high end device (tested on an iPhone XR), but it is better to stick to a full web environment.
There is also this Github issue detailing how to embed the library into a `WebView` component for React Native (Android only).
Hosting
This library requires the user's camera video feed through the `MediaStream API`, so your application should be hosted by an HTTPS server (even with a self-signed certificate). It won't work at all over insecure HTTP, even locally with some web browsers.
The development server
For development purposes we provide a simple and minimalist HTTPS server so you can check out the demos or develop your very own filters. To launch it, execute in the bash console:

With Python 2:

python2 httpsServer.py

It requires Python 2.X. Then open https://localhost:4443 in your web browser.

With Node:

npm install
npm run dev

Then go to https://127.0.0.1:8000/demos/threejs/cube/index.html. The browser will display a "not secure" warning: go to the advanced options and click proceed.
Hosting optimization
You can use our hosted and up to date version of the library, available here:
https://appstatic.jeeliz.com/faceFilter/jeelizFaceFilter.js
It uses the neural network `NN_DEFAULT.json` hosted in the same path. The helpers used in these demos (all scripts in /helpers/) are also hosted on https://appstatic.jeeliz.com/faceFilter/.
It is served through a content delivery network (CDN) using gzip compression.
If you host the scripts yourself, be careful to enable gzip HTTP/HTTPS compression for JSON and JS files. Indeed, the neural network JSON file, `neuralNets/NN_DEFAULT.json`, is quite heavy but compresses very well with gzip. You can check the gzip compression of your server here.
The neural network file, `neuralNets/NN_DEFAULT.json`, is loaded using an AJAX `XMLHttpRequest` after `JEELIZFACEFILTER.init()` is called. This loading happens only after the user has accepted to share their camera, so we don't load this rather heavy file if the user refuses to share it or if there is no camera available. The loading can be faster if you systematically preload `neuralNets/NN_DEFAULT.json` using a service worker or a simple raw `XMLHttpRequest` just after the HTML page loads. The file will then already be in the browser cache when Jeeliz FaceFilter requests it.
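For example, a minimal preloading sketch using a raw `XMLHttpRequest`; adapt the path to your own hosting layout:

```javascript
// warm the browser cache with the neural network model as soon as the page is loaded:
window.addEventListener('load', function(){
  const xhr = new XMLHttpRequest();
  xhr.open('GET', './neuralNets/NN_DEFAULT.json'); // path to adapt to your hosting layout
  xhr.send();
  // the response itself is not used: the goal is only to populate the browser cache
});
```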
About the tech
Under the hood
This library relies on Jeeliz WebGL Deep Learning technology to detect and track the user's face using a neural network. The accuracy is adaptive: the better the hardware, the more detections are processed per second. Everything is done client-side.
Compatibility
- If `WebGL2` is available, it uses `WebGL2` and no specific extension is required,
- If `WebGL2` is not available but `WebGL1` is, we require either the `OES_TEXTURE_FLOAT` extension or the `OES_TEXTURE_HALF_FLOAT` extension,
- If `WebGL2` is not available, and if `WebGL1` is not available or neither `OES_TEXTURE_FLOAT` nor `OES_TEXTURE_HALF_FLOAT` is implemented, the user is not compatible.
If a compatibility error occurs, please post an issue on this repository. If the problem is related to camera access, please first retry after closing all applications which could use the camera (Skype, Messenger, other browser tabs and windows...). Please include:
- the browser, the version of the browser, the operating system, the version of the operating system, the device model and the GPU if it is a desktop computer,
- a screenshot of webglreport.com - WebGL1 (about your `WebGL1` implementation),
- a screenshot of webglreport.com - WebGL2 (about your `WebGL2` implementation),
implementation), - the log from the web console,
- the steps to reproduce the bug, and screenshots.
Articles and tutorials
Have you written a tutorial using this library? Submit a pull request or send us the link, we would be glad to add it.
In English
- Creating a Snapchat-like face filter using Jeeliz FaceFilter and THREE.JS:
  - Part 1: Creating your first filter
  - Part 2: User interactions and particles
- Build a multifacial face filter: interactive step by step tutorial hosted on WebGL Academy where you learn to build a Statue of Liberty using THREE.js and this library
- Tutorial: Matrix theme face filter
- Video tutorials by Chris Godber: Headtracking Controls with Three JS
- Tutorial on Medium by Patricia Arnedo: Building an AR Drawing App Using React
- How to develop a Web AR Facefilter with React and ThreeJS / React Three Fiber in 2021: great tutorial by Level 30 Wizards Creative Digital Studio to learn how to create a wizard hat face filter. Live demo here
- Build a Snapchat/Insta-like face filter using jeelizFaceFilter and threejs project
In French
- Tutorial: Matrix theme face filter on developpez.com: Développer un filtre facial webcam thème Matrix
In Japanese
- Good overall review and explanations of the library on Qiita.com: jeelizFaceFilterを試してみた
License
Apache 2.0. This application is free for both commercial and non-commercial use.
We appreciate attribution by including the Jeeliz logo and a link to the Jeeliz website in your application or desktop website. Of course we do not expect a large link to Jeeliz over your face filter, but if you can put the link in the credits/about/help/footer section it would be great.