Our project aims to find a DeepFake model that is suitable, in terms of visual quality, inference speed, and ease of deployment, to run on the server behind an Android demo application. We have uploaded and referenced the code of all three candidate models we selected, along with our Android application package and server code.
In this repo, there are three models we referenced from three authors:
- Model A: FaceSwap
- Model B: Few-shot face translation
- Model C: FaceShifter
Here we show short GIFs instead of the actual videos.
- This is the result generated by Model B. The inputs are two videos.
- This is the result generated by Model C. The inputs are an image and a video.
Feel free to give these models a try if you are interested: simply click on the following links to run them on Colab. Remember to set the hardware accelerator in the notebook settings to GPU (a quick check that the GPU is attached is sketched after the list below). More details are in each model's directory and notebook. You do not need to train any model before trying them.
- Try Model A in its Colab notebook to swap a face from a source image to a target image.
- Try Model B in its Colab notebook to swap a face from a source image to a target image, or from a source video to a target video.
- Try Model C in its Colab notebook. The notebook only includes the demo for swapping faces between two images; other demos, such as video face swapping, training, and server deployment, live in the ModelC directory. You can download the code and run it locally; see the README in the ModelC directory for a detailed description.
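
Before running a notebook, you can confirm that Colab actually assigned you a GPU. This is a minimal sketch; it assumes the runtime has PyTorch installed (true for standard Colab images), so adapt it if a model's notebook uses a different framework.

```python
# Quick sanity check that the Colab runtime has a GPU attached.
# Assumes PyTorch is available (standard on Colab); for TensorFlow,
# the equivalent check is tf.config.list_physical_devices('GPU').
import torch

if torch.cuda.is_available():
    print("GPU ready:", torch.cuda.get_device_name(0))
else:
    print("No GPU found: set Runtime > Change runtime type > Hardware accelerator to GPU.")
```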
The demo Android application is uploaded to present our work. You can download and install it; however, it cannot translate faces without a running server, and we only turned our server on during testing and presentation. The server code is uploaded as well and can be found in the ModelC directory. You may deploy it yourself and change the server URL in the app to try the demo.
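
If you deploy the server yourself, you can sanity-check it before pointing the Android app at it by sending a request directly. The sketch below is illustrative only: the host, port, endpoint path `/swap`, and form-field names are hypothetical placeholders, so match them to the actual interface defined in the server code under ModelC.

```python
# Illustrative client for a self-hosted face-swap server.
# The URL, endpoint path, and field names below are placeholders;
# check the server code in the ModelC directory for the real interface.
import requests

SERVER_URL = "http://your-server:5000/swap"  # hypothetical; replace with your deployment

with open("source.jpg", "rb") as src, open("target.jpg", "rb") as tgt:
    resp = requests.post(
        SERVER_URL,
        files={"source": src, "target": tgt},  # field names are assumptions
        timeout=120,  # model inference can take a while
    )

resp.raise_for_status()
with open("result.jpg", "wb") as out:
    out.write(resp.content)  # assumes the server returns the swapped image bytes
```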
Please refer to each model's directory for more details and instructions.