zelonghaha's Stars
AliaksandrSiarohin/first-order-model
This repository contains the source code for the paper First Order Motion Model for Image Animation
gaoxiang12/slambook2
edition 2 of the slambook
yfeng95/PRNet
Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network (ECCV 2018)
yfeng95/face3d
Python tools for 3D face: 3DMM, Mesh processing(transform, camera, light, render), 3D face representations.
patrikhuber/eos
A lightweight 3D Morphable Face Model library in modern C++
JDAI-CV/FaceX-Zoo
A PyTorch Toolbox for Face Recognition
vchoutas/smplify-x
Expressive Body Capture: 3D Hands, Face, and Body from a Single Image
WIKI2020/FacePose_pytorch
🔥🔥The PyTorch implementation of head pose estimation (yaw, roll, pitch) and emotion detection with SOTA performance in real time. Easy to deploy, easy to use, and highly accurate. Solves all face detection problems at once. (Simplicity, speed, and efficiency are our principles.)
TimoBolkart/TF_FLAME
Tensorflow framework for the FLAME 3D head model. The code demonstrates how to sample 3D heads from the model, fit the model to 2D or 3D keypoints, and generate textured head meshes from images.
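The keypoint-fitting step this description mentions reduces, for a purely linear model with known correspondences, to a least-squares problem. A minimal NumPy sketch under those assumptions (the mean, basis, and keypoint indices below are synthetic placeholders, not FLAME's actual parameterization, which also involves nonlinear pose and expression terms):

```python
import numpy as np

# Hypothetical linear head model: vertices = mean + basis @ coeffs
n_vertices, n_coeffs = 100, 10
rng = np.random.default_rng(1)
mean = rng.standard_normal(n_vertices * 3)          # flattened mean shape
basis = rng.standard_normal((n_vertices * 3, n_coeffs))  # shape basis

# "Observed" 3D keypoints: a subset of the model's vertex coordinates
keypoint_idx = np.arange(0, n_vertices * 3, 7)
true_coeffs = rng.standard_normal(n_coeffs)
observed = (mean + basis @ true_coeffs)[keypoint_idx]

# Least-squares fit of the model coefficients to the observed keypoints
coeffs_hat, *_ = np.linalg.lstsq(
    basis[keypoint_idx], observed - mean[keypoint_idx], rcond=None
)
```

In a real fitting pipeline the rigid head pose (rotation, translation) is also unknown, so the solve is wrapped in an iterative nonlinear optimization rather than a single `lstsq` call.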
facemoji/alter-core
Realtime 3D avatar system and cross-platform rendering engine built from scratch for web3 interoperability and the open metaverse.
zhangchenxu528/FACIAL
FACIAL: Synthesizing Dynamic Talking Face With Implicit Attribute Learning. ICCV, 2021.
Yinghao-Li/3DMM-fitting
Fit 3DMM to front and side face images simultaneously.
BCV-Uniandes/AUNets
PyTorch implementation of Multi-View Dynamic Facial Action Unit Detection, Image and Vision Computing (2018)
ESanchezLozano/Action-Units-Heatmaps
Code for BMVC paper "Joint Action Unit localisation and intensity estimation through heatmap regression"
AffectAnalysisGroup/AFARtoolbox
AFAR: A Deep Learning Based Toolbox for Automated Facial Affect Recognition
kimoktm/Face2face
A Python library to fit 3D morphable models to face images and capture facial performance over time with no markers or special mount
isir/greta
Model of nonverbal behavior for socio-emotional virtual characters
AffectAnalysisGroup/PAttNet
Patch Attentive Deep Network for Action Unit Detection
DevendraPratapYadav/gsoc18_RedHenLab
A modular pipeline to extract several facial features from videos such as face landmarks, eye gaze direction, head pose and Action Units
neelabhsinha/flame
Original Pytorch Implementation of FLAME: Facial Landmark Heatmap Activated Multimodal Gaze Estimation
davidecoluzzi/Shape-and-action-unit-extraction-of-3D-human-face-meshes-by-multilinear-dimensionality-reduction
This work aims to build a model able to discern shape and action unit parameters from 3D human face meshes. The dataset was acquired with a Kinect and consists of 360 3D representations of human faces: 20 users each performed 6 facial expressions (happy, sad, scared, angry, disgusted, surprised) at 3 emphasis degrees (low, medium, high). The labelled dataset was arranged into a three-dimensional tensor, and a multilinear dimensionality reduction technique (higher-order singular value decomposition, HOSVD) was applied to separately extract the face deformation features related to shape units and action units. These features are finally used to independently rebuild each user's face with far less data than the starting dataset (83% less) while retaining approximately 90% of the variance.
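The pipeline described above (tensor of users × expressions × degrees, truncated HOSVD, reconstruction from the reduced core) can be sketched with plain NumPy. The function names, toy shapes, and ranks below are illustrative, not taken from the repository:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd_truncate(tensor, ranks):
    """Truncated HOSVD: one factor matrix per mode from the SVD of that
    mode's unfolding, then project the tensor onto the reduced core."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        # Contract mode `mode` of the core with U^T (n_mode -> r)
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode
        )
    return core, factors

def reconstruct(core, factors):
    """Multiply the core back out by each mode's factor matrix."""
    out = core
    for mode, U in enumerate(factors):
        out = np.moveaxis(
            np.tensordot(U, np.moveaxis(out, mode, 0), axes=1), 0, mode
        )
    return out

# Toy stand-in for the dataset tensor: 20 users x 6 expressions x 3 degrees,
# built exactly low-rank so the truncation loses nothing.
rng = np.random.default_rng(0)
core_true = rng.standard_normal((2, 2, 2))
X = reconstruct(core_true, [rng.standard_normal((20, 2)),
                            rng.standard_normal((6, 2)),
                            rng.standard_normal((3, 2))])
core, factors = hosvd_truncate(X, ranks=(2, 2, 2))
X_hat = reconstruct(core, factors)
retained = 1 - np.linalg.norm(X - X_hat) ** 2 / np.linalg.norm(X) ** 2
# retained ≈ 1.0 for this exactly low-rank toy tensor
```

On real, noisy mesh data the retained variance drops below 1 as the ranks shrink, which is the storage-versus-fidelity trade-off (83% less data for ~90% variance) the description reports.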
diegothomas/FaceCap
Source code for the arXiv paper: https://arxiv.org/pdf/2004.10557.pdf
tsky1971/UEZeroMQPlugin
UE4 ZeroMQ Plugin
wmdydxr/Pytorch-FAU
A PyTorch implementation of facial action unit intensity estimation.
gdsad/FaceTrack
FaceTrack: Asymmetric Facial and Gesture Analysis Tool for Speech Language Pathologist Applications
mtran14/AUglove
packyan/Graduation-Proj
Data-Driven Facial Animation
DM2097/FaceCap
FaceCap is a browser application that detects the user's face using tracking.js and applies a face mask to it.
gmforge/mask
Capture facial expressions on a mesh structure
UEA-digital-human-group/paper-review-meshtalk
A review of the paper "MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement"