
Human 3D Facial Pose Estimation and Tracking (AffCom IJCAI2018)


Landmark Detection and Tracking

We use two open-source libraries in this project, and you need to download and install them before running our scripts. The first is Dlib (see http://dlib.net for download and installation instructions). The second is eos, a lightweight 3D Morphable Face Model fitting library (available at https://github.com/patrikhuber/eos/releases).
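As a quick sanity check after installation, both libraries should be importable from Python. The sketch below assumes Dlib was installed with its Python bindings (e.g. via pip install dlib) and that the eos Python bindings (e.g. the eos-py package) are used rather than the C++ library directly; these installation routes are assumptions, not part of the original instructions.

import dlib  # face detection and 2D facial landmark localization
import eos   # lightweight 3D Morphable Face Model fitting
print("Dlib version:", dlib.__version__)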

Sample code

Run “landmark_detection_img.py” to see landmark detection on a sample image. For video processing, “landmark_detection_video.py” demonstrates landmark tracking on a sample video.
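For reference, here is a minimal sketch of image-based 68-point landmark detection with Dlib. It is not the exact content of landmark_detection_img.py, and the model and image file names are assumptions.

import dlib

# Frontal face detector plus the pre-trained 68-point shape predictor
# (shape_predictor_68_face_landmarks.dat, downloadable from dlib.net).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Load a sample image and detect faces (upsample once to catch small faces).
img = dlib.load_rgb_image("sample.jpg")
faces = detector(img, 1)

for face in faces:
    shape = predictor(img, face)
    # Each of the 68 landmarks is an (x, y) point in image coordinates;
    # 2D landmarks like these are what a 3DMM fitting library such as eos consumes.
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(len(landmarks), "landmarks detected")

The video script presumably applies the same detector and predictor frame by frame to track the landmarks over time.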

Citation

If you find our work useful in your research, please consider citing our paper:

@INPROCEEDINGS{facial2018yin,
  title = {Facial Expression and Peripheral Physiology Fusion to Decode Individualized Affective Experience},
  author = {Yin, Y and Nabian, M and Fan, M and Chou, C and Gendron, M and Ostadabbas, S},
  booktitle = {2nd Affective Computing Workshop of the 27th International Joint Conference on Artificial Intelligence (IJCAI)},
  year = {2018}
}

For further inquiries, please contact:

Sarah Ostadabbas, PhD
Electrical & Computer Engineering Department
Northeastern University, Boston, MA 02115
Office Phone: 617-373-4992
ostadabbas@ece.neu.edu
Augmented Cognition Lab (ACLab) Webpage: http://www.northeastern.edu/ostadabbas/