MAFW is a large-scale, multi-modal, compound affective database for dynamic facial expression recognition in the wild. Clips in this database come from China, Japan, Korea, Europe, America, and India, and cover various themes, e.g., variety shows, family, science fiction, suspense, love, comedy, and interviews, encompassing a wide range of human emotions. Each clip has been independently labeled by 11 well-trained annotators. The MAFW database offers great diversity, large scale, and rich annotations, including:
- 10,045 video clips from movies, TV dramas, and short videos,
- an 11-dimensional expression distribution vector for each video clip (see the parsing sketch after this list),
- three kinds of annotations: (1) a single expression label; (2) multiple expression labels; (3) bilingual emotional descriptive text,
- two subsets: a single-expression set covering 11 classes of single emotions, and a multiple-expression set covering 32 classes of compound emotions,
- three kinds of automatic annotations: frame-level 68 facial landmarks, bounding boxes of face regions, and gender,
- four benchmarks: uni-modal single expression classification, multi-modal single expression classification, uni-modal compound expression classification, and multi-modal compound expression classification.
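As a concrete illustration of the annotations, below is a minimal sketch in Python that turns a clip's 11-dimensional expression distribution vector into a majority-vote single-expression label. The class names follow the paper's 11 single-emotion categories, but their ordering and the in-memory layout here are assumptions for illustration, not the official release format.

```python
import numpy as np

# The 11 single-expression classes described in the MAFW paper; this
# ordering is an assumption, not the official annotation order.
CLASSES = ["anger", "disgust", "fear", "happiness", "neutral", "sadness",
           "surprise", "contempt", "anxiety", "helplessness", "disappointment"]

def dominant_expression(votes):
    """Normalize an 11-dim annotation vector and return the
    majority-vote single-expression label with the distribution."""
    dist = np.asarray(votes, dtype=float)
    if dist.shape != (len(CLASSES),):
        raise ValueError("expected one entry per expression class")
    dist /= dist.sum()  # convert raw annotator votes to a distribution
    return CLASSES[int(dist.argmax())], dist

# Example: 9 of 11 annotators chose 'happiness', 1 'neutral', 1 'anxiety'.
label, dist = dominant_expression([0, 0, 0, 9, 1, 0, 0, 0, 1, 0, 0])
print(label, dist.round(2))  # happiness, with weight 0.82 on that class
```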
- The MAFW database is available for non-commercial research purposes only.
- You agree not to reproduce, duplicate, copy, sell, trade, resell, or exploit for commercial purposes any portion of the clips or any derived data.
- You agree not to further copy, publish, or distribute any portion of the MAFW database; copies may be made only for internal use at a single site within the same organization.
This database is publicly available. It is free for professors and research scientists affiliated with a university.
Permission to use but not reproduce or distribute the MAFW database is granted to all researchers, provided that the following steps are properly followed:
- Download the MAFW-academics-final.pdf document.
- Read the terms and conditions carefully to make sure they are acceptable, and fill in the relevant information at the end of the document.
- Send the completed document by email to linw@cug.edu.cn.
Please cite our paper if you find our work useful for your research:
- Yuanyuan Liu, Wei Dai, Chuanxu Feng, Wenbin Wang, Guanghao Yin, Jiabei Zeng, and Shiguang Shan. 2022. MAFW: A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild. In Proceedings of the 30th ACM International Conference on Multimedia (MM ’22), October 10–14, 2022, Lisboa, Portugal. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3503161.3548190
@inproceedings{liu_mafw_2022,
  author = {Liu, Yuanyuan and Dai, Wei and Feng, Chuanxu and Wang, Wenbin and Yin, Guanghao and Zeng, Jiabei and Shan, Shiguang},
  title = {MAFW: A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild},
  year = {2022},
  isbn = {978-1-4503-9203-7},
  publisher = {ACM},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3503161.3548190},
  booktitle = {Proceedings of the 30th ACM International Conference on Multimedia (MM ’22)},
  numpages = {9}
}
The dataset release consists of the following downloads:
- Data
- Labels
- Labels (auto)
- Train & Test Set
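As a usage sketch for the automatic annotations, the snippet below crops the annotated face region from each frame of a clip with OpenCV. The `boxes` mapping from frame index to an (x, y, w, h) tuple is a hypothetical in-memory representation of the released bounding boxes; parsing the actual annotation files is left out.

```python
import cv2

def crop_faces(video_path, boxes):
    """Crop the annotated face region from each frame of a clip.

    `boxes` maps frame index -> (x, y, w, h) face bounding box
    (a hypothetical representation of the auto annotations).
    """
    cap = cv2.VideoCapture(video_path)
    faces, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of the clip
            break
        if idx in boxes:
            x, y, w, h = boxes[idx]
            faces.append(frame[y:y + h, x:x + w])
        idx += 1
    cap.release()
    return faces

# Example call with a made-up box for frame 0:
# faces = crop_faces("clip_00001.mp4", {0: (120, 80, 160, 160)})
```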
For more details on the dataset, please refer to the paper: MAFW: A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild.
For more details on the emotional descriptive texts, please refer to the supplementary materials for MAFW.
The source code of our proposed T-ESFL model can be downloaded from https://github.com/MAFW-database/MAFW.
Please contact us if you have any questions about MAFW.
| Name | Position | Email |
|------|----------|-------|
| Yuanyuan Liu | Associate Professor, China University of Geosciences | liuyy@cug.edu.cn |
| Shaoze Feng | Master's Student, China University of Geosciences | 2807592236@cug.edu.cn |
| Lin Wei | Master's Student, China University of Geosciences | linw@cug.edu.cn |
| Guanghao Yin | Master's Student, China University of Geosciences | ygh2@cug.edu.cn |
For more information, please visit our team's homepage: https://cvlab-liuyuanyuan.github.io/