This project classifies facial expressions into seven categories: Angry, Disgusted, Fearful, Happy, Sad, Surprised, and Neutral.
- Backbone: VGG16
- Dataset: FER2013
  - 240×240 Data (Train / Val / Test), password: 5j3x
- A face-detection stage is applied, which improves test accuracy and makes the model more robust, especially against input images that contain no face.
- Both GPU and CPU are supported; a GPU is not required.
- Few dependencies.
- Batch image input is supported at test time.
Anaconda is recommended.
- Ubuntu 16.04 (Windows also works, but a few things need changing, e.g. image paths)
- Python 3.6
- PyTorch (recent or older versions should both work; tested with 0.4.1 and 1.1.0)
- torchvision
- numpy
- matplotlib
- opencv(cv2)
- pillow
FER2013 contains 35,887 images at 48 × 48 pixels; here, bilinear interpolation is used to resize the expression images to 240 × 240 pixels. The network input is 224 × 224, the same as the original VGG16.
First, put the processed dataset in the `data` folder, laid out as follows:
```
data
├── train
│   ├── 0
│   │   ├── 00000.jpg
│   │   ├── 00005.jpg
│   │   └── ...
│   ├── 1
│   │   ├── 00023.jpg
│   │   └── ...
│   ├── ...
│   └── 6
│       ├── 00061.jpg
│       └── ...
├── val
│   ├── 0
│   │   ├── 00006.jpg
│   │   └── ...
│   ├── 1
│   │   ├── 00043.jpg
│   │   └── ...
│   ├── ...
│   └── 6
│       ├── 00021.jpg
│       └── ...
└── test
    ├── 0
    │   ├── 00008.jpg
    │   └── ...
    ├── 1
    │   ├── 00011.jpg
    │   └── ...
    ├── ...
    └── 6
        ├── 00022.jpg
        └── ...
```
Folders 0-6 correspond to the seven expressions: Angry, Disgusted, Fearful, Happy, Sad, Surprised, Neutral.
```
python demo_image.py
```
Put input images in the `input` folder. When the script runs, it first prompts for the image name, e.g. `1.jpg`.
```
python demo_camera.py
```
```
python demo_image_batch.py
```
Future work: find image preprocessing methods to further improve accuracy.
If you have any questions, please open a new issue.