Lipreading using Temporal Convolutional Networks

Authors

Pingchuan Ma, Brais Martinez, Stavros Petridis, Maja Pantic.

Content

Deep Lipreading

Model Zoo

Citation

License

Contact

Deep Lipreading

Introduction

This is the repository of Towards practical lipreading with distilled and efficient models and Lipreading using Temporal Convolutional Networks. In this repository, we provide pre-trained models and network settings for end-to-end visual speech recognition (lipreading). We trained our model on the LRW dataset. The network architecture is based on a 3D convolutional front-end, ResNet-18, and a multi-scale temporal convolutional network (MS-TCN).

By using this repository, you can achieve an accuracy of 87.9% on the LRW dataset. This repository also provides a script for feature extraction.
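To make the data flow concrete, below is a minimal PyTorch sketch of the pipeline described above (3D convolutional front-end, a per-frame ResNet-18 trunk, and a temporal convolution over the frame features). It is not the authors' implementation: the real model uses a multi-scale TCN and different hyperparameters, and the layer sizes here are illustrative only.

import torch
import torch.nn as nn
import torchvision

class LipreadingSketch(nn.Module):
    # 3D front-end -> per-frame ResNet-18 trunk -> temporal convolution -> classifier.
    # A single Conv1d stands in for the multi-scale TCN used in the actual model.
    def __init__(self, num_classes=500):
        super().__init__()
        self.frontend3d = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
        )
        resnet = torchvision.models.resnet18()
        # Drop the 2D stem conv (the 3D front-end already outputs 64 channels) and the final fc.
        self.trunk = nn.Sequential(*list(resnet.children())[1:-1])
        self.temporal = nn.Sequential(
            nn.Conv1d(512, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x):                       # x: (batch, 1, frames, 88, 88) grayscale clip
        b = x.size(0)
        x = self.frontend3d(x)                  # (batch, 64, frames, 22, 22)
        t = x.size(2)
        x = x.transpose(1, 2).reshape(b * t, 64, x.size(3), x.size(4))
        x = self.trunk(x).reshape(b, t, 512)    # one 512-dim vector per frame
        x = self.temporal(x.transpose(1, 2))    # (batch, 256, frames)
        return self.classifier(x.mean(dim=2))   # average over time, then classify

logits = LipreadingSketch()(torch.randn(2, 1, 29, 88, 88))
print(logits.shape)  # torch.Size([2, 500])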

Preprocessing

As described in our paper, each video sequence from the LRW dataset is processed by 1) performing face detection and face alignment, 2) aligning each frame to a reference mean face shape, 3) cropping a fixed 96 × 96 pixel ROI from the aligned face image so that the mouth region is always roughly centered in the crop, and 4) converting the cropped image to grayscale.

You can run the pre-processing script provided in the preprocessing folder to extract the mouth ROIs; a minimal sketch of the cropping and grayscale steps is also given below.

Pipeline stages (figure): 0. Original → 1. Detection → 2. Transformation → 3. Mouth ROIs
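The sketch below illustrates steps 3) and 4) only, assuming the frame has already been detected and warped to the reference mean face shape and that the mouth centre coordinates in the aligned frame are known; the actual pre-processing script in this repository handles the earlier steps.

import cv2
import numpy as np

def crop_mouth_roi(aligned_frame: np.ndarray, mouth_center, size: int = 96) -> np.ndarray:
    # Crop a fixed size x size ROI centered on the mouth, then convert it to grayscale.
    # Bounds checking near the image border is omitted for brevity.
    cx, cy = int(mouth_center[0]), int(mouth_center[1])
    half = size // 2
    roi = aligned_frame[cy - half:cy + half, cx - half:cx + half]
    return cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)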

How to install environment

  1. Clone the repository into a directory. We refer to that directory as TCN_LIPREADING_ROOT.
git clone --recursive https://github.com/mpc001/Lipreading_using_Temporal_Convolutional_Networks.git
  2. Install all required packages.
pip install -r requirements.txt

How to prepare dataset

  1. Download our pre-computed landmarks from GoogleDrive or BaiduDrive (key: kumy) and unzip them to the $TCN_LIPREADING_ROOT/landmarks/ folder.

  2. Pre-process mouth ROIs using the script in the preprocessing folder and save them to $TCN_LIPREADING_ROOT/datasets/.

  3. Download a pre-trained model from Model Zoo and put the model into the $TCN_LIPREADING_ROOT/models/ folder.
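As a quick sanity check, assuming the folder names used in the three steps above, the following snippet simply verifies that the landmarks, datasets, and models directories exist under your TCN_LIPREADING_ROOT:

import os

# Point this at the directory you cloned in the installation step.
TCN_LIPREADING_ROOT = os.environ.get("TCN_LIPREADING_ROOT", ".")

for sub in ("landmarks", "datasets", "models"):
    path = os.path.join(TCN_LIPREADING_ROOT, sub)
    print(f"{path}: {'found' if os.path.isdir(path) else 'missing'}")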

How to test

  • To evaluate on the LRW dataset:
CUDA_VISIBLE_DEVICES=0 python main.py --config <MODEL-JSON-PATH> \
                                      --model-path <MODEL-PATH> \
                                      --data-dir <DATA-DIRECTORY>

How to extract 512-dim embeddings

We assume you have cropped the mouth patches and put them into <MOUTH-PATCH-PATH>. The mouth embeddings will be saved in the .npz format.

  • To extract 512-D feature embeddings from the top of ResNet-18:
CUDA_VISIBLE_DEVICES=0 python main.py --extract-feats \
                                      --config <MODEL-JSON-PATH> \
                                      --model-path <MODEL-PATH> \
                                      --mouth-patch-path <MOUTH-PATCH-PATH> \
                                      --mouth-embedding-out-path <OUTPUT-PATH>
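Once extraction finishes, the saved archive can be inspected with NumPy. The array key and the exact shape used below are assumptions (one 512-dim vector per frame is expected); list the archive's keys to confirm what was written:

import numpy as np

# <OUTPUT-PATH> is the file passed as --mouth-embedding-out-path above.
archive = np.load("<OUTPUT-PATH>")
print(archive.files)                 # names of the arrays stored in the archive
feats = archive[archive.files[0]]    # expected to be a (num_frames, 512) embedding matrix
print(feats.shape)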

Model Zoo

We plan to include more models in the future. We use a sequence of 29 frames with a spatial size of 88 × 88 pixels to compute the FLOPs.

Architecture | Acc. (%) | FLOPs (G) | URL | Size (MB)
resnet18_mstcn_adamw_s3 | 87.9 | 10.31 | GoogleDrive or BaiduDrive (key: bygn) | 436.7
resnet18_mstcn | 85.5 | 10.31 | GoogleDrive or BaiduDrive (key: qwtm) | 436.7
snv1x_tcn2x | 84.6 | 1.31 | GoogleDrive or BaiduDrive (key: f79d) | 36.7
snv1x_dsmstcn3x | 85.3 | 1.26 | GoogleDrive or BaiduDrive (key: 86s4) | 37.5
snv1x_tcn1x | 82.7 | 1.12 | GoogleDrive or BaiduDrive (key: 3caa) | 15.5
snv05x_tcn2x | 82.5 | 1.02 | GoogleDrive or BaiduDrive (key: ej9e) | 33.0
snv05x_tcn1x | 79.9 | 0.58 | GoogleDrive or BaiduDrive (key: devg) | 11.8

Citation

If you find this code useful in your research, please consider citing the following papers:

@article{ma2020towards,
  author       = "Ma, Pingchuan and Martinez, Brais and Petridis, Stavros and Pantic, Maja",
  title        = "Towards practical lipreading with distilled and efficient models",
  journal      = "arXiv preprint arXiv:2007.06504",
  year         = "2020",
}

@InProceedings{martinez2020lipreading,
  author       = "Martinez, Brais and Ma, Pingchuan and Petridis, Stavros and Pantic, Maja",
  title        = "Lipreading using Temporal Convolutional Networks",
  booktitle    = "ICASSP",
  year         = "2020",
}

@InProceedings{petridis2018end,
  author       = "Petridis, Stavros and Stafylakis, Themos and Ma, Pingchuan and Cai, Feipeng and Tzimiropoulos, Georgios and Pantic, Maja",
  title        = "End-to-end audiovisual speech recognition",
  booktitle    = "ICASSP",
  year         = "2018",
}

License

Please note that this code may only be used for comparative or benchmarking purposes. The code supplied under the License may only be used for non-commercial purposes.

Contact

Pingchuan Ma (pingchuan.ma16[at]imperial.ac.uk)