
DFGC-VRA

Starter kit for the DFGC-VRA competition.

This code follows the workflow of this paper; see the paper for more details.

Usage

Video pre-processing

We crop the videos and keep only the facial region.

The relative coordinates of the bounding box for each video are provided in the crop folder, with the top-left corner set as the origin.
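As an illustration of the cropping step, the sketch below cuts the facial region out of one frame given a relative bounding box. The `(x, y, w, h)` box format, expressed as fractions of the frame size with the origin at the top-left corner, is an assumption for this example; check the files in the crop folder for the exact format used.

```python
# Hedged sketch of cropping a frame with a relative bounding box.
# The (x, y, w, h) fractional format is an assumption, not the
# confirmed layout of the provided crop files.
import numpy as np

def crop_face(frame: np.ndarray, rel_box) -> np.ndarray:
    """Crop the facial region given a relative (x, y, w, h) box."""
    h_img, w_img = frame.shape[:2]
    x, y, w, h = rel_box
    x0, y0 = int(x * w_img), int(y * h_img)
    x1, y1 = int((x + w) * w_img), int((y + h) * h_img)
    return frame[y0:y1, x0:x1]

# Example: a dummy 100x200 frame, box covering the central half.
frame = np.zeros((100, 200, 3), dtype=np.uint8)
face = crop_face(frame, (0.25, 0.25, 0.5, 0.5))
print(face.shape)  # (50, 100, 3)
```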

Feature extraction

Here we use the 1st-place solution in DFGC-2022 detection track (DFGC-1st) as the extractor.

You may download their code from here, rename the folder DFGC1st, and put it in the same folder as the script below. Run this script to extract video-level features:

python DFGC1st_feats.py
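A video-level feature is typically obtained by pooling per-frame features; the sketch below shows simple mean pooling. This is only an illustration of the idea, not necessarily the aggregation used inside DFGC1st_feats.py.

```python
# Hypothetical sketch: aggregate per-frame feature vectors into one
# video-level vector by mean pooling. The actual pooling in
# DFGC1st_feats.py may differ.
import numpy as np

def video_level_feature(frame_feats: np.ndarray) -> np.ndarray:
    """Average per-frame features of shape (n_frames, dim) into (dim,)."""
    return frame_feats.mean(axis=0)

frame_feats = np.array([[1.0, 2.0],
                        [3.0, 4.0]])
print(video_level_feature(frame_feats))  # [2. 3.]
```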

Feature selection

In this step we first decide the dimension of the selected features, then perform the selection.

You may modify the parameters in the scripts below.

For dimension selection, run:

python feats_num_select.py

For feature selection with a given dimension, run:

python feats_select.py
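One common criterion for this kind of selection is to rank features by the absolute Spearman correlation between each feature and the realism labels, then keep the top k. The sketch below illustrates that idea only; the actual criterion implemented in feats_select.py may be different.

```python
# Hedged illustration of correlation-based feature selection; this is a
# generic technique, not necessarily the method used by feats_select.py.
import numpy as np
from scipy.stats import spearmanr

def select_top_k(feats: np.ndarray, labels: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k features most rank-correlated with labels."""
    scores = np.array([abs(spearmanr(feats[:, j], labels).correlation)
                       for j in range(feats.shape[1])])
    return np.argsort(-scores)[:k]

# Synthetic data: feature 0 tracks the labels, features 1-2 are noise.
rng = np.random.default_rng(0)
labels = rng.normal(size=50)
noise = rng.normal(size=(50, 3))
feats = np.column_stack([labels + 0.1 * noise[:, 0],
                         noise[:, 1],
                         noise[:, 2]])
print(select_top_k(feats, labels, 1))  # [0]
```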

Both the raw and the selected features extracted with the DFGC-1st model can be downloaded from here.

Train and predict

Run the script below to train an SVR regressor on the selected features and predict the realism score for the three test sets.

This script is mostly borrowed from here [1].

You may also modify its parameters.

python train_and_predict.py
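For reference, the sketch below shows the general shape of such a step with scikit-learn: fit an SVR on feature vectors and score a held-out set. The kernel, `C` value, and synthetic data here are placeholders, not the settings used by train_and_predict.py.

```python
# Minimal, hedged sketch of SVR training and prediction; hyperparameters
# are placeholders, not the competition configuration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 5))      # selected video-level features
y_train = X_train[:, 0] * 2.0 + 1.0     # synthetic realism scores
X_test = rng.normal(size=(10, 5))       # features of a test set

# Standardizing features before an RBF-kernel SVR is standard practice.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
preds = model.predict(X_test)
print(preds.shape)  # (10,)
```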

Reference

  • [1] Tu, Zhengzhong, et al. "UGC-VQA: Benchmarking blind video quality assessment for user generated content." IEEE Transactions on Image Processing 30 (2021): 4449-4464.