
Practical-RIFE

V4.0 Promotional Video (宣传视频)

This project is based on RIFE and aims to make RIFE more practical for users by adding various features and designing new models. Because improving the PSNR metric does not necessarily align with subjective visual quality, we keep this part of the work independent of our academic research. To keep development effort manageable, this project targets engineers and developers; for everyday users, we recommend the following software:

SVFI (中文) | RIFE-App | FlowFrames | Drop frame fixer and FPS converter

Thanks to the SVFI team for supporting model testing on animation. For business cooperation, please contact our PM.

Usage

Model List

The content of these links is under the same MIT license as this project.

Some models are hidden because they received serious bug reports or have been fully superseded by newer models.

v4.6 - 2022.9.26 | Google Drive | 百度网盘 (code: gtkf)
v4.5 - 2022.9.14 | Google Drive | 百度网盘 (code: mvr0)
v4.4 - 2022.8.24 | Google Drive | 百度网盘 (code: 2q63)
v4.3 - 2022.8.17 | Google Drive | 百度网盘 (code: q83a)
v4.2 - 2022.8.10 | Google Drive | 百度网盘 (code: y3ad)
v3.8 - 2021.6.17 | Google Drive | 百度网盘 (code: kxr3)
v3.1 - 2021.5.17 | Google Drive | 百度网盘 (code: 64bz)

Installation

git clone git@github.com:hzwer/Practical-RIFE.git
cd Practical-RIFE
pip3 install -r requirements.txt

Download a model from the model list and put the *.py file and flownet.pkl into train_log/
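
A quick way to confirm the files are in the right place is to load the model once from Python. This is only a rough sketch: it assumes the downloaded *.py file is named RIFE_HDv3.py and exposes a Model class with load_model/eval/device methods, the way the bundled inference scripts use it; check the file you actually downloaded if the names differ.

import torch
from train_log.RIFE_HDv3 import Model  # assumed module name; use the *.py file you downloaded

model = Model()
model.load_model('train_log', -1)       # reads train_log/flownet.pkl, as inference_video.py does
model.eval()
model.device()                          # moves the network to the GPU if one is available
print('model loaded; CUDA available:', torch.cuda.is_available())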

Run

Video Frame Interpolation

You can use our demo video or your own video.

python3 inference_video.py --multi=2 --video=video.mp4 

(generates video_2X_xxfps.mp4, where xx is the resulting frame rate)

python3 inference_video.py --multi=4 --video=video.mp4

(for 4X interpolation)

python3 inference_video.py --multi=2 --video=video.mp4 --scale=0.5

(If your video has a high resolution, such as 4K, we recommend setting --scale=0.5; the default is 1.0)

python3 inference_video.py --multi=4 --img=input/

(reads the video from PNG frames such as input/0.png ... input/612.png; make sure the PNG file names are numbers)
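
If your source is a video but you prefer the PNG workflow, you first need a folder of numerically named frames. Below is a minimal sketch using OpenCV (cv2), which inference_video.py already relies on; the video.mp4 and input/ names simply match the examples above.

import os
import cv2

os.makedirs('input', exist_ok=True)
cap = cv2.VideoCapture('video.mp4')
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # numeric file names, as required: input/0.png, input/1.png, ...
    cv2.imwrite('input/{}.png'.format(idx), frame)
    idx += 1
cap.release()
print('wrote', idx, 'frames to input/')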

python3 inference_video.py --multi=3 --video=video.mp4 --fps=60

(adds a slow-motion effect, because the output frame rate is fixed at 60fps instead of 3X the source rate; the audio track is removed since it would no longer stay in sync)
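
You can also drive the model from your own Python code instead of inference_video.py. The sketch below follows the way the bundled scripts call the v4.x models (a Model class whose inference(img0, img1, timestep) method returns the intermediate frame); treat the module name, the timestep argument and the exact signature as assumptions and compare them with the *.py file in your train_log/. Note that the repo's own scripts also pad the inputs so that height and width are multiples of the model stride; for arbitrary resolutions you would need the same padding.

import cv2
import torch
from train_log.RIFE_HDv3 import Model  # assumed module name; use the *.py file you downloaded

device = 'cuda' if torch.cuda.is_available() else 'cpu'

def load_image(path):
    # HxWx3 uint8 BGR -> 1x3xHxW float tensor in [0, 1]
    img = cv2.imread(path)
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float().to(device) / 255.

model = Model()
model.load_model('train_log', -1)
model.eval()
model.device()

img0 = load_image('frame0.png')
img1 = load_image('frame1.png')
with torch.no_grad():
    mid = model.inference(img0, img1, 0.5)  # timestep 0.5 = halfway between the two frames

out = (mid[0].clamp(0, 1) * 255).byte().permute(1, 2, 0).cpu().numpy()
cv2.imwrite('frame_mid.png', out)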

Report Bad Cases

Please share your original video clip with us via a GitHub issue and Google Drive. We may add it to our test set so that later versions are likely to improve on it. Attaching a screenshot of the model's output to the issue is also helpful.

Model training

Since we are still researching these engineering tricks, and our work and paper have not yet been patented or published, we are sorry that we cannot provide training scripts at this time. If you are interested in academic exploration, please refer to our academic research project, RIFE.

To-do List

Multi-frame input for the model

Frame interpolation at any time location (Done)

Eliminate artifacts as much as possible

Make the model work with input of any resolution

Provide models with lower computational cost

Citation

@inproceedings{huang2022rife,
  title={Real-Time Intermediate Flow Estimation for Video Frame Interpolation},
  author={Huang, Zhewei and Zhang, Tianyuan and Heng, Wen and Shi, Boxin and Zhou, Shuchang},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2022}
}

Reference

Optical Flow: ARFlow pytorch-liteflownet RAFT pytorch-PWCNet

Video Interpolation: DVF TOflow SepConv DAIN CAIN MEMC-Net SoftSplat BMBC EDSC EQVI RIFE