
We are developing a more practical approach for users, based on RIFE.


Practical-RIFE

V4.0 Promotional Video

This project is based on RIFE and aims to make RIFE more practical for users by adding various features and designing new models. Because improving the PSNR metric does not necessarily align with subjective visual quality, we hope this part of the work and our academic research remain independent of each other. To reduce development difficulty, this project is aimed at engineers and developers. For end users, we recommend the following software:

RIFE-App | FlowFrames | SVFI (Chinese)

For business cooperation, please contact my email.

Usage

Model List

v4.1 - 2022.3.23 | Google Drive | Baidu Netdisk, password: e4qg

v4.0 - 2021.12.6 | Google Drive | Baidu Netdisk, password: mocg || v3.9beta - 2021.11.23 | Google Drive | Baidu Netdisk, password: 4nrl

v3.8 - 2021.6.17 | Google Drive | Baidu Netdisk, password: kxr3 || v3.5 - 2021.6.12 | Google Drive | Baidu Netdisk, password: 1rb7

v3.1 - 2021.5.17 | Google Drive | Baidu Netdisk, password: 64bz || v3.0 - 2021.5.15 | Google Drive | Baidu Netdisk, password: tgmd

Installation

git clone git@github.com:hzwer/Practical-RIFE.git
cd Practical-RIFE
pip3 install -r requirements.txt

Download a model from the model list and put *.py and flownet.pkl into train_log/

Run

Video Frame Interpolation

You can use our demo video or your video.

python3 inference_video.py --multi=2 --video=video.mp4 

(generate video_2X_xxfps.mp4)

python3 inference_video.py --multi=4 --video=video.mp4

(for 4X interpolation)
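As a rough sketch of the arithmetic (a hypothetical helper, not part of inference_video.py): --multi multiplies the frame count, so the output frame rate is the input rate times the multiplier, unless a --fps target overrides it to produce slow motion.

```python
def output_fps(input_fps, multi, target_fps=None):
    """Relate --multi and --fps to the output frame rate.

    Interpolation multiplies the number of frames by `multi`; if a
    --fps target is given, it overrides the resulting rate (slow motion).
    Hypothetical helper for illustration only.
    """
    return target_fps if target_fps is not None else input_fps * multi

print(output_fps(30, 2))      # 2X of 30 fps -> 60
print(output_fps(24, 4))      # 4X of 24 fps -> 96
print(output_fps(30, 3, 60))  # 3X frames played at --fps=60 -> slow motion
```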

python3 inference_video.py --multi=2 --video=video.mp4 --scale=0.5

(If your video has a high resolution, such as 4K, we recommend setting --scale=0.5; the default is 1.0)

python3 inference_video.py --multi=4 --img=input/

(to read frames from PNGs, like input/0.png ... input/612.png; ensure that the PNG names are numbers)
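If your frames are not already numerically named, a small helper like the following can copy them into the expected layout. This is a hypothetical sketch, not part of the repository; it assumes the frames sort correctly by file name.

```python
import os
import shutil

def number_frames(src_dir: str, dst_dir: str) -> int:
    """Copy PNG frames into dst_dir as 0.png, 1.png, ... in sorted order.

    Hypothetical helper: --img expects purely numeric PNG names, so this
    renumbers arbitrary file names while preserving their sorted order.
    """
    os.makedirs(dst_dir, exist_ok=True)
    frames = sorted(f for f in os.listdir(src_dir) if f.lower().endswith(".png"))
    for i, name in enumerate(frames):
        shutil.copy(os.path.join(src_dir, name), os.path.join(dst_dir, f"{i}.png"))
    return len(frames)
```

Then run inference_video.py with --img pointing at the destination directory.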

python3 inference_video.py --multi=3 --video=video.mp4 --fps=60

(adds a slow-motion effect; the audio will be removed)
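Because the audio track is dropped, you can copy it back from the original file afterwards with ffmpeg. The sketch below only builds the command line; the file names are placeholders, and it assumes ffmpeg is installed.

```python
def remux_audio_cmd(interpolated: str, original: str, output: str) -> list:
    """Build an ffmpeg command that takes the video stream from the
    interpolated file and the audio stream from the original, copying
    both without re-encoding. Hypothetical helper, not part of the repo."""
    return [
        "ffmpeg", "-i", interpolated, "-i", original,
        "-map", "0:v", "-map", "1:a",  # video from input 0, audio from input 1
        "-c", "copy",                  # stream copy, no re-encoding
        output,
    ]

print(" ".join(remux_audio_cmd("interpolated.mp4", "video.mp4", "out.mp4")))
```

Note that this only makes sense when the output keeps the original duration; for slow-motion output the audio would drift out of sync, which is presumably why the script removes it.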

Report Bad Cases

Please share your original video clip with us via a GitHub issue and Google Drive. We may add it to our test set, so it is likely to be improved in later versions. Attaching a screenshot of the model's output to the issue is helpful.

Model training

Since we are still in the research stage of engineering tricks, and our work and paper have not yet been patented or published, we are sorry that we cannot provide users with training scripts. If you are interested in academic exploration, please refer to our academic research project, RIFE.

To-do List

Multi-frame input of the model

Frame interpolation at any time location (Done)

Eliminate artifacts as much as possible

Make the model applicable to input of any resolution

Provide models with lower computational cost

Citation

@article{huang2020rife,
  title={RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation},
  author={Huang, Zhewei and Zhang, Tianyuan and Heng, Wen and Shi, Boxin and Zhou, Shuchang},
  journal={arXiv preprint arXiv:2011.06294},
  year={2020}
}

Reference

Optical Flow: ARFlow pytorch-liteflownet RAFT pytorch-PWCNet

Video Interpolation: DVF TOflow SepConv DAIN CAIN MEMC-Net SoftSplat BMBC EDSC EQVI RIFE