Use this project on Google Colab for free! Check out the Practical-RIFE Colab Notebook.
2024.01 - We recently released the new v4.7-4.14 models. In our tests, v4.14 brings a clear improvement on animation scenes. 🎉
This project is based on RIFE and SAFA. We aim to make them more practical for users by adding various features and designing new models. Because improving the PSNR metric does not always align with subjective visual quality, we keep this part of the work independent of our academic research. To reduce development pressure, this project targets engineers and developers; for general users, we recommend the following software:
SVFI (Chinese) | RIFE-App | FlowFrames | Drop frame fixer and FPS converter
Thanks to the SVFI team for supporting model testing on animation.
The content of these links is under the same MIT license as this project. "lite" denotes a model trained with a similar framework but at a lower computational cost.
v4.17 - 2024.05.24 | Google Drive | 百度网盘 : Add gram loss from FILM | v4.17.lite
v4.15 - 2024.03.11 | Google Drive | 百度网盘 | v4.15.lite || v4.14 - 2024.01.08 | Google Drive | 百度网盘 | v4.14.lite
v4.13.1 - 2023.12.05 | Google Drive | 百度网盘 | v4.13.lite || v4.12.2 - 2023.11.13 | Google Drive | 百度网盘
v4.11.1 - 2023.11.11 | Google Drive | 百度网盘 || v4.10.1 - 2023.11.09 | Google Drive | 百度网盘
v4.9.2 - 2023.11.01 | Google Drive | 百度网盘 || v4.8.1 - 2023.10.23 | Google Drive | 百度网盘
v4.7.1 - 2023.09.25 | Google Drive | 百度网盘 || v4.6 - 2022.09.26 | Google Drive | 百度网盘
v4.3 - 2022.08.17 | Google Drive | 百度网盘 || v4.2 - 2022.08.10 | Google Drive | 百度网盘
v3.8 - 2021.06.17 | Google Drive | 百度网盘 || v3.1 - 2021.05.17 | Google Drive | 百度网盘
git clone git@github.com:hzwer/Practical-RIFE.git
cd Practical-RIFE
pip3 install -r requirements.txt
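Before running inference, you can optionally check that a CUDA-enabled PyTorch build was installed; interpolation runs far slower on CPU:
python3 -c "import torch; print(torch.cuda.is_available())"
(prints True when a CUDA-capable GPU and PyTorch build are available)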
Download a model from the model list above and put the *.py files and flownet.pkl into train_log/.
You can use our demo video or your own video.
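After this step, train_log/ should look like the sketch below (the exact *.py file names vary by model version):
train_log/
├── flownet.pkl   (model weights)
└── *.py          (model definition files from the downloaded package)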
python3 inference_video.py --multi=2 --video=video.mp4
(generates video_2X_xxfps.mp4)
python3 inference_video.py --multi=4 --video=video.mp4
(for 4X interpolation)
python3 inference_video.py --multi=2 --video=video.mp4 --scale=0.5
(if your video is high resolution, such as 4K, we recommend setting --scale=0.5; the default is 1.0)
python3 inference_video.py --multi=4 --img=input/
(to read frames from PNG files such as input/0.png ... input/612.png; the file names must be numbers)
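If you prefer calling the model from Python rather than the CLI, the following is a minimal sketch for interpolating one middle frame. It assumes a v4.x model package whose train_log/RIFE_HDv3.py provides a Model class with load_model() and inference(), as used by this repo's inference scripts; the exact interface may differ between model versions.
import cv2
import torch
import torch.nn.functional as F
from train_log.RIFE_HDv3 import Model  # shipped with the downloaded model package

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = Model()
model.load_model("train_log", -1)  # directory containing flownet.pkl
model.eval()
model.device()  # moves the network to the GPU when available

def load_frame(path):
    img = cv2.imread(path)  # BGR, HWC, uint8
    t = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0
    return t.unsqueeze(0).to(device)

img0 = load_frame("input/0.png")
img1 = load_frame("input/1.png")

# Pad so height and width are multiples of 32, as the flow network expects.
_, _, h, w = img0.shape
ph, pw = ((h - 1) // 32 + 1) * 32, ((w - 1) // 32 + 1) * 32
img0 = F.pad(img0, (0, pw - w, 0, ph - h))
img1 = F.pad(img1, (0, pw - w, 0, ph - h))

with torch.no_grad():
    mid = model.inference(img0, img1)  # frame at t=0.5

out = (mid[0] * 255.0).clamp(0, 255).byte().permute(1, 2, 0).cpu().numpy()
cv2.imwrite("mid.png", out[:h, :w])  # crop the padding back off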
Parameter descriptions:
--img / --video: Path to the input image folder or video file
--output: Output video name 'xxx.mp4'
--model: Directory with trained model files
--UHD: Equivalent to setting --scale=0.5
--montage: Splice the generated video with the original video, like this demo
--fps: Set output FPS manually
--ext: Set output video format, default: mp4
--multi: Interpolation frame rate multiplier
--exp: Set --multi to 2^(--exp) (see the example below)
--skip: No longer supported; see issue 207
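For example, --exp=2 is shorthand for --multi=4; the output frame rate is the input frame rate times the multiplier unless --fps overrides it:
python3 inference_video.py --exp=2 --video=video.mp4
(same result as --multi=4: a 24fps input produces a 96fps output)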
Snapshots of the whole repository are available at v4.0, v4.12, and v4.15. However, we have not had time to organize them well; they are for reference only.
We are developing a practical SAFA model. You are welcome to check its demo (BiliBili) and give us feedback.
v0.5 - 2023.12.26 | Google Drive
python3 inference_video_enhance.py --video=demo.mp4
@inproceedings{huang2022rife,
  title={Real-Time Intermediate Flow Estimation for Video Frame Interpolation},
  author={Huang, Zhewei and Zhang, Tianyuan and Heng, Wen and Shi, Boxin and Zhou, Shuchang},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2022}
}
@inproceedings{huang2024safa,
  title={Scale-Adaptive Feature Aggregation for Efficient Space-Time Video Super-Resolution},
  author={Huang, Zhewei and Huang, Ailin and Hu, Xiaotao and Hu, Chen and Xu, Jun and Zhou, Shuchang},
  booktitle={Winter Conference on Applications of Computer Vision (WACV)},
  year={2024}
}
Optical Flow: ARFlow pytorch-liteflownet RAFT pytorch-PWCNet
Video Interpolation: DVF TOflow SepConv DAIN CAIN MEMC-Net SoftSplat BMBC EDSC EQVI RIFE