The PyTorch implementation of OVQE: Omniscient Network for Compressed Video Quality Enhancement, accepted by [IEEE TBC].
Task: Video Quality Enhancement / Video Artifact Reduction.
Suppose that you have installed CUDA 10.1, then:

```shell
conda create -n cvlab python=3.7 -y
conda activate cvlab
git clone --depth=1 https://github.com/pengliuhan/OVQE && cd OVQE/
python -m pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
python -m pip install tqdm lmdb pyyaml opencv-python scikit-image thop
```
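After installation you can quickly confirm that pip pulled the CUDA 10.1 wheels rather than CPU-only ones. A minimal sketch (the `split_build` helper below is ours for illustration, not part of this repo):

```python
def split_build(version):
    """Split a wheel version string like '1.6.0+cu101' into (release, build tag)."""
    release, _, local = version.partition("+")
    return release, local

# Guarded import: torch may not be installed yet when this is run.
try:
    import torch
    release, build = split_build(torch.__version__)
    print(f"torch {release}, build {build or 'cpu-only'}")
except ImportError:
    print("torch is not installed")
```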
Build DCNv2:

```shell
cd ops/dcn/
bash build.sh
```

Check if DCNv2 works (optional):

```shell
python simple_check.py
```
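`simple_check.py` exercises the compiled CUDA op. Conceptually, deformable convolution (DCN) samples input features at fractional positions given by learned offsets, which requires bilinear interpolation. A self-contained sketch of that sampling step (illustrative only, not the repo's implementation):

```python
import math

def bilinear_sample(img, y, x):
    """Sample a 2D grid at fractional (y, x) with zero padding outside the
    grid, as deformable convolution does at each offset position."""
    h, w = len(img), len(img[0])
    y0, x0 = math.floor(y), math.floor(x)
    dy, dx = y - y0, x - x0

    def px(r, c):
        # Zero padding outside the image bounds.
        return img[r][c] if 0 <= r < h and 0 <= c < w else 0.0

    return ((1 - dy) * (1 - dx) * px(y0, x0)
            + (1 - dy) * dx * px(y0, x0 + 1)
            + dy * (1 - dx) * px(y0 + 1, x0)
            + dy * dx * px(y0 + 1, x0 + 1))
```

For example, sampling the grid `[[0, 1], [2, 3]]` at `(0.5, 0.5)` averages all four pixels to 1.5.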
The DCNv2 source files here differ from the open-sourced version due to an incompatibility. [issue]
Please check here.
We now generate LMDB to speed up IO during training.

```shell
python create_lmdb_ovqe.py
```
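The idea behind packing frames into LMDB is that each frame is stored under a unique byte-string key, so it can be fetched with a single read during training instead of decoding video files on the fly. A sketch of the keying convention (the exact key format used by `create_lmdb_ovqe.py` is an assumption here, and `BasketballPass` is just a sample sequence name):

```python
def frame_key(sequence_name, frame_idx):
    """Build an LMDB key for frame `frame_idx` of sequence `sequence_name`.
    Zero-padding keeps keys sortable in frame order."""
    return f"{sequence_name}_{frame_idx:05d}".encode("ascii")

# Writing (requires the `lmdb` package installed above):
# env = lmdb.open("train.lmdb", map_size=1 << 40)
# with env.begin(write=True) as txn:
#     txn.put(frame_key("BasketballPass", 0), frame_bytes)
```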
We utilize 2 NVIDIA GeForce RTX 3090 GPUs for training.

```shell
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=12354 train.py --opt_path option_ovqe.yml
```
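`torch.distributed.launch` spawns one training process per GPU and hands each a local rank; each process then binds to its own device. A small illustration of the mapping implied by the command above (not part of the repo):

```python
# CUDA_VISIBLE_DEVICES=0,1 exposes two physical GPUs; --nproc_per_node=2
# spawns one process per GPU. Each process receives a local rank and
# should bind to cuda:{local_rank}.
visible_devices = [0, 1]   # from CUDA_VISIBLE_DEVICES=0,1
nproc_per_node = 2         # from --nproc_per_node=2

device_of = {rank: f"cuda:{rank}" for rank in range(nproc_per_node)}
physical_gpu = {rank: visible_devices[rank] for rank in range(nproc_per_node)}
```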
Pretrained models can be found here: [GoogleDisk] and [Baidu Netdisk]
We utilize 1 NVIDIA GeForce RTX 3090 GPU for testing.

```shell
python test_one_video.py
```
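Results in compressed-video quality enhancement are typically reported as PSNR gain (ΔPSNR) of the enhanced frame over the compressed input, measured against the raw frame. A minimal, dependency-free sketch of that metric (the repo's test script uses its own utilities, so this is illustrative only):

```python
import math

def psnr(ref, test, peak=255.0):
    """PSNR in dB between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak * peak / mse)

def delta_psnr(raw, compressed, enhanced):
    """Enhancement gain in dB over the compressed frame."""
    return psnr(raw, enhanced) - psnr(raw, compressed)
```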
If you find this project useful for your research, please cite:
```
@article{peng2022ovqe,
  title={OVQE: Omniscient Network for Compressed Video Quality Enhancement},
  author={Peng, Liuhan and Hamdulla, Askar and Ye, Mao and Li, Shuai and Wang, Zengbin and Li, Xue},
  journal={IEEE Transactions on Broadcasting},
  year={2022},
  publisher={IEEE}
}
```
This work is based on STDF-PyTorch. Thanks to RyanXingQL for sharing the code.