
[IEEE SPL] A RNN for temporal consistency in low-light videos enhanced by single-frame methods

IEEE Signal Processing Letters 2024

Abstract

Low-light video enhancement (LLVE) has received little attention compared to low-light image enhancement (LLIE), mainly due to the lack of paired low-/normal-light video datasets. Consequently, a common approach to LLVE is to enhance each video frame individually using LLIE methods. However, this practice introduces temporal inconsistencies into the resulting video. In this work, we propose a recurrent neural network (RNN) that, given a low-light video and its per-frame enhanced version, produces a temporally consistent video while preserving the underlying frame-based enhancement. We achieve this by training our network with a combination of a new forward-backward temporal consistency loss and a content-preserving loss. At inference time, we can use our trained network to correct videos processed by any LLIE method. Experimental results show that our method achieves the best trade-off between temporal consistency improvement and fidelity to the per-frame enhanced video, while exhibiting lower memory complexity and comparable time complexity with respect to other state-of-the-art methods for temporal consistency.
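
The abstract names two training objectives: a forward-backward temporal consistency loss and a content-preserving loss. As a rough illustration only, the PyTorch sketch below shows one plausible form of these losses; the warping procedure, the use of L1 distances, and all function names are assumptions for illustration, not the authors' implementation.

# Illustrative PyTorch sketch of the two losses named in the abstract.
# Everything here (warping details, L1 distances, function names) is an
# assumption for illustration, not the authors' implementation.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    # Backward-warp `frame` (B, C, H, W) with a dense optical flow field
    # `flow` (B, 2, H, W) given in pixel units.
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device),
        torch.arange(w, device=frame.device),
        indexing="ij",
    )
    coords = torch.stack((xs, ys)).float().unsqueeze(0) + flow  # (B, 2, H, W)
    # Normalize pixel coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def temporal_consistency_loss(out_t, out_prev, out_next, flow_to_prev, flow_to_next):
    # Forward-backward consistency: the current output should agree with
    # both temporal neighbors once they are warped into the current frame.
    return (F.l1_loss(out_t, warp(out_prev, flow_to_prev))
            + F.l1_loss(out_t, warp(out_next, flow_to_next)))

def content_preserving_loss(out_t, enhanced_t):
    # Fidelity term: keep the corrected frame close to the per-frame
    # enhanced one so the underlying enhancement is preserved.
    return F.l1_loss(out_t, enhanced_t)

# Total objective (lambda_c is a hypothetical weighting factor):
# loss = temporal_consistency_loss(...) + lambda_c * content_preserving_loss(...)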

Code

The code will be released soon.
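
Until the release, the sketch below illustrates the recurrent correction loop described in the abstract: at each step the network sees the low-light frame, its per-frame enhanced version, and the previous corrected output. The architecture and its interface are illustrative assumptions, not the authors' model.

import torch
import torch.nn as nn

class TemporalStabilizer(nn.Module):
    # Toy recurrent correction network (assumed interface): it consumes
    # the low-light frame, the per-frame enhanced frame, and the previous
    # corrected output, and returns a temporally consistent frame.
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(9, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, low, enhanced, prev_out):
        x = torch.cat([low, enhanced, prev_out], dim=1)
        # Predict a residual on top of the per-frame enhancement so the
        # frame-based enhancement is preserved by construction.
        return enhanced + self.net(x)

@torch.no_grad()
def stabilize(model, low_frames, enhanced_frames):
    # Correct a per-frame enhanced video one frame at a time; this works
    # with the output of any single-frame LLIE method.
    prev_out = enhanced_frames[0]  # bootstrap the recurrence
    outputs = []
    for low, enh in zip(low_frames, enhanced_frames):
        prev_out = model(low, enh, prev_out)
        outputs.append(prev_out)
    return outputs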

Demo videos

video9.mp4
video14.mp4
video92.mp4
video212.mp4

Citation

@article{rota2024rnn,
  title={A RNN for temporal consistency in low-light videos enhanced by single-frame methods},
  author={Rota, Claudio and Buzzelli, Marco and Bianco, Simone and Schettini, Raimondo},
  journal={IEEE Signal Processing Letters},
  volume={31},
  pages={2795--2799},
  year={2024},
  publisher={IEEE}
}

Contacts

If you have any questions, please contact me at claudio.rota@unimib.it