Hui Li, Xiao-Jun Wu*, Josef Kittler
Information Fusion (IF:13.669), Volume: 73, Pages: 72-86, September 2021
paper
arXiv
Supplementary Material
Python 3.7
PyTorch >= 0.4.1
The test datasets are included in "images".
The result images are included in "outputs".
MS-COCO 2014 (T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, Microsoft COCO: Common Objects in Context, in: ECCV, 2014) is utilized to train our auto-encoder network.
KAIST (S. Hwang, J. Park, N. Kim, Y. Choi, and I. S. Kweon, Multispectral pedestrian detection: Benchmark dataset and baseline, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1037-1045) is utilized to train the RFN modules.
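The two datasets above correspond to the paper's two-stage training scheme: first the encoder-decoder pair is trained as an auto-encoder on MS-COCO, then the auto-encoder is frozen and only the residual fusion (RFN) modules are trained on KAIST infrared/visible pairs. The sketch below illustrates that scheme in PyTorch with deliberately simplified, hypothetical modules (the class names, channel sizes, and the plain skip-connection decoder are illustrative stand-ins, not the repo's actual code; the paper's decoder additionally uses nest connections):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy multi-scale encoder: three conv stages at decreasing resolution."""
    def __init__(self):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(cin, cout, 3, stride=s, padding=1)
            for cin, cout, s in [(1, 16, 1), (16, 32, 2), (32, 64, 2)]
        ])

    def forward(self, x):
        feats = []
        for conv in self.convs:
            x = torch.relu(conv(x))
            feats.append(x)
        return feats  # shallow-to-deep feature list

class Decoder(nn.Module):
    """Toy decoder that upsamples deep features and merges shallower ones."""
    def __init__(self):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.c2 = nn.Conv2d(64 + 32, 32, 3, padding=1)
        self.c1 = nn.Conv2d(32 + 16, 16, 3, padding=1)
        self.out = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, feats):
        f1, f2, f3 = feats
        x = torch.relu(self.c2(torch.cat([self.up(f3), f2], dim=1)))
        x = torch.relu(self.c1(torch.cat([self.up(x), f1], dim=1)))
        return torch.sigmoid(self.out(x))

class RFN(nn.Module):
    """Residual fusion of one scale of IR and visible features."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, f_ir, f_vis):
        # Residual formulation: fused = inputs + learned correction.
        return f_ir + f_vis + torch.relu(self.fuse(torch.cat([f_ir, f_vis], dim=1)))

encoder, decoder = Encoder(), Decoder()
rfns = nn.ModuleList([RFN(c) for c in (16, 32, 64)])

# Stage 1: train encoder + decoder as an auto-encoder (MS-COCO in the paper).
# A random batch stands in for real training images here.
img = torch.rand(2, 1, 64, 64)
recon = decoder(encoder(img))
stage1_loss = nn.functional.mse_loss(recon, img)

# Stage 2: freeze the auto-encoder; train only the RFN modules (KAIST pairs).
ir, vis = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
with torch.no_grad():
    f_ir, f_vis = encoder(ir), encoder(vis)
fused_feats = [rfn(a, b) for rfn, a, b in zip(rfns, f_ir, f_vis)]
fused = decoder(fused_feats)  # same spatial size as the inputs
```

In stage 2 the encoder runs under `torch.no_grad()` and only the RFN parameters would be passed to the optimizer, which mirrors the paper's idea of keeping the learned reconstruction ability fixed while learning how to fuse.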
If you have any questions about this code, feel free to reach me at hui_li_jnu@163.com.
@article{li2021rfn,
title={RFN-Nest: An end-to-end residual fusion network for infrared and visible images},
author={Li, Hui and Wu, Xiao-Jun and Kittler, Josef},
journal={Information Fusion},
volume={73},
pages={72--86},
month={September},
year={2021},
publisher={Elsevier}
}
I am very sorry about this clerical error. In Section 4.6, the sentence "With the nest connection, the decoder is able to preserve more image information conveyed by the multiscale deep features (MI, FMI_dct, FMI_w) and generate more natural and clearer fused image (En, SD, VIF)." should change to "With the nest connection, the decoder is able to preserve more image information conveyed by the multiscale deep features (MI, Nabf, MS-SSIM) and generate more natural and clearer fused image (En, SD, SCD)."