How to train RAW-to-RGB mapping
Hi,
Thank you for sharing your great work! I am curious how you train CURL on the Samsung S7 dataset. In this case, is the input the DNG files and the ground truth the corresponding JPG files? Can your proposed model generate pleasing RGB images from RAW images? Could you provide the 'images_train.txt', 'images_test.txt' and 'images_valid.txt' files?
Thank you so much!
Hi @visionbike, thank you for your interest in our work! Yes, you are correct: the Samsung S7 dataset is a RAW image dataset, and the targets for learning are the corresponding JPG files that have gone through the Samsung S7 ISP. Rawpy is used to load the DNG files, see line 232 of data.py (https://github.com/sjmoran/CURL/blob/master/data.py). The TED model in ted.py (https://github.com/sjmoran/CURL/blob/master/ted.py) handles RAW images out of the box, with no changes required.

The validation and test image splits are listed in the README here: https://github.com/sjmoran/CURL. You just need to copy and paste those into .txt files for loading into the code. The training images are all images not listed in the validation and test splits.
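For anyone setting this up, here is a minimal sketch of the two steps above: loading a .dng with rawpy and deriving `images_train.txt` once `images_valid.txt` and `images_test.txt` have been pasted from the README. This is not the exact code from data.py; the postprocessing flags, the directory layout, and the assumption that the split files list image names without extensions are all illustrative.

```python
import os
import numpy as np
import rawpy


def load_raw(dng_path):
    """Load a .dng with rawpy and return a float RGB array in [0, 1].

    The postprocess() flags here are illustrative; see line 232 of data.py
    for the parameters CURL actually uses.
    """
    with rawpy.imread(dng_path) as raw:
        # Demosaic to a 16-bit linear RGB image (no gamma, no auto-brightening).
        rgb = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)
    return (rgb / 65535.0).astype(np.float32)


def write_train_split(data_dir,
                      valid_txt="images_valid.txt",
                      test_txt="images_test.txt",
                      train_txt="images_train.txt"):
    """Write images_train.txt as every .dng in data_dir that is not listed in
    the validation or test split files (copied from the README).

    Assumes the split files contain one image name per line, without the
    file extension; adjust to match what data.py expects.
    """
    held_out = set()
    for split_file in (valid_txt, test_txt):
        with open(split_file) as f:
            held_out.update(line.strip() for line in f if line.strip())

    all_images = sorted(
        os.path.splitext(name)[0]
        for name in os.listdir(data_dir)
        if name.lower().endswith(".dng")
    )

    with open(train_txt, "w") as f:
        for name in all_images:
            if name not in held_out:
                f.write(name + "\n")
```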