SRiTMO as a standalone LDR to HDR operator
tonetechnician opened this issue · 2 comments
Hey @FrozenBurning
Thanks for your awesome contribution!
I've been particularly interested in SRiTMO for its simplicity and speed in generating HDRs from LDRs. I've been testing it in isolation from the other parts of your algorithm, and it produces quite good results with regular LDR panoramas.
One thing I've noticed, which I think is related to #6, is that the HDRs sometimes come out overexposed. To rectify this, I find I need to adjust the balance, luma threshold, and boost values. But it feels like these need to be tuned individually for each LDR and don't generalize well.
I'm curious whether you think we should apply a further normalization operation based on the luminance scale of "in-the-wild" LDRs? Something similar to the luminance-invariance scale normalization done on the training set?
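To make the idea concrete, here's a rough sketch of the kind of pre-normalization I mean. This is purely hypothetical (the function name, percentile choice, and target level are all my own assumptions, not part of SRiTMO): rescale each in-the-wild LDR so that a high luminance percentile lands at a fixed level before running the inverse tone mapping, so inputs with different exposures start from a comparable luminance scale.

```python
import numpy as np

def normalize_ldr_luminance(ldr, target_percentile=99.0, target_level=0.8):
    """Hypothetical luminance-scale normalization for in-the-wild LDR panos.

    Rescales the image so that `target_percentile` of the Rec. 709 luma
    maps to `target_level`, before passing it to the inverse tone mapper.
    All parameter choices here are illustrative assumptions.
    """
    # Rec. 709 luma from linear RGB channels
    luma = 0.2126 * ldr[..., 0] + 0.7152 * ldr[..., 1] + 0.0722 * ldr[..., 2]
    # Scale so the chosen luma percentile hits the target level
    scale = target_level / max(np.percentile(luma, target_percentile), 1e-6)
    return np.clip(ldr * scale, 0.0, 1.0)
```

This wouldn't fix overexposure by itself, of course, but it might make a single set of balance/threshold/boost values usable across many inputs.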
Would love to know your thoughts here!
Thanks for your interest in our work! Definitely, the ideal solution would be a fully automatic pipeline that leverages "in-the-wild" knowledge. Our consideration is that even if we automatically tune those hyperparameters, the rendering performance of the HDRI still depends on the renderer you use. Many rendering tasks and engines have their own customized renderers, so we leave this out for flexibility. But it is indeed worthwhile to explore solutions that address this issue with data-driven approaches. A global normalization operation may not be enough though 💭
Closed due to inactivity. Feel free to reopen it for further discussions 🤗