Why don't you add the additional branch (β^2) to the MLP in Eq. 2?
zjwzcx opened this issue · 2 comments
@Panxuran
Hi Xuran, thanks for your awesome work!
In Sec. 4, you mentioned 'Following previous researches in Bayesian neural networks, we take the model output as the mean'. However, the model outputs the uncertainty/variance β^2(r) from a shallower layer, so it does not even depend on the input d. It seems more intuitive to have the mean c(r, d) and the variance β^2(r, d) both depend on the same inputs r(t) and d (just like the following figure; cf. NeurAR). Have you conducted any related experiments, or do you have any insights on this?
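To make the alternative concrete, here is a rough sketch of that NeurAR-style wiring, where both the color and β^2 come out of the same view-dependent branch (PyTorch-style; the layer sizes and names are just placeholders, not your actual architecture):

```python
# Hypothetical sketch: both c and beta^2 are predicted from the view-dependent
# branch, so the variance can change with the viewing direction d.
import torch
import torch.nn as nn


class ViewDependentUncertaintyHead(nn.Module):
    def __init__(self, pos_dim=63, dir_dim=27, width=256):
        super().__init__()
        # Position-only trunk: encoded r(t) -> features and density sigma.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.sigma = nn.Linear(width, 1)
        # View-dependent branch: features + encoded d -> color c and beta^2.
        self.branch = nn.Sequential(
            nn.Linear(width + dir_dim, width // 2), nn.ReLU(),
        )
        self.color = nn.Linear(width // 2, 3)
        self.beta2 = nn.Linear(width // 2, 1)

    def forward(self, x, d):
        h = self.trunk(x)
        sigma = torch.relu(self.sigma(h))
        g = self.branch(torch.cat([h, d], dim=-1))
        c = torch.sigmoid(self.color(g))
        # Softplus plus a small floor keeps the predicted variance positive.
        beta2 = nn.functional.softplus(self.beta2(g)) + 1e-2
        return c, sigma, beta2
```

In this wiring the predicted variance can vary with the viewing direction, which seems closer to how the color itself is modeled.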
Looking forward to your reply!
In our case, we treat the uncertainty estimate as an attribute similar to the volume density, so we omit the influence of the viewing direction. This assumption also helps us satisfy one of the prerequisites, i.e., the independence of the radiance value distributions at different positions.
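For concreteness, here is a minimal sketch of this view-independent head, with β^2 read off the position-only features next to the density (PyTorch-style; the module and layer names are illustrative, not our released code):

```python
# Hypothetical sketch: beta^2 depends only on position, like sigma; only the
# color branch sees the viewing direction d.
import torch
import torch.nn as nn


class ViewIndependentUncertaintyHead(nn.Module):
    def __init__(self, pos_dim=63, dir_dim=27, width=256):
        super().__init__()
        # Position-only trunk: encoded r(t) -> shared features.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        # Density and uncertainty are predicted from position-only features.
        self.sigma = nn.Linear(width, 1)
        self.beta2 = nn.Linear(width, 1)
        # Only the color branch takes the encoded viewing direction d.
        self.branch = nn.Sequential(
            nn.Linear(width + dir_dim, width // 2), nn.ReLU(),
        )
        self.color = nn.Linear(width // 2, 3)

    def forward(self, x, d):
        h = self.trunk(x)
        sigma = torch.relu(self.sigma(h))
        # Softplus plus a small floor keeps the predicted variance positive.
        beta2 = nn.functional.softplus(self.beta2(h)) + 1e-2
        c = torch.sigmoid(self.color(self.branch(torch.cat([h, d], dim=-1))))
        return c, sigma, beta2
```

Keeping β^2 on the position-only trunk makes each sample's variance a property of the 3D point alone, which is what makes the per-position independence assumption above tenable.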
Nevertheless, in my personal opinion, this remains an open question. Modeling uncertainty as a function of the viewing direction may be a promising direction. I hope to see positive results if you try this!