should we sample when modeling aleatoric uncertainty
ShellingFord221 opened this issue · 0 comments
ShellingFord221 commented
Hi, in What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision, it seems that when modeling aleatoric uncertainty alone, dropout is disabled and the model is just a normal neural network that predicts the mean and variance of the output given the input. Please see Section 2.2:
Note that here, unlike the above, variational inference is not performed over the weights, but instead we perform MAP inference – finding a single value for the model parameters θ. This approach does not capture epistemic model uncertainty, as epistemic uncertainty is a property of the model and not of the data.
Otherwise, modeling aleatoric uncertainty alone would be the same as modeling both kinds of uncertainty together.
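For context, under the MAP (aleatoric-only) setup in Section 2.2 the network heads predict the mean and the log variance, and training minimizes the learned-attenuation loss 0.5 * exp(-s_i) * ||y_i - mu_i||^2 + 0.5 * s_i with s_i = log sigma_i^2 — no weight sampling involved. A minimal NumPy sketch (the function name is mine, not from the paper):

```python
import numpy as np

def heteroscedastic_loss(y, mu, log_var):
    """Per-sample aleatoric (attenuation) loss for regression:
    0.5 * exp(-s) * (y - mu)^2 + 0.5 * s, where s = log sigma^2.
    Predicting the log variance keeps the loss numerically stable."""
    return 0.5 * np.exp(-log_var) * (y - mu) ** 2 + 0.5 * log_var

# Toy example: target 2.0, predicted mean 1.0, sigma^2 = 1 (log_var = 0)
loss = heteroscedastic_loss(y=np.array([2.0]),
                            mu=np.array([1.0]),
                            log_var=np.array([0.0]))
# -> 0.5 * 1 * (2 - 1)^2 + 0.5 * 0 = 0.5
```

Since there is no distribution over the weights here, a single forward pass gives the prediction and its aleatoric variance deterministically.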