Data4DM/BayesSD

Implement Dan's PC prior

Closed this issue · 0 comments

Dan Simpson wrote a blog post on 9/3/2022 updating his previous paper on the PC (penalised complexity) prior. The approach is practical, as users usually don't have to derive the priors themselves. Moreover, much of the remaining complexity is the price we pay for dealing with densities; he argues that this is worth it, and that the lesson that the parameterisation you are given may not be the correct parameterisation to use when specifying your prior is an important one. Below are the four principles from Dan's blog, each with notes toward a specific implementation.

Occam’s razor: We have a base model that represents simplicity and we prefer our base model.

Measuring complexity: We define the prior using the square root of the KL divergence between the more flexible model and the base model. The square root ensures that the divergence is on a scale similar to a distance, but we keep the asymmetry of the divergence as a feature (not a bug).
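As a minimal sketch of this distance, assuming both the base and flexible models are Gaussian (the function names here are ours, not from Dan's blog; the PC-prior literature uses d(f) = sqrt(2 KLD(f ‖ base))):

```python
import math

def kl_gaussian(mu1, sigma1, mu0, sigma0):
    """Closed-form KL(N(mu1, sigma1^2) || N(mu0, sigma0^2))."""
    return (math.log(sigma0 / sigma1)
            + (sigma1**2 + (mu1 - mu0)**2) / (2 * sigma0**2)
            - 0.5)

def pc_distance(mu1, sigma1, mu0=0.0, sigma0=1.0):
    """Distance d = sqrt(2 * KL(flexible || base)) used by the PC prior."""
    return math.sqrt(2.0 * kl_gaussian(mu1, sigma1, mu0, sigma0))

# The distance is zero at the base model and grows with extra flexibility:
print(pc_distance(0.0, 1.0))   # base model: distance 0
print(pc_distance(0.0, 2.0))   # more flexible: positive distance
```

Note the asymmetry: the divergence is taken from the flexible model to the base model, not the other way around, which is exactly the "feature, not a bug" point above.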

Constant penalisation: We use an exponential prior on the distance scale to ensure that our prior mass decreases at a constant rate as we move farther away from the base model.
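An exponential prior on the distance induces, by a change of variables, a prior on the original parameter. A sketch, assuming the distance map d(ξ) is monotone and differentiable (names are ours, not from the blog):

```python
import math

def pc_prior_density(xi, distance, ddistance_dxi, lam):
    """Density on xi induced by an Exponential(lam) prior on d(xi):
    pi(xi) = lam * exp(-lam * d(xi)) * |d'(xi)|  (change of variables)."""
    return lam * math.exp(-lam * distance(xi)) * abs(ddistance_dxi(xi))

# Classic special case: for a Gaussian random-effect standard deviation
# sigma with base model sigma = 0, the distance works out to d(sigma) = sigma,
# so the induced PC prior on sigma is simply Exponential(lam):
d = lambda s: s
dd = lambda s: 1.0
lam = 2.0
print(pc_prior_density(1.0, d, dd, lam))  # equals lam * exp(-lam * 1.0)
```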

User-defined scaling: We need the user to specify a quantity of interest Q(ξ) and a scale U. We choose the scaling of the prior so that P(Q(ξ) > U) = α for a small user-chosen probability α. This ensures that when we move to a new context, we are able to modify the prior by using the relevant information about Q and U.
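When the induced prior on the quantity of interest is exponential, the tail condition P(Q > U) = α pins down the rate in closed form, since P(Q > U) = exp(-λU) implies λ = -ln(α)/U. A sketch (function name is ours):

```python
import math

def pc_rate(U, alpha):
    """Rate lambda such that P(Q > U) = alpha under an Exponential(lambda)
    prior on Q: exp(-lambda * U) = alpha  =>  lambda = -ln(alpha) / U."""
    return -math.log(alpha) / U

# e.g. encode "the standard deviation exceeds 1 with prior probability 1%":
lam = pc_rate(U=1.0, alpha=0.01)
print(lam)                      # the calibrated rate
print(math.exp(-lam * 1.0))    # recovers the tail probability 0.01
```

Moving to a new context then only requires restating U and α in that context's units, which is the portability the principle is after.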