andytimm/retrodesign

vignette for slope coefficient

Closed this issue · 3 comments

Hey Andy,

I tried to find your email address but could not, so I hope it is ok to ask here:

I saw in your vignette how to use retrodesign for group differences. I am a little confused and have a question for understanding:

I already conducted my study and performed Bayesian linear regression. There is no previous literature from which I could estimate the effect size.

I now want to find the power, and especially the type S error probability and exaggeration rate, for a group difference and for the slope of a covariate in the same model.

For that I need the effect sizes (of the group difference and the slope, respectively). I would use the ones I found in my model and add values close to them to get a realistic range. Then I need the standard error, and this is where I am confused: is this the standard deviation of the posterior distribution of my mean difference and of the slope coefficient, respectively? Or are we talking about the parameter "sigma", so that the SE would be the same for the slope and the mean difference? I mean what in the frequentist setting would be the standard error of a prediction, which is different from the SE of an individual parameter.

So if I understand correctly, what I enter in retrodesign is not sigma but the standard deviation of the posterior distribution of whichever parameter I am interested in, correct?

Karl

So my package is mostly intended for the frequentist context, i.e. the Type S/M errors induced by NHST.

The distinction is that in a Bayesian context, it’s not as clear what your “confident statement” threshold would be, whereas in the frequentist case it’s just the 95% confidence interval. If you look at Gelman/Tuerlinckx’s paper (linked in the vignette), they derive the difference between the frequentist and Bayesian versions of this on pages 5–6. So P(getting the sign wrong | confident statement) will unfortunately be different from what I calculate.

Now if you just want P(coefficient is > 0|data/model) in the Bayesian context, that’s easy; you can work with the posterior distribution on your coefficient and calculate the proportion greater than 0, or something similar for the difference estimate. This having the probability interpretation we want is one of the great things about Bayesian inference, but of course, it’s not quite the same as Type S error, due to that being conditional on a “confident statement” of some type, whether Bayesian or Frequentist.
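For concreteness, here is a minimal sketch of that calculation (Python for illustration; the draws below are simulated stand-ins for draws from your actual fitted model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for posterior draws of a slope coefficient; in practice these
# would come from your fitted Bayesian model (e.g. MCMC samples).
posterior_draws = rng.normal(loc=0.3, scale=0.15, size=4000)

# P(coefficient > 0 | data, model): the share of posterior draws above zero.
p_positive = np.mean(posterior_draws > 0)
print(p_positive)
```

The same one-liner works for a group-difference parameter: take the draws of the difference and compute the proportion greater than zero.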

As for type M error, I’m less certain, but Gelman/Tuerlinckx mention it a little towards the end of the same paper. It’s a similar problem I think though, where these errors are a result of having some sort of confidence threshold.
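For reference, the frequentist Type S and exaggeration (Type M) quantities can be sketched by simulation (Python for illustration; `true_effect` and `se` here are illustrative stand-ins for the hypothesized effect size and standard error you would pass to retrodesign):

```python
import numpy as np

rng = np.random.default_rng(1)

true_effect = 0.1   # hypothesized true effect size (illustrative value)
se = 0.15           # standard error of the estimate (illustrative value)

# Simulate many replications of the estimate under the assumed truth.
estimates = rng.normal(loc=true_effect, scale=se, size=200_000)

# "Significant" at the usual 5% level: |estimate| > 1.96 * se.
significant = np.abs(estimates) > 1.96 * se

# Power: P(significant result | assumed true effect).
power = significant.mean()
# Type S: P(sign is wrong | significant).
type_s = np.mean(estimates[significant] * np.sign(true_effect) < 0)
# Exaggeration ratio (Type M): E[|estimate| | significant] / |true effect|.
type_m = np.mean(np.abs(estimates[significant])) / abs(true_effect)

print(power, type_s, type_m)
```

This makes the conditioning explicit: both Type S and Type M are defined only among the replications that cross the significance threshold, which is why they don't carry over directly once the "confident statement" is Bayesian.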

Hopefully this is helpful! Let me know if you have followup questions.

Yeah, this was super helpful. I will have to reread these papers once more. I did what you describe (calculating P(coefficient > 0 | data, model)). In my case I have 100 different outcomes, and thus 100 models and 100 probabilities. I wanted to include the type S error rate to illustrate that we can be quite sure the sign is correct, because I do not correct for multiple comparisons. I could not take Gelman's approach of fitting a multilevel model to deal with that. Instead, I use the same N(0, 1) prior across the 100 models for the effect sizes, such as the slopes or group differences. So I just want to be responsible in how I deal with the multiple comparisons.

Thanks a lot again

This is pretty old, closing