Lakens/TOSTER

Results from brunner_munzel() seem inverse of npar.t.test()


Why does the brunner_munzel() function report what appears to be the inverse of the relative effect (p) reported by npar.t.test()?

library(TOSTER)    # brunner_munzel(), ses_calc()
library(nparcomp)  # npar.t.test()

set.seed(123)
group1 <- rnorm(50, mean = 10, sd = 2)
group2 <- rnorm(50, mean = 11, sd = 3)
groups <- data.frame(id = c(rep(1, 50), rep(2, 50)), g = c(group1, group2))

brunner_munzel(group1, group2, paired = FALSE, alternative = "two.sided")
brunner_munzel(group2, group1, paired = FALSE, alternative = "two.sided")

npt <- npar.t.test(g ~ id, data = groups, alternative = "two.sided", method = "t.app")
summary(npt)

We use the same ordering as base t.test (i.e., x - y). The results are the "inverse" of those from nparcomp because it sets up the ordering of the test differently.
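
As a quick sanity check (a minimal sketch, assuming brunner_munzel() returns an htest object with an $estimate element): since P(X < Y) + 0.5*P(X = Y) = 1 - [P(Y < X) + 0.5*P(Y = X)], swapping the arguments should give complementary estimates.

# hedged sketch: the two orderings should be complements of one another
bm_12 <- brunner_munzel(group1, group2, paired = FALSE, alternative = "two.sided")
bm_21 <- brunner_munzel(group2, group1, paired = FALSE, alternative = "two.sided")
unname(bm_12$estimate + bm_21$estimate)  # should equal 1 (up to rounding)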

Just to note, the ordering is consistent within the package:

> ses_calc(group2, group1,paired = FALSE,alternative = "two.sided",
+          ses = "cstat")
            estimate  lower.ci  upper.ci conf.level
Concordance     0.67 0.5629869 0.7618914       0.95
> t.test(g ~ id, data = groups, paired = FALSE, alternative = "two.sided")

	Welch Two Sample t-test

data:  g by id
t = -2.9477, df = 86.454, p-value = 0.004116
alternative hypothesis: true difference in means between group 1 and group 2 is not equal to 0
95 percent confidence interval:
 -2.2945755 -0.4462599
sample estimates:
mean in group 1 mean in group 2 
       10.06881        11.43922 

Thanks for your clarification and for your great work on the TOST package. This is my first experience with the BM test, and I was confused because the documentation for both functions appears to define the relative effect in the same way [P(X < Y) + 0.5*P(X = Y)], yet the two yield different results. It would be helpful for the output to clarify which group is assigned to "X" and which to "Y", especially when the formula interface is used (e.g., brunner_munzel(g ~ id, data = groups, paired = FALSE, alternative = "two.sided")).
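
For anyone else hitting this, the direction can be checked by hand directly from the definition (a quick sketch; here group1 is taken as X and group2 as Y, and comparing the result against each function's output shows which convention it follows):

# empirical relative effect with X = group1, Y = group2
p_hat <- mean(outer(group1, group2, "<")) +
  0.5 * mean(outer(group1, group2, "=="))
p_hat  # estimate of P(X < Y) + 0.5*P(X = Y)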

Ohhhh, I see the confusion now: the notation is wrong on the label for the estimate! That is definitely my fault. I've been working on an update for the standard errors and will incorporate a notation fix into it now. The GitHub version will be updated shortly and sent to CRAN soon.

Also, agreed that the output is confusing, but that is the default for htest objects. I've been trying to think of a sustainable solution for this, but none come to mind.
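
For what it's worth, htest objects are just classed lists, so the label can at least be patched after the fact (a sketch only, not a fix for print.htest; the replacement label below is purely illustrative):

bm <- brunner_munzel(group1, group2, paired = FALSE, alternative = "two.sided")
names(bm$estimate)  # the label that print.htest displays above the estimate
names(bm$estimate) <- "relative effect, group1 vs group2"  # illustrative relabel
bm  # prints with the new label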

Looking back through the branches, I originally had it performing the test as "P(X < Y) + 0.5*P(X = Y)" but flipped it for consistency with the other functions (i.e., the Wilcoxon and t-tests).
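
To illustrate the consistency point (a hedged check, relying on the fact that the Mann-Whitney W from wilcox.test() counts the pairs with x > y, with ties counted as 0.5): scaling W by the number of pairs should line up with the direction brunner_munzel() reports.

w <- wilcox.test(group1, group2)$statistic
unname(w / (length(group1) * length(group2)))  # estimate of P(X > Y) + 0.5*P(X = Y)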

Good to hear that I am starting to understand this stuff. By the way, the describe_htest() function has been super useful in learning how these tests work. Thanks again for your help.

No problem, and thank you for the kind words. My apologies for the notation error; it's hard to write unit tests that catch those kinds of errors!