group comparison
Thank you for this quite exciting package. It's really helping me with null model analysis!
I'm exploring pNST and tNST and am wondering how best to compare different groups.
In the PNAS paper, it is recommended to do a relative comparison rather than using absolute values.
I have time-series data from three different environments for which I want to check for stochastic effects.
Currently, I'm running NST for each environment with time as a grouping variable, and I use the NST predictions from bootstrapping for statistical analysis.
But I'm struggling with how to make the relative comparison. Do you have any recommendations here?
You may explore the function nst.boot in the package; also see step 5.2 in the example code: https://github.com/DaliangNing/NST/blob/master/Examples/SimpleOTU/NST.example.r
If it is still hard to figure out, you may send me your input files and R code, and I can write some scripts specific to your case.
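In case it helps, here is a minimal sketch of that workflow in R (the objects comm and treat and the column name "Time" are placeholders, and the argument values are illustrative; see the linked example script for the authoritative usage):

```r
library(NST)

# comm: community matrix (samples x taxa); treat: sample metadata (data frame).
# Using time as the grouping variable within one environment, as described above.
group.time <- treat[, "Time", drop = FALSE]  # "Time" is a placeholder column name

# Taxonomic NST; argument values here are illustrative, not prescriptive.
tnst <- tNST(comm = comm, group = group.time,
             dist.method = "jaccard", abundance.weighted = TRUE,
             rand = 1000, nworker = 4)

# Bootstrapping of NST within groups (and between groups, for comparisons),
# as in step 5.2 of the example script.
tnst.bt <- nst.boot(nst.result = tnst, group = group.time,
                    rand = 999, between.group = TRUE, nworker = 4)
```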
Hi @DaliangNing
Thank you for your reply!
I was experimenting with nst.boot() but wasn't sure how to do the statistics here. I'm a bit puzzled because one group comparison is inconclusive with nst.panova() (the result changes from run to run, depending on the original NST values). But when I compare the bootstrapped NST values from nst.boot(), it seems relatively straightforward (Wilcoxon test).
Should I repeat nst.panova() several times to account for the variation in NST?
Yes, bootstrapping results are generally more stable than PERMANOVA of NST.
In the output of nst.boot, p.count or p.count.noOut is preferred in principle; the others have defects. p.wtest (Wilcoxon test) is not recommended, because it always overestimates significance (you can always get significant results given a large enough number of randomizations).
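For example, roughly like this (a sketch continuing from the nst.boot() call above; I'm assuming the pairwise results, including p.count, p.count.noOut, and p.wtest, sit in the compare element of the output, which may differ by package version):

```r
# Assumption: pairwise between-group results are in the "compare" element.
cmp <- tnst.bt$compare
head(cmp)

# Prefer the bootstrapping-count p values over the Wilcoxon-based p.wtest.
cmp[, c("p.count", "p.count.noOut")]
```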
I see. Thank you, this was quite helpful!
If not the Wilcoxon test, which test is used for p.count or p.count.noOut, and are those p values already corrected for multiple testing?
p.count is the direct count of the probability based on the bootstrapping results, so it is a form of 'bootstrapping test'. p.count.noOut removes outliers before counting (which I would not recommend). The p values are not corrected; if necessary, you may apply a p-value adjustment after getting the p.count values of all comparisons.
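To make the idea concrete, here is a minimal sketch of a count-based bootstrap p value and a subsequent adjustment (nst.a and nst.b are hypothetical vectors of bootstrapped NST values for two groups; the exact counting scheme inside the package may differ):

```r
# Hypothetical bootstrapped NST values for two groups.
set.seed(1)
nst.a <- rnorm(1000, mean = 0.6, sd = 0.05)
nst.b <- rnorm(1000, mean = 0.5, sd = 0.05)

# Count-based p value: proportion of bootstrap replicates that contradict
# the observed direction of the difference (here, NST higher in group a).
p.count.ab <- mean(nst.a <= nst.b)

# After collecting p.count for all pairwise comparisons, adjust for
# multiple testing, e.g. with Benjamini-Hochberg FDR (base R p.adjust).
p.all <- c(ab = p.count.ab, ac = 0.03, bc = 0.20)  # ac, bc: placeholder values
p.adjust(p.all, method = "BH")
```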
Thank you for all your suggestions and help!