wviechtb/metafor

Conflicting results with rma.peto vs. escalc

sgoldbergR opened this issue · 2 comments

Classification:

Bug Report
df_for_wolfgang.csv

Summary

You had suggested on StackExchange that one can use either rma.peto or escalc with measure = "PETO" to obtain equivalent results. That worked on the dat.egger2001 data, but for some reason it is not working with my data. I've attached the file here. Not sure whether this is a bug!

Reproducible Example (if applicable)

res <- rma.peto(ai = a, bi = b, ci = c, di = d, data = df)
summary(res)

dat <- escalc(measure = "PETO", ai = a, bi = b, ci = c, di = d, data = df)
escalc.out <- rma(yi, vi, data = dat, method = "FE")
summary(escalc.out)

Output from rma.peto:
estimate      se    zval    pval    ci.lb   ci.ub 
  0.0968  0.0505  1.9159  0.0554  -0.0022  0.1959 

Output from escalc:
estimate      se    zval    pval    ci.lb   ci.ub 
  0.0952  0.0503  1.8912  0.0586  -0.0035  0.1938 

Notes

I tried rounding the values to whole numbers (some are fractions where dropout wasn't clearly reported), but that did not resolve the discrepancy.

sessionInfo()

sessionInfo()
R version 4.0.4 (2021-02-15)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Mojave 10.14.6

Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib

Random number generation:
RNG: Mersenne-Twister
Normal: Inversion
Sample: Rounding

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] Hmisc_4.5-0 Formula_1.2-4 survival_3.2-7 lattice_0.20-41
[5] readtext_0.80 MplusAutomation_0.8 ggpubr_0.4.0 ppcor_1.1
[9] mediation_4.5.0 sandwich_3.0-0 mvtnorm_1.1-1 lmerTest_3.1-3
[13] lme4_1.1-26 readxl_1.3.1 MASS_7.3-53 foreign_0.8-81
[17] MAd_0.8-2.1 g.data_2.4 metapower_0.2.2 car_3.0-10
[21] carData_3.0-4 metafor_3.0-2 Matrix_1.3-2 psych_2.0.12
[25] forcats_0.5.1 stringr_1.4.0 dplyr_1.0.7 purrr_0.3.4
[29] readr_2.1.1 tidyr_1.1.4 tibble_3.1.6 ggplot2_3.3.5
[33] tidyverse_1.3.0 meta_5.2-0

loaded via a namespace (and not attached):
[1] minqa_1.2.4 colorspace_2.0-0 ggsignif_0.6.1 ellipsis_0.3.2
[5] rio_0.5.26 htmlTable_2.1.0 base64enc_0.1-3 fs_1.5.0
[9] rstudioapi_0.13 farver_2.1.0 fansi_0.4.2 lubridate_1.7.10
[13] mathjaxr_1.4-0 xml2_1.3.2 splines_4.0.4 mnormt_2.0.2
[17] knitr_1.37 texreg_1.37.5 jsonlite_1.7.2 nloptr_1.2.2.2
[21] broom_0.7.9 cluster_2.1.0 dbplyr_2.1.0 png_0.1-7
[25] compiler_4.0.4 httr_1.4.2 backports_1.2.1 assertthat_0.2.1
[29] fastmap_1.1.0 cli_3.0.1 htmltools_0.5.2 tools_4.0.4
[33] coda_0.19-4 gtable_0.3.0 glue_1.4.2 reshape2_1.4.4
[37] Rcpp_1.0.8 cellranger_1.1.0 vctrs_0.3.8 gdata_2.18.0
[41] nlme_3.1-152 xfun_0.29 proto_1.0.0 openxlsx_4.2.3
[45] testthat_3.0.2 rvest_0.3.6 lpSolve_5.6.15 CompQuadForm_1.4.3
[49] lifecycle_1.0.0 gtools_3.8.2 rstatix_0.7.0 statmod_1.4.35
[53] zoo_1.8-8 scales_1.1.1 hms_1.0.0 parallel_4.0.4
[57] RColorBrewer_1.1-2 curl_4.3 gridExtra_2.3 pander_0.6.3
[61] rpart_4.1-15 latticeExtra_0.6-29 stringi_1.5.3 checkmate_2.0.0
[65] boot_1.3-26 zip_2.1.1 rlang_1.0.0 pkgconfig_2.0.3
[69] htmlwidgets_1.5.3 labeling_0.4.2 tidyselect_1.1.0 plyr_1.8.6
[73] magrittr_2.0.1 R6_2.5.0 generics_0.1.0 DBI_1.1.1
[77] gsubfn_0.7 pillar_1.6.5 haven_2.3.1 withr_2.4.1
[81] nnet_7.3-15 abind_1.4-5 modelr_0.1.8 crayon_1.4.1
[85] utf8_1.1.4 tmvnsim_1.0-2 tzdb_0.2.0 jpeg_0.1-8.1
[89] grid_4.0.4 data.table_1.14.0 reprex_1.0.0 digest_0.6.27
[93] xtable_1.8-4 GPArotation_2014.11-1 numDeriv_2016.8-1.1 munsell_0.5.0

The difference arises from how these two functions handle 0 cells by default. If you use dat <- escalc(measure = "PETO", ai = a, bi = b, ci = c, di = d, data = df, add = 0), then the results are identical. If you also want equal likelihoods, then you additionally need to use res <- rma.peto(ai = a, bi = b, ci = c, di = d, data = df, add = c(0, 0)).
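To make the 0-cell issue concrete, here is a minimal Python sketch (not metafor's actual code) of the Peto one-step pooled log odds ratio, using the standard formulas O = ai, E = n1i(ai+ci)/ni, V = n1i n2i (ai+ci)(bi+di) / (ni^2 (ni−1)) and pooled logOR = Σ(O−E)/ΣV. The `add` argument mimics a 0.5 continuity correction applied only to tables containing a zero cell (the "only0" behavior); the example data are made up.

```python
def peto_pooled(tables, add=0.0):
    """Pooled Peto log odds ratio from 2x2 tables (ai, bi, ci, di).

    If add > 0, it is added to every cell of any table that contains
    a zero cell (mimicking an add/to="only0" continuity correction).
    """
    num = den = 0.0
    for ai, bi, ci, di in tables:
        if add and 0 in (ai, bi, ci, di):
            ai, bi, ci, di = ai + add, bi + add, ci + add, di + add
        n1, n2 = ai + bi, ci + di          # group sizes
        n = n1 + n2                        # total sample size
        E = n1 * (ai + ci) / n             # expected events in group 1
        V = n1 * n2 * (ai + ci) * (bi + di) / (n**2 * (n - 1))
        num += ai - E                      # observed minus expected
        den += V                           # hypergeometric variance
    return num / den

# Hypothetical tables; the first has a zero cell.
tables = [(0, 10, 2, 8), (5, 5, 4, 6), (3, 7, 1, 9)]
print(peto_pooled(tables))            # no correction, like add = 0 / add = c(0, 0)
print(peto_pooled(tables, add=0.5))   # corrected: a different pooled estimate
```

Only tables with a zero cell are modified, which is why datasets without zero cells give identical results under both defaults, while datasets like the one attached here do not.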

This aside, I don't understand why you have fractional counts, but that is not pertinent to this issue, so I think we can close this.

Thanks, @wviechtb! The fractions are from studies that did not report the outcome of interest clearly (we are looking at attrition rates).