r-three/t-few

results for LoRA

shuishen112 opened this issue · 1 comment

Thank you for your valuable contribution. I am trying to reproduce the results reported in your paper, but when I re-run the LoRA adapters I cannot match them. Here is what I get:

| Task | Score |
| --- | --- |
| copa | 76.00 (2.00) |
| h-swag | 26.64 (0.36) |
| storycloze | 84.87 (0.21) |
| winogrande | 51.14 (2.13) |
| wsc | 65.38 (2.88) |
| wic | 51.57 (0.63) |
| rte | 59.57 (0.36) |
| cb | 51.79 (1.79) |
| anli-r1 | 34.80 (0.80) |
| anli-r2 | 34.00 (2.40) |
| anli-r3 | 32.92 (1.08) |

Have you encountered cases where training on h-swag and rte did not reach the expected results?

Please ignore this. I had not set `eval_epoch_interval` for that run before. Sorry.
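
In case it helps anyone else hitting the same problem, here is a rough sketch of how that setting can be passed when launching a run, following the `-k key=value` override style used in the repo's README. The exact config file names (`t03b.json+rte.json+lora.json`), the GPU index, and the interval value of 10 are assumptions on my part, not verified against the current code.

```bash
# Sketch: launch a LoRA run and override eval_epoch_interval on the command line.
# Config file names and the value 10 are assumptions; adjust to your setup.
CUDA_VISIBLE_DEVICES=0 python -m src.pl_train \
    -c t03b.json+rte.json+lora.json \
    -k exp_name=lora_rte eval_epoch_interval=10
```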