Alibaba-NLP/HiAGM

The code produces much worse results than those reported in the paper


INFO: TEST performance at epoch 155 --- Precision: 0.320000, Recall: 0.285714, Micro-F1: 0.301887, Macro-F1: 0.032500, Loss: 0.396801.
INFO: TEST performance at epoch 134 --- Precision: 0.428571, Recall: 0.321429, Micro-F1: 0.367347, Macro-F1: 0.052500, Loss: 0.408921.

I tested the original code without any major changes, and the results are:

WARNING: Performance has not been improved for 50 epochs, updating learning rate
WARNING: Learning rate update 0.0001--->0.0001
WARNING: Performance has not been improved for 50 epochs, stopping train with early stopping
100%|██████████| 1/1 [00:00<00:00, 1.36it/s]
INFO: TEST performance at epoch 73 --- Precision: 0.750000, Recall: 0.214286, Micro-F1: 0.333333, Macro-F1: 0.021429, Loss: 0.418117.

100%|██████████| 1/1 [00:00<00:00, 1.34it/s]
INFO: TEST performance at epoch 155 --- Precision: 0.391304, Recall: 0.321429, Micro-F1: 0.352941, Macro-F1: 0.046786, Loss: 0.385901.
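For what it's worth, the huge gap between Micro-F1 and Macro-F1 in these logs (e.g. 0.333 vs. 0.021) suggests the model only gets a few frequent labels right, while most label classes get no correct predictions at all. Below is a minimal sketch of that effect, assuming the metrics behave like sklearn's micro/macro multi-label averaging (the label counts are made up for illustration):

```python
# Minimal sketch (assumption: HiAGM's multi-label F1 behaves like sklearn's
# micro/macro averaging) of why Macro-F1 can sit near zero while Micro-F1
# looks moderate: correct predictions concentrate on one frequent label, and
# every label with no hits contributes a zero to the macro average.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_samples, n_labels = 28, 40          # tiny test set, many hierarchical labels

y_true = np.zeros((n_samples, n_labels), dtype=int)
y_true[:, 0] = 1                       # one frequent root label on every sample
y_true[np.arange(n_samples), rng.integers(1, n_labels, n_samples)] = 1

y_pred = np.zeros_like(y_true)
y_pred[:, 0] = 1                       # model only ever predicts the root label

print("Micro-F1:", f1_score(y_true, y_pred, average="micro", zero_division=0))
print("Macro-F1:", f1_score(y_true, y_pred, average="macro", zero_division=0))
```

In this toy setup Micro-F1 comes out around 0.67 while Macro-F1 is about 0.025 (1 hit out of 40 labels), the same pattern as the logs above.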

Any help would be appreciated.