declare-lab/RelationPrompt

Hi! I've been trying to run this code lately. I have a question: why is there only a pretrained model for unseen=10 seed=0? Where can I find the other models? Thank you.

Drizze999 opened this issue · 6 comments


Hi, sorry we did not release the weights for all the random seeds, but you can reproduce the results using the commands here: https://github.com/declare-lab/RelationPrompt#experiment-scripts

I see. I then ran these commands with other random seeds, but an error keeps occurring: "Can't load config for 'outputs/wrapper/fewrel/unseen_x_seed_x/generator/model'". How can I solve this problem?

Hi, can you provide the commands that caused the error? For example, I ran the following commands and saw no error on Python 3.7.12 with a V100 GPU.

python wrapper.py main \
  --path_train outputs/data/splits/zero_rte/fewrel/unseen_15_seed_4/train.jsonl \
  --path_dev outputs/data/splits/zero_rte/fewrel/unseen_15_seed_4/dev.jsonl \
  --path_test outputs/data/splits/zero_rte/fewrel/unseen_15_seed_4/test.jsonl \
  --save_dir outputs/wrapper/fewrel/unseen_15_seed_4

python wrapper.py run_eval \
  --path_model outputs/wrapper/fewrel/unseen_15_seed_4/extractor_final \
  --path_test outputs/data/splits/zero_rte/fewrel/unseen_15_seed_4/test.jsonl \
  --mode multi

I found out that my GPU has less memory than yours, so I changed the batch size to half of the original one. That solved the error! Thank you so much for your reply!

No problem. To maintain the same effective batch size, you can use gradient accumulation, for example as in the demo.
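For context, the idea behind gradient accumulation is that the effective batch size equals the per-step batch size times the number of accumulation steps, so halving the per-step batch and doubling the accumulation steps keeps the optimizer update equivalent. Below is a minimal PyTorch-style sketch of the concept only; the model, loader, and optimizer are placeholders, and the assumption that the model call returns a scalar loss is mine, not the actual RelationPrompt demo code.

import torch

def train_with_accumulation(model, loader, optimizer, accumulation_steps=2):
    """Accumulate gradients over several small batches before each optimizer step.

    With a per-step batch of B and accumulation_steps = k, the effective
    batch size is B * k, so halving B and doubling k keeps it unchanged.
    """
    model.train()
    optimizer.zero_grad()
    for step, (inputs, labels) in enumerate(loader):
        loss = model(inputs, labels)             # assumption: the model returns a scalar loss
        (loss / accumulation_steps).backward()   # scale so the summed gradient matches one large batch
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()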

OK! I will try a larger gradient accumulation setting.