Yuqifan1117/CaCao

where is train.sh?


Where can I find the train.sh file referenced in 'bash train.sh TRANSGLOVE_novel'?

We use Scene-Graph-Benchmark.pytorch's train.sh, extended with CaCao, which you can start from and debug. However, in line with our follow-up work, we will clean up and release this part of the code later.


How do you actually evaluate the base and novel predicates in the open-world setting? I'm still not quite clear. I appreciate your assistance.

Is base the 'cacao/VG-SGG-base-EXPANDED-with-attri.h5', and novel the 'open-world/VG-SGG-zs-random-EXPANDED-with-attri.h5'?

And is vg+cacao the 'open-world/VG-SGG-zs-random-EXPANDED-with-attri.h5', and cacao the 'cacao/VG-SGG-base-EXPANDED-with-attri.h5'?

I'm quite confused about this and would appreciate your help. If possible, could you provide the code you use to evaluate the base and novel splits?

We apologize for the confusion caused by the file naming. We split the predicates into base and novel sets randomly; the resulting split is shown below:
[Image: the random split of predicates into base and novel sets]
We then use the unified embeddings with cross-modal prompts described in the paper to predict the unseen predicates. Finally, we compute the performance on the base and novel predicates in turn, in the same way as standard recall. By the way, vg+cacao is 'open-world/VG-SGG-zs-random-EXPANDED-with-attri.h5', vg is 'cacao/VG-SGG-base-zs-random-with-attri.h5', and cacao refers to the extra data that extends vg into vg+cacao.
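
For concreteness, here is a minimal sketch of the split evaluation described above: standard Recall@K is computed twice, once counting only the ground-truth triplets whose predicate is in the base set and once for the novel set. This is not the authors' released code; the predicate names, the split, and the triplet format are placeholders, and the real benchmark matches subject/object boxes by IoU rather than by entity id.

```python
# Minimal sketch (not the authors' released code) of split-wise Recall@K,
# assuming predictions are already ranked by confidence and that triplets are
# matched by entity id (real SGG evaluation matches boxes by IoU instead).

def recall_at_k(gt_per_image, pred_per_image, predicate_subset, k=50):
    """Recall@K counting only ground-truth triplets whose predicate is in predicate_subset.

    gt_per_image:   list (one entry per image) of sets of (subj, predicate, obj) triplets
    pred_per_image: list (one entry per image) of confidence-ranked lists of triplets
    """
    recalls = []
    for gt, preds in zip(gt_per_image, pred_per_image):
        gt_split = {t for t in gt if t[1] in predicate_subset}
        if not gt_split:
            continue  # this image has no ground truth for the current split
        hits = len(gt_split & set(preds[:k]))
        recalls.append(hits / len(gt_split))
    return sum(recalls) / max(len(recalls), 1)

# Hypothetical split and toy predictions, only to show the call pattern; the real
# split is the random base/novel partition shown in the image above.
base_predicates = {"on", "has", "wearing"}
novel_predicates = {"riding", "eating"}
gt = [{(0, "on", 1), (2, "riding", 3)}, {(4, "wearing", 5)}]
pred = [[(0, "on", 1), (2, "riding", 3), (6, "has", 7)], [(4, "wearing", 5)]]

print("base  R@50:", recall_at_k(gt, pred, base_predicates))
print("novel R@50:", recall_at_k(gt, pred, novel_predicates))
```

The only change relative to standard recall is that the ground-truth set is filtered per split before counting hits, so the base and novel numbers are computed from the same ranked predictions.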

Yep, this answer helped me a lot, thanks.