How do you train KE and MEND with CounterFact?
Zce1112zslx opened this issue
Zce1112zslx commented
As described in your paper, "To encourage fair comparison on both zsRE and COUNTERFACT tasks, we additionally train KE-zsRE and KE-CF models on size-10,000 subsets of the respective training sets," and, "Again, for fair comparison, we train new versions of MEND (MEND-zsRE, MEND-CF) on the same sets that KE-zsRE and KE-CF were trained on."
Which 10,000 records do you use to train KE-CF and MEND-CF?
Besides, "Table 4 showcases quantitative results on GPT-2 XL (1.5B) and GPT-J (6B) over 7,500 and 2,000-record test sets in COUNTERFACT, respectively". Which 7,500 or 2,000 records do you use to evaluate all baselines?
Thank you :-)