openreview/openreview-expertise

Integrate Standard ACL Paper Matching Scorer


Hi,

We've been using a different method for calculating affinity scores for ACL conferences that is very fast and also seems to be relatively effective: https://github.com/acl-org/reviewer-paper-matching/

It'd be nice to integrate this into this package to make it compatible with OpenReview's recommendation system. I'd be happy to do this when I have time, but my time is pretty limited nowadays (and I'm not very familiar with the openreview-expertise package yet), so if someone else has time to do so, that would be greatly appreciated.

For reference, the relevant part for calculating the affinity scores is here: https://github.com/acl-org/reviewer-paper-matching/blob/master/suggest_reviewers.py#L981

Thanks for all your feedback @neubig! I think implementing more algorithms in OpenReview's expertise system is a great idea.

Hi, Graham (@neubig), I was looking at the code block you mentioned above and had a couple of questions. suggest_reviewers.py expects a trained model to produce the embeddings for the submissions and the reviewer data.

Would you like to integrate this method of calculating affinity scores with the existing models in the openreview-expertise repo, or as a separate entity that uses the models from the reviewer-paper-matching repo?

Hi @purujitgoyal! Thanks for helping, and sorry about the late reply. I'm afraid I don't really understand the distinction between the two options you presented, though.

To clarify: in the reviewer-paper-matching repository linked above, there is a method for calculating affinity scores based on discriminatively trained embeddings. This seems to work pretty well, and qualitatively the matches I've gotten with it seemed a bit better than those from the models implemented in the openreview-expertise repository. The code that calculates these affinity scores is here: https://github.com/acl-org/reviewer-paper-matching/blob/master/suggest_reviewers.py#L981
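
Roughly, that code embeds each submission and each of a reviewer's past papers with the trained model, computes similarities between the two sets, and aggregates them per reviewer. Here's a minimal numpy sketch of the idea (this is not the actual suggest_reviewers.py code, and the real aggregation may differ, e.g. a weighted top-k instead of a plain max):

```python
# Sketch only: assumes submissions and reviewer papers have already been
# embedded with the trained model from reviewer-paper-matching.
import numpy as np

def affinity_scores(submission_embs, reviewer_paper_embs):
    """Score every submission against every reviewer.

    submission_embs:     (num_submissions, dim) array
    reviewer_paper_embs: {reviewer_id: (num_papers, dim) array}
    Returns {reviewer_id: (num_submissions,) array of affinity scores}.
    """
    # L2-normalize so that dot products are cosine similarities
    subs = submission_embs / np.linalg.norm(submission_embs, axis=1, keepdims=True)
    scores = {}
    for reviewer, paper_embs in reviewer_paper_embs.items():
        papers = paper_embs / np.linalg.norm(paper_embs, axis=1, keepdims=True)
        sim = subs @ papers.T                 # (num_submissions, num_papers)
        scores[reviewer] = sim.max(axis=1)    # best-matching paper per submission
    return scores
```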

It would be nice if, when we run the openreview-expertise code, these affinity scores could be calculated and used in place of the other affinity-scoring options, such as specter+mfr.
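
For concreteness, I could imagine the integration surfacing to users as just another model choice in the expertise config. A hypothetical sketch (the field names under `model_params` are made up for illustration, not the package's actual schema):

```json
{
    "name": "acl_affinity_scores",
    "match_group": "aclweb.org/ACL/2021/Conference/Reviewers",
    "paper_invitation": "aclweb.org/ACL/2021/Conference/-/Blind_Submission",
    "model": "acl-reviewer-paper-matching",
    "model_params": {
        "pretrained_model_path": "/path/to/similarity-model",
        "aggregation": "max"
    }
}
```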

Does this clarify things?

I see. So, if I understand correctly, the user will provide a pre-trained model to calculate the embeddings, and we don't have to train a model on OpenReview data, right? @neubig