facebookresearch/CodeGen

Question about the validation and test sets

runningmq opened this issue · 1 comment

Hi,

As the paper "Unsupervised Translation of Programming Languages" mentions, there are 852 parallel functions. So I checked the data in this repo's folders (each file contains one function with its unit test cases; there are in fact 852 unique filenames across the union of the python/java/cpp folders, counted with the sketch after this list) and found:

  • 698 cpp functions
  • 717 java functions
  • 702 python functions
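
For reference, here is roughly how I counted (a minimal sketch; the data root path and folder names are my assumptions about the repo layout, adjust to your checkout):

```python
# Count the unit-test files per language and the union of problem IDs.
# The root path below is an assumption, not the repo's documented layout.
from pathlib import Path

root = Path("data/transcoder_evaluation_gfg")

ids = {}
for lang in ("cpp", "java", "python"):
    # One file per function; the stem is the problem ID.
    ids[lang] = {p.stem for p in (root / lang).iterdir() if p.is_file()}
    print(lang, len(ids[lang]))

# Union of problem IDs across the three languages (852 in my run).
print("union:", len(ids["cpp"] | ids["java"] | ids["python"]))
```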

These counts differ from the validation/test set sizes reported in Table 4 of the paper:

  • C++: 231 validation / 466 test
  • Java: 234 validation / 481 test
  • Python: 237 validation / 463 test

My other question is about the numbers of function pairs in Table 5 of the paper. I wonder why C++ -> Java has 481 test functions while Java -> C++ has only 466. If my understanding is right, a given set of parallel functions should yield the same number of tests in both directions (Java to C++ or C++ to Java). Why does the count of test functions differ between C++ -> Java and Java -> C++ (and likewise for the other language pairs)?

Thanks a lot!

brozi commented

Hi and sorry for the late answer,
We created the tests by generating test cases for each problem and then running the generated tests on the ground truth. We considered the generated unit tests correct if they all pass on the ground truth, independently for each language. For instance, if the generated unit tests succeed for the Python version but not for the C++ version, we add the example to the valid or test set for X -> Python but not for Y -> C++. That explains why we don't have the same number of valid unit tests for C++ -> Java and Java -> C++, for instance.
It is debatable whether that's the right thing to do: we have seen a few examples where the functions were not exactly parallel (e.g. global variables in C++ but not in Python), which caused the tests to fail only in C++. Taking the intersection instead would reduce the number of valid/test examples, but it would probably also increase the average quality of the sets and result in higher scores. On the other hand, if we assume that translation will always fail when the functions are not parallel, the current version still correlates well with model performance and has lower variance, since we accept the extra correct examples for which only one of the test suites failed.
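
To make the filtering concrete, here is a minimal sketch of the selection logic (not our actual pipeline; `run_tests` is a hypothetical stand-in for executing a generated test suite against the ground-truth function in a given language):

```python
# Simplified sketch of the per-language filtering described above.
# run_tests(problem_id, lang) -> bool is a hypothetical stand-in for
# running the generated tests against the ground-truth function.

LANGS = ("cpp", "java", "python")

def build_eval_sets(problem_ids, run_tests):
    # Problems whose generated tests all pass on the ground truth,
    # checked independently per language.
    passing = {lang: {pid for pid in problem_ids if run_tests(pid, lang)}
               for lang in LANGS}
    # A pair (src, tgt) keeps a problem whenever the *target*-language
    # tests pass, so C++ -> Java and Java -> C++ can differ in size.
    kept = {(src, tgt): passing[tgt]
            for src in LANGS for tgt in LANGS if src != tgt}
    # The stricter alternative discussed above: require both sides to
    # pass (the intersection), which shrinks the sets.
    kept_strict = {(src, tgt): passing[src] & passing[tgt]
                   for src in LANGS for tgt in LANGS if src != tgt}
    return kept, kept_strict
```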

About the difference between the number of unit tests in the unit-test folder and what we actually test on: we removed a few functions that had too many tokens from our test set, but their generated tests are still in the folder. That's why there are one or two examples that have unit tests but that we didn't test on (e.g. for C++, 231 + 466 = 697, so one function was too long).
Tell me if you still have some questions.

More precisely, we remove all examples with more than 512 tokens in either the source or the target language. This corresponds to the test ID DYNAMIC_PROGRAMMING_SET_37_BOOLEAN_PARENTHESIZATION_PROBLEM for the test set of all language pairs, and MOBILE_NUMERIC_KEYPAD_PROBLEM for Java and Python in the validation set.
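
For completeness, the length filter itself amounts to something like the following (a sketch; `tokenize` is a hypothetical stand-in for the tokenizer we actually use):

```python
# Sketch of the 512-token length filter described above. `tokenize` is
# a hypothetical stand-in, not the pipeline's real tokenizer.
MAX_TOKENS = 512

def keep_example(src_code, tgt_code, tokenize):
    # Drop the pair if either side exceeds the limit.
    return (len(tokenize(src_code)) <= MAX_TOKENS and
            len(tokenize(tgt_code)) <= MAX_TOKENS)
```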