google-deepmind/funsearch

Evaluator sandbox function potentially missing 'sample' argument

KevinH48264 opened this issue · 1 comment

In implementation/evaluator.py, we have
test_output, runs_ok = self._sandbox.run(program, self._function_to_run, current_input, self._timeout_seconds)

program = the full original program specification
self._function_to_run = the name of the function in the specification that is run to evaluate and score a candidate
current_input = the current test input to run on
self._timeout_seconds = the evaluation timeout, defaulting to 30 seconds
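
For context, here is a rough, paraphrased sketch of how that call sits inside `Evaluator.analyse` in implementation/evaluator.py (names and details are abridged rather than copied verbatim, so treat this as a sketch, not the exact code):

```python
# Paraphrased sketch of Evaluator.analyse; see implementation/evaluator.py for the real code.
def analyse(self, sample: str, island_id, version_generated) -> None:
  """Compiles a sampled function into a full program and scores it on all inputs."""
  # The sample (the newly generated function body) is spliced into the template
  # here, replacing the body of the function being evolved.
  new_function, program = _sample_to_program(
      sample, version_generated, self._template, self._function_to_evolve)

  scores_per_test = {}
  for current_input in self._inputs:
    # The sandbox executes `function_to_run` (the scoring entry point) on the
    # input; that function in turn calls the evolved function inside `program`.
    test_output, runs_ok = self._sandbox.run(
        program, self._function_to_run, current_input, self._timeout_seconds)
    if runs_ok and test_output is not None:
      scores_per_test[current_input] = test_output

  if scores_per_test:
    self._database.register_program(new_function, island_id, scores_per_test)
```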

However, it seems to me that we want the sandbox to run the sample (new_function) rather than self._function_to_run, unless self._function_to_run is updated to be the sample / new_function somewhere that I'm missing?
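
(For reference, in the example specifications the two names refer to different functions: self._function_to_run is the scoring entry point marked with `@funsearch.run`, while the function being evolved is marked with `@funsearch.evolve` and is called from it. An abridged sketch in the style of the cap set specification, with `solve` and other helpers omitted:)

```python
# Abridged spec sketch (cap set-style); `solve` and other helpers are omitted.
@funsearch.run
def evaluate(n: int) -> int:
  """The scoring entry point: this is what the sandbox runs as `function_to_run`."""
  capset = solve(n)  # builds a cap set greedily, calling `priority` internally
  return len(capset)

@funsearch.evolve
def priority(el: tuple[int, ...], n: int) -> float:
  """The evolved function: each new sample replaces this body via _sample_to_program."""
  return 0.0
```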

Happy to make a PR with this edit to make the implementation details easier and clearer to follow, but I just wanted to check whether my understanding is correct.

It looks like the key is actually in implementation/evaluator.py: the _sample_to_program() function is called before self._sandbox.run().

_sample_to_program() takes the 'sample' (the new function) and updates 'program' (a code_manipulation.Program instance) so that program.get_function(function_to_evolve) holds the newly generated code rather than the original body from the program specification.
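
A rough, paraphrased sketch of that step under the assumptions above (the real version in implementation/evaluator.py also trims the sampled text down to a bare function body and renames versioned calls such as priority_v2 back to the original name before splicing it in):

```python
import copy

from implementation import code_manipulation

# Paraphrased sketch of _sample_to_program; trimming and call renaming omitted.
def _sample_to_program(
    generated_code: str,
    template: code_manipulation.Program,
    function_to_evolve: str,
) -> tuple[code_manipulation.Function, str]:
  """Returns the evolved function and the full program text to execute."""
  body = generated_code                         # assume this is already a clean body
  program = copy.deepcopy(template)             # start from the original specification
  evolved_function = program.get_function(function_to_evolve)
  evolved_function.body = body                  # swap in the sampled body
  # Rendering `program` back to text yields the specification with the new body;
  # this is the `program` string handed to self._sandbox.run(...).
  return evolved_function, str(program)
```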