openai/human-eval

Prompt used in APPS

henryhungle opened this issue · 2 comments

Thank you for the very interesting work!

I have one question about the natural language prompt used in APPS.

Did you directly use the original prompts from the APPS benchmark (as coded in the generate_prompt function)?

You mentioned in the paper that 'we append a single input/output example from the task description to the docstring as a formatting hint.' How did you do this precisely? Did you need to construct a new prompt, including a function signature and a docstring with an input/output example?

We constructed a new prompt; the exact prompt we used is below. We did not tune it in any special way, so I suspect other prompts would work too. Hope that helps!

# Language: Python 3
# Task: Synthesize program

"""
Contains programming exercises for single functions specified by their
doc-strings and with solutions in simple code and with a lot of comments
that explain what is done and why and how it is related to the specification.
"""

# Example 1.

"""
Given an array of integers, find if the array contains any duplicates.

Your function should return true if any value appears at least twice in the array, and it should return false if every element is distinct.

-----Input-----

The first line contains a list of integers.


-----Output-----

Output true if there are duplicates and false otherwise.

-----Examples-----
Input
[1,2,3,1]

Output
true

Input
[1,2,3,4]

Output
false

Input
[1,1,1,3,3,4,3,2,4,2]

Output
true

"""

# Solution:
# Read the list of integers from the first input line.
values = eval(input())
# A set drops duplicates, so the lengths differ exactly when a duplicate exists.
if len(values) == len(set(values)):
    print("false")
else:
    print("true")

# Example 2.
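To make the construction concrete, here is a rough sketch of how a prompt in this format could be assembled programmatically, with the APPS task placed under "# Example 2." and one input/output pair appended to its docstring as the formatting hint. The helper build_apps_prompt and its arguments are hypothetical illustrations, not code from this repository or from the APPS benchmark.

# Hypothetical sketch: assemble a few-shot prompt in the format quoted above.
# build_apps_prompt and its arguments are illustrative only.

PROMPT_HEADER = (
    "# Language: Python 3\n"
    "# Task: Synthesize program\n\n"
    '"""\n'
    "Contains programming exercises for single functions specified by their\n"
    "doc-strings and with solutions in simple code and with a lot of comments\n"
    "that explain what is done and why and how it is related to the specification.\n"
    '"""\n'
)

def build_apps_prompt(worked_example, task_description, example_input, example_output):
    """Wrap the new task in a docstring, append one input/output pair as a
    formatting hint, and place it after the header and the worked example."""
    task_block = (
        '"""\n'
        + task_description.strip() + "\n\n"
        + "-----Examples-----\n"
        + "Input\n" + example_input + "\n\n"
        + "Output\n" + example_output + "\n"
        + '"""\n'
    )
    return (
        PROMPT_HEADER
        + "\n# Example 1.\n\n" + worked_example
        + "\n# Example 2.\n\n" + task_block
        + "\n# Solution:\n"
    )

The model would then be sampled to continue the text after the final # Solution: marker, and the completion taken as the candidate program.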

Thank you for your answer!