Retry with feedback and retry without feedback.
rnbwdsh commented
Two of the most successful prompting techniques are Tree of Thought (ToT) and Chain of Thought (CoT).
This paper also suggests some other, easily implementable prompting techniques.
This is effectively "run N generators in parallel" / "retry from nothing" versus "retry with feedback from whatever go-compile and go-test print".
You could easily do this with a test-execution-context variable and some modified error handling.
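A minimal sketch of both retry modes, assuming a hypothetical `generate` LLM call and a hypothetical `compileAndTest` helper standing in for the existing go-compile / go-test step (names and signatures are placeholders, not the repo's actual API):

```go
package main

import (
	"context"
	"fmt"
)

// generate is a placeholder for the real LLM call; prompt may or may not
// already contain feedback from earlier attempts.
func generate(ctx context.Context, prompt string) (string, error) {
	return "", fmt.Errorf("not implemented")
}

// compileAndTest is a placeholder for running go build / go test on the
// candidate code; it returns the combined output and whether it passed.
func compileAndTest(ctx context.Context, code string) (output string, ok bool) {
	return "", false
}

// retryWithoutFeedback regenerates from the same prompt up to n times
// ("retry from nothing"); the attempts could equally run in parallel.
func retryWithoutFeedback(ctx context.Context, prompt string, n int) (string, error) {
	for i := 0; i < n; i++ {
		code, err := generate(ctx, prompt)
		if err != nil {
			continue
		}
		if _, ok := compileAndTest(ctx, code); ok {
			return code, nil
		}
	}
	return "", fmt.Errorf("no passing candidate after %d attempts", n)
}

// retryWithFeedback appends the compile/test output to the prompt after each
// failure, so the model can repair its previous attempt.
func retryWithFeedback(ctx context.Context, prompt string, n int) (string, error) {
	cur := prompt
	for i := 0; i < n; i++ {
		code, err := generate(ctx, cur)
		if err != nil {
			continue
		}
		out, ok := compileAndTest(ctx, code)
		if ok {
			return code, nil
		}
		cur = fmt.Sprintf("%s\n\nPrevious attempt:\n%s\n\nCompiler/test output:\n%s\nPlease fix the code.",
			prompt, code, out)
	}
	return "", fmt.Errorf("no passing candidate after %d attempts", n)
}

func main() {
	if code, err := retryWithFeedback(context.Background(), "implement FizzBuzz in Go", 3); err == nil {
		fmt.Println(code)
	}
}
```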
This is probably mostly relevant for rather weak models, but also for comparisons with cost in mind: gpt4-32k costs 30€ per million input tokens and 60€ per million output tokens, while gpt3.5 costs only 0.5€ per million tokens, so you can effectively run 20-shot gpt3.5 instead of a single gpt4 call (20 × 0.5€ = 10€ per million input tokens, still only a third of gpt4's 30€).