[Question] LLM prompt evolution optimisation with feedback
Opened this issue · 1 comment
filbert-c commented
Hi, I have been looking through the various examples in the LLM prompt optimisation. I am trying to figure out how to optimise my own prompt by testing with the tutorial code. I notice llm_feedback is set to true, which supposedly improves performance. However, unlike the circle packing with artifacts example, the evaluator.py in the LLM prompt optimisation example does not seem to have the artifact code option available.
So if it is not implemented, does that mean this option can't be used? If I understand correctly, the feedback provides incorrect examples to the LLM optimiser based on the current prompt.
codelion commented
The artifacts side-channel was meant to pass program errors back during evaluation so that the LLM can correct them. With the prompt optimizer example we are not generating code but a new prompt, so the side-channel is not used in that evaluator.
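For illustration, an evaluator that does use the side-channel looks roughly like this. This is a minimal sketch modeled on the circle packing with artifacts example; the `EvaluationResult` import path and its `metrics`/`artifacts` fields are assumptions taken from that example, and the scoring is placeholder.

```python
# Minimal sketch of an artifacts-aware evaluator, modeled on the circle
# packing with artifacts example. The EvaluationResult import path and
# its metrics/artifacts fields are assumptions taken from that example.
import subprocess

from openevolve.evaluation_result import EvaluationResult


def evaluate(program_path):
    try:
        # Run the evolved program and capture stderr so any failure can
        # be fed back to the LLM through the artifacts side-channel.
        proc = subprocess.run(
            ["python", program_path],
            capture_output=True,
            text=True,
            timeout=60,
        )
    except subprocess.TimeoutExpired:
        return EvaluationResult(
            metrics={"score": 0.0},
            artifacts={"failure": "program timed out after 60s"},
        )

    if proc.returncode != 0:
        # Failed run: score zero and attach the traceback so the next
        # generation's prompt includes the error to correct.
        return EvaluationResult(
            metrics={"score": 0.0},
            artifacts={"stderr": proc.stderr},
        )

    # Successful run: return metrics only; no error artifacts needed.
    return EvaluationResult(metrics={"score": 1.0})
```

Since the prompt optimizer evaluator scores a prompt rather than executing generated code, there is no traceback to attach, which is why you don't see this pattern there.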