LLMEvaluator that evaluates a model's output with an LLM
deep-diver opened this issue · 2 comments
This is a custom TFX component project idea.
Hoping to get some feedback from @rcrowe-google, @hanneshapke, @sayakpaul, @casassg.
Temporary Name of the component: LLMEvaluator
Behaviour
: LLMEvaluator evaluates a trained model's performance via a designated LLM service (e.g. PaLM, Gemini, ChatGPT, ...) by comparing the model's outputs against the labels provided by ExampleGen.
: LLMEvaluator takes an `instruction` parameter
that lets you specify the prompt sent to the LLM. This is needed because each LLM service may interpret the same prompt differently, and the prompt should also be tailored to each task. A minimal interface sketch follows below.
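Here is a rough sketch of what the component's interface could look like as a TFX Python function-based custom component. The parameter names (`instruction`, `llm_service`), the choice of artifact types, and the step outline are assumptions for illustration only, not a final design:

```python
from tfx.dsl.component.experimental.annotations import (
    InputArtifact,
    OutputArtifact,
    Parameter,
)
from tfx.dsl.component.experimental.decorators import component
from tfx.types.standard_artifacts import Examples, Model, ModelEvaluation


@component
def LLMEvaluator(
    examples: InputArtifact[Examples],
    model: InputArtifact[Model],
    evaluation: OutputArtifact[ModelEvaluation],
    instruction: Parameter[str],
    llm_service: Parameter[str] = 'palm',  # hypothetical; e.g. 'palm', 'gemini', 'chatgpt'
):
  """Judges a trained model's outputs against labels with an external LLM."""
  # Sketch of the intended logic (not implemented here):
  # 1. Read labeled examples produced by ExampleGen.
  # 2. Run the trained model to collect predictions.
  # 3. Send each (prediction, label) pair plus the task-specific `instruction`
  #    to the designated LLM service.
  # 4. Aggregate the returned judgments and write them to the `evaluation` artifact.
  raise NotImplementedError('Evaluation logic is left out of this sketch.')
```

In a pipeline this would sit downstream of ExampleGen and Trainer, much like the standard Evaluator component, with `instruction` supplied per task.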
Why
: It has become common practice to use an LLM service to evaluate models these days (especially when fine-tuning one of the open-source LLMs such as LLaMA).
@deep-diver Great component idea. How will you handle the different prompts for optimal performance?
Do you have code you could share?
Could this be used for HELM? https://crfm.stanford.edu/helm/latest/