Fix LLMAsJudge validation with OpenAIInferenceEngine
lilacheden commented
The current validation code states:
```python
if isinstance(self.inference_model, OpenAiInferenceEngine):
    if self.format and type(self.format) is not SystemFormat:
        raise ValueError(
            "Error in 'LLMAsJudge' metric. Inference model 'OpenAiInferenceEngine' does "
            "not support formatting. Please remove the format definition from the recipe"
            " (OpenAi Chat API take care of the formatting automatically)."
        )
```
If `OpenAiInferenceEngine` does not support formatting, the validation needs to ensure that `self.format` is an *empty* format, not merely any `SystemFormat`. As written, the check lets every `SystemFormat` through, even one that applies non-trivial formatting to the input.
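A minimal sketch of the stricter check the issue asks for. This is hypothetical: it assumes `SystemFormat` exposes its template via a `model_input_format` attribute and that the empty format is one whose template is just `"{source}"`; the stand-in class and the `validate_format` helper below are illustrative names, not unitxt's actual API.

```python
class SystemFormat:
    """Stand-in for unitxt's SystemFormat, for illustration only."""

    def __init__(self, model_input_format="{source}"):
        self.model_input_format = model_input_format


def validate_format(uses_openai_engine, fmt):
    """Raise unless `fmt` is absent or an effectively empty SystemFormat.

    Assumption: a format is "empty" when its template passes the source
    through unchanged, i.e. the template is exactly "{source}".
    """
    if not uses_openai_engine or fmt is None:
        return
    is_empty = (
        type(fmt) is SystemFormat
        and fmt.model_input_format.strip() == "{source}"
    )
    if not is_empty:
        raise ValueError(
            "Error in 'LLMAsJudge' metric. Inference model 'OpenAiInferenceEngine' does "
            "not support formatting. Please remove the format definition from the recipe"
            " (OpenAi Chat API take care of the formatting automatically)."
        )
```

With this version, a default (pass-through) `SystemFormat` is accepted, while a `SystemFormat` carrying a real template, e.g. `SystemFormat("[INST] {source} [/INST]")`, is rejected rather than silently passing the `type(...) is SystemFormat` test.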