Retrying relevance score prompt if score was a float or response isn't valid JSON
jamesbraza commented
Our LLM response --> Context logic is a bit fragile:
- #747 shows the LLM responding with a float relevance score.
- #736 shows the LLM failing to respond with valid JSON (though actually the issue seems to have been a token limitation).
What currently happens is that this blows up `Docs.aquery`.
However, this should not be a critical failure because:
- It's not in the caller's (e.g. an agent's) control; it's a failure internal to paper-qa.
- We can retry the summary prompt once or twice (see the sketch after this list), saying something like:
"We failed to parse valid JSON according to exception {exc!r}. Please respond with JSON."
"We failed to extract a valid score according to exception {exc!r}. Please provide an integer score in 1-10."