HTTPException: Status code: 500 (utility check)
NivC opened this issue · 2 comments
NivC commented
Dear Spylab,
Validating our final model, we got:
```json
{
  "request": {
    "model": "openai/gpt-3.5-turbo-1106",
    "api_keys": null,
    "small": false
  },
  "result": {
    "utility": 0.629,
    "threshold": 0.483,
    "passed": true,
    "additional_info": {
      "avg_share_of_failed_queries": 0.016,
      "sample_errors": [
        "HTTPException: Status code: 500, Detail: ('OpenAI API error: The server had an error while processing your request. Sorry about that! {\n \"error\": {\n \"message\": \"The server had an error while processing your request. Sorry about that!\",\n \"type\": \"server_error\",\n \"param\": null,\n \"code\": null\n }\n}\n 500 {'error': {'message': 'The server had an
```
So it seems to pass, as far as we can tell.
Should we be worried about the error?
Niv.
dpaleka commented
Yeah, our implementation had an issue where retrying failed queries affected how long the endpoint takes,
so we just report how many queries failed upstream. If there's an error other than an OpenAI/TogetherAI API issue, then that's a cause for concern (for either the server or your defense, depending on the error).
It seems you've successfully tested on 98.4% of the data and got a utility much higher than the threshold, so there's no need to worry about this in particular.
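The arithmetic behind that conclusion can be sketched as follows (a minimal check, assuming the evaluation passes when the measured utility exceeds the reported threshold; the field names simply mirror the JSON output above):

```python
# Values copied from the reported evaluation result.
result = {
    "utility": 0.629,
    "threshold": 0.483,
    "avg_share_of_failed_queries": 0.016,
}

# Share of queries that succeeded despite the transient API 500s.
success_share = 1.0 - result["avg_share_of_failed_queries"]

# Assumed pass criterion: measured utility exceeds the threshold.
passed = result["utility"] > result["threshold"]

print(f"tested on {success_share:.1%} of the data")  # 98.4%
print(f"passed: {passed}")  # True
```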
NivC commented
Thanks!