openai/evals

Do not back off on `openai.BadRequestError`

johny-b opened this issue · 1 comments

After the recent change we back off on `openai.APIError`. This means we also back off on `openai.BadRequestError`:

```python
>>> issubclass(openai.BadRequestError, openai.APIError)
True
```

I don't know all the cases where we can get a `BadRequestError`, but one of them is exceeding the model's context length:

```
[2023-11-14 11:09:32,241] [_common.py:105] Backing off openai_completion_create_retrying(...) for 17.0s (openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8001 tokens, however you requested 8184 tokens (7672 in your prompt; 512 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}})
```

In this case we should definitely avoid repeating the request: the request is malformed, so retrying it can never succeed.

(I found this on #1407, but I don't think this is related to this particular PR)

NOTE: the logic for backing off on `openai.APIError` lives in a few different places; I think all of them need the same fix.
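To illustrate the fix being asked for, here is a minimal sketch of a retry loop that only backs off on errors a later attempt could actually resolve. The exception classes are local stand-ins whose names mirror the `openai` v1 hierarchy (an assumption, defined here so the sketch runs without the SDK); the real fix landed in #1420.

```python
import random
import time

# Stand-ins for the openai exception hierarchy (assumption: names mirror
# openai-python v1, but these are defined locally for illustration).
class APIError(Exception): ...
class BadRequestError(APIError): ...          # 400: retrying never helps
class RateLimitError(APIError): ...           # 429: retrying can help
class APIConnectionError(APIError): ...       # network errors / timeouts
class InternalServerError(APIError): ...      # 5xx: retrying can help

# Only these could plausibly succeed on a later attempt.
RETRYABLE_ERRORS = (RateLimitError, APIConnectionError, InternalServerError)

def with_backoff(fn, max_tries=5, base_delay=0.01):
    """Call fn, retrying with jittered exponential backoff on retryable
    errors only. Non-retryable errors (e.g. BadRequestError) propagate
    immediately instead of being retried."""
    for attempt in range(max_tries):
        try:
            return fn()
        except RETRYABLE_ERRORS:
            if attempt == max_tries - 1:
                raise  # out of retries
            delay = base_delay * 2 ** attempt + random.random() * base_delay
            time.sleep(delay)
```

The key point is that the `except` clause matches a whitelist of retryable error classes rather than the broad `APIError` base class, so a 400 fails fast.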

Resolved in #1420. We now only back off on errors that could succeed on retry (429s, connection errors, timeouts, 5xxs).