cvlab-columbia/viper

Problem with maximum context length using text-davinci-003

cathyxl opened this issue · 1 comments

Hi, since Codex is not available anymore, I've tried to use text-davinci-003, but OpenAI always sends back this error:

"This model's maximum context length is 4097 tokens, however, you requested 5270 tokens (4758 in your prompt; 512 for the completion). Please reduce your prompt; or completion length."

How do you deal with the max context length problem?

Hi, Codex (code-davinci-002) had a context size of 8,192 tokens, so this was not a problem. To avoid this issue, you may want to remove the VideoSegment part of the prompt if you only want to apply it to images; see the prompt we released for the chat versions for an example. Otherwise (if video is important), you can try removing methods from ImagePatch that are not necessary for your use case.
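The idea above (dropping optional prompt sections until prompt + completion fits the 4,097-token window) can be sketched roughly as follows. This is a hypothetical helper, not part of the ViperGPT codebase: it uses a crude ~4-characters-per-token estimate, and the `estimate_tokens`, `fits`, and `trim_prompt` names are assumptions. For exact counts you would use OpenAI's tiktoken library instead of the heuristic.

```python
# Sketch: keep prompt + completion within text-davinci-003's 4,097-token limit.
# Assumptions: the ~4-chars-per-token estimate is rough; for exact counts,
# use OpenAI's tiktoken library. All names here are illustrative.

MAX_CONTEXT = 4097       # text-davinci-003 context window (tokens)
COMPLETION_TOKENS = 512  # tokens reserved for the model's completion


def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English/code."""
    return len(text) // 4 + 1


def fits(prompt: str) -> bool:
    """True if the prompt plus the reserved completion fits the window."""
    return estimate_tokens(prompt) + COMPLETION_TOKENS <= MAX_CONTEXT


def trim_prompt(prompt: str, removable_sections: list[str]) -> str:
    """Drop optional sections (e.g. the VideoSegment class, or unused
    ImagePatch methods) until the prompt fits the context window."""
    for section in removable_sections:
        if fits(prompt):
            break
        prompt = prompt.replace(section, "")
    return prompt
```

For example, you could pass the full VideoSegment class definition from the prompt as one of the `removable_sections` when running on images only.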