saulpw/aipl

Cost computation for GPT-4 is wrong.

Opened this issue · 4 comments

The cost of gpt-4 is:

(0.03 * prompt_tokens / 1000 + 0.06 * response_tokens / 1000)
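As a sketch, the formula above as a standalone function (the function name `gpt4_cost` is illustrative, not AIPL's actual API):

```python
def gpt4_cost(prompt_tokens: int, response_tokens: int) -> float:
    # GPT-4 pricing as quoted above: $0.03 per 1K prompt tokens,
    # $0.06 per 1K response tokens.
    return 0.03 * prompt_tokens / 1000 + 0.06 * response_tokens / 1000
```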

saulpw commented

Yes, but how do you know how many response_tokens you'll get, for a --dry-run? We could come up with a more precise estimation function, but I figured `2 * prompt_tokens * 0.03` was almost reasonable for a first pass. (Could be commented, obviously :)

I guess it's reasonable to estimate for a dry run - though I think that estimate is inaccurate enough that I'd rather not. We should be able to give the correct value for a real run though?

looks like we've got a few other price adjustments we should make too:

https://openai.com/blog/function-calling-and-other-api-updates

gpt-3.5-turbo: $0.0015 per 1K input tokens and $0.002 per 1K output tokens

gpt-3.5-turbo-16k will be priced at $0.003 per 1K input tokens and $0.004 per 1K output tokens.

we can at least compute the exact cost correctly for real calls since it's different for input/output tokens
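A minimal sketch of what a per-model price table could look like, using the rates quoted in this thread ($ per 1K tokens, split into input/output since they differ); the names `PRICES` and `api_cost` are hypothetical, not AIPL's current code:

```python
# $ per 1K tokens: (input rate, output rate), as quoted in this thread
PRICES = {
    'gpt-4': (0.03, 0.06),
    'gpt-3.5-turbo': (0.0015, 0.002),
    'gpt-3.5-turbo-16k': (0.003, 0.004),
}

def api_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    # Exact cost for a real call, where both token counts are known
    # from the API response's usage field.
    input_rate, output_rate = PRICES[model]
    return (input_rate * prompt_tokens + output_rate * completion_tokens) / 1000
```

For a real (non-dry-run) call, both token counts come back in the API response, so this gives the exact cost rather than an estimate.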