niieani/gpt-tokenizer

Calculated tokens much higher than actual

Closed this issue · 10 comments

Qarj commented

Thanks for this. I've noticed a weird issue, though, both with this library and with the official code from OpenAI that I found a while back, before GPT-4 came out.

What is happening is that the token counts calculated by this tool are much higher than what the OpenAI API reports in the completion. For example, a prompt I just submitted to GPT-4 was calculated as 7810 tokens by this library, but when I got the completion back, OpenAI told me my prompt had 5423 tokens. I'm not sure if you have noticed something similar? The prompts I'm submitting are primarily Node.js code.

Qarj commented

As a workaround, I've noticed that when you request too many tokens, you get a 400 error very quickly, for example:

This model's maximum context length is 8192 tokens. However, you requested 13674 tokens (7469 in the messages, 6205 in the completion). Please reduce the length of the messages or completion.

So I parse the messages token count out of that error and resubmit with a max_tokens calculated as follows: 8192 - 7469 - 1.
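
A minimal sketch of that retry workaround (the function name and regex are illustrative, not part of this library; the error format is the one quoted above):

```ts
// Pull the prompt token count out of the 400 error text and compute
// a max_tokens that fits the context window.
const GPT4_CONTEXT_LIMIT = 8192

function maxTokensFromError(errorMessage: string): number | undefined {
  // Matches e.g. "(7469 in the messages, 6205 in the completion)"
  const match = errorMessage.match(/\((\d+) in the messages/)
  if (!match) return undefined
  const promptTokens = Number(match[1])
  // e.g. 8192 - 7469 - 1 for the error quoted above
  return GPT4_CONTEXT_LIMIT - promptTokens - 1
}
```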

niieani commented

Hi @Qarj! Thanks for flagging this problem.
As @ricardomatias noticed in #5, the tokenizer is currently using the r50k_base encoding, which isn't the one used by GPT-4; hence the token offset. I'm working on v2, which will allow choosing which encoding to use, so it will tokenize correctly for GPT-4 specifically.
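
A minimal sketch of what that looks like, assuming v2's model-specific entry points (import path and exports as documented in the v2 README):

```ts
// v2: importing from a model-specific path selects the right encoding
// (cl100k_base for GPT-4) instead of the old r50k_base default.
import { encode, isWithinTokenLimit } from 'gpt-tokenizer/model/gpt-4'

const prompt = 'const add = (a: number, b: number) => a + b'
console.log(encode(prompt).length) // token count under cl100k_base

// Returns false when over the limit, otherwise the token count,
// so it can stop encoding early instead of tokenizing everything.
console.log(isWithinTokenLimit(prompt, 8192))
```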

Qarj commented

Thanks very much for addressing this! I will definitely use this feature in v2 when it is out.

🎉 This issue has been resolved in version 2.0.0-beta.1 🎉

The release is available on:

Your semantic-release bot 📦🚀

Qarj commented

Thanks very much for this! Am using it already :)

Qarj commented

So it seems it's much closer now to the actual token count: in a test I did, the prompt was calculated as 998 tokens by the library but 1003 tokens according to OpenAI. I suspect that if we allow a 50-token margin, our completion token requests should always be within the limit.
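
A proactive version of that margin idea, as a sketch (the constants are illustrative, and encode is assumed to come from the v2 model-specific entry point):

```ts
import { encode } from 'gpt-tokenizer/model/gpt-4'

const CONTEXT_LIMIT = 8192
const SAFETY_MARGIN = 50 // absorbs the small library-vs-API offset

// Budget max_tokens from the library's estimate plus a margin,
// instead of waiting for the API's 400 error and retrying.
function completionBudget(prompt: string): number {
  return CONTEXT_LIMIT - encode(prompt).length - SAFETY_MARGIN
}
```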

niieani commented

Interesting. I wonder if OpenAI adds 5 extra tokens to each request? The tokenization algorithm itself should be exactly the same as OpenAI's.

Thanks for investigating.
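
For what it's worth, OpenAI's cookbook attributes a small constant offset like this to the chat format: each message is wrapped in a few fixed tokens, and the reply is primed with a few more. A rough sketch of that accounting, using the per-message constants the cookbook lists for gpt-4 (an assumption about where the extra tokens come from, not this library's code):

```ts
import { encode } from 'gpt-tokenizer/model/gpt-4'

interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

const TOKENS_PER_MESSAGE = 3 // fixed wrapper tokens around each message
const REPLY_PRIMING = 3 // tokens that prime the assistant's reply

function estimateChatTokens(messages: ChatMessage[]): number {
  let total = REPLY_PRIMING
  for (const { role, content } of messages) {
    total += TOKENS_PER_MESSAGE + encode(role).length + encode(content).length
  }
  return total
}
```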

🎉 This issue has been resolved in version 2.0.0 🎉

The release is available on:

Your semantic-release bot 📦🚀

niieani commented

@Qarj I've added the new encodeChat function, which should return correct values for chats!
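
Usage looks roughly like this (a sketch based on the v2 README; passing the model name explicitly is an assumption):

```ts
import { encodeChat } from 'gpt-tokenizer'

const chat: { role: 'system' | 'user' | 'assistant'; content: string }[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'How many tokens is this chat?' },
]

// encodeChat includes the chat-format wrapper tokens, so the length
// should line up with the prompt token count the API reports.
const tokens = encodeChat(chat, 'gpt-4')
console.log(tokens.length)
```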

Qarj commented

Thanks very much for this! :)