Ability to count tokens for models other than OpenAI
simonw opened this issue · 3 comments
simonw commented
Had a great tip on Discord about the Hugging Face tokenizers library. The quicktour (https://huggingface.co/docs/tokenizers/python/latest/quicktour.html#using-a-pretrained-tokenizer) says:
> You can load any tokenizer from the Hugging Face Hub as long as a `tokenizer.json` file is available in the repository.
And sure enough, this seems to work:
```python
>>> import tokenizers
>>> from tokenizers import Tokenizer
>>> tokenizer = Tokenizer.from_pretrained("TheBloke/Llama-2-70B-fp16")
Downloaded 1.76MiB in 0s
>>> tokenizer.encode("hello world")
Encoding(num_tokens=3, attributes=[ids, type_ids, tokens, offsets, attention_mask, special_tokens_mask, overflowing])
```
simonw commented
Anthropic have a tokenizer too: https://github.com/anthropics/anthropic-sdk-python/blob/main/src/anthropic/_tokenizers.py
marcothedeveloper123 commented
What if you don't know the origin of the model, and all you have to go by is its name? Is there baked-in metadata we can read that tells us which tokenizer to use?
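For Hugging Face Hub models the tokenizer files live in the repo itself, but API-only names like `gpt-4` carry no such metadata, so tools typically hard-code a mapping from model names to tokenizer backends. A minimal sketch (the prefix table and `guess_tokenizer` are made up for illustration):

```python
# Hypothetical mapping from model-name prefixes to tokenizer backends.
TOKENIZER_BY_PREFIX = {
    "gpt-": "tiktoken",      # OpenAI models
    "claude-": "anthropic",  # Anthropic models
}

def guess_tokenizer(model_name: str) -> str:
    for prefix, backend in TOKENIZER_BY_PREFIX.items():
        if model_name.startswith(prefix):
            return backend
    # Fall back to treating the name as a Hugging Face Hub repo id.
    return "huggingface"

print(guess_tokenizer("gpt-4"))
print(guess_tokenizer("TheBloke/Llama-2-70B-fp16"))
```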
NightMachinery commented
So what exactly can we use for Claude models? E.g., Sonnet 3.5.
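As far as I can tell the tokenizer in that `_tokenizers.py` file only covers legacy Claude models; for Claude 3.x (including Sonnet 3.5) Anthropic doesn't publish a local tokenizer and instead exposes a server-side counting endpoint through the SDK. A hedged sketch (the helper name and model id are illustrative, and the actual call needs `ANTHROPIC_API_KEY`):

```python
def count_tokens_request(model: str, text: str) -> dict:
    # Keyword arguments for client.messages.count_tokens(**request)
    # in the anthropic Python SDK (server-side token counting).
    return {
        "model": model,
        "messages": [{"role": "user", "content": text}],
    }

request = count_tokens_request("claude-3-5-sonnet-20241022", "hello world")
# With the SDK installed and an API key set, this is roughly:
#   import anthropic
#   client = anthropic.Anthropic()
#   print(client.messages.count_tokens(**request).input_tokens)
print(request)
```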