redotvideo/mamba-chat

sentencepiece version


Hi,
When I load the tokenizer of EleutherAI--gpt-neox-20b and zephyr-7b-base, the following error occurs:
ValueError: Couldn't instantiate the backend tokenizer from one of:
(1) a tokenizers library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.

Even after I install sentencepiece, the error still exists.
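In case it helps narrow this down: a common cause of "installed but still failing" is that sentencepiece was installed into a different environment than the one running the tokenizer code. A minimal stdlib-only check (not from the original report, just a diagnostic sketch) to run in the same interpreter:

```python
# Verify that sentencepiece is importable by *this* interpreter --
# pip may have installed it into a different environment.
import importlib.util
import sys

def has_package(name: str) -> bool:
    """Return True if `name` is importable by the current interpreter."""
    return importlib.util.find_spec(name) is not None

print(sys.executable)                  # which Python is actually running
print(has_package("sentencepiece"))   # should print True if installed here
```

If this prints False, the `pip install sentencepiece` likely targeted another interpreter; installing with `python -m pip install sentencepiece` from the same interpreter usually resolves that.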

My environment is Python 3.8 with sentencepiece 0.20.0. What is your environment?