EOS token '\n' not working properly in llama3
sjaelee25 opened this issue · 1 comment
There is an issue with '\n' not working as an EOS token in Llama-3. Encoding '\n' with tokenizer.encode returns token ID 198, but generation does not stop at that token and continues producing subsequent text.
```python
eos_token_id = base_model.tokenizer.encode("\n", bos=False, eos=False)[-1]
```
In contrast, other strings such as 'Q' work correctly, and with Llama-2 every string tested, including '\n', stops generation as expected.
Could you please look into this issue?
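For anyone trying to reproduce this, below is a minimal sketch using the Hugging Face `transformers` stack. This is an assumption on my part: the snippet above comes from a codebase whose `encode(s, bos=..., eos=...)` signature matches Meta's reference `llama` repo, and the model name and prompt here are placeholders.

```python
# Minimal reproduction sketch, assuming a Hugging Face Llama-3 checkpoint.
# The model name and prompt are placeholders, not from the original report.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Meta-Llama-3-8B"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)

# '\n' encodes to a single token (198 for Llama-3), as in the report above.
eos_token_id = tok.encode("\n", add_special_tokens=False)[-1]

prompt = "Q: What is 2 + 2?\nA:"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64, eos_token_id=eos_token_id)
# With Llama-3 the output often runs past the first newline, because the
# model tends to emit merged newline tokens (e.g. "\n\n") rather than 198.
print(tok.decode(out[0]))
```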
Yes, Llama-3's tokenization differs slightly from other models': for example, `\n\n` is a different token from `\n`. To use Llama-3, you may want to experiment with the tokenizer and work out which token is really the desired eos_token for your use case.
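To illustrate the difference, here is a rough inspection sketch, again assuming the Hugging Face tokenizer for Llama-3; the exact IDs it prints are checkpoint-dependent, and `newline_ids` is just a name I made up for the candidate stop tokens.

```python
# Inspection sketch, assuming the Hugging Face tokenizer for Llama-3.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Llama-3's BPE merges runs of newlines into single tokens, so "\n" and
# "\n\n" map to different IDs, and the model may rarely emit plain "\n".
for s in ["\n", "\n\n", "\n\n\n", "A\n"]:
    ids = tok.encode(s, add_special_tokens=False)
    print(repr(s), "->", ids)

# One workaround: collect every ID that can end a newline-terminated string
# and pass them all as stop tokens (transformers accepts a list here).
newline_ids = sorted({tok.encode(s, add_special_tokens=False)[-1]
                      for s in ["\n", "\n\n", "A\n"]})
# model.generate(..., eos_token_id=newline_ids)
```

Whether stopping on `\n\n` is actually desirable depends on the use case, which is why checking the tokenizer's behavior directly is the safest first step.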