huggingface/tokenizers

Assign `<unusedXX>` tokens with `special_tokens` without growing vocab size

jacobwjs opened this issue · 3 comments

I'm trying to modify the google/gemma-7b tokenizer for instruction-tuning purposes. My goal is to replace some of the "unused" tokens that were specifically added to the tokenizer with my own "custom" tokens. I want these custom tokens to be treated as "special" (i.e. not normalized, stripped, etc.), but this seems to be impossible with the current API.

What I would like to do is some version of the following:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

custom_tokens = ['<|im_start|>', '<|im_end|>']
unused_tokens = ['<unused1>', '<unused2>']

# 'tokens_to_replace' is the API I wish existed; it is not a real argument today.
tokenizer.add_special_tokens({'additional_special_tokens': custom_tokens,
                              'tokens_to_replace': unused_tokens})
```

Given that many open-sourced models/tokenizers specifically reserve a set of unused tokens for exactly this purpose, I would like to make use of them without growing the vocabulary, and therefore without having to resize the model's embedding matrix.
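For context, this is the standard route I'm trying to avoid, a rough sketch: `add_special_tokens` appends new IDs at the end of the vocab, which then forces an embedding resize.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")

# Standard approach: new special tokens get fresh IDs appended to the vocab...
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|im_start|>", "<|im_end|>"]}
)

# ...so the embedding matrix has to grow to match, which is exactly what
# I want to avoid, since unused slots already exist in the vocab.
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```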

I've tried manually manipulating the vocab and assigning the appropriate dicts on the forward and reverse passes (encoder, decoder), but nothing seems to work.

How can I make use of the unused tokens, ensure they are treated as "special", and avoid growing the vocabulary of the tokenizer and the model embedding?
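For what it's worth, the closest workaround I've found is rewriting the serialized tokenizer.json directly, renaming the unused entries so the token IDs never change. This is only a sketch under assumptions about Gemma's tokenizer.json layout (a BPE `model.vocab` dict plus an `added_tokens` list); depending on the transformers version, tokenizer_config.json / special_tokens_map.json may need matching edits as well.

```python
import json
from transformers import AutoTokenizer

# Hypothetical mapping from Gemma's unused placeholders to my custom tokens.
replacements = {"<unused1>": "<|im_start|>", "<unused2>": "<|im_end|>"}

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
tokenizer.save_pretrained("gemma-7b-custom")

path = "gemma-7b-custom/tokenizer.json"
with open(path) as f:
    state = json.load(f)

# Rename the placeholders in the base vocab; IDs (and vocab size) stay the same.
vocab = state["model"]["vocab"]
new_ids = {}
for old, new in replacements.items():
    new_ids[new] = vocab[old]
    vocab[new] = vocab.pop(old)

# Register the renamed tokens as special added tokens so they are matched
# atomically and never normalized, stripped, or split.
existing = {t["content"]: t for t in state["added_tokens"]}
for old, new in replacements.items():
    if old in existing:
        existing[old]["content"] = new
        existing[old]["special"] = True
    else:
        state["added_tokens"].append({
            "id": new_ids[new],
            "content": new,
            "single_word": False,
            "lstrip": False,
            "rstrip": False,
            "normalized": False,
            "special": True,
        })

with open(path, "w") as f:
    json.dump(state, f, ensure_ascii=False)

# Reload; len(tokenizer) is unchanged, so no embedding resize is needed.
tokenizer = AutoTokenizer.from_pretrained("gemma-7b-custom")
```

It works in my quick tests, but it is fragile, which is why a supported `tokens_to_replace`-style option would be much nicer.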

That is indeed something we should do.

Beautiful. That would also mostly resolve another issue: huggingface/trl#1412 (comment)
