keras-team/keras-preprocessing

Tokenizer ignores `filters` if `char_level` is `True`

paw-lu opened this issue · 1 comment

Please make sure that the boxes below are checked before you submit your issue.
If your issue is an implementation question, please ask your question on StackOverflow or on the Keras Slack channel instead of opening a GitHub issue.

Thank you!

  • Check that you are up-to-date with the master branch of keras-preprocessing. You can update with:
    pip install git+git://github.com/keras-team/keras-preprocessing.git --upgrade --no-deps

  • Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).

What I expect

Tokenizer should not tokenize characters listed in `filters`.

What happens

If `char_level=True`, Tokenizer tokenizes every character, including those listed in `filters`.

Code

import keras

text = "ae"
tokenizer = keras.preprocessing.text.Tokenizer(filters="e")  # Ignore "e"
tokenizer.fit_on_texts(text)

❯ tokenizer.word_index
{'a': 1}  # Ignores "e" as expected

tokenizer = keras.preprocessing.text.Tokenizer(char_level=True, filters="e")
❯ tokenizer.fit_on_texts(text)
❯ tokenizer.word_index
{'a': 1, 'e': 2}  # "e" is tokenized

A fix has been attempted in #302
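
Until a fix lands, one possible workaround (my own sketch, not part of the library; the `strip_filtered` helper below is hypothetical) is to remove the filtered characters from the texts yourself before handing them to the char-level Tokenizer:

import keras

def strip_filtered(texts, filters):
    # Hypothetical helper: drop filtered characters ourselves,
    # since the char-level Tokenizer does not apply `filters`.
    table = str.maketrans("", "", filters)
    return [text.translate(table) for text in texts]

filters = "e"
texts = ["ae"]

tokenizer = keras.preprocessing.text.Tokenizer(char_level=True, filters=filters)
tokenizer.fit_on_texts(strip_filtered(texts, filters))

print(tokenizer.word_index)  # {'a': 1} -- "e" is no longer tokenized

The same pre-filtering would also need to be applied before `texts_to_sequences`, otherwise the filtered characters would still be looked up at transform time.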