microsoft/DeBERTa

Pretraining deberta-v3 with a larger context length.


Hi! I see that DeBERTa-v3 uses relative-position embeddings, so it can take in a larger context than traditional BERT. Have you tried pretraining deberta-v3 with a context length of 1024 or larger?

If I want to pretrain deberta-v3 from scratch with a larger context length (e.g., 1024), are there any modifications I should make besides the training script?

Thanks for any help!
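For reference, a minimal sketch of where the context-length setting lives when building a model from scratch, assuming the Hugging Face `transformers` DebertaV2 classes rather than this repository's own training scripts; the hyperparameter values are illustrative, and the actual DeBERTa-v3 pretraining objective is ELECTRA-style replaced-token detection, not the plain MLM head used here.

```python
# Illustrative sketch only: the context-length knobs for a DeBERTa-v2/v3-style
# model in Hugging Face transformers. Values are assumptions, not the official
# DeBERTa-v3 recipe (which uses replaced-token detection, not plain MLM).
from transformers import DebertaV2Config, DebertaV2ForMaskedLM

config = DebertaV2Config(
    max_position_embeddings=1024,   # target context length (default is 512)
    relative_attention=True,        # DeBERTa's disentangled relative attention
    pos_att_type=["p2c", "c2p"],    # position-to-content / content-to-position terms
    position_buckets=256,           # log-bucketed relative distances, as in deberta-v3-base
    position_biased_input=False,    # v3 relies on relative positions, no absolute embeddings
    hidden_size=768,                # base-size architecture, purely illustrative
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
)
model = DebertaV2ForMaskedLM(config)

# The tokenizer's limit must be raised to match, e.g.:
# tokenizer.model_max_length = 1024
```

Because the attention bias is computed from bucketed relative distances, the main changes are the config above plus feeding longer sequences in the data pipeline; there is no absolute position table that hard-limits the length when `position_biased_input` is disabled.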

Hi, I did a multi-task fine-tune with a 1280 context length (1680 for the small version):
https://huggingface.co/tasksource/deberta-base-long-nli
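For anyone who just wants to try that checkpoint at the extended length, a minimal usage sketch; only the model name comes from the link above, and the premise/hypothesis strings and max length of 1280 are placeholders based on this comment.

```python
# Illustrative usage of the linked long-context NLI checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "tasksource/deberta-base-long-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "A long premise ...",       # placeholder premise
    "A short hypothesis ...",   # placeholder hypothesis
    truncation=True,
    max_length=1280,            # extended context length mentioned above
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))       # probabilities over the model's NLI labels
```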

Could you please open-source your code so we can learn from it?