jamescalam/transformers

How to train an NSP model using different embeddings instead of BERT's



Hi @sreenathelloti, can you give more info on what you'd like to do? Many models don't use NSP - RoBERTa, for example.

Hi @jamescalam, I have embeddings, and I want to pass those embeddings to the NSP model instead of passing text.


Okay, I haven't seen this done before, and I don't know if it'll be a good solution - but give it a go!

You can identify the internal structure of your model by printing it (print(model), or just model in a notebook). You should see something like (bert): BertModel(... and so on - think of it as a nested object, or even a Python dictionary, that you can access part by part.
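As a quick reference, here's a minimal sketch (assuming a recent transformers version and the bert-base-cased checkpoint used below) of loading the NSP model and inspecting that structure:

```python
from transformers import BertForNextSentencePrediction

# Load BERT with the next-sentence-prediction head
model = BertForNextSentencePrediction.from_pretrained("bert-base-cased")

# Printing the model shows its submodule tree, roughly:
# BertForNextSentencePrediction(
#   (bert): BertModel(
#     (embeddings): BertEmbeddings(...)
#     (encoder): BertEncoder(...)
#     (pooler): BertPooler(...)
#   )
#   (cls): BertOnlyNSPHead(...)
# )
print(model)
```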

So for bert-base-cased you would access the encoder part of the model (skipping the embeddings layer) with model.bert.encoder and pass your embeddings to it. They must match the format BERT would usually produce at that point, i.e. a tensor of shape (batch_size, seq_len, 768) for the base models.

The output from the encoder part of the model would then need to be passed to model.bert.pooler, and the output from that to model.cls. Some of it may need to be rearranged, but you should be able to identify what goes in and out of each part by looking at the printed model structure.
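Putting those steps together, a rough sketch might look like the following. I haven't tested this exact flow; it assumes a recent transformers/PyTorch version, uses a random tensor as a placeholder for your pre-computed embeddings, and ignores the attention mask:

```python
import torch
from transformers import BertForNextSentencePrediction

model = BertForNextSentencePrediction.from_pretrained("bert-base-cased")
model.eval()

# Placeholder for your own embeddings: BERT-base expects hidden size 768,
# so the shape must be (batch_size, seq_len, 768)
custom_embeddings = torch.randn(1, 32, 768)

with torch.no_grad():
    # 1. Skip model.bert.embeddings and feed the embeddings straight into the encoder
    encoder_outputs = model.bert.encoder(custom_embeddings)
    sequence_output = encoder_outputs[0]                  # (batch_size, seq_len, 768)

    # 2. The pooler takes the first ([CLS]) token representation and projects it
    pooled_output = model.bert.pooler(sequence_output)    # (batch_size, 768)

    # 3. The NSP head maps the pooled vector to is-next / not-next logits
    nsp_logits = model.cls(pooled_output)                 # (batch_size, 2)

print(nsp_logits)
```

For training you would then compute a cross-entropy loss on those logits against your is-next / not-next labels and backpropagate as usual.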

I hope that helps and good luck with your problem! I'll close the issue in a few days unless you have any more questions about it. Thanks!

No activity, assuming all went well