Which variants of architectures for extracting relation representations did you use for Bert-EM model?
jzkoh opened this issue · 1 comment
Hi,
In the paper, the authors tested 6 architectures:
> Figure 3: Variants of architectures for extracting relation representations from deep Transformers network. Figure (a) depicts a model with STANDARD input and [CLS] output, Figure (b) depicts a model with STANDARD input and MENTION POOLING output and Figure (c) depicts a model with POSITIONAL EMBEDDINGS input and MENTION POOLING output. Figures (d), (e), and (f) use ENTITY MARKERS input while using [CLS], MENTION POOLING, and ENTITY START output, respectively.
For the BERT-EM model in your code, can I confirm that you are using variant (f), i.e., the model with the ENTITY MARKERS input representation and ENTITY START output representation?
Could you also point me to the section of the code where this architecture is implemented?
Thanks!
Yes, it's implemented in modelling_bert.py for BERT.
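For anyone landing here later, this is a minimal sketch of variant (f) using the Hugging Face `transformers` library, not the repo's actual code; the marker token names (`[E1]`, `[/E1]`, `[E2]`, `[/E2]`) and the example sentence are illustrative assumptions:

```python
# Sketch of ENTITY MARKERS input + ENTITY START output (variant (f)):
# wrap each entity mention in marker tokens, run BERT, then concatenate
# the hidden states at the two *start* markers as the relation representation.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# Register the entity marker tokens (names are illustrative, not from the repo)
# and grow the embedding matrix to cover them.
tokenizer.add_tokens(["[E1]", "[/E1]", "[E2]", "[/E2]"])
model.resize_token_embeddings(len(tokenizer))

text = "[E1] Bill Gates [/E1] founded [E2] Microsoft [/E2] ."
inputs = tokenizer(text, return_tensors="pt")
hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)

# Locate the positions of the two entity-start markers in the input.
ids = inputs["input_ids"][0].tolist()
e1 = ids.index(tokenizer.convert_tokens_to_ids("[E1]"))
e2 = ids.index(tokenizer.convert_tokens_to_ids("[E2]"))

# ENTITY START output: concatenate the two start-marker states, giving a
# fixed-size relation representation of 2 * hidden_size.
rel_repr = torch.cat([hidden[0, e1], hidden[0, e2]], dim=-1)
print(rel_repr.shape)  # torch.Size([1536]) for bert-base
```

In a downstream relation classifier this `rel_repr` would typically be fed through a linear layer over the relation label set.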