Extracting Embedding
funihang opened this issue · 1 comment
Hello,
I found a similar issue (#5), but when I try to extract the embedding following your instructions, the shape of `transformer_output` doesn't seem to match the input. Its shape is `[micro-batch-size, max seq length, hidden size]`, so how does this output correspond to the input? I checked the BERT model, and its output should be a pair like `output1, output2`, where `output2` is the embedding that matches the input. In your code, however, `output2` is `None`. I wonder how we can get the embedding corresponding to the inputs.
Thanks
Hello @funihang ,
Sorry for the late reply.
Thanks for bringing this up. I cannot find a variable named `output2` in `transformer.py`, so I am not sure which variable you are referring to. Please specify.
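In the meantime, in case it helps: when a model exposes only a `[micro-batch-size, max seq length, hidden size]` tensor and no pooled output, one common way to get a single embedding per input is masked mean pooling over the sequence dimension. Below is a minimal sketch using NumPy; the function name and the `attention_mask` argument are hypothetical, not part of this repository's code:

```python
import numpy as np

def pool_embeddings(transformer_output, attention_mask):
    """Masked mean pooling over the sequence dimension.

    transformer_output: [micro_batch_size, max_seq_length, hidden_size]
    attention_mask:     [micro_batch_size, max_seq_length], 1 for real tokens,
                        0 for padding.
    Returns one embedding per input: [micro_batch_size, hidden_size].
    """
    mask = attention_mask[..., None].astype(transformer_output.dtype)
    summed = (transformer_output * mask).sum(axis=1)   # sum over real tokens
    counts = mask.sum(axis=1)                          # number of real tokens
    return summed / counts

# Toy example with random hidden states
out = np.random.rand(2, 4, 8)
mask = np.array([[1, 1, 1, 0],
                 [1, 1, 0, 0]])
emb = pool_embeddings(out, mask)
print(emb.shape)  # (2, 8)
```

Padding positions are zeroed out before averaging, so the resulting embedding depends only on the real tokens of each input.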
Best,
Yijia