ZichaoHuang/TransE

about the norm of entity and relation embeddings

Closed · 7 comments

During the training stage, why does the L2 norm of the entity and relation embeddings remain unchanged (always 1)? I'm really confused about this. Could you please explain?

According to the original paper of TransE, normalizing the embeddings is part of the training procedure.

See line 5 of Algorithm 1 in the TransE paper.
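Concretely, that line re-normalizes each entity embedding to unit L2 norm at the top of every minibatch loop, i.e. `e <- e / ||e||`. As a point of reference, an explicit version of that per-loop step in TensorFlow 1.x might look like this minimal sketch (hypothetical variable names, not this repo's code):

```python
import tensorflow as tf  # assuming TensorFlow 1.x

num_entities, dim = 1000, 50  # placeholder sizes for illustration
entity_var = tf.get_variable("entity_embedding", [num_entities, dim])

# Explicit normalization, as Algorithm 1 (line 5) prescribes: running
# this assign op once per loop rescales every row to unit L2 norm.
normalize_entities = tf.assign(entity_var, tf.nn.l2_normalize(entity_var, 1))
```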

Thanks a lot for your answer! But my confusion is: which line of the code performs this normalization? I only see normalization at the very beginning (lines 51 & 52 in model.py), so how do the norms of the embeddings remain unchanged during training?

Actually, the normalization scope does the trick.
Since the embeddings are the outputs of the l2_normalize op in the defined computation graph, they are normalized every time they are fetched for a forward step during training.
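To make the mechanism concrete, here is a minimal sketch of that pattern in TensorFlow 1.x (hypothetical names; see lines 51 & 52 of model.py for the actual code):

```python
import tensorflow as tf  # assuming TensorFlow 1.x, matching the graph-based code

num_entities, num_relations, dim = 1000, 20, 50  # placeholder sizes

# Hypothetical variable names; a sketch of the pattern, not this repo's exact code.
entity_var = tf.get_variable("entity_embedding", [num_entities, dim])
relation_var = tf.get_variable("relation_embedding", [num_relations, dim])

# The l2_normalize op is part of the graph, so every read of these
# tensors goes through it: whatever values the optimizer writes into
# the raw variables, each row below always has unit L2 norm.
entity_emb = tf.nn.l2_normalize(entity_var, 1)
relation_emb = tf.nn.l2_normalize(relation_var, 1)

# Lookups for a training batch pull from the normalized tensors, so the
# forward pass (and hence the loss) only ever sees unit-norm embeddings.
head_ids = tf.placeholder(tf.int32, [None])
h = tf.nn.embedding_lookup(entity_emb, head_ids)
```

The optimizer still updates the raw variables freely; the normalization is simply re-applied on every read, which is why any norm you inspect during training is always 1.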

Thanks for your valuable answer! I think I've got it. Thanks again.

You're welcome.

According to the original paper, the relation embeddings don't need to be normalized every loop. Why do you normalize them per loop in your code?

@ZhCoding I found that normalizing the relation embeddings results in slightly better performance, so I kept this tiny modification.