geek-ai/irgan

some problems with logits

jinchm opened this issue · 2 comments

First, thank you for the excellent work.
I have a question about the code.
In dis_model.py, line 41:
self.pre_logits = tf.reduce_sum(tf.multiply(self.u_embedding, self.i_embedding), 1) + self.i_bias
I don't understand why tf.multiply is used here; I would have expected tf.matmul.
But your code consistently uses tf.multiply when computing logits.
Could you please explain? Thank you.

self.u_embedding and self.i_embedding are defined as follows:

self.u_embedding = tf.nn.embedding_lookup(self.user_embeddings, self.u)
self.i_embedding = tf.nn.embedding_lookup(self.item_embeddings, self.i)

So both have shape [batch_size, emb_dim]. tf.multiply performs an element-wise product, and the tf.reduce_sum over axis 1 then collapses each row, so the whole expression is a batch of vector dot products: entry k is the dot product of the k-th user embedding with the k-th item embedding.
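
A minimal sketch of that claim (illustrative only; it uses TensorFlow 2 eager mode and made-up tensors rather than the repo's TF1 graph code):

import numpy as np
import tensorflow as tf

u = tf.constant(np.random.randn(4, 5), dtype=tf.float32)  # stand-in for self.u_embedding
i = tf.constant(np.random.randn(4, 5), dtype=tf.float32)  # stand-in for self.i_embedding

# Element-wise product, then sum over the embedding axis:
# entry k of the result is dot(u[k], i[k]).
logits = tf.reduce_sum(tf.multiply(u, i), 1)  # shape [4]

# The same values via an explicit per-row dot product.
expected = np.array([np.dot(u[k].numpy(), i[k].numpy()) for k in range(4)])
print(np.allclose(logits.numpy(), expected))  # True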

They do compute the same thing: tf.matmul(u, i, transpose_b=True) followed by taking the diagonal gives exactly the values that tf.multiply plus tf.reduce_sum gives, as the sketch below shows. The multiply-and-sum form just avoids building the full [batch_size, batch_size] matrix of every user scored against every item. (tf.mul is simply the old name for tf.multiply.)
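
A short sketch of the equivalence (again TF2 eager mode, with made-up values):

import tensorflow as tf

u = tf.random.normal([4, 5])
i = tf.random.normal([4, 5])

# Batched dot products, as in dis_model.py.
a = tf.reduce_sum(tf.multiply(u, i), 1)        # shape [4]

# matmul scores every user against every item ...
all_pairs = tf.matmul(u, i, transpose_b=True)  # shape [4, 4]
# ... and only its diagonal holds the matching (u[k], i[k]) pairs.
b = tf.linalg.diag_part(all_pairs)             # shape [4]

print(tf.reduce_all(tf.abs(a - b) < 1e-5).numpy())  # True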