Lancelot39/KGSF

Problem of computing recommendation probability

Closed this issue · 1 comment

The paper states that the probability of recommending an item is the softmax of the inner product (i.e., the similarity) between the user representation and the item embedding. In the code, however, the line that applies the softmax is commented out, and the raw inner-product scores are passed directly into the cross-entropy loss. Could you help me understand this inconsistency between the paper and your implementation?

The statement is from Improving Conversational Recommender Systems via Knowledge Graph based Semantic Fusion:
[screenshot of the paper's equation: the recommendation probability is the softmax of the inner product between the user representation and the item embedding]
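For reference, reconstructed from the description above (the symbols p_u for the user representation and n_i for the item's node embedding are my own notation, not necessarily the paper's), the equation in the screenshot is roughly:

P_rec(i) = softmax_i( p_u^T n_i )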

entity_scores = F.linear(user_emb, db_nodes_features, self.output_en.bias)
#entity_scores = scores_db * gate + scores_con * (1 - gate)
#entity_scores=(scores_db+scores_con)/2

#mask loss
#m_emb=db_nodes_features[labels.cuda()]
#mask_mask=concept_mask!=self.concept_padding
mask_loss=0#self.mask_predict_loss(m_emb, attention, xs, mask_mask.cuda(),rec.float())

info_db_loss, info_con_loss=self.infomax_loss(con_nodes_features,db_nodes_features,con_user_emb,db_user_emb,con_label,db_label,db_con_mask)

# why comments??
#entity_scores = F.softmax(entity_scores.cuda(), dim=-1).cuda()

rec_loss=self.criterion(entity_scores.squeeze(1).squeeze(1).float(), labels.cuda())
#rec_loss=self.klloss(entity_scores.squeeze(1).squeeze(1).float(), labels.float().cuda())
rec_loss = torch.sum(rec_loss*rec.float().cuda())

self.user_rep=user_emb
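My current guess, which I have not verified against the rest of the code, is that self.criterion is torch.nn.CrossEntropyLoss (or F.cross_entropy). That loss already applies log-softmax internally, so feeding it the raw inner-product scores is equivalent to the "softmax over similarities" described in the paper, whereas uncommenting the softmax line would normalize the scores twice. A minimal sketch of what I mean, under that assumption:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
# toy stand-ins: raw inner-product scores for 4 users over 10 items,
# and the ground-truth item index for each user
entity_scores = torch.randn(4, 10)
labels = torch.tensor([1, 3, 5, 7])

# cross-entropy on the raw scores already contains the softmax (via log_softmax)
ce_on_raw = F.cross_entropy(entity_scores, labels)
ce_manual = F.nll_loss(F.log_softmax(entity_scores, dim=-1), labels)
print(torch.allclose(ce_on_raw, ce_manual))   # True

# applying softmax first and then cross-entropy would normalize twice
ce_double = F.cross_entropy(F.softmax(entity_scores, dim=-1), labels)
print(torch.allclose(ce_on_raw, ce_double))   # False

If that is the case, is the softmax line commented out simply because the loss already performs the normalization?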

I have the same doubt. How do you understand this?