About loss
Opened this issue · 2 comments
huangsusan commented
Hi:
Thanks for sharing your code. I read through it, and I am puzzled about the knowledge transfer loss your article proposed. It seems that you do not use this loss; the loss is just Dice loss and CE loss.
Looking forward to your reply.
aminrezaee commented
Me too. I read the paper and expected to see MSKT, but it seems that the code is not the complete version.
CinKKKyo commented
Did you guys figure it out? I ran into the same problem. From my reading of the paper, the knowledge transfer loss just computes the Dice between the universal decoder prediction and the auxiliary decoder prediction. But the public code shows there are four parts in the total loss...