Issues
Log likelihood becomes NaN after some steps
#27 opened by Sannndy0000 · 1 comment
About the mutual information between two tensors
#22 opened by HanAccount · 11 comments
Code requests
#6 opened by junkangwu · 3 comments
The reliability of this method
#9 opened by erow · 10 comments
About the logvar prediction
#12 opened by daxintan-cuhk · 3 comments
Can I use it as a PyTorch loss function?
#5 opened by zuujhyt · 4 comments
Questions about the bound
#24 opened by DZN-Research · 1 comment
CLUBForCategorical estimates p(y|x); why not pass the logits through a softmax activation?
#23 opened by ZohraRezgui · 2 comments
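A plausible reason (an assumption here, not confirmed by the repository's code) is that CLUB-style estimators need log q(y|x) rather than q(y|x), and computing log(softmax(logits)) in two steps is numerically unstable; a fused log-softmax works directly on the logits. A minimal stdlib-only sketch:

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax: log p_i = l_i - logsumexp(l).
    Subtracting the max first keeps every exp() argument <= 0, so the
    underflow of the two-step log(softmax(l)) never arises."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(l - m) for l in logits))
    return [l - lse for l in logits]
```

In PyTorch the same effect comes from `torch.nn.functional.cross_entropy`, which consumes raw logits and applies log-softmax internally.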
I want to apply mutual information to learn disentangled latent codes. But if there are four latent codes, how do I minimize the objective I(X1; X2; X3; X4)?
#21 opened by pandsia2007 · 1 comment
Understanding question: which value of the estimator should be used when evaluating?
#18 opened by sdahan12 · 2 comments
Hello, a question about CLUB and vLUB
#19 opened by winerholiday · 2 comments
Hi, thanks for the good work. I have a general question: according to your code, the positive term in the PyTorch version subtracts a logvar term, but the TensorFlow version does not. Is there a reason behind the difference between the two versions? I also encounter a problem in MI minimization: the MI estimate in the earlier training epochs is always < 0. Is this reasonable, and are there any tips to solve it?
#17 opened by xiaomi4356 · 2 comments
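For context on the logvar question in #17: with a diagonal Gaussian q(y|x) = N(mu(x), exp(logvar(x))), the log-density itself contains a -0.5 · logvar term from the normalizing constant. A minimal scalar sketch (the names are illustrative, not the repository's):

```python
import math

def gaussian_log_likelihood(y, mu, logvar):
    """log N(y; mu, exp(logvar)) for a scalar y, all constants included.
    The -0.5 * logvar term is the log of the Gaussian normalizer; without
    it, maximizing over logvar is ill-posed, because the quadratic term
    alone is driven to zero by inflating the variance."""
    var = math.exp(logvar)
    return -0.5 * ((y - mu) ** 2 / var + logvar + math.log(2.0 * math.pi))
```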
Confused about logvar
#7 opened by junkangwu · 3 comments
Query about example training
#16 opened by ASMIftekhar · 1 comment
About the computation of loglikeli
#15 opened by qrzou · 2 comments
Equation issue in mi_minimization.ipynb
#13 opened by Wolfybox · 12 comments
Hello! Are there some errors in Equation 6 of the paper?
#8 opened by eyuansu62 · 0 comments
The mi_minimization.ipynb
#11 opened by XinyiXuXD · 0 comments
DA experiments: CIFAR and STL
#10 opened by mboudiaf · 1 comment
Target accuracy only reaches 0.79 for the domain adaptation experiment on MNIST → MNIST-M
#3 opened by lcd21 · 3 comments
I have used CLUB on my dataset for mutual information minimization. However, during training the log-likelihood loss became -255 and stayed unchanged. When I changed the sum to a mean, things got better, but after 200 epochs the log-likelihood loss was stuck at -1.98. I want to know how the log-likelihood loss changed when you trained CLUB on a real-world dataset.
#2 opened by Georgehappy1 · 3 comments
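On the sum-versus-mean point in #2: the reduction only rescales the objective, so the gradient direction is unchanged but the magnitude grows with the number of summed elements, which is why a summed log-likelihood can sit around -255 while the per-element mean sits near -2. A minimal sketch, with names of my own choosing:

```python
import math

def gaussian_nll(y, mu, logvar, reduction="mean"):
    """Elementwise Gaussian negative log-likelihood of y under
    N(mu, exp(logvar)); `reduction` chooses sum or per-element mean."""
    per_elem = [
        0.5 * ((yi - mi) ** 2 / math.exp(li) + li + math.log(2.0 * math.pi))
        for yi, mi, li in zip(y, mu, logvar)
    ]
    total = sum(per_elem)
    return total if reduction == "sum" else total / len(per_elem)
```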
Can you tell me how to derive the loss function as written in the code, if the approximation network is parameterized in a Gaussian family?
#1 opened by Georgehappy1
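For the Gaussian-family question in #1, a scalar sketch of the sampled CLUB estimate: the mean of log q(y_i|x_i) over paired samples minus the mean of log q(y_j|x_i) over all pairs. Here the `mus`/`logvars` lists stand in for the approximation network's outputs on each x_i; this follows the paper's sampled bound, not the repository's exact code:

```python
import math

def log_q(y, mu, logvar):
    # log N(y; mu, exp(logvar)); the log(2*pi) constant is dropped
    # because it cancels between the positive and negative terms below.
    return -0.5 * ((y - mu) ** 2 / math.exp(logvar) + logvar)

def club_estimate(mus, logvars, ys):
    """Sampled CLUB upper bound on I(X; Y) with a Gaussian q(y|x)."""
    n = len(ys)
    positive = sum(log_q(ys[i], mus[i], logvars[i]) for i in range(n)) / n
    negative = sum(log_q(ys[j], mus[i], logvars[i])
                   for i in range(n) for j in range(n)) / (n * n)
    return positive - negative
```

A quick sanity check of the design: a predictor whose mean ignores x makes the positive and negative terms coincide, so the estimate is zero, while a mean that tracks y yields a positive estimate.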