salesforce/PCL

Question about eq(9) in your paper.

randydkx opened this issue · 12 comments

Hi, thanks for your paper and code. I have a question about Eq. (9) in your paper: it seems that this equation is p(c_i|x_i), not p(x_i|c_i). I think p(x_i|c_i) should be a single Gaussian distribution, so that the integral over x_i equals 1. Can you explain this for me?
[screenshot of Eq. (9) from the paper]
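
To make the question concrete, here is a toy numeric check (a sketch with hypothetical names, not the repo's code): exponentiating per-prototype similarities and normalizing over the prototypes gives a distribution that sums to 1 over the clusters c, not one that integrates to 1 over x, so it behaves like a categorical p(c_i|x_i) rather than a density p(x_i|c_i).

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=8)            # one embedding v_i (stand-in for x_i)
protos = rng.normal(size=(3, 8))  # three cluster prototypes c_k
phi = 0.5                         # a concentration / temperature value

# Softmax of similarities over the prototypes.
scores = protos @ v / phi
p = np.exp(scores) / np.exp(scores).sum()

# Sums to 1 over the 3 clusters: a categorical distribution over c given x,
# not a density over x given c.
print(p, p.sum())
```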

I have the same question. I think this part of the paper is wrong, since it is not a correct probability form.

@LiJunnan1992 Could you explain this? I am very confused about this.

Thank you very much!

Each x_i is an independent sample, so you do not need to marginalize over x_i.

Thank you very much for your reply! However, in probability theory, only a random variable has a PDF (probability density function); an independent sample cannot.

@DongXiang-CV It's OK to have unnormalized probabilities, but what confuses me is that I think Eq. (9) is p(c_i|x_i), not p(x_i|c_i), with the prior probability fixed for every c_i, i.e. p(c_i = k) = 1/C for k = 1, ..., C. Could you explain this for me? @LiJunnan1992
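
For what it's worth, here is the Bayes step behind that reading, sketched under the uniform prior stated above: with p(c_i = k) = 1/C, the prior cancels from numerator and denominator, so the posterior is just the per-cluster likelihood renormalized over clusters, which is exactly the softmax-over-clusters form in question.

```latex
p(c_i = k \mid x_i)
  = \frac{p(x_i \mid c_i = k)\, p(c_i = k)}
         {\sum_{j=1}^{C} p(x_i \mid c_i = j)\, p(c_i = j)}
  = \frac{p(x_i \mid c_i = k)}
         {\sum_{j=1}^{C} p(x_i \mid c_i = j)}
```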

@randydkx What do you mean, it's OK to have unnormalized probabilities? If you look at the Gaussian mixture model, you will find that this part of the paper differs from it. The reason the authors used the inverse likelihood is that they also want to learn representations, not just do clustering, but this inverse likelihood results in unnormalized probabilities, which is not very reasonable compared to the Gaussian mixture model.

I agree with you, and I am also confused: Eq. (9) is p(c|x), not p(x|c).

In other words, p(x|c) should be an ordinary Gaussian distribution, not a categorical distribution, and the posterior p(c|x) is the categorical distribution over clusters c. The p(x|c) currently in the paper is a categorical distribution over c, which is not reasonable since it is not defined on the x space; p(x|c) should be a simple Gaussian distribution.
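
A minimal sketch of the standard GMM factorization this comment describes (assumed toy values, not the paper's model): p(x|c=k) is a Gaussian density over x, and the posterior p(c|x), obtained via Bayes' rule, is the categorical distribution over clusters.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Isotropic Gaussian density N(x; mu, sigma^2 I); integrates to 1 over x."""
    d = x.size
    diff = x - mu
    norm = (2.0 * np.pi * sigma**2) ** (-d / 2)
    return norm * np.exp(-(diff @ diff) / (2.0 * sigma**2))

rng = np.random.default_rng(0)
x = rng.normal(size=2)            # one sample x_i
mus = rng.normal(size=(3, 2))     # cluster means mu_k
sigma = 1.0
prior = np.full(3, 1.0 / 3.0)     # uniform prior p(c = k) = 1/C

lik = np.array([gaussian_pdf(x, mu, sigma) for mu in mus])  # p(x | c = k)
post = lik * prior / np.sum(lik * prior)                    # p(c = k | x)

# The likelihood is a density over x; the posterior is categorical over c.
print(post, post.sum())  # sums to 1
```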

I am also confused by this. I think our questions are essentially the same.

Thanks for your work! It seems that only with Eq. (9) can we get the loss term in Eq. (10). I wonder whether the assumption in Eq. (9) has appeared in other related literature before? @LiJunnan1992

@wwangwitsel I agree with you: without Eq. (9), the whole mathematical model is not valid. I look forward to the author's answer. @LiJunnan1992
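
For reference, a sketch of the step mentioned above, assuming Eq. (9) has the softmax-over-prototypes form discussed in this thread (embedding v_i, prototypes c_j, and concentrations φ_j are the assumed notation): taking the negative log of that expression directly yields a cross-entropy-style term over clusters, which is the kind of loss term that appears in Eq. (10).

```latex
-\log p(x_i \mid c_i = s; \theta)
  = -\log \frac{\exp(v_i \cdot c_s / \phi_s)}
               {\sum_{j=1}^{C} \exp(v_i \cdot c_j / \phi_j)}
```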

Yun-Fu commented

Has this been resolved? I'm also confused about it. Can you help me?