sjoerdvansteenkiste/Neural-EM

the calculation of gamma with bernoulli distribution

Closed this issue · 3 comments

Hi! First of all, thanks for the code.
I read the code, and I found that you compute the pixelwise probabilities of the prediction with a Bernoulli distribution as `probs = p * data + (1 - data) * (1 - p)` (in the function `compute_em_probabilities` -> class `NEMCell` -> nem_model.py).
I am not sure what this formula means. Is it the expectation over the latent variable z that is then normalized along the dimension K?

It is the likelihood of each pixel under a Bernoulli distribution, which is then used in the E-step (where the values are normalized across the K components).
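
For concreteness, here is a minimal numpy sketch of that idea (not the repo's actual TensorFlow code; the function name and shapes here are made up for illustration). Each pixel gets likelihood `p` if it is 1 and `1 - p` if it is 0, written without a branch, and the E-step normalizes across K:

```python
import numpy as np

def e_step(pred, data, eps=1e-6):
    """pred: (K, H, W) per-component Bernoulli means; data: (H, W) binary image."""
    # Pixelwise Bernoulli likelihood: p if the pixel is 1, (1 - p) if it is 0.
    probs = pred * data + (1.0 - pred) * (1.0 - data)   # (K, H, W)
    # E-step: normalize across the K components to get responsibilities gamma.
    gamma = probs / (probs.sum(axis=0, keepdims=True) + eps)
    return gamma

# Usage: two components competing to explain a 2x2 binary image.
pred = np.array([[[0.9, 0.1], [0.9, 0.1]],
                 [[0.2, 0.8], [0.2, 0.8]]])
data = np.array([[1.0, 0.0], [1.0, 1.0]])
print(e_step(pred, data))  # gamma sums to 1 across K at every pixel
```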

Thanks!
And you mentioned in the paper that "In order to accurately mimic the M-Step (4) with an RNN, we must impose several restrictions on its weights and structure: the “encoder” must correspond to the Jacobian ∂ψk/∂θk".
Would you please explain how the encoder corresponds to the Jacobian ∂ψk/∂θk in your code?
Thanks again.

If you take a look at NEMCell in network.py, you will see that the encoder and decoder use the same (transposed) weight matrix, and that the encoder computes the Jacobian in its forward pass (for the given decoder architecture).
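
To illustrate the tied-weight idea, here is a hedged numpy sketch (the repo's actual NEMCell is a TensorFlow RNN cell; the names and the one-layer architecture here are hypothetical). For a decoder `psi = sigmoid(W @ theta)`, the Jacobian is `diag(sigmoid'(W @ theta)) @ W`, so an encoder built from the transposed weights `W.T` applies `J^T` to an upstream signal (e.g. the gamma-weighted prediction error) in its forward pass:

```python
import numpy as np

rng = np.random.default_rng(0)
D, L = 8, 3                      # pixels, latent size
W = rng.normal(size=(D, L))      # decoder weights; the encoder reuses W.T

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode(theta):
    # One-layer decoder: psi = sigmoid(W @ theta)
    return sigmoid(W @ theta)

def encode(grad_psi, theta):
    """Encoder forward pass: applies the transposed decoder Jacobian
    J^T = W.T @ diag(s * (1 - s)) to an upstream signal grad_psi."""
    s = sigmoid(W @ theta)
    return W.T @ (s * (1.0 - s) * grad_psi)

# Sanity check against a finite-difference Jacobian of the decoder.
theta = rng.normal(size=L)
v = rng.normal(size=D)
eps = 1e-6
J = np.stack([(decode(theta + eps * np.eye(L)[i]) - decode(theta)) / eps
              for i in range(L)], axis=1)     # (D, L)
print(np.allclose(J.T @ v, encode(v, theta), atol=1e-4))  # True
```

With the weights tied this way, the encoder's forward pass is exactly the Jacobian-transpose product that a gradient-ascent M-step on theta would require, which is the restriction the paper describes.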