Should we use activations or probabilities for the associations?
fromLittleAcorns opened this issue · 1 comments
Hi Gabriel
Very helpful, I am looking to explore the use of an RBM in an autoencoder and this has saved me a lot of time.
One question: in the work I have seen where an RBM is trained with the CD method, the gradients are calculated using the visible and hidden states, not the probabilities. In that case the positive associations should be the matrix product of the input data and positive_hidden_activations, not positive_hidden_probabilities, which is what it is at present. The same would apply for the negative associations.
Is there a reason you have adopted this approach?
Once again thanks for this
John
Thanks for pointing it out, John.
I've updated the code to use hidden activations except for in the last step. The algorithm still uses positive probabilities since those usually lead to faster mixing times. Refer to https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf, pages 5-6, for reference on this implementation.
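To illustrate the distinction, here is a minimal sketch (not the repository's actual code) of one CD-1 update in NumPy: sampled binary hidden states drive the reconstruction step, while the real-valued probabilities are used when accumulating the associations, as recommended in sections 3.2-3.3 of Hinton's practical guide. Biases are omitted for brevity, and all names (`cd1_step`, `W`, `v0`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.1):
    """One CD-1 weight update (biases omitted for brevity).

    Binary hidden states are sampled to drive the reconstruction,
    but the real-valued probabilities are used in the associations,
    which gives a lower-variance gradient estimate.
    """
    # Positive phase: hidden probabilities given the data.
    pos_hidden_probs = sigmoid(v0 @ W)
    # Sample binary hidden states (the "activations") for the Gibbs step.
    pos_hidden_states = (pos_hidden_probs > rng.random(pos_hidden_probs.shape)).astype(float)
    # Negative phase: reconstruct the visibles from the sampled states,
    # then recompute the hidden probabilities (no sampling in the last step).
    neg_visible_probs = sigmoid(pos_hidden_states @ W.T)
    neg_hidden_probs = sigmoid(neg_visible_probs @ W)
    # Associations use probabilities, not sampled states.
    pos_assoc = v0.T @ pos_hidden_probs
    neg_assoc = neg_visible_probs.T @ neg_hidden_probs
    return W + lr * (pos_assoc - neg_assoc) / v0.shape[0]

# Toy example: 8 binary samples with 6 visible and 4 hidden units.
v0 = rng.integers(0, 2, size=(8, 6)).astype(float)
W = 0.01 * rng.standard_normal((6, 4))
W = cd1_step(v0, W)
```

Using sampled states everywhere is also valid CD, but the noisier gradient usually slows learning, which is why the probabilities are kept in the association terms.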
If you see any further issues, please let me know.
Thank you,
Gabriel.