Construction of PrimaryCaps
manuelsh opened this issue · 3 comments
I believe this line:
CapsNet-Pytorch/capsulelayers.py
Line 104 in 8a5a357
is not constructing the capsules correctly. In theory, if we do:

```python
outputs = self.conv2d(x)
outputs_2 = outputs.view(x.size(0), -1, self.dim_caps)
```
then `outputs[0, 0:8, 0, 0]` should be equal to `outputs_2[0, 0, 0:8]`, but applying the view directly to the conv output does not guarantee that.
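A quick way to see the mismatch (a minimal sketch assuming the usual MNIST shapes from the paper, i.e. a 256-channel conv output on a 6x6 grid with `dim_caps = 8`; the layer and input shapes below are assumptions, not the repo's exact code):

```python
import torch
import torch.nn as nn

# Assumed shapes: the previous conv gives [batch, 256, 20, 20]; the primary-caps
# conv uses a 9x9 kernel with stride 2, producing [batch, 256, 6, 6].
conv2d = nn.Conv2d(256, 256, kernel_size=9, stride=2)
x = torch.randn(1, 256, 20, 20)

outputs = conv2d(x)                          # [1, 256, 6, 6]
outputs_2 = outputs.view(x.size(0), -1, 8)   # [1, 1152, 8]

# With a plain view, capsule 0 is the first 8 spatial positions of channel 0
# (in row-major order), not the first 8 channels at position (0, 0):
print(torch.equal(outputs[0, 0:8, 0, 0], outputs_2[0, 0, 0:8]))  # False in general
```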
I believe you should permute the dimensions to `[batch_size, 6, 6, 256]` before doing the view.
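Something along these lines (a sketch of the suggested fix, continuing the assumed shapes from the snippet above; not the repo's actual code):

```python
conv_out = conv2d(x)                                   # [batch, 256, 6, 6]
permuted = conv_out.permute(0, 2, 3, 1).contiguous()   # [batch, 6, 6, 256]
outputs_2 = permuted.view(x.size(0), -1, 8)            # [batch, 6*6*32, 8]

# Now each capsule is 8 consecutive channels at one spatial location,
# so the equality from above holds:
print(torch.equal(conv_out[0, 0:8, 0, 0], outputs_2[0, 0, 0:8]))  # True
```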
See, for example, the authors' original implementation:
What are your results if you do that? Our experience so far is that with the corrected view the results are actually worse. (!)
Btw, we are using cosine annealing for learning rate decay, and it works better than the exponential one.
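For reference, a minimal sketch of that schedule using PyTorch's built-in scheduler (the optimizer, initial lr, and epoch count here are placeholders, not our actual settings):

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)

for epoch in range(50):
    # ... one epoch of training (forward, loss.backward(), optimizer.step()) ...
    scheduler.step()  # anneal the lr along a cosine curve instead of exponentially
```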