lisa-lab/pylearn2

get_weights_topo on fully connected layer after convolutional layer returns the wrong space


For example, if the input_space to the layer is 8 (filters) x 4 (width) x 4 (height) and the output space is 128 units, get_weights_topo should return a tensor of shape 128 x 4 x 4 x 8 (given the default axes b, 0, 1, c). However, it returns 128 x 4 x 8 x 4. This is because the code internally already reshapes the weights so that they adhere to b, 0, 1, c, but then transposes them a second time as if they were in b, c, 0, 1 order. The buggy code is in mlp.py, in the Linear class.
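The double-permutation can be reproduced in plain NumPy (shapes taken from the example above; this is a minimal sketch of the axis bookkeeping, not the actual pylearn2 code):

```python
import numpy as np

# Hypothetical shapes from the report: input space is 4 x 4 with 8
# channels, output space is 128 units, axes ('b', 0, 1, 'c').
rows, cols, channels, units = 4, 4, 8, 128
W = np.zeros((rows * cols * channels, units))  # (128, 128)

# The dot-product input was vectorized in b01c order, so reshaping W.T
# already yields the correct topological view with axes ('b', 0, 1, 'c'):
topo = W.T.reshape(units, rows, cols, channels)  # (128, 4, 4, 8) -- correct

# The buggy second step transposes again, as if the axes were still
# ('b', 'c', 0, 1); the permutation (b, c, 0, 1) -> (b, 0, 1, c) is
# axes (0, 2, 3, 1), which scrambles the already-correct tensor:
wrong = topo.transpose(0, 2, 3, 1)  # (128, 4, 8, 4) -- the reported shape
```

Applied to an already-b01c tensor, the extra transpose turns 128 x 4 x 4 x 8 into exactly the 128 x 4 x 8 x 4 reported above.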

Thanks for the report!
That is right: SoftmaxPool and Linear reshape W.T assuming the input of the dot product is b01c (which it currently is, since that is the default axis order of Conv2DSpace), but then transpose it as if the axes were those of the input space.
Softmax, however, calls desired_space.format_as(W.T, input_space) instead of reshaping, and then transposes the axes to b01c, which gives the right result.

Given that get_weights_topo should return a tensor with axes b01c, I think the more future-proof approach would be to follow the implementation of Softmax.
A simpler fix would be to assert that the default axes of Conv2DSpace are still b01c, keep the explicit reshape, and drop the existing transpose.
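The Softmax-style approach can be sketched as: view W.T using the input space's own axis order, then permute to b01c. The function and parameter names below are illustrative, not pylearn2 API, and the batch axis is omitted from the input axes since the weight matrix carries no batch dimension:

```python
import numpy as np

def weights_topo(WT, input_axes, size_of):
    """Sketch: reshape WT (units x inputs) according to the input
    space's axis order, then permute to ('b', 0, 1, 'c').

    input_axes -- conv axes in the input space's order, e.g. [0, 1, 'c']
    size_of    -- dict mapping each conv axis to its size
    """
    # View the flat weights in the input space's native order;
    # axis 0 stays the output-unit ("batch-like") axis.
    viewed = WT.reshape([WT.shape[0]] + [size_of[a] for a in input_axes])
    # Permutation taking the native order to (0, 1, 'c'), keeping axis 0.
    perm = [0] + [1 + input_axes.index(a) for a in (0, 1, 'c')]
    return viewed.transpose(perm)

W = np.zeros((4 * 4 * 8, 128))
sizes = {0: 4, 1: 4, 'c': 8}
# b01c input: the permutation is the identity, so only the reshape acts.
out = weights_topo(W.T, [0, 1, 'c'], sizes)   # (128, 4, 4, 8)
# bc01 input: the same code permutes channels to the end.
out2 = weights_topo(np.zeros((128, 8 * 4 * 4)), ['c', 0, 1], sizes)
```

Both calls return 128 x 4 x 4 x 8 regardless of the input space's axis order, which is the robustness the format_as route buys over a hard-coded reshape.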