bigger image size
artnose opened this issue · 1 comment
I get this error:
epoch 0
Traceback (most recent call last):
File "train.py", line 135, in
L_feat = lambda_f * F.mean_squared_error(Variable(feature[2].data), feature_hat[2]) # compute for only the output of
layer conv3_3
File "/usr/lib64/python2.7/site-packages/chainer/functions/loss/mean_squared_error.py", line 44, in mean_squared_error
return MeanSquaredError()(x0, x1)
File "/usr/lib64/python2.7/site-packages/chainer/function.py", line 190, in call
self._check_data_type_forward(in_data)
File "/usr/lib64/python2.7/site-packages/chainer/function.py", line 273, in _check_data_type_forward
type_check.InvalidType(e.expect, e.actual, msg=msg), None)
File "/usr/lib/python2.7/site-packages/six.py", line 718, in raise_from
raise value
chainer.utils.type_check.InvalidType:
Invalid operation is performed in: MeanSquaredError (Forward)
Expect: in_types[0].shape == in_types[1].shape
Actual: (1, 256, 119, 119) != (1, 256, 118, 118)
What are the restrictions on image size?
The transformer network downsamples your image twice, so the spatial dimensions shrink by a factor of 4 partway through before being upsampled again. If your resolution is not divisible by 4, the output won't necessarily come back at the same size; in fact it will always come back slightly smaller.
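Here's a minimal sketch of the size arithmetic (the real layers' kernel sizes and padding shift the exact numbers, but the divisibility-by-4 point is the same): the two stride-2 downsamplings round down, and the two 2x upsamplings can only bring back a multiple of 4.

```python
def transformed_size(s):
    # two stride-2 downsamplings (integer division), then two 2x upsamplings
    return (s // 2 // 2) * 4

for s in (116, 117, 118, 119, 120):
    print(s, '->', transformed_size(s))
# 116 -> 116, 117 -> 116, 118 -> 116, 119 -> 116, 120 -> 120
```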
If you really need to use 119, you can explicitly pass the outsize argument to Deconvolution2D upon initialization, but it makes the operation less convenient.
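A rough sketch of what that looks like (the channel counts here are illustrative, not necessarily this repo's exact values; check net.py):

```python
import chainer.links as L

# With ksize=4, stride=2, pad=1, the encoding side maps 119 -> 59 -> 29,
# and the decoding side would produce 29 -> 58 -> 116 by default.
# outsize forces it back to the encoder's sizes; the requested value must
# stay consistent with stride/ksize/pad, so it can only differ from the
# default by a pixel per layer.
d1 = L.Deconvolution2D(128, 64, 4, stride=2, pad=1, outsize=(59, 59))
d2 = L.Deconvolution2D(64, 32, 4, stride=2, pad=1, outsize=(119, 119))
```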