Input image size different for training and testing on the BSDS500 dataset?
Hi,
I was running "compute_ssn_superpixels.py" when I found something curious that I wanted to ask the authors about.
The paper mentions that for training, the authors used "image patches of size 201 x 201" from the original BSDS500 dataset. However, when I printed the image size used for testing in compute_ssn_superpixels.py via `print(net.blobs['img'].data.shape)`, it returned `(1, 3, 321, 481)`.
So I wanted to confirm whether different input image sizes were used for training and testing when computing superpixels on the BSDS500 dataset (it seems the trained weights are reused in the test network, whose activation sizes are adjusted accordingly for the changed input image size?).
Thanks in advance :)
The network is fully convolutional, so it is independent of image/patch size as long as the resolution (DPI) is roughly the same and the image height and width are multiples of 8 plus 1 (8X + 1). We used image patches of 201 x 201 during training to fit larger batches in memory. We did not resize images; we used image crops during training. To reuse the same network for a different number of superpixels, we scale the input XYLab features accordingly.
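For concreteness, here is a minimal sketch of the two points above (hypothetical helper names, not code from this repository): enforcing the 8X + 1 size constraint by cropping, and scaling the positional part of the XYLab features to target a different superpixel count.

```python
import numpy as np

def crop_to_valid_size(img, factor=8):
    """Crop an (H, W, 3) image so H and W are multiples of `factor` plus 1."""
    h, w = img.shape[:2]
    return img[: (h - 1) // factor * factor + 1,
               : (w - 1) // factor * factor + 1]

def scale_xy(xylab, pos_scale):
    """Scale only the positional channels of XYLab features.

    Assumes the X and Y channels come first along the last axis; the
    exact layout and scale factor follow the SSN code, not this sketch.
    """
    out = xylab.copy()
    out[..., :2] *= pos_scale
    return out

# Usage example with a stand-in image:
img = np.random.rand(480, 320, 3)
img = crop_to_valid_size(img)  # -> shape (473, 313, 3)

# Both sizes in the question already satisfy the constraint:
# 321 = 8 * 40 + 1 and 481 = 8 * 60 + 1, and so do the 201 x 201
# training patches (201 = 8 * 25 + 1), so no resizing is required.
```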