naoto0804/SynShadow

Matte images fixed size

MohammedElGharawy opened this issue · 7 comments

Hello, thanks for sharing your work!

Is it possible to change the size of the matte images?

Or are they restricted to 256 * 256 by design?

Here's the exact command I used:
!python test.py --dataset_mode demo --dataset_root datapath --mask_to_G precomp_mask --mask_to_G_thresh 0.95

If you want to train the network at a different resolution: since SP+M Net is mostly fully convolutional, you should only need to modify a single part here, where the feature map is 'flattened' and its size therefore depends on the input resolution.
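
To illustrate the point about the flatten step, here is a minimal sketch (with made-up layer names and sizes, not the actual SP+M Net code) of why a flatten + fully-connected layer ties an otherwise fully convolutional network to one input resolution:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Hypothetical encoder: convolutions are resolution-agnostic,
    but the Linear layer after flattening is not."""

    def __init__(self, input_size=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1),   # H/2
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # H/4
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # H/8
            nn.ReLU(inplace=True),
        )
        # The flattened feature size depends on the input resolution,
        # so this is the single part that must change for a new size.
        feat_hw = input_size // 8
        self.fc = nn.Linear(64 * feat_hw * feat_hw, 128)

    def forward(self, x):
        h = self.conv(x)
        return self.fc(h.flatten(start_dim=1))

# The default only accepts 256x256; a 512x512 input needs a rebuilt fc layer.
enc = Encoder(input_size=512)
out = enc(torch.randn(1, 3, 512, 512))
print(out.shape)  # torch.Size([1, 128])
```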

I just used the demo dataset feature to test on multiple images I had at a different resolution (1520 * 912), but all of the outputs were fixed at 256 * 256.
Do I need to retrain the network to get output at a different resolution?

My comment above may be wrong. Sorry for my misunderstanding.

As the original paper suggests, the easiest way to process arbitrary-size input images is to 'interpolate the shadow matte', though the performance may be a bit worse than with a model tuned for a specific resolution.
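
A rough sketch of that matte-interpolation strategy might look like the following. Everything here is an assumption rather than this repo's actual API: `model`, its output format, and the final compositing step would all need to be adapted to how the matte is actually defined.

```python
import torch
import torch.nn.functional as F

def removal_at_full_resolution(model, image):
    """Hypothetical helper: run the pretrained model at its native
    256x256 resolution, then upsample the predicted matte back to the
    original image size. `image` is assumed to be (1, 3, H, W) in [0, 1]."""
    _, _, h, w = image.shape

    # Downsample the input to the resolution the model was trained on.
    small = F.interpolate(image, size=(256, 256), mode='bilinear',
                          align_corners=False)
    with torch.no_grad():
        matte_small = model(small)  # assumed to return a (1, 1, 256, 256) matte

    # Upsample the matte, not the image, back to the original resolution.
    matte = F.interpolate(matte_small, size=(h, w), mode='bilinear',
                          align_corners=False)

    # Apply the matte to the full-resolution input; the exact compositing
    # formula depends on how the paper defines the shadow matte.
    return image * matte
```

The key idea is that the matte is smooth enough to survive bilinear upsampling, so the model itself never has to see the full-resolution image.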

I see, but is there a way in the existing code in this repo to change the output size of the shadow-free image and the matte?

Basically, I want to use the pretrained model as-is and get output at the same resolution as the input.

This part should be what you want; you might want to modify the dataloader part to keep the given image at its original resolution in PyTorch tensor format.
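
For reference, the dataloader change being suggested could look roughly like this; the class and the details of the existing transform pipeline are illustrative, not the repo's actual code:

```python
import torch
from PIL import Image
import torchvision.transforms.functional as TF

class DemoDataset(torch.utils.data.Dataset):
    """Hypothetical demo dataset that skips the fixed 256x256 resize
    and returns each image at its original resolution."""

    def __init__(self, paths):
        self.paths = paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert('RGB')
        # Instead of e.g. TF.resize(img, (256, 256)), keep the original size.
        return TF.to_tensor(img)  # (3, H, W), where H and W vary per image
```

Note that with variable-size images you would need batch_size=1 (or a custom collate function), since PyTorch's default collation cannot stack tensors of different shapes.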