FangjinhuaWang/PatchmatchNet

Questions about how the offsets in Adaptive Propagation are learned

Closed this issue · 1 comments

Hi, thank you for your excellent work. I have some questions about how the offsets in Adaptive Propagation are learned.

As mentioned in your paper, "We base our implementation of the adaptive propagation on Deformable Convolution Networks". I can see that you use a conv2d to output offsets and then use these offsets to sample depths. The question is that you did not use an implementation similar to Deformable Convolution Networks. To my understanding, this may cause a backpropagation problem: the sampling operation cannot be backpropagated through the indices, so there would be no gradients with respect to the offsets, and the conv parameters producing the offsets could not be learned during training. Deformable Convolution Networks writes custom CUDA kernels to handle this backpropagation. So I am wondering whether your current network can actually learn the offsets during training.

Thanks a lot.

Hi, after searching I found that `torch.nn.functional.grid_sample()` can compute the gradients w.r.t. the grid; it was originally introduced for Spatial Transformer Networks. Sorry for my mistake.
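This is easy to verify with a minimal sketch (the shapes here are illustrative, not taken from the PatchmatchNet code): `grid_sample` is differentiable w.r.t. both the input and the sampling grid, so gradients flow back to any conv layer that produces the offsets upstream of the grid.

```python
import torch
import torch.nn.functional as F

# Feature/depth map to sample from: (N, C, H, W)
depth = torch.rand(1, 1, 8, 8)

# Sampling grid in normalized coordinates [-1, 1]: (N, H_out, W_out, 2).
# requires_grad_ marks it as the leaf we want gradients for.
grid = (torch.rand(1, 4, 4, 2) * 2 - 1).requires_grad_()

# Bilinear sampling, as used for deformable-style offset sampling.
sampled = F.grid_sample(depth, grid, mode='bilinear', align_corners=True)

# Backpropagate an arbitrary scalar loss.
sampled.sum().backward()

# Gradients w.r.t. the grid exist, so offset-producing layers are trainable.
print(grid.grad is not None)   # True
print(grid.grad.shape)         # torch.Size([1, 4, 4, 2])
```

Since the bilinear interpolation weights are themselves functions of the (fractional) grid coordinates, the operation is differentiable in the coordinates, unlike integer index lookup, which is what makes custom CUDA kernels unnecessary here.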