ubc-vision/COTR

How is the warped image in Figure 9 generated?

Closed this issue · 4 comments

Hi, thanks for the great work! I'm curious how you generate the warped image in Figure 9 from the dense flow. If I understand correctly, you take a pixel coordinate (x, y) in img1, get its corresponding coordinate (x', y') in img2, and then copy the RGB value at (x, y) to (x', y') in img2, repeating this for every coordinate in img1. Am I correct? Or is there a more efficient way of doing it (like the one you mentioned in #28)?
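In code, the naive forward copy I have in mind would look something like this (just a sketch with my own variable names, assuming a dense correspondence map `corr` where `corr[y, x] = (x', y')` in pixel coordinates):

```python
import numpy as np

def forward_warp(img1, corr):
    """Copy each pixel of img1 to its corresponding location in img2's frame.

    img1: (H, W, 3) uint8 image.
    corr: (H, W, 2) array; corr[y, x] = (x', y') target coordinates in pixels.
    Returns a warped canvas the same size as img1 (unmapped pixels stay black).
    """
    h, w = img1.shape[:2]
    warped = np.zeros_like(img1)
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.round(corr[..., 0]).astype(int)
    ty = np.round(corr[..., 1]).astype(int)
    valid = (tx >= 0) & (tx < w) & (ty >= 0) & (ty < h)
    warped[ty[valid], tx[valid]] = img1[ys[valid], xs[valid]]
    return warped
```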

Hi, I think the dense flow in Fig. 9 is generated at 256x256 resolution, which is the coarsest output from COTR.
You can obtain the flow and the resampled images from this line:

corr_a, con_a, resample_a, corr_b, con_b, resample_b = cotr_flow(self.model,
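If it helps, here is a rough sketch of how such a correspondence map can be turned into a resampled image by backward warping (my own illustration, assuming `corr_a[y, x]` stores the matching (x', y') in the other image in pixel coordinates, which may not be the exact convention used in the code):

```python
import cv2
import numpy as np

def backward_warp(img_b, corr_a):
    """Resample img_b into the frame of img_a.

    corr_a: (H, W, 2) float array; corr_a[y, x] = (x', y') in img_b (pixels).
    Every output pixel gets a value by sampling img_b, so there are no holes,
    unlike the forward copy described above.
    """
    map_x = corr_a[..., 0].astype(np.float32)
    map_y = corr_a[..., 1].astype(np.float32)
    return cv2.remap(img_b, map_x, map_y, cv2.INTER_LINEAR)
```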

Thanks for the quick reply, and correct me if I'm wrong: from this line, it seems that the input query coordinate for your model can be in either I or I'? If that's true, the warping is the same as what people do in, e.g., optical flow, and I understand the process.

Yes, the query coordinate can be either in I or I'.

Thanks for the reply! That helps a lot.