Guidance Needed: Mapping Single Points Using Homography for Images of Different Sizes
sbmalik opened this issue · 6 comments
I've been exploring your library and am genuinely impressed with its performance. As I'm relatively new to point/line mapping, I'm seeking guidance on transforming single points after estimating the homography matrix. Using the method:
```python
H = hest.ransac_point_line_homography(matched_kps0, matched_kps1, line_seg0, line_seg1, tol_px, False, [], [])
```
I obtained the homography matrix H. My intention was to use this matrix to map individual points from img0 to img1. To achieve this, I utilized:
```python
mapped_point = cv2.perspectiveTransform(np.array([[[x, y]]], dtype='float32'), H)
x_mapped, y_mapped = mapped_point[0][0]
```
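For context, here is my understanding of what the point mapping should compute, written out in plain NumPy with a hand-built homography (the values are hypothetical, just a scale-by-2 plus a translation):

```python
import numpy as np

# Hypothetical homography: scale by 2, translate by (5, 3).
H = np.array([[2.0, 0.0, 5.0],
              [0.0, 2.0, 3.0],
              [0.0, 0.0, 1.0]])

def map_point(H, x, y):
    # Homogeneous multiply, then divide by the last coordinate.
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]

print(map_point(H, 10.0, 20.0))  # (25.0, 43.0)
```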
However, the results from this transformation were not as expected. Notably, when I use the same homography matrix `H` with `cv2.warpPerspective()` to warp the entire image, the results are accurate and as anticipated. Furthermore, the images I'm working with differ in size:
- img0: (1080, 1920)
- img1: (2020, 3584)
Could the differing image dimensions influence the transformation results? If so, how should I appropriately handle this scenario?
Any insights or guidance on whether I need to adopt a different approach or utilize another function would be greatly appreciated.
Thank you for your assistance!
Hi,
I suspect that `cv2.perspectiveTransform` uses a different convention for the homography and/or keypoint coordinates than we do. Instead, you can use our provided function:
`warp_points_torch` in GlueStick/gluestick/geometry.py (line 51 at commit 7120c58)
It takes torch tensors as input, so you probably need to convert the homography to a `torch.Tensor` first and change its format, then use it as follows:

```python
H = torch.tensor(H, dtype=torch.float)
H = (H / H[-1, -1]).reshape(8)
points = torch.tensor([[[x, y]]], dtype=torch.float)
warped_points = warp_points_torch(points, H)
```
In case the result still seems wrong, it might be because the homography was inverted, in which case you can pass the option `inverse=False` to `warp_points_torch`.
@rpautrat Thank you for the detailed response. I tested it, but the `H` returned by

```python
H = hest.ransac_point_line_homography(matched_kps0, matched_kps1, line_seg0, line_seg1, tol_px, False, [], [])
```

is a 3x3 matrix, so it can't be reshaped with

```python
H = (H / H[-1, -1]).reshape(8)
# ERROR:
# RuntimeError: shape '[8]' is invalid for input of size 9
```
If I reshape it to 9 instead:

```python
H = (H / H[-1, -1]).reshape(9)
# ERROR: raised inside the relevant function
# geometry.py, line 67, in warp_points_torch
#   H_mat = torch.cat([H, torch.ones_like(H[..., :1])], axis=-1).reshape(out_shape)
# RuntimeError: shape '[3, 3]' is invalid for input of size 10
```
Could you guide me on how to resolve this?
Sorry, I forgot one part. You should reshape it like this:

```python
H = H.reshape(9)[:-1] / H[-1, -1]
```
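Putting the pieces together, a minimal end-to-end sketch of the conversion (the homography values are made up, and `warp_points` below is a stand-in reimplementation of the convention described above, not the library's actual `warp_points_torch`):

```python
import numpy as np
import torch

def warp_points(points, h8):
    # Stand-in for the library's point warping: rebuild the 3x3 matrix
    # from the 8-vector (an implicit 1 as the last entry), then apply it
    # in homogeneous coordinates.
    H = torch.cat([h8, torch.ones_like(h8[:1])]).reshape(3, 3)
    pts_h = torch.cat([points, torch.ones_like(points[..., :1])], dim=-1)
    warped = pts_h @ H.transpose(-2, -1)
    return warped[..., :2] / warped[..., 2:]

# Hypothetical 3x3 homography: scale by 2, translate by (5, 3).
H = np.array([[2.0, 0.0, 5.0],
              [0.0, 2.0, 3.0],
              [0.0, 0.0, 1.0]])
H_t = torch.tensor(H, dtype=torch.float)
h8 = H_t.reshape(9)[:-1] / H_t[-1, -1]  # the corrected 8-vector format
points = torch.tensor([[10.0, 20.0]])
print(warp_points(points, h8))  # tensor([[25., 43.]])
```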
That works, and it's quite accurate. Thank you for the great work ❤️
Great, happy to hear that!