ubc-vision/COTR

How can I perform reconstruction?

Closed this issue · 7 comments

Hello, I want to know how I can put your correspondences into COLMAP. Thanks!

COLMAP requires repeatable keypoints across images, so I'd use the code example in guided matching to obtain matches with consistent keypoints.
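One way to feed such matches to COLMAP is to deduplicate the correspondence endpoints into per-image keypoint lists and export index pairs in COLMAP's raw match-list text format. A minimal sketch, assuming correspondences come as an (N, 4) array of `[x1, y1, x2, y2]` rows; the function names `corrs_to_colmap` and `write_match_list` are my own, and the exact import format should be double-checked against the COLMAP documentation for your version:

```python
import numpy as np

def corrs_to_colmap(corrs):
    """Turn (N, 4) correspondences [x1, y1, x2, y2] into unique
    keypoints per image plus keypoint-index match pairs."""
    corrs = np.asarray(corrs, dtype=float)
    # Deduplicate endpoints so the same pixel becomes one keypoint;
    # return_inverse maps each correspondence back to its keypoint index.
    kps_a, inv_a = np.unique(corrs[:, :2], axis=0, return_inverse=True)
    kps_b, inv_b = np.unique(corrs[:, 2:], axis=0, return_inverse=True)
    matches = np.stack([inv_a, inv_b], axis=1)
    return kps_a, kps_b, matches

def write_match_list(path, name_a, name_b, matches):
    """Write one image pair in COLMAP's raw match-list text layout:
    a header line with the two image names, then one index pair per line,
    terminated by a blank line."""
    with open(path, "w") as f:
        f.write(f"{name_a} {name_b}\n")
        for i, j in matches:
            f.write(f"{i} {j}\n")
        f.write("\n")
```

The resulting file can then be passed to `colmap matches_importer`, after the corresponding keypoints have been imported into the database.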

Thanks! But I still have a problem: how can I speed up the matching process? It is too slow, despite tuning the relevant parameters.

Yes, we are also working on the speed problem, but it is still in progress.

Sorry to bother you again: how did you generate the .npy file?

Which .npy do you mean?
If you are referring to the camera poses, we obtained them from the ground truth of the MegaDepth dataset.

But I still have some questions: the camera poses I recovered using COLMAP seem to differ in format from the ones you provided. For example, how do I get 'cam_center'? Do you have any advice?

I see. In the script, we use 'cam_center' (the camera center in world space), 'intrinsic' (the camera intrinsic matrix), and 'c2w' (the camera-to-world matrix).
We read the camera pose/intrinsics with the code here: https://github.com/ubc-vision/COTR/blob/master/COTR/datasets/colmap_helper.py#L140-L142
Once the cameras are read, the camera pose is encapsulated by https://github.com/ubc-vision/COTR/blob/master/COTR/cameras/camera_pose.py, and you can read the camera center in world space via the property camera_center_in_world.
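For reference, COLMAP stores poses as world-to-camera transforms (a quaternion `qvec` and translation `tvec` per image, with x_cam = R x_world + t), so the camera center in world space is C = -Rᵀ t. A minimal standalone sketch of that conversion (independent of the COTR helper classes, which wrap the same math):

```python
import numpy as np

def qvec_to_rotmat(qvec):
    """Convert a COLMAP quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = qvec
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
    ])

def camera_center_in_world(qvec, tvec):
    """COLMAP's (qvec, tvec) is world-to-camera, so the camera
    center in world coordinates is C = -R^T t."""
    R = qvec_to_rotmat(np.asarray(qvec, dtype=float))
    return -R.T @ np.asarray(tvec, dtype=float)
```

Equivalently, if you first build the camera-to-world matrix `c2w` by inverting the world-to-camera transform, the camera center is simply its translation column, `c2w[:3, 3]`.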