eric-yyjau/pytorch-superpoint

Training MagicPoint on MS-COCO & how did you design Homographic Adaptation?

TOMOKI953 opened this issue · 9 comments

Hi, @eric-yyjau @saunair
Thank you for your great work.

Question 1:
I was not able to find the step for training MagicPoint on MS-COCO in your repo, like in Mr. rpautrat's code (https://github.com/rpautrat/SuperPoint).

If you forgot to mention it, or the code is not yet on your GitHub, could you reply with the procedure or the PyTorch code?

Question 2:
I was not able to understand the following process.

def combine_heatmap(heatmap, inv_homographies, mask_2D, device="cpu"):
    ## zero out responses outside the valid region of each warp
    heatmap = heatmap * mask_2D

    ## warp each heatmap back to the original image frame
    heatmap = inv_warp_image_batch(
        heatmap, inv_homographies[0, :, :, :], device=device, mode="bilinear"
    )

    ## warp the masks the same way to count valid views per pixel
    mask_2D = inv_warp_image_batch(
        mask_2D, inv_homographies[0, :, :, :], device=device, mode="bilinear"
    )
    ## average: sum of warped heatmaps divided by the number of valid views
    heatmap = torch.sum(heatmap, dim=0)
    mask_2D = torch.sum(mask_2D, dim=0)
    return heatmap / mask_2D

This is a per-pixel average over the warped views, not a logical sum (OR).
I believe it would be better to combine the heatmaps with an OR-like operation.
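To illustrate the difference at a single pixel (a toy sketch using NumPy rather than the repo's PyTorch code): suppose the pixel is inside the valid region of all three warped views, but the detector fires on it in only one view. Averaging dilutes the score by the number of views, while a max (OR-like) rule keeps it:

```python
import numpy as np

# three warped heatmap values at one pixel: detected only in view 0
heatmap = np.array([0.9, 0.0, 0.0])
# the pixel is inside the valid region of all three warps
mask = np.array([1.0, 1.0, 1.0])

# current rule: sum of masked heatmaps divided by sum of masks -> average
avg = (heatmap * mask).sum() / mask.sum()
# OR-like rule: per-view maximum
orr = (heatmap * mask).max()

print(avg)  # ≈ 0.3 (diluted by the two views that missed the point)
print(orr)  # 0.9 (detection kept)
```

With a fixed detection threshold (say 0.5), the averaged score would drop below it while the max survives, which is the concern raised above.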

Hello, I think you should read the README.md carefully; it has the step you mentioned. And I have a question: did you spend 2 days on training for step 1? My GPU load is only 20%, and training takes a long time.

@Taogonglin

Hello, I think you should read the README.md carefully; it has the step you mentioned.

Thank you for replying.
But where does the README.md mention training MagicPoint on MS-COCO? I could not find it.
Could you quote the relevant text from README.md?

And I have a question: did you spend 2 days on training for step 1? My GPU load is only 20%, and training takes a long time.

In my case, it took almost one day.

@Taogonglin
If you don't mind, could you tell me how to train MagicPoint on MS-COCO, or share the code?

@Taogonglin If you don't mind, could you tell me how to train MagicPoint on MS-COCO, or share the code?

Sorry, I misunderstood; it's true that there is no way (or code) to train MagicPoint on MS-COCO here. But I think that step is not necessary. Maybe you can read the paper to find out what the function of MagicPoint is: I think it is just used to generate the pseudo ground truth on COCO, which is then used to train SuperPoint. I'm just an undergraduate student, so I may be wrong.

Sorry, I misunderstood; it's true that there is no way (or code) to train MagicPoint on MS-COCO here. But I think that step is not necessary. Maybe you can read the paper to find out what the function of MagicPoint is: I think it is just used to generate the pseudo ground truth on COCO, which is then used to train SuperPoint. I'm just an undergraduate student, so I may be wrong.

Never mind. Thank you for the quick reply!
On page 6 and in Figure 7 of the paper, the authors say: "We repeat the Homographic Adaptation a second time, using the resulting model trained from the first round of Homographic Adaptation" (I think this step corresponds to step 2 in this GitHub).

Namely, repeating Homographic Adaptation is important.
The distribution of MagicPoint detections gets wider by repeating Homographic Adaptation and retraining MagicPoint on new images.
Also, Figure 7 shows that the repeatability of the detected points improves empirically.
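The iterative scheme quoted above can be sketched as follows (a minimal sketch; `train_magicpoint` and `generate_pseudo_labels` are hypothetical placeholders standing in for this repo's training and export steps, not its actual API):

```python
def homographic_adaptation_rounds(train_magicpoint, generate_pseudo_labels,
                                  images, num_rounds=2):
    # Round 0: base MagicPoint trained on Synthetic Shapes (no real-image labels yet).
    model = train_magicpoint(images=None, labels=None)
    for _ in range(num_rounds):
        # Label the real images by aggregating detections over many random homographies.
        labels = generate_pseudo_labels(model, images)
        # Retrain the detector on the new pseudo ground truth.
        model = train_magicpoint(images=images, labels=labels)
    return model
```

With `num_rounds=2` this matches the paper's "repeat the Homographic Adaptation a second time": each round widens the detection distribution the next round trains on.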

Sorry, I misunderstood; it's true that there is no way (or code) to train MagicPoint on MS-COCO here. But I think that step is not necessary. Maybe you can read the paper to find out what the function of MagicPoint is: I think it is just used to generate the pseudo ground truth on COCO, which is then used to train SuperPoint. I'm just an undergraduate student, so I may be wrong.

Never mind. Thank you for the quick reply! On page 6 and in Figure 7 of the paper, the authors say: "We repeat the Homographic Adaptation a second time, using the resulting model trained from the first round of Homographic Adaptation" (I think this step corresponds to step 2 in this GitHub).

Namely, repeating Homographic Adaptation is important. The distribution of MagicPoint detections gets wider by repeating Homographic Adaptation and retraining MagicPoint on new images. Also, Figure 7 shows that the repeatability of the detected points improves empirically.

I think you can just run this code and look at the data used for evaluation. I also have a problem with the training time: the README.md says it takes about 8 hours, but step 1 took me 20 hours, and I use a 2080 Ti. Could you tell me why?

@Taogonglin

I think you can just run this code and look at the data used for evaluation. I also have a problem with the training time: the README.md says it takes about 8 hours, but step 1 took me 20 hours, and I use a 2080 Ti. Could you tell me why?

When I tried step 1, I also spent more than approximately 20 hours, but I used a GTX 1070 ^_^.
To be honest, I do not know the reason; sorry.
If you are running something else at the same time, it may run slower.

@Taogonglin

I think you can just run this code and look at the data used for evaluation. I also have a problem with the training time: the README.md says it takes about 8 hours, but step 1 took me 20 hours, and I use a 2080 Ti. Could you tell me why?

When I tried step 1, I also spent more than approximately 20 hours, but I used a GTX 1070 ^_^. To be honest, I do not know the reason; sorry. If you are running something else at the same time, it may run slower.

Thank you!

In the original paper, Homographic Adaptation is used to generate more points on real images. My question is whether this:

def combine_heatmap(heatmap, inv_homographies, mask_2D, device="cpu"):
    ## zero out responses outside the valid region of each warp
    heatmap = heatmap * mask_2D

    ## warp each heatmap back to the original image frame
    heatmap = inv_warp_image_batch(
        heatmap, inv_homographies[0, :, :, :], device=device, mode="bilinear"
    )

    ## warp the masks the same way to count valid views per pixel
    mask_2D = inv_warp_image_batch(
        mask_2D, inv_homographies[0, :, :, :], device=device, mode="bilinear"
    )
    ## average: sum of warped heatmaps divided by the number of valid views
    heatmap = torch.sum(heatmap, dim=0)
    mask_2D = torch.sum(mask_2D, dim=0)
    return heatmap / mask_2D

should instead be the following:

def combine_heatmap_new(heatmap, inv_homographies, mask_2D, device="cpu"):
    ## zero out responses outside the valid region of each warp
    heatmap = heatmap * mask_2D

    ## warp each heatmap back to the original image frame
    heatmap = inv_warp_image_batch(
        heatmap, inv_homographies[0, :, :, :], device=device, mode="bilinear"
    )
    ## OR-like aggregation: per-pixel maximum over the warped views
    heatmap = torch.max(heatmap, dim=0)[0]
    return heatmap