ducha-aiki/affnet

About train_OriNet_test_on_graffity.py

Closed this issue · 1 comment

I have a question about train_OriNet_test_on_graffity.py.
It's about the test function.
I think that for "input_img_fname1" and "input_img_fname2", the patch around the same 3D point is extracted as a 32×32 image and fed to the models (AffNet, OriNet).
However, I don’t know how the same 3D points are identified and extracted.
Perhaps you are doing it in lines 116 and 118 of https://github.com/ducha-aiki/affnet/blob/master/SparseImgRepresenter.py — can you explain the mechanism in detail?

For the graffiti sequence it is easy: you just reproject the keypoints into the other image using the ground-truth homography
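A minimal sketch of what that reprojection looks like (the function name and shapes here are illustrative, not the repo's actual API): a keypoint (x, y) in image 1 is lifted to homogeneous coordinates, multiplied by the 3×3 homography H, and dehomogenized to get the corresponding location in image 2.

```python
import numpy as np

def reproject_points(H, pts):
    """Map 2D points from image 1 into image 2 via a 3x3 homography H.

    pts: (N, 2) array of (x, y) pixel coordinates.
    Returns an (N, 2) array of reprojected coordinates.
    """
    pts = np.asarray(pts, dtype=np.float64)
    # Lift to homogeneous coordinates: (x, y) -> (x, y, 1)
    ones = np.ones((pts.shape[0], 1))
    pts_h = np.hstack([pts, ones])           # (N, 3)
    proj = pts_h @ H.T                       # apply H to every point
    # Dehomogenize: divide by the third coordinate
    return proj[:, :2] / proj[:, 2:3]

# Example: a pure-translation homography shifts every keypoint by (5, -3)
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
print(reproject_points(H, [[10.0, 20.0]]))  # → [[15. 17.]]
```

Since the Oxford graffiti pairs ship with ground-truth homographies, the reprojected point directly gives the pixel in the second image around which the corresponding patch can be cropped.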