BUG for Affine transformation in the coarse alignment
rcg12387 opened this issue · 7 comments
Hi. Nice work.
If I change coarseModel to 'Affine', it causes an error:
# coarseModel = CoarseAlign(nbScale, coarseIter, coarsetolerance, 'Homography', minSize, 1, True, imageNet, scaleR)
coarseModel = CoarseAlign(nbScale, coarseIter, coarsetolerance, 'Affine', minSize, 1, True, imageNet, scaleR)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-205-fc77fdfc4461> in <module>
11 warper = tgm.HomographyWarper(I2h, I2w)
12
---> 13 bestPara, InlierMask = coarseModel.getCoarse(np.zeros((I2h, I2w)))
14 bestPara = torch.from_numpy(bestPara).unsqueeze(0).cuda()
~/DeepLearning/ImageAlignment/RANSAC-Flow/quick_start/coarseAlignFeatMatch.py in getCoarse(self, Mt)
~/DeepLearning/ImageAlignment/RANSAC-Flow/utils/outil.py in RANSAC(nbIter, match1, match2, tolerance, nbPoint, Transform)
124 samples[:, 0] == samples[:, 1],
125 samples[:, 0] == samples[:, 2],
--> 126 samples[:, 0] == samples[:, 3],
127 samples[:, 1] == samples[:, 2],
128 samples[:, 1] == samples[:, 3],
IndexError: index 3 is out of bounds for dimension 1 with size 3
You left this comment in the RANSAC function of outil.py:
# HARDCODED FOR HOMOGRPAHIES FOR NOW
conditions = torch.stack([
    samples[:, 0] == samples[:, 1],
    samples[:, 0] == samples[:, 2],
    samples[:, 0] == samples[:, 3],
    samples[:, 1] == samples[:, 2],
    samples[:, 1] == samples[:, 3],
    samples[:, 2] == samples[:, 3]
], dim=1)  # N * nb_cond
It would be great if you could fix your code so that the Affine transformation works as well.
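For reference, here is a minimal sketch (my own naming, not your repository's code) of how that duplicate-index check could be written for any sample size, assuming samples is an N x nbPoint tensor of point indices as in RANSAC:

from itertools import combinations
import torch

def degenerate_mask(samples):
    # Flag samples that reuse the same point index, for any sample size
    # (3 points for Affine, 4 for Homography).
    nb_point = samples.shape[1]
    conditions = torch.stack(
        [samples[:, i] == samples[:, j] for i, j in combinations(range(nb_point), 2)],
        dim=1)  # N * nb_cond
    return conditions.any(dim=1)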
Hello.
I have solved the problem by myself.
Hi,
Sorry for the late reply.
Very cool :)
If you want, you could make a pull request and I can double-check what you have implemented.
Best
OK.
I created a new branch "fix_affine" and committed it in my local environment, but I cannot push it without your permission:
remote: Permission to XiSHEN0220/RANSAC-Flow.git denied to rcg12387.
Any suggestions?
I forked your repo, made the new commit there, and now I can open a pull request.
Please check.
Hi,
I quickly looked into your code.
I think you can directly compute the Affine transformation in CUDA.
It is the solution of a least-squares problem; see here if it is not clear.
The code will be much simpler and faster.
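As a rough sketch of what I mean (it mirrors the numpy Affine() quoted above but stays on the GPU; it assumes a recent PyTorch with torch.linalg.lstsq and is only an illustration, not a tested patch):

import torch

def affine_torch(X, Y):
    # X, Y: (n, 3) homogeneous point correspondences on the GPU.
    # Solve Y @ A ~= X[:, :2] in the least-squares sense, then build the 3x3 matrix.
    A = torch.linalg.lstsq(Y, X[:, :2]).solution       # (3, 2)
    H21 = torch.eye(3, dtype=X.dtype, device=X.device)
    H21[:2, :] = A.T                                    # 2x3 affine part
    return H21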
Thank you.
Of course, I know the affine transformation can be obtained by least squares.
However, the issue is not about which method is used to compute the affine transformation. The issue is that your code crashes when coarseModel is changed to use "Affine":
# coarseModel = CoarseAlign(nbScale, coarseIter, coarsetolerance, 'Homography', minSize, 1, True, imageNet, scaleR)
coarseModel = CoarseAlign(nbScale, coarseIter, coarsetolerance, 'Affine', minSize, 1, True, imageNet, scaleR)
Is there any plan to fix your code?
I think you can directly compute the Affine transformation in cuda.
You're right. I will update the Affine function in my local branch. Why don't you also update the Homography function of outil.py to work in CUDA?
I have tried a CUDA version of the Affine function, but it is much slower than the CPU one. You can refer to this.
I think you can directly compute the Affine transformation in cuda.
This means that your suggestion may not be right.
And below is your code for the Affine function:
def Affine(X, Y):
    H21 = np.linalg.lstsq(Y, X[:, :2])[0]
    H21 = H21.T
    H21 = np.array([[H21[0, 0], H21[0, 1], H21[0, 2]],
                    [H21[1, 0], H21[1, 1], H21[1, 2]],
                    [0, 0, 1]])
    return H21
This also runs on the CPU, even though the input parameters X, Y are CUDA tensors. Simply converting to CUDA tensors does not work with the subsequent logic either, because your code assumes X, Y are batches of triples or quadruples of point pairs.
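If it helps, here is a hypothetical sketch of what a batched GPU variant could look like under that assumption (Xs, Ys as (B, 3, 3) batches of homogeneous triples; torch.linalg.lstsq broadcasts over the batch dimension). This does not match your exact interface, it is just an illustration:

import torch

def affine_batched(Xs, Ys):
    # Xs, Ys: (B, 3, 3) batches of homogeneous point triples on the GPU.
    # With exactly three points the system is square, so this is an exact
    # solve whenever the sample is non-degenerate.
    B = Xs.shape[0]
    A = torch.linalg.lstsq(Ys, Xs[..., :2]).solution    # (B, 3, 2)
    H21 = torch.eye(3, dtype=Xs.dtype, device=Xs.device).repeat(B, 1, 1)
    H21[:, :2, :] = A.transpose(1, 2)                    # (B, 2, 3) affine parts
    return H21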
IMO, solving via SVD is mathematically equivalent to least squares for obtaining the affine transformation matrix. I mean the mathematical result, not the computational cost.
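As a quick self-contained check of that point (the random data is purely illustrative): the least-squares solution coincides with the SVD-based pseudoinverse solution.

import numpy as np

rng = np.random.default_rng(0)
Y = np.hstack([rng.standard_normal((10, 2)), np.ones((10, 1))])  # homogeneous source points
X = np.hstack([rng.standard_normal((10, 2)), np.ones((10, 1))])  # homogeneous target points

H_lstsq = np.linalg.lstsq(Y, X[:, :2], rcond=None)[0]  # least squares
H_svd = np.linalg.pinv(Y) @ X[:, :2]                   # SVD pseudoinverse
print(np.allclose(H_lstsq, H_svd))                     # True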
Anyway, it would be good to fix your code so that the affine transformation can be used as well.