Why doesn't the GMM network match the original paper?
Closed this issue · 0 comments
Dear Mr @sergeywong ,
As far as I understand, your network reimplements the GMM from this paper: https://arxiv.org/pdf/1703.05593.pdf . In the original GMM paper the architecture was:
Stage 1:
ImageA -> featureExtraction -> fA
ImageB -> featureExtraction -> fB
then matching(fA, fB) -> AffineRegression(CAB) -> ThetaAff, then warp ImageA to get ImageWarpA.
Stage 2:
Run featureExtraction again on ImageWarpA, then matching(fImageWarpA, fB) -> TPS Regression -> ThetaTPS.
Here matching is a correlation layer plus L2-normalization.
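For reference, the matching step I mean can be sketched like this (a minimal numpy sketch of an L2-normalized correlation layer as described in the paper, not your actual implementation; shapes and function names are my own):

```python
import numpy as np

def l2norm(f, eps=1e-6):
    # Normalize each spatial feature vector over the channel axis,
    # so correlations become cosine similarities.
    return f / (np.sqrt((f ** 2).sum(axis=0, keepdims=True)) + eps)

def correlation(fA, fB):
    # fA, fB: feature maps of shape (C, H, W).
    # Returns a (H*W, H, W) tensor holding the dot product between
    # every spatial location of fA and every location of fB.
    C, H, W = fA.shape
    a = fA.reshape(C, H * W)   # (C, HW)
    b = fB.reshape(C, H * W)   # (C, HW)
    corr = a.T @ b             # (HW, HW): all pairwise similarities
    return corr.reshape(H * W, H, W)

rng = np.random.default_rng(0)
fA = l2norm(rng.standard_normal((8, 4, 4)))
fB = l2norm(rng.standard_normal((8, 4, 4)))
c = correlation(fA, fB)        # shape (16, 4, 4), fed to the regressor
```

With L2-normalized features, correlating a map with itself gives 1.0 at each matching location, which is a quick sanity check.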
However, in your network you implemented it like this (please see the attachment).
Can you please explain why that is, or point out what I am missing in the paper?
Thank you for your time.
Cheers!