what does low_res_certainty do in dkm.py?
Moreland-cas opened this issue · 3 comments
Great work!
I wonder what is the effect of
"
low_res_certainty = factor * low_res_certainty * (low_res_certainty < cert_clamp)
...
dense_certainty = dense_certainty - low_res_certainty
" in dkm/models/dkm.py?
Also, shouldn't this
"
query_coords = torch.meshgrid(
(
torch.linspace(-1 + 1 / hs, 1 - 1 / hs, hs, device=device),
torch.linspace(-1 + 1 / ws, 1 - 1 / ws, ws, device=device),
)
)
" be
"
query_coords = torch.meshgrid(
(
torch.linspace(-1 + 1 / (2 * hs), 1 - 1 / (2 * hs), hs, device=device),
torch.linspace(-1 + 1 / (2 * ws), 1 - 1 / (2 * ws), ws, device=device),
)
)
"?
Thanks!
The first is a heuristic: we find that scale 16 is overly uncertain, so we optionally just remove some of that uncertainty.
Second, if we take an extreme case like hs=2, we get points at -0.5 and 0.5. I think this is in line with "align_corners=False".
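For example, here is a quick standalone check (just an illustration, not code from the repo) that the original 1/hs spacing reproduces the pixel centres grid_sample assumes with align_corners=False:
"
import torch

# With hs = 2, the original 1/hs spacing puts the points on -0.5 and 0.5.
hs = 2
coords = torch.linspace(-1 + 1 / hs, 1 - 1 / hs, hs)
print(coords)  # tensor([-0.5000,  0.5000])

# These are exactly the pixel centres grid_sample uses in [-1, 1] with
# align_corners=False: the centre of pixel i is -1 + (2 * i + 1) / hs.
expected = -1 + (2 * torch.arange(hs) + 1) / hs
assert torch.allclose(coords, expected)
"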
Thanks for the quick reply!
I'm good with the second one now. Can you explain more about the first one, please? What do you mean by "overly uncertain", and how does the subtraction make it better?
Basically we train the confidence in our model to match the MVS dataset (MegaDepth). It seems like MegaDepth quite often fails to provide accurate depth, hence the model learns this bias. Basically, in the postprocessing we "soften" the uncertain regions to somewhat compensate for this. It's quite a heuristic approach and could be improved, I think :D
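Concretely, the softening does something like the sketch below. This is just an illustration: it treats the certainties as pre-sigmoid logits (negative = uncertain), and the values of factor and cert_clamp here are made up, not necessarily the ones in dkm.py.
"
import torch

# Illustrative values; the actual hyperparameters in dkm.py may differ.
factor, cert_clamp = 0.5, 0.0

# Certainty logits (pre-sigmoid); negative means uncertain.
low_res_certainty = torch.tensor([-4.0, -1.0, 2.0])  # coarse (scale-16) logits
dense_certainty = torch.tensor([-3.0, 0.5, 2.5])     # fine-scale logits

# Keep only the coarse logits below the clamp (the "overly uncertain"
# regions) and scale them down...
low_res_certainty = factor * low_res_certainty * (low_res_certainty < cert_clamp)

# ...then subtract them. Since those logits are negative, the subtraction
# raises the dense logits there, i.e. it softens the learned uncertainty bias.
dense_certainty = dense_certainty - low_res_certainty
print(dense_certainty)  # tensor([-1.0000, 1.0000, 2.5000])
"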