weiyithu/NerfingMVS

The sampling in the code is different from that in the paper

cwchenwang opened this issue · 3 comments

In the paper: t_n = D(1 − max(e, α_l)), t_f = D(1 + min(e, α_h))
In the code:

near = (depth_priors * (1 - torch.clamp(depth_confidences, min=near_bound, max=far_bound))).unsqueeze(1)
far = (depth_priors * (1 + torch.clamp(depth_confidences, min=near_bound, max=far_bound))).unsqueeze(1)

Why do you clamp the confidence in the code?

In the paper, we also clamp the confidence: α_l and α_h define the relative lower and upper bounds of the range. These bounds keep the samples from being over-concentrated or overly random, which realizes a trade-off between diversity and precision of the sampled points.

Thanks for your speedy response. I am still a bit confused. Do e and depth_confidences represent the same variable? If so, how can the two equations above be equivalent to the two below?

Ohhh, thank you very much! This is a mistake in the paper. We just want to clamp the confidence, keeping it from being too small or too large. The sampling in the code is correct.
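For clarity, here is a minimal per-pixel sketch of what the code's clamping does (matching the code, not the paper's mistaken equation). The `alpha_l`/`alpha_h` defaults below are illustrative placeholders, not the repository's actual `near_bound`/`far_bound` values:

```python
def sampling_bounds(depth_prior, confidence, alpha_l=0.1, alpha_h=0.3):
    """Widen a prior depth D into a [near, far] sampling range.

    The confidence e is clamped to [alpha_l, alpha_h] first, i.e.
    e' = min(max(e, alpha_l), alpha_h), then
    near = D * (1 - e'), far = D * (1 + e').
    """
    e = min(max(confidence, alpha_l), alpha_h)  # torch.clamp equivalent
    near = depth_prior * (1.0 - e)
    far = depth_prior * (1.0 + e)
    return near, far

# A very confident pixel (small e) still gets at least ±alpha_l * D of range,
# while an uncertain pixel's range is capped at ±alpha_h * D.
print(sampling_bounds(2.0, 0.05))  # e clamped up to 0.1
print(sampling_bounds(2.0, 0.50))  # e clamped down to 0.3
```

This makes the trade-off concrete: the lower bound preserves some sample diversity even where the prior is trusted, and the upper bound keeps sampling precise where the prior is unreliable.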