Hi, I am not sure why in NeuS the $\alpha_i$ values are computed differently in the code and in the paper. I know this repo is not from the original authors, but someone might have an idea.
In the NeuS paper, we find this expression (Eq. 13):

$$\alpha_i = \max\left(\frac{\Phi_s\big(f(\mathbf{p}(t_i))\big) - \Phi_s\big(f(\mathbf{p}(t_{i+1}))\big)}{\Phi_s\big(f(\mathbf{p}(t_i))\big)},\, 0\right),$$

where consecutive samples $t_i$, $t_{i+1}$ along a ray are used to obtain the $\alpha_i$ ($f$ is the SDF and $\Phi_s$ the sigmoid with sharpness $s$).
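For reference, here is a minimal sketch of that formulation for a single ray (my own code, not from the repo; `sdf_at_sections` and `inv_s` are names I made up, and I assume the SDF is queried directly at consecutive section points $t_i$ with $\Phi_s(x) = \operatorname{sigmoid}(s \cdot x)$):

```python
import torch

def alpha_from_section_sdfs(sdf_at_sections: torch.Tensor, inv_s: float) -> torch.Tensor:
    """Paper-style alpha_i from SDF values queried at consecutive section points of one ray."""
    cdf = torch.sigmoid(sdf_at_sections * inv_s)  # Phi_s(f(p(t_i)))
    # alpha_i = max((Phi(f_i) - Phi(f_{i+1})) / Phi(f_i), 0), with the same eps guard as the repo code
    return ((cdf[:-1] - cdf[1:] + 1e-5) / (cdf[:-1] + 1e-5)).clip(0.0, 1.0)
```

Note that this needs $n+1$ SDF queries for $n$ sections, one at each section boundary.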
In the code, we instead have:
```python
# torch and torch.nn.functional as F are imported at module level in the repo
def get_alpha(self, sdf, normal, dirs, dists):
    # inv_s is the learned sharpness s of the sigmoid Phi_s
    inv_s = self.variance(torch.zeros([1, 3]))[:, :1].clip(1e-6, 1e6)  # Single parameter
    inv_s = inv_s.expand(sdf.shape[0], 1)

    true_cos = (dirs * normal).sum(-1, keepdim=True)

    # "cos_anneal_ratio" grows from 0 to 1 in the beginning training iterations. The anneal strategy below makes
    # the cos value "not dead" at the beginning training iterations, for better convergence.
    iter_cos = -(F.relu(-true_cos * 0.5 + 0.5) * (1.0 - self.cos_anneal_ratio) +
                 F.relu(-true_cos) * self.cos_anneal_ratio)  # always non-positive

    # Estimate signed distances at section points
    estimated_next_sdf = sdf[..., None] + iter_cos * dists.reshape(-1, 1) * 0.5
    estimated_prev_sdf = sdf[..., None] - iter_cos * dists.reshape(-1, 1) * 0.5

    prev_cdf = torch.sigmoid(estimated_prev_sdf * inv_s)
    next_cdf = torch.sigmoid(estimated_next_sdf * inv_s)

    # Eq. 13 of the paper: (Phi(prev) - Phi(next)) / Phi(prev), with eps guards
    p = prev_cdf - next_cdf
    c = prev_cdf

    alpha = ((p + 1e-5) / (c + 1e-5)).view(-1).clip(0.0, 1.0)
    return alpha
```
Here we do not follow that formulation. Instead, the SDF values at the section boundaries are approximated from the single SDF value computed at each sample, which requires additional inputs: the ray directions and the inter-sample distances.
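If I read the code correctly, this estimation is a first-order Taylor expansion of the SDF along the ray, with $\cos\theta_i = \nabla f(\mathbf{p}(t_i)) \cdot \mathbf{v}$ (what `true_cos` computes, before annealing) and $\delta_i$ the inter-sample distance `dists`:

$$ f\!\left(\mathbf{p}\!\left(t_i \pm \frac{\delta_i}{2}\right)\right) \approx f\big(\mathbf{p}(t_i)\big) \pm \frac{\delta_i}{2} \cos\theta_i, $$

so a single mid-point SDF query yields estimates of the SDF at both section boundaries.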
I wonder why it is done this way. It seems to roughly halve the number of SDF queries, at the expense of introducing some noise. Could someone help me clarify?
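As a sanity check on my reading: whenever the SDF is linear along the ray, the first-order estimate is exact, so the two formulations agree. A self-contained example with a planar SDF (all names and values below are mine, not from the repo):

```python
import torch

inv_s = 64.0
t = torch.linspace(0.0, 2.0, 65)        # section boundaries along one ray
d = t[1:] - t[:-1]                      # inter-sample distances
sdf_b = 1.0 - t                         # planar SDF, zero crossing at t = 1
cdf = torch.sigmoid(sdf_b * inv_s)
alpha_paper = ((cdf[:-1] - cdf[1:] + 1e-5) / (cdf[:-1] + 1e-5)).clip(0.0, 1.0)

t_mid = 0.5 * (t[:-1] + t[1:])          # mid-point samples, as in the code
sdf_mid = 1.0 - t_mid
cos = -1.0                              # ray goes straight into the surface: grad . dir = -1
prev_cdf = torch.sigmoid((sdf_mid - cos * d * 0.5) * inv_s)
next_cdf = torch.sigmoid((sdf_mid + cos * d * 0.5) * inv_s)
alpha_code = ((prev_cdf - next_cdf + 1e-5) / (prev_cdf + 1e-5)).clip(0.0, 1.0)

print(torch.allclose(alpha_paper, alpha_code, atol=1e-6))  # True
```

For a curved SDF the first-order estimate has $O(\delta_i^2)$ error, which I assume is the noise I mentioned above.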