bennyguo/instant-nsr-pl

Clamped input points before scaling in finite_difference


Hi, I don't understand why points_d_ is clamped to the range (0, 1) and then scaled from (-radius, radius) to (0, 1), rather than first clamped to (-radius, radius) and then scaled from (-radius, radius) to (0, 1).

The current approach clamps every coordinate in the [-radius, 0) range to 0, so all of those points collapse to the same scaled value and the finite-difference gradient there vanishes.

eps = 0.001
points_d_ = torch.stack([
    points_ + torch.as_tensor([eps, 0.0, 0.0]).to(points_),
    points_ + torch.as_tensor([-eps, 0.0, 0.0]).to(points_),
    points_ + torch.as_tensor([0.0, eps, 0.0]).to(points_),
    points_ + torch.as_tensor([0.0, -eps, 0.0]).to(points_),
    points_ + torch.as_tensor([0.0, 0.0, eps]).to(points_),
    points_ + torch.as_tensor([0.0, 0.0, -eps]).to(points_)
], dim=0).clamp(0, 1)  # points_ lives in (-radius, radius), so this clamp zeroes every negative coordinate
points_d = scale_anything(points_d_, (-self.radius, self.radius), (0, 1))
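
For illustration, a minimal sketch of the fix suggested above (keeping the repo's existing scale_anything helper, which linearly maps an input range onto a target range): clamp in the (-radius, radius) frame that points_ actually lives in, then rescale.

eps = 0.001
points_d_ = torch.stack([
    points_ + torch.as_tensor([eps, 0.0, 0.0]).to(points_),
    points_ + torch.as_tensor([-eps, 0.0, 0.0]).to(points_),
    points_ + torch.as_tensor([0.0, eps, 0.0]).to(points_),
    points_ + torch.as_tensor([0.0, -eps, 0.0]).to(points_),
    points_ + torch.as_tensor([0.0, 0.0, eps]).to(points_),
    points_ + torch.as_tensor([0.0, 0.0, -eps]).to(points_)
], dim=0).clamp(-self.radius, self.radius)  # clamp in the frame the points actually live in
points_d = scale_anything(points_d_, (-self.radius, self.radius), (0, 1))

As a concrete check with radius = 1: a point at x = -0.5 is currently clamped to 0 and then scaled to 0.5, exactly like every other negative x, so finite differences across it vanish; with the clamp above it scales to 0.25 and stays distinguishable.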

@bennyguo In case you have not encountered this error already: this will have to be fixed for the Neuralangelo approach to work correctly.

Yes, it's a bug 😅 I also noticed this when I implemented Neuralangelo. I'll fix it once I finish my Neuralangelo experiments, or you can open a PR to fix it.

Glad to hear you've also implemented Neuralangelo! I'm still experimenting on the DTU dataset and would love to chat about the implementation details if you're interested.

I was just starting to get my hands on the code, so I haven't really implemented anything yet. But I'm happy to chat about the details if I can be of any help.

Great! How about a PR to get this bug fixed? I'll upload the Neuralangelo code very soon (hopefully tomorrow).

Sure, I'll open a PR for this before you upload the code.