nmwsharp/diffusion-net

PointCloud classification

Opened this issue · 5 comments

Hi Nicholas,

Thanks for open-sourcing your code. I am using this method for a point-cloud classification problem.

I wanted to know what classification head you used for the SHREC'11 dataset. The current DiffusionNet outputs at "vertices", "edges", or "faces". What if I want to output a single class for the whole point cloud? Do I simply take the mean over all point outputs? Did you try out different classification heads?

Hi!

We took outputs at vertices, then used a global mean to get classification scores. I don't think we tried many experiments that deviated from this basic construction.
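
For reference, that pooling head is nothing elaborate; a minimal sketch of it (assuming per-vertex scores from DiffusionNet with outputs_at='vertices', not the exact experiment code) would be:

import torch

def classify_shape(per_vertex_logits):
    # per_vertex_logits: (V, n_classes) scores from DiffusionNet with outputs_at='vertices'
    global_scores = per_vertex_logits.mean(dim=-2)  # global mean over vertices -> (n_classes,)
    return torch.argmax(global_scores, dim=-1)      # predicted class index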

Hi,
I have trained the model and I want to use it on my point cloud data. I am stuck on a "segmentation fault" error; from the logs it appears to happen in get_operators (which calls compute_operators and, further down, some robust_laplacian_bindings file). Any guess as to where I might be going wrong? Would it be possible to share your point cloud inference code with me?

@aunagar, sorry for bothering you, but do you remember how you did it?

Thank you both!

The robust_laplacian bindings (from this repo https://github.com/nmwsharp/robust-laplacians-py) are used to build the Laplace matrices for point clouds.
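
If you want to narrow down the crash, you can also call that package directly on your point positions, outside of diffusion_net. A minimal sketch, assuming the package's documented point_cloud_laplacian entry point and a placeholder (N, 3) array standing in for your points:

import numpy as np
import robust_laplacian

# Replace this with however you load your own (N, 3) point positions
points = np.random.rand(2000, 3)

# Build the point-cloud Laplacian L and mass matrix M (scipy sparse matrices).
# If this standalone call also segfaults, the problem is in robust_laplacian
# (or the data/environment), not in diffusion_net itself.
L, M = robust_laplacian.point_cloud_laplacian(points)
print(L.shape, M.shape)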

A segfault indicates something very bad is happening inside that library (leading to an illegal memory access). This should definitely not happen under normal circumstances, even with not-so-great data.

A few things to check:

  • Are you using the latest version of the robust_laplacian package?
  • Are you doing anything funky with your environment setup which might lead to code compiled on one machine being used on a different machine? (The robust_laplacian package contains compiled code under the hood, and using code compiled for the wrong machine can cause segfaults).
  • Does your point cloud data have anything especially weird about it, like NaN/inf values, or many points exactly on top of each other? (A quick check for this is sketched after this list.)
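
For the last point, a quick standalone check along these lines (a sketch, not code from this repo; the file name is just a placeholder) can rule out bad values and duplicate points:

import numpy as np

# Placeholder path -- load your (N, 3) point positions however you normally do
pts = np.loadtxt('my_points.csv', delimiter=',')

print("all values finite:", np.isfinite(pts).all())
print("duplicate points: ", len(pts) - len(np.unique(pts, axis=0)))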

Yes, I have the latest robust_laplacian package. I am running the code on our HPC server, so I don't know the inner workings of the environment exactly. There aren't any NaN/inf values or duplicates.

This is my code, residing in the src folder. Any insight would be appreciated.


import torch
import numpy as np
import diffusion_net

C_in = 3    # input features: raw xyz coordinates
C_out = 30  # number of classes

model = diffusion_net.layers.DiffusionNet(
    C_in=C_in,
    C_out=C_out,
    C_width=64,
    last_activation=lambda x: torch.nn.functional.log_softmax(x, dim=-1),
    outputs_at='vertices')

checkpoint = torch.load('pretrained/diffusionnet_25jun.pth')
model.load_state_dict(checkpoint['model_state_dict'])

model.eval()

# x, y, z coordinates of an object
g = np.loadtxt('data/data.csv', delimiter=',', skiprows=1, usecols=(0, 1, 2))

# np.loadtxt gives float64; convert to float32 to match the model weights
verts = torch.from_numpy(g).float()
faces = torch.tensor([])  # empty faces tensor -> treated as a point cloud

# center and unit-scale the positions
verts = diffusion_net.geometry.normalize_positions(verts)

# precompute the mass matrix, Laplacian, spectral basis, and gradient operators
frames, mass, L, evals, evecs, gradX, gradY = diffusion_net.geometry.get_operators(verts, faces)

# use the raw positions as input features (C_in = 3)
features = verts

outputs = model(features, mass, L=L, evals=evals, evecs=evecs, gradX=gradX, gradY=gradY, faces=faces)
print(outputs)