nv-tlabs/LION

Details of ShapeNet-vol evaluation

Closed this issue · 1 comments

Hello,
I'm bringing up the questions I had after you closed #16, in case you missed that part. I would be interested to know as much as possible about the (sub)set of examples you used to evaluate on ShapeNet-vol, so that I can meaningfully compare to LION in the absence of released weights/samples. The easiest would be if you could share that subset as files; otherwise I would need:

  1. The IDs of the models used.
  2. The preprocessing scheme (in particular, are you applying the scale and loc parameters found in the .npz files of the dataset?).
  3. Ideally the IDs of the points you picked in each model (since the dataset provides more than 2048 points per model).

Thanks for your swift responses on the previous issues.

ZENGXH commented

Thanks for bringing this up. I uploaded the 1000 point clouds here (if you want to use this data directly, please verify that the point clouds' axis/pose align with your training data).
For the pre-processing scheme, I normalized both the sampled data and the validation data into [-1,1] using this part of the code, i.e. I call the function as compute_score(samples='samples.pt', ref_name='ref_ns_val_all.pt', norm_box=True).
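For readers wanting to reproduce a comparable setup, a per-shape bounding-box normalization into [-1,1] can be sketched as follows. This is a minimal NumPy illustration of what a norm_box=True option plausibly does (the helper name and exact behavior are assumptions, not LION's actual code; check the linked evaluation script for the real implementation):

```python
import numpy as np

def norm_box(pc):
    """Normalize point clouds into the [-1, 1] cube, per shape.

    pc: array of shape (B, N, 3) -- B point clouds with N points each.
    Each shape is centered on its bounding-box center and uniformly
    scaled so its largest bounding-box extent spans [-1, 1].
    NOTE: hypothetical sketch, not LION's verbatim code.
    """
    pc_max = pc.max(axis=1, keepdims=True)   # (B, 1, 3)
    pc_min = pc.min(axis=1, keepdims=True)   # (B, 1, 3)
    center = (pc_max + pc_min) / 2.0
    # Uniform (isotropic) scale: half of the largest axis extent,
    # so the aspect ratio of each shape is preserved.
    scale = (pc_max - pc_min).max(axis=-1, keepdims=True) / 2.0
    return (pc - center) / scale
```

Applying the same normalization to both the generated samples and the reference set before computing metrics is important: otherwise scale/translation differences between the two sets would dominate the distance-based scores.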