IsaacGuan/PointNet-Plane-Detection

Question for preparing data

hjsg1010 opened this issue · 8 comments

Hello @IsaacGuan

Please check whether I have understood how you prepare your data correctly:

  1. get the .pts data and the file list
  2. run make_filelist.py
  3. run rename_pts.py (is this step needed here?)
  4. run make_ply.py
  5. run write_hdf5.py

If not, could you tell me the right order for preparing my own data with the code in your repository?

Sorry for my poor English; I would really appreciate a reply.


I ran your code in the following order, without the ply, hdf5_data, and points_label folders:

  1. run make_filelist.py
  2. run make_ply.py
  3. run write_hdf5.py

However, when I ran write_hdf5.py, it reported that the points_label data is required, so I added it. Then only data_testing.h5 came out.
Can you help me?

I am trying to classify point cloud data, but I can't convert my own data to the HDF5 format :(
Please help me.

Hi,

It depends on what kind of data you have.

For training:

  • If you are using PLY files, all you need to do is put them into the data/ply folder and collect the corresponding segmentation information (the SEG files) in data/points_label for each PLY file. Then run make_filelist.py and write_hdf5.py sequentially.
  • If you are using PTS files, you could first run make_ply.py to convert them into PLY files and then follow the aforementioned steps.
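In case it helps to see the shape of that conversion, here is a minimal sketch of turning an ASCII PTS file (one `x y z` triple per line) into an ASCII PLY file. The function name `pts_to_ply` is mine, not the repository's, so compare it against what make_ply.py actually does:

```python
import numpy as np

def pts_to_ply(pts_path, ply_path):
    """Convert an ASCII PTS file (x y z per line) to an ASCII PLY file."""
    # atleast_2d keeps the shape (N, 3) even for a single-point file
    points = np.atleast_2d(np.loadtxt(pts_path))
    with open(ply_path, "w") as f:
        # Minimal PLY header for plain xyz vertices
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write("%f %f %f\n" % (x, y, z))
```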

For testing:

  • You should prepare the HDF5 file as described above.
  • The PTS files in data/points and the SEG files in data/points_label are required for testing. Also make sure you have the testing_ply_file_list prepared.
  • Running rename_pts.py is optional; it is just for neat file naming.

Note that the current state of this repository only contains data for testing. That's why only data_testing.h5 can be generated. You could roll back to a previous commit (the one adding chairs to the training set) to see the training data.

Hello, I am sorry to bother you about how to prepare the raw data. It seems that the points/PLY files all need to be the same size (2048 points). I would like to know why, and how to guarantee that the number of points is 2048.

Hi @weiweimanger,
You could try a uniform sampling algorithm (e.g., Poisson disk sampling) to select 2048 points from a point cloud. Instead of implementing such an algorithm yourself, the simplest way is to use the geometry processing tool MeshLab, where such point sampling interfaces are provided.
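If Poisson disk sampling is more than you need, plain uniform random sampling also yields a fixed-size cloud (at the cost of less even spacing). A rough sketch, not taken from this repository (`sample_points` is a hypothetical helper):

```python
import numpy as np

def sample_points(points, n=2048, seed=0):
    """Uniformly sample exactly n points from an (N, 3) array:
    subsample without replacement when the cloud is large enough,
    otherwise pad by resampling with replacement."""
    rng = np.random.default_rng(seed)
    replace = len(points) < n
    idx = rng.choice(len(points), size=n, replace=replace)
    return points[idx]
```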

Hi @IsaacGuan, I have the .ply files. You mentioned that I need to collect a .seg file for each .ply; how do I do that?

Hi @dong274,
For preparing .seg files for your own shapes, you will need to use some mesh segmentation tools. But if you are using data from ShapeNet, you will find the meshes are already segmented.
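For reference, a .seg file in the ShapeNet part-annotation layout is simply one integer label per line, aligned with the point order of the corresponding .pts/.ply file. Assuming that layout, writing one from an array of labels is trivial (`write_seg` is my own helper name):

```python
def write_seg(labels, seg_path):
    """Write per-point part labels, one integer per line, so that
    line i of the .seg file labels point i of the matching .pts file."""
    with open(seg_path, "w") as f:
        for label in labels:
            f.write("%d\n" % int(label))
```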

@IsaacGuan I am using one .csv file per image, containing multidimensional features. Can I use .csv files instead of .ply files to build the .h5 file? Thanks

Hi @csitaula,
Sure, any point cloud format is OK, as long as you read it into a NumPy array so that it can be written to an HDF5 file using the h5py package.
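To illustrate, here is a hedged sketch of going from per-cloud CSV files straight to an HDF5 file with h5py. The dataset names `data` and `label` follow the common PointNet HDF5 convention, but check them against what write_hdf5.py actually produces; `csv_to_h5` is a hypothetical helper, and it assumes every CSV has the same number of rows (points) and columns (features):

```python
import numpy as np
import h5py

def csv_to_h5(csv_paths, h5_path, labels):
    """Stack per-cloud CSV feature matrices into one HDF5 file.
    Produces 'data' of shape (num_clouds, num_points, num_features)
    and 'label' of shape (num_clouds,)."""
    data = np.stack([np.loadtxt(p, delimiter=",") for p in csv_paths])
    with h5py.File(h5_path, "w") as f:
        f.create_dataset("data", data=data.astype("float32"))
        f.create_dataset("label", data=np.asarray(labels, dtype="uint8"))
    return data
```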

Hi @IsaacGuan
Thanks for sharing this work. I am new to these PLY and HDF5 files. I want to prepare my own dataset for PointNet2, and from there I got the link to your repo. I am trying to follow the steps you mentioned above, but I am facing a problem: after running the files sequentially, data_testing.h5 is generated but it is empty, not containing any values.