isl-org/Open3D-PointNet2-Semantic3D

[Problem] Preprocess.py Memory Error

fenglupeter opened this issue · 4 comments

```
point_cloud = open3d.read_point_cloud(pts_file)
MemoryError: std::bad_alloc
```

My laptop has 16 GB of RAM.

yxlao commented

64 GB+ of memory and/or a large swap space is recommended. The biggest point cloud can contain up to 5e8 points, see here. Open3D stores x, y, z, r, g, b as doubles, so every point takes 8 x 6 = 48 bytes; storing that point cloud alone would take 5e8 x 48 bytes = 24 GB.
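To make that arithmetic concrete, here is a minimal back-of-the-envelope estimate (assuming, as stated above, six 8-byte double attributes per point):

```python
# Rough memory estimate for the largest Semantic3D point cloud,
# assuming Open3D stores x, y, z, r, g, b as 64-bit doubles.
num_points = 5e8                 # ~500 million points
bytes_per_point = 8 * 6          # six double attributes per point
total_gb = num_points * bytes_per_point / 1e9
print(f"{total_gb:.0f} GB")      # -> 24 GB, before any intermediate copies
```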

Currently we load the full point cloud into memory for downsampling. A more memory-efficient voxel downsampling is possible, but it's not implemented. The suggestion is to either use only the smaller point clouds in the dataset or implement a more memory-efficient voxel downsampling, along the lines of the sketch below.
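For reference, here is a minimal sketch of what a more memory-efficient voxel downsampling could look like. It is hypothetical, not part of this repo: it streams the raw Semantic3D .txt file in chunks and keeps only one running (sum, count) accumulator per occupied voxel, so the full cloud is never resident in memory. It assumes the Semantic3D "x y z intensity r g b" text layout and trades speed (a per-point Python loop) for a small memory footprint.

```python
import itertools
import numpy as np

def streaming_voxel_downsample(pts_file, voxel_size, chunk_rows=1_000_000):
    """Hypothetical sketch: voxel-grid downsample a Semantic3D .txt file
    without loading it fully. Keeps a running (sum, count) per voxel and
    returns the per-voxel mean of all columns."""
    sums, counts = {}, {}
    with open(pts_file) as f:
        while True:
            lines = list(itertools.islice(f, chunk_rows))
            if not lines:
                break
            chunk = np.atleast_2d(np.loadtxt(lines))  # rows: x y z i r g b
            keys = np.floor(chunk[:, :3] / voxel_size).astype(np.int64)
            for key, point in zip(map(tuple, keys), chunk):
                sums[key] = sums.get(key, 0.0) + point
                counts[key] = counts.get(key, 0) + 1
    # One averaged point per occupied voxel.
    return np.stack([sums[k] / counts[k] for k in sums])
```

With this approach, peak memory scales with the number of occupied voxels rather than the number of raw points, which for a coarse voxel size is orders of magnitude smaller.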

Thank you

Hi, could you please recommend how much swap space I should allocate for a system with 16 GB of memory?

@yxlao Could you please make a zipped semantic_downsampled directory and/or a pre-trained network available for download? It would help someone who simply wants to evaluate/demo the semantic segmentation using Open3D on a laptop. I'm having the same issue.