Pointcept/SegmentAnything3D

scannetv2

BKINGING opened this issue · 5 comments

The ScanNet v2 dataset is very large; do we need to download all of it, or just these four file types?
python 1.py -o scannet/ --type _vh_clean_2.ply
python 1.py -o scannet/ --type _vh_clean_2.labels.ply
python 1.py -o scannet/ --type _vh_clean_2.0.010000.segs.json
python 1.py -o scannet/ --type .aggregation.json
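
The four commands above differ only in the `--type` suffix, so they can be looped in one go. A minimal sketch, printed as a dry run (it assumes the same download script, called `1.py` as in the commands above):

```shell
# Dry run: print the four download commands instead of executing them
for t in _vh_clean_2.ply _vh_clean_2.labels.ply _vh_clean_2.0.010000.segs.json .aggregation.json; do
    echo "python 1.py -o scannet/ --type $t"
done
```

Drop the `echo` (and the quotes around the command) to actually run the downloads.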

Hi, you only need these four file types for this project. If you want to try it out first, you can also download just a few specific scenes.

Thank you for your reply

Hello, how can I solve this problem?
Traceback (most recent call last):
File "sam3d.py", line 283, in
voxelize, args.th, train_scenes, val_scenes, args.save_2dmask_path)
File "sam3d.py", line 239, in seg_pcd
indices, dis = pointops.knn_query(1, gen_coord, offset, scene_coord, new_offset)
AttributeError: module 'pointops' has no attribute 'knn_query'
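
This error usually means the installed pointops build exposes the KNN function under a different name (older builds use `knnquery`, newer ones `knn_query`). A minimal compatibility sketch; the `SimpleNamespace` below is a hypothetical stand-in for the compiled extension, used only to make the pattern self-contained:

```python
import types

# Hypothetical stand-in for the compiled pointops extension; a real build
# would expose either knn_query (newer) or knnquery (older).
pointops = types.SimpleNamespace(
    knnquery=lambda k, xyz, offset, new_xyz, new_offset: ("indices", "distances")
)

# Resolve whichever attribute name the installed version actually provides.
knn = getattr(pointops, "knn_query", None) or getattr(pointops, "knnquery", None)
if knn is None:
    raise AttributeError("pointops has neither knn_query nor knnquery")

indices, dis = knn(1, None, None, None, None)
```

Alternatively, rebuilding pointops from the version bundled with this repository should provide the expected `knn_query` name.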

Hi @BKINGING, do we really need only those four? It asked me for the .sens data when I ran prepare_2d_data :(
Instead, I downloaded the scannet_frames_25k zip file; will that work? Thanks.

Just in case someone else comes across this and wants to use it without downloading much data while being a bit unfamiliar with ScanNet, you can follow this:

  1. Download the already preprocessed scenes from here, as mentioned in the Pointcept README.

  2. To get the image data, use the ScanNet download script and download the processed test frames:

python download_scannet.py -o <output_dir> --test_frames_2d
  3. Make slight changes in the code:
  • change this line to fit the intrinsics format from test_frames_2d:
    intrinsic_path = join(rgb_path, scene_name,'intrinsics_depth.txt')
  • add a simple else condition here to enable running on test data:
    if scene_name in train_scenes:
        scene_path = join(data_path, "train", scene_name + ".pth")
    elif scene_name in val_scenes:
        scene_path = join(data_path, "val", scene_name + ".pth")
    else:
        scene_path = data_path
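
The branch logic above can be sketched as a small standalone helper to make the intended behavior explicit (the function name `resolve_scene_path` is my own, not from the repository):

```python
from os.path import join

def resolve_scene_path(data_path, scene_name, train_scenes, val_scenes):
    # Train/val scenes are stored as preprocessed .pth files under their
    # split directory; any other scene (test data) falls back to data_path.
    if scene_name in train_scenes:
        return join(data_path, "train", scene_name + ".pth")
    elif scene_name in val_scenes:
        return join(data_path, "val", scene_name + ".pth")
    else:
        return data_path

print(resolve_scene_path("data", "scene0707_00", set(), set()))
```

With empty train/val sets, any scene name takes the else branch and returns `data_path` unchanged, which is what enables running on test data.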