open-mmlab/OpenPCDet

Problems using custom data sets

xixioba opened this issue · 44 comments

I am trying to test PV-RCNN on my own lidar data instead of the KITTI data, using similar Kaggle-style annotations.
However, I get an error when trying to run the code; the error message is as follows:

File "***/OpenPCDet/pcdet/datasets/innovusion/innovusion_dataset.py", line 77, in __getitem__
    data_dict = self.prepare_data(data_dict=input_dict)
  File "***/OpenPCDet/pcdet/datasets/dataset.py", line 124, in prepare_data
    'gt_boxes_mask': gt_boxes_mask
  File "***/OpenPCDet/pcdet/datasets/augmentor/data_augmentor.py", line 93, in forward
    data_dict = cur_augmentor(data_dict=data_dict)
  File "***/OpenPCDet/pcdet/datasets/augmentor/database_sampler.py", line 179, in __call__
    sampled_boxes = np.stack([x['box3d_lidar'] for x in sampled_dict], axis=0).astype(np.float32)
  File "<__array_function__ internals>", line 6, in stack
  File "***/anaconda3/envs/ml/lib/python3.7/site-packages/numpy/core/shape_base.py", line 423, in stack
    raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack

I traced the error and found that it is related to data augmentation, in pcdet/datasets/augmentor/database_sampler.py:

    def __call__(self, data_dict):
        """
        Args:
            data_dict:
                gt_boxes: (N, 7 + C) [x, y, z, dx, dy, dz, heading, ...]

        Returns:

        """
        gt_boxes = data_dict['gt_boxes']
        gt_names = data_dict['gt_names'].astype(str)
        existed_boxes = gt_boxes
        total_valid_sampled_dict = []
        for class_name, sample_group in self.sample_groups.items():
            if self.limit_whole_scene:
                num_gt = np.sum(class_name == gt_names)
                sample_group['sample_num'] = str(int(self.sample_class_num[class_name]) - num_gt)
            if int(sample_group['sample_num']) > 0:
                sampled_dict = self.sample_with_fixed_number(class_name, sample_group)  ### need help

                # NOTE: sampled_dict can be empty here (e.g., db_infos has no entries
                # for class_name), and np.stack() on an empty list raises the
                # "need at least one array to stack" error shown above
                sampled_boxes = np.stack([x['box3d_lidar'] for x in sampled_dict], axis=0).astype(np.float32)

                if self.sampler_cfg.get('DATABASE_WITH_FAKELIDAR', False):
                    sampled_boxes = box_utils.boxes3d_kitti_fakelidar_to_lidar(sampled_boxes)

                iou1 = iou3d_nms_utils.boxes_bev_iou_cpu(sampled_boxes[:, 0:7], existed_boxes[:, 0:7])
                iou2 = iou3d_nms_utils.boxes_bev_iou_cpu(sampled_boxes[:, 0:7], sampled_boxes[:, 0:7])
                iou2[range(sampled_boxes.shape[0]), range(sampled_boxes.shape[0])] = 0
                iou1 = iou1 if iou1.shape[1] > 0 else iou2
                valid_mask = ((iou1.max(axis=1) + iou2.max(axis=1)) == 0).nonzero()[0]
                valid_sampled_dict = [sampled_dict[x] for x in valid_mask]
                valid_sampled_boxes = sampled_boxes[valid_mask]

                existed_boxes = np.concatenate((existed_boxes, valid_sampled_boxes), axis=0)
                total_valid_sampled_dict.extend(valid_sampled_dict)

        sampled_gt_boxes = existed_boxes[gt_boxes.shape[0]:, :]
        if total_valid_sampled_dict.__len__() > 0:
            data_dict = self.add_sampled_boxes_to_scene(data_dict, sampled_gt_boxes, total_valid_sampled_dict)

        data_dict.pop('gt_boxes_mask')
        return data_dict

Then the key function is sample_with_fixed_number(self, class_name, sample_group)

    def sample_with_fixed_number(self, class_name, sample_group):
        """
        Args:
            class_name:
            sample_group:
        Returns:

        """
        sample_num, pointer, indices = int(sample_group['sample_num']), sample_group['pointer'], sample_group['indices']
        if pointer >= len(self.db_infos[class_name]):
            indices = np.random.permutation(len(self.db_infos[class_name]))
            pointer = 0

        sampled_dict = [self.db_infos[class_name][idx] for idx in indices[pointer: pointer + sample_num]]
        pointer += sample_num
        sample_group['pointer'] = pointer
        sample_group['indices'] = indices
        return sampled_dict

self.db_infos is used in the code; it is specified by sampler_cfg.DB_INFO_PATH, but my data does not have it, so I am stuck here. What do I need to do to fix this, or is there a detailed explanation somewhere that would help me understand this code?
Note: My data annotation format

id confidence center_x center_y center_z width length height yaw class_name
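
For reference, a minimal sketch (my own, not from the repo) of how one line in this format could be mapped to OpenPCDet's [x, y, z, dx, dy, dz, heading] box convention, assuming dx = length, dy = width, dz = height:

    import numpy as np

    def parse_annotation_line(line):
        # id confidence center_x center_y center_z width length height yaw class_name
        fields = line.split()
        cx, cy, cz, w, l, h, yaw = map(float, fields[2:9])
        # OpenPCDet gt_boxes: [x, y, z, dx, dy, dz, heading] (dx = length, dy = width, dz = height)
        return np.array([cx, cy, cz, l, w, h, yaw], dtype=np.float32), fields[9]

    box, name = parse_annotation_line('0 1.0 10.0 2.0 -0.5 1.8 4.5 1.6 0.1 Car')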

thank you all

You can directly disable gt_sampling by setting DATA_AUGMENTOR.DISABLE_AUG_LIST (e.g., to ['gt_sampling']) if you don't have a gt database.

Have you tried running python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml after switching to a different dataset?

@jihanyang
Thank you, I set DATA_AUGMENTOR.DISABLE_AUG_LIST to [ ]; it seems to be effective, but there are some other problems.
First of all, my environment is conda + PyTorch 1.6.0 + spconv 1.2, and the machine has 32 GB of memory and an RTX 2080.

  1. I used the kitti dataset to train without errors; this error appeared when I used my own dataset:
File "train.py", line 198, in <module>
    main()
  File "train.py", line 153, in main
    train_model(
  File "***/OpenPCDet/tools/train_utils/train_utils.py", line 86, in train_model
    accumulated_iter = train_one_epoch(
  File "***/OpenPCDet/tools/train_utils/train_utils.py", line 38, in train_one_epoch
    loss, tb_dict, disp_dict = model_func(model, batch)
  File "***/OpenPCDet/pcdet/models/__init__.py", line 30, in model_func
    ret_dict, tb_dict, disp_dict = model(batch_dict)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "***/OpenPCDet/pcdet/models/detectors/pv_rcnn.py", line 11, in forward
    batch_dict = cur_module(batch_dict)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "***/OpenPCDet/pcdet/models/backbones_3d/pfe/voxel_set_abstraction.py", line 220, in forward
    pooled_points, pooled_features = self.SA_layers[k](
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "***/OpenPCDet/pcdet/ops/pointnet2/pointnet2_stack/pointnet2_modules.py", line 71, in forward
    new_features, ball_idxs = self.groupers[k](
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "***/OpenPCDet/pcdet/ops/pointnet2/pointnet2_stack/pointnet2_utils.py", line 142, in forward
    grouped_xyz[empty_ball_mask] = 0
RuntimeError: copy_if failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered
  2. I think I need some help to make my own dataset work.
  • I have created my own class inheriting DatasetTemplate and completed the __getitem__ function; in it I build the following dictionary and pass it to prepare_data.
  • Then I rewrote pv_rcnn.yaml (deleting the AUG_CONFIG_LIST related content) to use my dataset, but the above error occurred. Note: when testing the kitti dataset I used --batch-size=1, otherwise a CUDA out of memory error appears.
data_dict:
    frame_id: string   ### index
    points: (N, 3 + C_in)   ### x,y,z,i
    gt_boxes: optional, (N, 7 + C) [x, y, z, dx, dy, dz, heading, ...]
    gt_names: optional, (N), string  ### ['Car','Car', .....] 
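
For reference, a minimal sketch (hypothetical names; real data loading omitted) of a custom dataset class that returns this dictionary through prepare_data:

    import numpy as np
    from pcdet.datasets import DatasetTemplate

    class MyLidarDataset(DatasetTemplate):
        """Hypothetical custom dataset that feeds the dict above into prepare_data."""

        def __init__(self, dataset_cfg, class_names, training=True, root_path=None, logger=None):
            super().__init__(dataset_cfg=dataset_cfg, class_names=class_names,
                             training=training, root_path=root_path, logger=logger)
            self.frame_ids = ['000000']  # fill with your own frame ids

        def __len__(self):
            return len(self.frame_ids)

        def __getitem__(self, index):
            input_dict = {
                'frame_id': self.frame_ids[index],
                'points': np.zeros((1, 4), np.float32),    # replace: load your (N, 4) x, y, z, i
                'gt_boxes': np.zeros((1, 7), np.float32),  # replace: (N, 7) [x, y, z, dx, dy, dz, heading]
                'gt_names': np.array(['Car']),             # replace: (N,) class names
            }
            return self.prepare_data(data_dict=input_dict)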

@Gltina
Thank you very much.

I did not execute this command when using my custom dataset; I only ran it when testing the kitti dataset. What does this command do? I am not very clear about the whole workflow.

I think my current research is the same as what you did in #198. I also hope to use only 3D point clouds, without 2D information. Have you made any progress? Since I ran into the errors above, I am stuck here.

I would appreciate it if you could offer me any help on using a self-made dataset correctly.

Hi,

Honestly, I think the author of this repo would be the best person to answer this question. What I did was simply verify what kind of information is required in the training process and what is not. For now, we can be sure that if you want to train on your own dataset instead of the standard one, there are some things you need to do and folders you should structure as below.

First of all, the file structure should look like this:

├── gt_database // generated from "python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml"
├── ImageSets // "test.txt" should list all the point clouds you provide, "train.txt" the point clouds you want to train on, and "val.txt" the evaluation point clouds; generally it will be shorter than "train.txt"
├── testing
│   ├── calib // keep the same number of files as ../training/velodyne
│   ├── image_2 // keep the same number of files as ../training/velodyne
│   └── velodyne // training works even if this folder is empty, but I really have no idea how it is used in the training process
└── training
    ├── calib // keep the same number of files as ./velodyne
    ├── image_2 // it doesn't matter what kind of images you put here, just keep the same number of files as ./velodyne
    ├── label_2 // keep the same number of files as ./velodyne
    └── velodyne // **put your data here**
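
For the ImageSets part, here is a small sketch (my own; the 80/20 split is just an example) that generates the split files from the velodyne folder:

    from pathlib import Path

    root = Path('data/kitti')  # hypothetical dataset root
    ids = sorted(p.stem for p in (root / 'training' / 'velodyne').glob('*.bin'))
    split = int(len(ids) * 0.8)  # example 80/20 train/val split

    (root / 'ImageSets').mkdir(exist_ok=True)
    (root / 'ImageSets' / 'train.txt').write_text('\n'.join(ids[:split]))
    (root / 'ImageSets' / 'val.txt').write_text('\n'.join(ids[split:]))
    (root / 'ImageSets' / 'test.txt').write_text('\n'.join(ids))  # all frames you provide, per the note above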

Along with the file structure, the coordinates of the point clouds you captured should follow the same conventions as KITTI, which is an important step for preparing the training data; click here to know more details.

On the other hand, the label file should keep the 3D information as following:

Car 0.0 0 0 1 2 3 4 0.57 0.33 0.99 -0.52 1.73 6.45 -0.22 

As it clearly shows, "1 2 3 4" is the 2D bounding box, which we ignore by filling it with meaningless data. The values that follow, "0.57 0.33 0.99" and "-0.52 1.73 6.45", are the 3D dimensions and the 3D position respectively, and the final value is also important because it defines the orientation of the 3D box in the scene.
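
A small sketch of producing such a line from 3D-only data (field order per the KITTI label spec; the 2D box is the meaningless placeholder described above):

    def make_kitti_label(name, h, w, l, x, y, z, ry):
        # type truncated occluded alpha bbox(x1 y1 x2 y2) dimensions(h w l) location(x y z) rotation_y
        return '%s 0.0 0 0 1 2 3 4 %.2f %.2f %.2f %.2f %.2f %.2f %.2f' % (name, h, w, l, x, y, z, ry)

    # reproduces the example line above
    print(make_kitti_label('Car', 0.57, 0.33, 0.99, -0.52, 1.73, 6.45, -0.22))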

That's what we did for training. I don't think this is a perfect or complete way to prepare for training, so if you have any other methods or a different way to train custom data with your own labels, you are welcome to leave a comment here (to let more people help you).

@Gltina Hi, thank you.

I simply completed __getitem__ and made some progress; I guess we don't need a setup as complicated as yours to use our own dataset.

I can train with the default configuration, but there are some problems with my dataset, which may be the cause of the failure to converge.

At the same time, I encountered some problems when trying to modify kitti_dataset.yaml, such as POINT_CLOUD_RANGE: [0, -40, -3, 70.4, 40, 1] or VOXEL_SIZE, which always caused errors at runtime.

I also tested pointpillar, but it likewise got stuck at POINT_CLOUD_RANGE: [0, -39.68, -3, 69.12, 39.68, 1]; once I modify z, an error occurs. I am very confused about this. Do you know how to modify the configuration to fit your own dataset?

I hope you and the author @jihanyang can give me some suggestions, thank you very much.

Thanks for reminding. @xixioba

To avoid the problems you mentioned, we made our dataset follow the KITTI standard, for example keeping the same sensor height and keeping the detection scene away from the origin of the coordinate system. So I did not change the values of POINT_CLOUD_RANGE and VOXEL_SIZE, also because I don't know the exact meaning of these parameters during training 😟.

For now, we have got a not-so-bad result using ~100 of our own point clouds. However, we still have a lot of work to do to confirm that this is a good method.

Thank you @Gltina

It is necessary for me to modify the parameters because my dataset is different from the kitti dataset: it comes from a 300-line lidar, so it can see much farther. But for now I will try the default parameters and see how they work.

I will let you know if I have new progress

@xixioba
If you want to modify POINT_CLOUD_RANGE and VOXEL_SIZE, you can compare how these two parameters are set in the nuscenes and kitti configs. You should make sure POINT_CLOUD_RANGE / VOXEL_SIZE is an integer along the x, y, and z axes.
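
For a concrete comparison, a small sketch with the two settings as I recall them from the repo's configs (treat the values as assumptions and verify against your yaml files):

    # kitti (pv_rcnn) vs nuscenes settings, as I recall them
    kitti    = {'range': [0, -40, -3, 70.4, 40, 1],             'voxel': [0.05, 0.05, 0.1]}
    nuscenes = {'range': [-51.2, -51.2, -5.0, 51.2, 51.2, 3.0], 'voxel': [0.1, 0.1, 0.2]}

    for name, cfg in (('kitti', kitti), ('nuscenes', nuscenes)):
        r, v = cfg['range'], cfg['voxel']
        grid = [round((r[i + 3] - r[i]) / v[i]) for i in range(3)]
        print(name, grid)  # kitti [1408, 1600, 40], nuscenes [1024, 1024, 40]: integer grids in both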

@Gltina @xixioba About the 2D information of custom datasets, we are thinking about releasing an example that uses the kitti metric on other datasets (such as nuscenes) for evaluation.

@jihanyang
Thanks, I will test the configuration.

So, I did not change the values of POINT_CLOUD_RANGE and VOXEL_SIZE, also because I don't know the exact meaning of these parameters during training 😟.

POINT_CLOUD_RANGE defines the region of space that gets voxelized, in other words the space which contains the points you assume to be relevant and have annotations for. For KITTI this space is roughly 40 m to each side, 70 m to the front, 3 m below and 1 m above the sensor, hence [0, -39.68, -3, 69.12, 39.68, 1].

VOXEL_SIZE defines the [length, width, height] of each voxel. Since PointPillars uses pillars instead of voxels, the height of a voxel is set to the full height of your point cloud range. For the KITTI frames, the default length and width of a voxel are 16 cm, hence [0.16, 0.16, 4].
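
To make the relationship concrete, a small sketch (my own illustration, not repo code) computing the BEV grid implied by these defaults:

    # grid implied by the KITTI PointPillars defaults quoted above
    point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1]  # [x_min, y_min, z_min, x_max, y_max, z_max]
    voxel_size = [0.16, 0.16, 4]                          # [dx, dy, dz]

    grid = [round((point_cloud_range[i + 3] - point_cloud_range[i]) / voxel_size[i]) for i in range(3)]
    print(grid)  # [432, 496, 1] -> 432 x 496 pillars, one "voxel" spanning the full height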

I hope this helps.

@MartinHahner88
Thank you, now I understand their specific meaning.

I get an error whenever I try to modify them to match my dataset, so for now I scale them proportionally at the same time so that training runs normally.

In addition, I want to use data augmentation to get more out of my dataset; do you have any suggestions regarding the questions above?

I am sorry, I do not work with custom data (yet), so currently, I cannot offer more help than the explanations above.

@Gltina @xixioba About the 2D information of custom datasets, we are thinking about releasing an example that uses the kitti metric on other datasets (such as nuscenes) for evaluation.

Hi, @jihanyang

Is there a way to evaluate with only the 3D information, such as the resulting bounding boxes?

Do you mean that if there is no correct calibration info or no 2D bounding box rectangle in the label files, no evaluation can be performed?

😦

@Gltina
Hello, you can set the 2D boxes (x1, y1, x2, y2) to (0, 0, 50, 50), since the kitti evaluation filters out boxes whose height is smaller than 20 px.
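
A minimal sketch of filling these fake boxes into kitti-style annos (the annos layout here is an assumption for illustration):

    import numpy as np

    annos = {'name': np.array(['Car', 'Pedestrian'])}  # hypothetical annotations
    # 50 px tall fake 2D boxes survive the kitti eval's minimum-height filter
    annos['bbox'] = np.tile(np.array([0., 0., 50., 50.], dtype=np.float32), (len(annos['name']), 1))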

@Gltina
Hello, you can set the 2D boxes (x1, y1, x2, y2) to (0, 0, 50, 50), since the kitti evaluation filters out boxes whose height is smaller than 20 px.

Hi @jihanyang ,

Thanks for your help, I will use this value when training later. And here are some questions I am wondering about:

  1. Will (0, 0, 50, 50) as fake data affect the final training result?

  2. How do I change voxel_size properly?

If I change the voxel size in kitti_dataset.yaml directly, as below:

    - NAME: transform_points_to_voxels
      VOXEL_SIZE: [0.05, 0.05, 0.1]
      ...

-------to-------

    - NAME: transform_points_to_voxels
      VOXEL_SIZE: [0.02, 0.02, 0.01]
      ...

Following kitti_infos generation, an error occurs when running train.py:

Details of the error:
2020-08-24 17:33:17,570   INFO  **********************Start training kitti_models/pv_rcnn(default)**********************
epochs:   0%|                                           | 0/160 [00:02<?, ?it/s]
Traceback (most recent call last):                      | 0/264 [00:00<?, ?it/s]
  File "train.py", line 198, in <module>
    main()
  File "train.py", line 170, in main
    merge_all_iters_to_one_epoch=args.merge_all_iters_to_one_epoch
  File "/home/linux/Desktop/OpenPCDet3/OpenPCDet/tools/train_utils/train_utils.py", line 93, in train_model
    dataloader_iter=dataloader_iter
  File "/home/linux/Desktop/OpenPCDet3/OpenPCDet/tools/train_utils/train_utils.py", line 38, in train_one_epoch
    loss, tb_dict, disp_dict = model_func(model, batch)
  File "/home/linux/Desktop/OpenPCDet3/OpenPCDet/pcdet/models/__init__.py", line 30, in model_func
    ret_dict, tb_dict, disp_dict = model(batch_dict)
  File "/home/linux/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/linux/Desktop/OpenPCDet3/OpenPCDet/pcdet/models/detectors/pv_rcnn.py", line 11, in forward
    batch_dict = cur_module(batch_dict)
  File "/home/linux/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/linux/Desktop/OpenPCDet3/OpenPCDet/pcdet/models/backbones_3d/pfe/voxel_set_abstraction.py", line 235, in forward
    point_features = self.vsa_point_feature_fusion(point_features.view(-1, point_features.shape[-1]))
  File "/home/linux/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/linux/.local/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/linux/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/linux/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 92, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/linux/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1408, in linear
    output = input.matmul(weight.t())
RuntimeError: size mismatch, m1: [2048 x 3456], m2: [640 x 128] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:268


In short, this question can be simplified to: how should the net parameters be modified when changing the voxel_size?

Thanks in advance!

@Gltina
Please make sure:

  1. point cloud range along the z-axis / voxel_size equals 40
  2. point cloud range along the x and y axes / voxel_size is a multiple of 16 (see the sketch below).
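
These constraints come from the fixed downsampling strides of the sparse 3D backbone and the 2D BEV backbone, so the voxel grid has to stay divisible at every stage. A minimal checker (my own sketch; the z = 40 rule applies to the default SECOND/PV-RCNN backbone):

    import numpy as np

    def check_range_and_voxel_size(point_cloud_range, voxel_size):
        pcr, vs = np.array(point_cloud_range, float), np.array(voxel_size, float)
        grid = (pcr[3:6] - pcr[0:3]) / vs  # number of voxels along x, y, z
        assert np.allclose(grid, np.round(grid)), 'grid must be integer: %s' % grid
        gx, gy, gz = np.round(grid).astype(int)
        assert gx % 16 == 0 and gy % 16 == 0, 'x/y grid must be a multiple of 16'
        assert gz == 40, 'z grid should be 40 for the default backbone'
        return gx, gy, gz

    # default kitti pv_rcnn setting -> (1408, 1600, 40)
    print(check_range_and_voxel_size([0, -40, -3, 70.4, 40, 1], [0.05, 0.05, 0.1]))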

Have you fixed the problem yet, @Gltina?

@thptai
Yes, just follow what @jihanyang said and adjust your POINT_CLOUD_RANGE. It works!

I still get only zeros when evaluating. I just don't know why, because when I visualize the predicted labels and my gt labels, the overlap is very large. I just don't know how to get the corresponding values for Average Precision etc.

@josy43
I'm not sure what your problem is.
But so far it seems that everyone can train normally, so I will close this issue soon.

I can train too, that's not the problem. The problem is that when I run test.py, the results for AP, AOS etc. are zero. But that's not possible, because I made 4 frames to evaluate and the predicted and gt bounding boxes are identical, so it should be 100%. For the 2D boxes I typed in 0, 0, 50, 50.

@Gltina
Hello, you can set the 2D boxes (x1, y1, x2, y2) to (0, 0, 50, 50), since the kitti evaluation filters out boxes whose height is smaller than 20 px.
@jihanyang
Hello, I have a question I hope you can clarify: can I set the 2D BB to (0, 0, 50, 50) for the official KITTI benchmark? I want to evaluate on the BEV benchmark only.

Hello, if I only use my own lidar point cloud data, how should I set the 2D-related labels here: are they automatically ignored, or should they be set to zero?

@josy43 Hi, I met the same problem as you: training and visualizing are fine, but when running test.py, everything is zero. How did you solve this problem? Could you please give me some advice? Thanks.

@Gltina
Please make sure:

  1. point cloud range along the z-axis / voxel_size equals 40
  2. point cloud range along the x and y axes / voxel_size is a multiple of 16.

Hello @jihanyang, can you please explain why the point cloud range has to be adjusted to these values,
40 and a multiple of 16? What is the reason for these numbers?

Hi, I was following your discussion and modified __getitem__ in the dataset, but I got stuck on creating the data infos, as I only have point clouds and no images or calib. May I ask how you got it running?
@Gltina @xixioba

@beedrill The images and calib are not necessary; you can refer to the data loaders of waymo and nuscenes.

@jihanyang Thank you for the information. But to get my own dataset working, I'll need to rewrite the create_groundtruth_database method when generating the database information, and I am having a hard time understanding what this method is trying to do. Is there any documentation on this, like what should be included in kitti_dbinfos_train.pkl to make it work, or is there a way around it? Thank you for the help!

@jihanyang Hi, I want to ask: if I have no calib data, can I still train the model? Looking forward to your reply!

@jihanyang If I have no calib and image data, what should I do to train the model? Directly delete the code that uses them, set them to 0, or use kitti's calib? It means a lot to me. Looking forward to your reply! Thanks again!


Hi, since there is only 3D information, how should I change the evaluation function?

@clytze0216 @russellyq

You can refer to the kitti_eval helper used inside the dataset's evaluation function:

def kitti_eval(eval_det_annos, eval_gt_annos):
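
For context, that helper ends up calling the official KITTI evaluator on annotation dicts alone, so no images or calib are needed at this point. A sketch of the call, with the module path and the annotation lists as assumptions to verify against your checkout:

    # sketch: kitti-metric evaluation from annotation dicts only
    from pcdet.datasets.kitti.kitti_object_eval_python import eval as kitti_eval_module

    class_names = ['Car', 'Pedestrian', 'Cyclist']
    eval_gt_annos = []   # fill with kitti-style gt anno dicts (name, bbox, dimensions, location, rotation_y, ...)
    eval_det_annos = []  # fill with detection anno dicts of the same layout before calling
    ap_result_str, ap_dict = kitti_eval_module.get_official_eval_result(eval_gt_annos, eval_det_annos, class_names)
    print(ap_result_str)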

@josy43, @cowarder - I am facing the same issue. I can train fine, but when I evaluate, all metrics come out zero! Did you get past that?

How can I disable it?

Hello! @jihanyang @MartinHahner
__getitem__ is being called over and over again when training on a custom dataset. What is wrong here?
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 155, in prepare_data
return self.getitem(new_index)
File "/nas/lyp/code/tools/../pcdet/datasets/kitti/kitti_dataset.py", line 441, in getitem
data_dict = self.prepare_data(data_dict=input_dict) #################################
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 155, in prepare_data
return self.getitem(new_index)
File "/nas/lyp/code/tools/../pcdet/datasets/kitti/kitti_dataset.py", line 441, in getitem
data_dict = self.prepare_data(data_dict=input_dict) #################################
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 155, in prepare_data
return self.getitem(new_index)
File "/nas/lyp/code/tools/../pcdet/datasets/kitti/kitti_dataset.py", line 441, in getitem
data_dict = self.prepare_data(data_dict=input_dict) #################################
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 155, in prepare_data
return self.getitem(new_index)
File "/nas/lyp/code/tools/../pcdet/datasets/kitti/kitti_dataset.py", line 441, in getitem
data_dict = self.prepare_data(data_dict=input_dict) #################################
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 155, in prepare_data
return self.getitem(new_index)
File "/nas/lyp/code/tools/../pcdet/datasets/kitti/kitti_dataset.py", line 441, in getitem
data_dict = self.prepare_data(data_dict=input_dict) #################################
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 155, in prepare_data
return self.getitem(new_index)
File "/nas/lyp/code/tools/../pcdet/datasets/kitti/kitti_dataset.py", line 441, in getitem
data_dict = self.prepare_data(data_dict=input_dict) #################################
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 155, in prepare_data
return self.getitem(new_index)
File "/nas/lyp/code/tools/../pcdet/datasets/kitti/kitti_dataset.py", line 441, in getitem
data_dict = self.prepare_data(data_dict=input_dict) #################################
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 155, in prepare_data
return self.getitem(new_index)
File "/nas/lyp/code/tools/../pcdet/datasets/kitti/kitti_dataset.py", line 441, in getitem
data_dict = self.prepare_data(data_dict=input_dict) #################################
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 155, in prepare_data
return self.getitem(new_index)
File "/nas/lyp/code/tools/../pcdet/datasets/kitti/kitti_dataset.py", line 441, in getitem
data_dict = self.prepare_data(data_dict=input_dict) #################################
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 155, in prepare_data
return self.getitem(new_index)
File "/nas/lyp/code/tools/../pcdet/datasets/kitti/kitti_dataset.py", line 441, in getitem
data_dict = self.prepare_data(data_dict=input_dict) #################################
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 155, in prepare_data
return self.getitem(new_index)
File "/nas/lyp/code/tools/../pcdet/datasets/kitti/kitti_dataset.py", line 441, in getitem
data_dict = self.prepare_data(data_dict=input_dict) #################################
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 155, in prepare_data
return self.getitem(new_index)
File "/nas/lyp/code/tools/../pcdet/datasets/kitti/kitti_dataset.py", line 441, in getitem
data_dict = self.prepare_data(data_dict=input_dict) #################################
File "/nas/lyp/code/tools/../pcdet/datasets/dataset.py", line 127, in prepare_data
data_dict = self.data_augmentor.forward(
File "/nas/lyp/code/tools/../pcdet/datasets/augmentor/data_augmentor.py", line 240, in forward
data_dict = cur_augmentor(data_dict=data_dict)
File "/nas/lyp/code/tools/../pcdet/datasets/augmentor/data_augmentor.py", line 49, in random_world_flip
gt_boxes, points = getattr(augmentor_utils, 'random_flip_along_%s' % cur_axis)(
File "/nas/lyp/code/tools/../pcdet/datasets/augmentor/augmentor_utils.py", line 15, in random_flip_along_x
enable = np.random.choice([False, True], replace=False, p=[0.5, 0.5])
File "mtrand.pyx", line 978, in numpy.random.mtrand.RandomState.choice
File "<array_function internals>", line 5, in unique
File "/root/anaconda3/envs/openpcdet/lib/python3.8/site-packages/numpy/lib/arraysetops.py", line 262, in unique
ret = _unique1d(ar, return_index, return_inverse, return_counts)
File "/root/anaconda3/envs/openpcdet/lib/python3.8/site-packages/numpy/lib/arraysetops.py", line 315, in _unique1d
ar = np.asanyarray(ar).flatten()
File "/root/anaconda3/envs/openpcdet/lib/python3.8/site-packages/numpy/core/_asarray.py", line 171, in asanyarray
return array(a, dtype, copy=False, order=order, subok=True)
RecursionError: maximum recursion depth exceeded while calling a Python object
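
From the trace, the loop is the resampling fallback in DatasetTemplate.prepare_data (dataset.py line 155): when a training frame ends up with zero gt boxes after filtering and augmentation, a new random index is drawn and __getitem__ is called again. If every frame of a custom dataset yields zero valid boxes (for example, gt_names that do not match CLASS_NAMES, or boxes entirely outside POINT_CLOUD_RANGE), this recurses until the limit. A toy reproduction of that logic (my paraphrase, not the repo code):

    import numpy as np

    class DatasetStub:
        """Toy stand-in for the retry logic seen at dataset.py line 155 in the trace."""

        def __len__(self):
            return 100

        def __getitem__(self, index):
            gt_boxes = np.zeros((0, 7))  # imagine filtering removed every box
            if len(gt_boxes) == 0:
                new_index = np.random.randint(len(self))
                return self[new_index]  # -> RecursionError, exactly as above
            return {'gt_boxes': gt_boxes}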

@josy43 , @cowarder - I am facing the same issue. I can train fine but when I evaluate all metrics come to zero! Did you get past that?

Have you solved this problem?

@sshaoshuai @jihanyang
It seems that you have not yet officially provided a pipeline for using your own custom data.

So I wrote a pipeline describing how to import your own custom data.
Currently it only covers the Dataloader stage.
https://github.com/Leozyc-waseda/Lidar_Openpcdet_ST3D#use-openpcdet-to-train-your-own-dataset

#771 Hi guys.
Hope this helps everyone.

Hello, you can refer to my successful example using a kitti-format custom dataset. The README describes how to label, train, and run inference on it, including the transformation of coordinates. It may solve your problems!
https://github.com/OrangeSodahub/CRLFnet#lid-cam-fusion
https://github.com/OrangeSodahub/CRLFnet/blob/master/src/site_model/src/LidCamFusion/OpenPCDet/pcdet/datasets/custom/README.md
You can also review this pull request: #1032

assert obj_points.shape[0] == info['num_points_in_gt']

AssertionError

Can someone help me with this error?