YCAyca/3D-Multi-Object-Tracking-using-Lidar

running on a collected PC data set

Opened this issue · 8 comments

I want to run this code on my own data, which contains just X, Y, Z, and intensity.

Can you guide me with this process?

YCAyca commented

Have you already trained with OpenPCDet and are asking for help with the tracking part, or with detection as well?

I saw there is a demo.py in OpenPCDet that allows you to run the algorithm on <X, Y, Z, Intensity>, and I am trying to run that part first and then use the extracted bounding boxes with your tracker.

But honestly, I would like to know all of the pipeline modifications needed to run your tracker on my own point cloud data in the shape <X, Y, Z, Intensity>, which contains only vehicles, pedestrians, trucks, and motorcycles, and was captured by a Blickfeld Cube 1.
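As far as I know, OpenPCDet's demo.py reads each frame as a flat float32 array of shape (N, 4) from a KITTI-style .bin (or .npy) file, with intensity normalized to [0, 1]. A minimal sketch of converting <X, Y, Z, Intensity> data into that layout, assuming your capture is already an (N, 4) array (the file name and random cloud are just placeholders):

```python
import numpy as np

def to_openpcdet_bin(points_xyzi, out_path):
    """Save an (N, 4) <x, y, z, intensity> array in the KITTI-style
    float32 .bin layout that OpenPCDet's demo.py reads.
    Intensity is rescaled to [0, 1] if it looks like raw reflectivity."""
    pts = np.asarray(points_xyzi, dtype=np.float32).copy()
    assert pts.ndim == 2 and pts.shape[1] == 4, "expected (N, 4) <x, y, z, i>"
    i = pts[:, 3]
    if i.max() > 1.0:            # e.g. raw 0-255 reflectivity values
        pts[:, 3] = i / i.max()
    pts.tofile(out_path)         # flat float32, row-major: x y z i x y z i ...
    return pts

# Placeholder cloud standing in for a Blickfeld Cube 1 capture
cloud = np.random.rand(100, 4) * [10.0, 10.0, 2.0, 255.0]
saved = to_openpcdet_bin(cloud, "000000.bin")

# Round-trip check: demo.py reads frames back exactly this way
reloaded = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
```

The round-trip read at the end mirrors how the KITTI-style loaders parse the file, so if it matches what you saved, demo.py should at least ingest the frame.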

Thanks for any further help in advance.

YCAyca commented

demo.py is for running inference and checking your results by visualizing the 3D boxes with their labels. I created it especially for my project, so you need to make some changes. First, you need to train your own model using OpenPCDet's train.py. The instructions are quite simple; I suggest you check the official repo, create a dataloader for your custom dataset, and train with one of the models available in the OpenPCDet repo, like PointNet, SECOND, etc.
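In OpenPCDet, custom dataloaders subclass `pcdet.datasets.DatasetTemplate`; the exact API is documented in the official repo's custom-dataset guide. Stripped of the framework, the core job is just to hand back one dict per frame with a `points` array. A minimal, pcdet-free sketch of that idea (the class name and .npy file layout are hypothetical, not part of OpenPCDet):

```python
import glob
import numpy as np

class CustomCloudDataset:
    """Sketch of a custom point-cloud dataset. A real OpenPCDet loader
    would subclass pcdet.datasets.DatasetTemplate and run the frame
    through self.prepare_data(); here we only show the per-frame dict
    it has to build from your files."""

    def __init__(self, root_dir, ext=".npy"):
        # One file per frame, e.g. 000000.npy, 000001.npy, ...
        self.files = sorted(glob.glob(f"{root_dir}/*{ext}"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        points = np.load(self.files[idx]).astype(np.float32)  # (N, 4) <x, y, z, i>
        return {"points": points, "frame_id": idx}
```

Once something shaped like this works for your Blickfeld frames, porting it onto `DatasetTemplate` is mostly a matter of moving the file listing into the template's init and the loading into its `__getitem__`.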

After that, we can take a look at how to visualize your detection results or apply tracking.

first you need to train your own model using OpenPCDet train.py.

So, in your opinion, if the data changes we need to train a new model, and pretrained models won't work?

Thanks again for your kind and fast response.

YCAyca commented

In my case it wasn't working. For example, when I trained the model on KITTI and ran inference on Panda, the boxes were not good. This is because the LiDAR setup is a sensitive subject (even though the LiDAR sensor is the same, if it is, for example, 2 cm higher in your setup than in your training dataset's setup, it affects the results badly). So it is not like images collected from different cameras, which mostly works fine.

Whether for training or for inference with demo.py, you need to create a dataloader for your custom dataset, so I suggest you create that first and check your results using demo.py from the official OpenPCDet repo. Maybe it will work and you won't need to train from scratch; otherwise, you already have the dataloader and can just train.

You're welcome! :)

It did not work as you anticipated, because I got zero bounding boxes :-).

So I need to re-train OpenPCDet using a custom dataloader.

This is because the LiDAR setup is a sensitive subject (even though the LiDAR sensor is the same, if it is, for example, 2 cm higher in your setup than in your training dataset's setup, it affects the results badly)

This raises a big doubt for me: how would it be possible to generalise a trained model?

I need a model that works in many different situations, and it is not possible to train a model for every location and every parameter (height, angle, ...) of a scanner.

It is true that scanners with different resolutions may produce different scans and need their own training sessions, but it is doubtful to me how the height and angle of the scanner change this process.

Could you please explain the problem here in more detail, if you know what it is?

YCAyca commented

Actually, I meant not only the resolution difference; a different scale or height is also a different setup. I haven't worked on this subject, but I would try to adjust the data according to the height at inference time. For example, if you trained your model with a sensor at height = 1 m and you will use it with another sensor at height = 2 m, you can adjust your data by applying the 1 m difference to its z axis.

Maybe something like that can work :)
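The height adjustment above is a one-line numpy operation. A minimal sketch, assuming z points up with the origin at the sensor (in that convention the ground sits at z = -height, so a sensor mounted higher sees everything lower and the correction is added; flip the sign for other coordinate conventions). The 1 m / 2 m heights are the example values from the comment:

```python
import numpy as np

def shift_to_training_height(points, train_h, deploy_h):
    """Shift the z axis so a cloud captured at deploy_h metres looks as if
    the sensor were mounted at train_h metres (x, y, intensity untouched).
    Assumes z up with the origin at the sensor: ground appears at z = -h."""
    out = points.copy()
    out[:, 2] += (deploy_h - train_h)   # deployed higher => raise points back up
    return out

# Trained at 1 m, deployed at 2 m: a ground point seen at z = -2
cloud = np.array([[5.0, 0.0, -2.0, 0.3]], dtype=np.float32)
adjusted = shift_to_training_height(cloud, train_h=1.0, deploy_h=2.0)
# ground moves to z = -1, where the 1 m training sensor would have seen it
```

This only compensates a pure vertical offset; a tilted mounting angle would need a full rotation of the cloud instead.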