Dynamic object detection in LiDAR

MIT License

The result of the network (click on the image below):

The network weights can be downloaded from the weight link.

The method

The LiDAR point cloud is represented as a top-view image in which each pixel corresponds to a 12.5 x 12.5 cm cell. For each grid cell we pick a random point that falls inside it and take its height and intensity.
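A minimal sketch of this rasterization step (not the repository's actual code); the grid extent of +/-40 m, the array layout, and the function name are assumptions made for illustration:

```python
import numpy as np

CELL_SIZE = 0.125                            # 12.5 cm per pixel
GRID_RANGE = 40.0                            # assumed +/-40 m around the sensor
GRID_DIM = int(2 * GRID_RANGE / CELL_SIZE)   # 640 x 640 pixels under this assumption

def rasterize(points):
    """points: (N, 4) array of x, y, z, intensity in the LiDAR frame."""
    height_map = np.zeros((GRID_DIM, GRID_DIM), dtype=np.float32)
    intensity_map = np.zeros((GRID_DIM, GRID_DIM), dtype=np.float32)

    # Map metric x/y coordinates to pixel indices.
    cols = ((points[:, 0] + GRID_RANGE) / CELL_SIZE).astype(np.int32)
    rows = ((points[:, 1] + GRID_RANGE) / CELL_SIZE).astype(np.int32)
    valid = (cols >= 0) & (cols < GRID_DIM) & (rows >= 0) & (rows < GRID_DIM)

    # One arbitrary ("random") point survives per cell: with duplicate indices
    # the assignment keeps whichever point happens to be written last.
    height_map[rows[valid], cols[valid]] = points[valid, 2]
    intensity_map[rows[valid], cols[valid]] = points[valid, 3]

    return np.stack([height_map, intensity_map], axis=0)  # (2, H, W) network input
```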

We directly regress the 3D boxes: for each pixel of the image we regress a confidence between 0 and 1, 7 box parameters (dx_centroid, dy_centroid, z_centroid, width, height, dx_front, dy_front), and the class scores.
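A hedged sketch of how one pixel's prediction could be decoded into a 3D box. The README does not spell out the meaning of dx_front/dy_front; here they are assumed to be the offset from the cell centre to the middle of the box's front edge, which would encode both heading and half-length:

```python
import numpy as np

CELL_SIZE = 0.125  # must match the rasterization grid

def decode_pixel(pred, row, col, grid_range=40.0):
    """pred: 1D array [conf, dx_c, dy_c, z_c, width, height, dx_f, dy_f, class scores...]."""
    conf = pred[0]
    dx_c, dy_c, z_c, width, height, dx_f, dy_f = pred[1:8]
    class_id = int(np.argmax(pred[8:]))

    # Metric position of this cell's centre.
    cell_x = col * CELL_SIZE - grid_range + CELL_SIZE / 2
    cell_y = row * CELL_SIZE - grid_range + CELL_SIZE / 2

    # Centroid is regressed as an offset from the cell centre.
    cx, cy, cz = cell_x + dx_c, cell_y + dy_c, z_c

    # Assumption: dx_front/dy_front point at the middle of the box's front
    # edge, so the centroid-to-front vector gives heading and half-length.
    front_x, front_y = cell_x + dx_f, cell_y + dy_f
    yaw = np.arctan2(front_y - cy, front_x - cx)
    length = 2.0 * np.hypot(front_x - cx, front_y - cy)

    return dict(confidence=conf, center=(cx, cy, cz),
                size=(length, width, height), yaw=yaw, class_id=class_id)
```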

We apply binary cross-entropy for the confidence loss, an L1 loss for the box parameter regression, and a softmax (cross-entropy) loss for class prediction. The confidence map is computed from the ground-truth boxes: the cell closest to each box centroid is assigned confidence 1.0 (green on the image above) and all other cells 0. The confidence loss is applied to all pixels; the other losses are applied only at pixels whose ground-truth confidence is 1.0.
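A sketch of this masked loss written with PyTorch; the repository's actual framework, tensor layout, and loss weighting may differ. Shapes assume a prediction of size (B, 8 + num_classes, H, W): one confidence channel, 7 box-parameter channels, then class logits:

```python
import torch
import torch.nn.functional as F

def detection_loss(pred, gt_conf, gt_box, gt_cls):
    """pred:    (B, 8 + C, H, W) raw network output
       gt_conf: (B, H, W)    1.0 at the cell closest to each box centroid, else 0.0
       gt_box:  (B, 7, H, W) target box parameters (valid only where gt_conf == 1)
       gt_cls:  (B, H, W)    integer class labels (valid only where gt_conf == 1)"""
    conf_logit = pred[:, 0]
    box_pred = pred[:, 1:8]
    cls_logit = pred[:, 8:]

    # Binary cross-entropy over every pixel of the confidence map.
    conf_loss = F.binary_cross_entropy_with_logits(conf_logit, gt_conf)

    # Box and class losses only at pixels with ground-truth confidence 1.0.
    mask = gt_conf > 0.5
    if mask.any():
        box_loss = F.l1_loss(box_pred.permute(0, 2, 3, 1)[mask],
                             gt_box.permute(0, 2, 3, 1)[mask])
        cls_loss = F.cross_entropy(cls_logit.permute(0, 2, 3, 1)[mask],
                                   gt_cls[mask])
    else:
        box_loss = cls_loss = conf_loss.new_zeros(())

    return conf_loss + box_loss + cls_loss
```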