mpitropov/cadc_devkit

Lidar Camera Synchronization


First, I'd like to thank you for the work you put into delivering this paper and dataset.

While reading the paper, I came across the procedure for synchronizing the sensors, especially the lidar and the cameras. I wonder how you corrected for the motion of the car for the left and right point clouds. Also, how is the cut-off angle, which is 180 degrees in the lidar frame, defined in practice?
I know the phase lock angle can be set in the Velodyne configuration, and it is defined as the start of the firing sequence of the lasers.

Second, what do you mean by the camera timestamp being truncated to within 0.1 s of the VARF?

Regards

Hi, I'll try to go into a bit more detail to clarify things here. We used this driver for our LiDAR: https://github.com/ros-drivers/velodyne

The GPS outputs a pulse-per-second (PPS) signal as well as a VARF signal (which we set to output at 10 Hz) that lines up with the PPS output every second. This 10 Hz signal is split by our synchronization board so that each of the 8 cameras receives the 10 Hz VARF signal. The LiDAR only takes in the PPS signal.
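
A toy sketch of that timing relationship (only the 1 Hz PPS and 10 Hz VARF rates mentioned above, nothing dataset-specific): every tenth VARF edge falls on a PPS edge, so all sensors share a common 0.1 s grid.

```python
# Hypothetical illustration: PPS edges every 1 s, VARF edges every 0.1 s,
# so each PPS edge coincides with a VARF edge.
pps = [float(s) for s in range(3)]             # 0.0, 1.0, 2.0 s
varf = [round(k * 0.1, 1) for k in range(30)]  # 0.0, 0.1, ..., 2.9 s
assert all(p in varf for p in pps)             # the two signals line up every second
```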

For our LiDAR configuration, we first set the RPM so it spins at 10 Hz. The cut angle is the angle at which the current point cloud sweep ends and the next one begins. We defined it to be where the sweep passes the x-axis of the LiDAR in the negative direction, which is when the sweep passes behind the direction the car faces. We chose this for two reasons: 1) we don't want the sweep to be cut off in the direction of the car's movement, and 2) we motion correct to the middle of the point cloud sweep, so points in front of the car are motion corrected the least. The last parameter we set was the phase lock angle, which is the direction the sweep should be facing when the PPS from the GPS is received. We set this to the positive direction of the x-axis of the LiDAR, which is the forward direction of the car.

With these settings, a point cloud sweep occurs every 0.1 seconds. For example, a sweep starts behind the car at -0.05 seconds, moves clockwise to pass the front of the car at 0.0 seconds, and then continues clockwise to behind the car to finish collecting the sweep. The points collected from -0.05 to 0.0 are motion corrected forwards in time, while the points collected from 0.0 to 0.05 are motion corrected backwards in time. Because of this, it is as if we collect a full 360-degree point cloud sweep instantly, matched with the PPS and the 10 Hz signal.
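
To make the forwards/backwards correction concrete, here is a rough numpy sketch of motion correcting one sweep to its middle timestamp under a constant-velocity assumption. It is only an illustration; the actual driver does this through the ROS transform tree (see below), and the function and parameter names here are made up.

```python
import numpy as np

def motion_correct_sweep(points, t_rel, v_xy, yaw_rate):
    """Illustrative only: re-express each point in the lidar pose at the middle
    of the sweep (t_rel = 0), assuming constant planar velocity and yaw rate.

    points   : (N, 3) xyz, each measured in the lidar frame at its own timestamp
    t_rel    : (N,)  per-point time relative to the sweep reference, here -0.05..0.05 s
    v_xy     : (2,)  planar velocity of the car expressed in the lidar frame [m/s]
    yaw_rate : yaw rate of the car [rad/s]
    """
    out = points.copy()
    for i, dt in enumerate(t_rel):
        yaw = yaw_rate * dt                 # heading of the lidar frame at dt w.r.t. dt = 0
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])     # rotation of frame(dt) relative to frame(0)
        t = np.asarray(v_xy) * dt           # translation of frame(dt) relative to frame(0)
        out[i, :2] = R @ points[i, :2] + t  # point expressed in the frame at t_rel = 0
    return out

# Points measured before the reference time (dt < 0) are pulled backwards and points
# measured after it (dt > 0) pushed forwards, so the 360-degree sweep looks instantaneous.
sweep = np.random.rand(16, 3) * 20.0
dts = np.linspace(-0.05, 0.05, 16)
corrected = motion_correct_sweep(sweep, dts, v_xy=(10.0, 0.0), yaw_rate=0.1)
```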

The cameras can't directly take in the GPS's PPS signal and timestamp data, so we give them the 10 Hz VARF signal instead. At each signal pulse, a picture is taken, sent to the computer over USB, and timestamped by the computer. The data transfer over USB, as well as the timestamping by the ROS driver, adds delay and makes the timestamps inaccurate. We instead truncate the timestamps to the 10 Hz VARF edges (every 0.1 s), which is when the cameras were signalled to trigger. It would have been better if our cameras could receive a PPS signal and timestamp the images themselves. Another option would have been to truncate to the 10 Hz VARF and then add the exposure time to the timestamps.
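
A minimal sketch of what that truncation could look like (a hypothetical helper, not the devkit's actual code): the noisy computer-side stamp is floored to the most recent VARF trigger edge.

```python
def truncate_to_varf(stamp_sec, varf_hz=10.0):
    """Snap a computer-side camera timestamp down to the most recent VARF edge,
    discarding the USB transfer and driver timestamping latency."""
    period = 1.0 / varf_hz                 # 0.1 s between trigger pulses at 10 Hz
    return (stamp_sec // period) * period  # floor to the trigger edge

print(truncate_to_varf(12.374))            # -> 12.3, the edge that actually triggered the shutter
```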

Thanks @mpitropov for the detailed reply.

How did you actually correct for the motion in the point cloud? Could you give me a reference for this?

Also, how did you implement the 180-degree cut angle? The lidar starts rotating once it's powered and there's no absolute encoder info available. Did you use the first point cloud to estimate the starting point of the sweep? One more thing: if I'm not mistaken, the ROS driver only gives access to the last packet's timestamp to time the whole point cloud, so did you modify the driver to get the exact timestamp of each data packet?

Thanks again for the info and the amazing work.

The driver runs two nodes. One creates the packets by reading data directly from the LiDAR, and the other reads the packets and converts them into the point cloud. Each packet is like a pie slice and has its own timestamp.

The driver uses the timestamps to cut off the message when the sweep passes the cut angle. For motion correction, the driver uses the ROS transform tree: each point is first transformed from the LiDAR frame into the static frame at the point's timestamp, then into the static frame at the new timestamp, and finally back into the LiDAR frame.
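
In matrix form, that chain amounts to going through the fixed frame and back. Here is a rough numpy sketch of the idea (the real driver does this in C++ via tf2; the names here are illustrative):

```python
import numpy as np

def correct_point(p_lidar, T_odom_lidar_at_point, T_odom_lidar_at_target):
    """Re-express a point measured at its own timestamp in the lidar frame at the
    sweep's reference timestamp: lidar(t_point) -> odom -> lidar(t_target).

    p_lidar                : (3,) xyz in the lidar frame when the point was measured
    T_odom_lidar_at_point  : (4, 4) lidar->odom pose at the point's timestamp (from TF)
    T_odom_lidar_at_target : (4, 4) lidar->odom pose at the reference timestamp (from TF)
    """
    p_h = np.append(p_lidar, 1.0)                            # homogeneous coordinates
    p_odom = T_odom_lidar_at_point @ p_h                     # into the fixed odom frame
    p_corr = np.linalg.inv(T_odom_lidar_at_target) @ p_odom  # back into the lidar frame at t_target
    return p_corr[:3]
```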

The main commit I added was this one (linked below). I'm not too familiar with the recent changes to the driver code. The main thing is that the LiDAR should be configured to use the packet timestamps from the GPS rather than having ROS stamp the packets as it reads them.
ros-drivers/velodyne#223

Thanks again @mpitropov for the reply.

So, from what I understood (please correct me if I'm wrong), you published the odom frame from the localization stack to account for the vehicle's speed and angular velocity, and made a static TF between the velodyne link and the odom link?

I'm currently working on the same problem. I made a node that publishes the vehicle's speed and angular velocity and modified the driver to subscribe to those topics and motion correct every point in the packet given its timestamp, but it isn't working properly.

That is pretty close. We have static transforms to the different sensors on the car from calibration, so the lidar-to-GPS transform is static. The localization stack then publishes the dynamic transform between the odom frame and the GPS frame, which moves with the car.
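
So, roughly, the chain is a dynamic odom-to-GPS transform from localization composed with the static GPS-to-lidar calibration, which gives the lidar pose in the fixed frame at any timestamp. A small sketch (frame names are illustrative, not necessarily the dataset's):

```python
import numpy as np

#   odom ──(dynamic, from the localization stack)── gps ──(static, from calibration)── lidar
def lidar_pose_in_odom(T_odom_gps_at_t, T_gps_lidar_static):
    """Compose the dynamic localization transform with the static calibration
    transform to get the lidar pose in the odom frame at time t (4x4 matrices)."""
    return T_odom_gps_at_t @ T_gps_lidar_static
```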

Thanks @mpitropov for the help and explanation.