Join the challenge WhatsApp group: https://chat.whatsapp.com/BXXoSFUktpHHrK3izaRfCw
Are you passionate about making widespread, impactful global change? Autonomous vehicles represent one of the biggest revolutions mankind has ever seen, and they will affect every aspect of our daily lives. In this challenge you will help enable the autonomous-car revolution. Teams undertaking Innoviz's Rigid Motion Segmentation Challenge will solve the problem of decomposing Lidar data (point clouds) into background and moving objects.
Detect all points that belong to a moving object!
Check out our online point cloud (open in Chrome)
| Team | Score |
|---|---|
| Sorry4YourLoss - #2 | 31.95% |
| Arthur - #2 | 30.35% |
| Arthur - #3 | 29.97% |
| Arthur - #1 | 29.38% |
| Sorry4YourLoss - #0 | 28.85% |
| Sorry4YourLoss - #1 | 28.7% |
| Arthur - #0 | 27.74% |
| Not Netanel - #1 | 20.22% |
| Talos - #1 | 18.22% |
| Not Netanel - #2 | 6.69% |
| AAA - #0 | 2.34% |
The dataset consists of simulated videos of urban driving.
The Lidar simulation is generated by CARLA.
Lidar Details
- Field of view: 80° × 40°
- Resolution: 0.2° × 0.2°
- Maximal distance: 100 m
- Minimal distance: 3 m
- Coordinate system origin: the center of the Lidar; the translation between the Lidar and the center of the ego vehicle is (x=0.6, y=0.0, z=1.3) m

We always use a right-handed coordinate system: x is forward and z is upward.
A point cloud is a set of 3D points in space, in our case generated by the Lidar.
In addition to the spatial location of each point, Innoviz's Lidar also extracts reflectivity, which is similar to "color".
The point cloud coordinate system is centered on the ego vehicle (aligned with the ego motion data).
File structure:
x[cm], y[cm], z[cm], reflectivity[0-100]
The number of rows (the number of points in the point cloud) is not fixed.
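For example, here is a minimal loading sketch in Python, assuming the files are plain comma-separated text (the file name point_cloud.csv is a placeholder):

```python
import numpy as np

# Load one point cloud file: one point per row, with columns
# x[cm], y[cm], z[cm], reflectivity[0-100] (assumed comma-separated).
pc = np.loadtxt("point_cloud.csv", delimiter=",")  # shape: (num_points, 4)

xyz_m = pc[:, :3] / 100.0    # convert cm -> m
reflectivity = pc[:, 3]      # 0-100, similar to "color"
```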
Ego motion is the motion of the vehicle on which the Lidar is mounted. Applying the ego motion rotation and translation to the point cloud transforms it into the global coordinate system.
Rotation order: rotation_x -> rotation_y -> rotation_z
File structure:
rotation_x[rad], rotation_y[rad], rotation_z[rad], translation_x[m], translation_y[m], translation_z[m]
A single row.
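As a rough illustration of this convention, here is a sketch continuing the loading example above; it assumes the angles are extrinsic rotations about the fixed axes, so the combined matrix is Rz · Ry · Rx with Rx applied first (the repo's RotationTranslationData class is the recommended tool in practice, and ego_motion.csv is a placeholder name):

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Combined rotation for the order rotation_x -> rotation_y -> rotation_z.

    Assumes extrinsic rotations about the fixed x, y, z axes, so the
    combined matrix is Rz @ Ry @ Rx (Rx is applied first).
    """
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Single-row ego motion file: rx, ry, rz [rad], tx, ty, tz [m].
ego = np.loadtxt("ego_motion.csv", delimiter=",")
R = rotation_matrix(*ego[:3])
t = ego[3:]

# Transform points (in meters, ego-vehicle frame) to the global frame.
xyz_global = xyz_m @ R.T + t
```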
Points that belong to moving objects are labeled 1; all others are labeled 0.
File structure:
label[0-1]
The number of rows is identical to that of the corresponding point cloud file.
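Continuing the loading sketch above, the labels can be read the same way (labels.csv is a placeholder name):

```python
import numpy as np

# One 0/1 label per row, aligned with the rows of the point cloud file.
labels = np.loadtxt("labels.csv", dtype=np.int64)
assert labels.shape[0] == pc.shape[0]  # one label per point

moving_points = xyz_m[labels == 1]   # points on moving objects
static_points = xyz_m[labels == 0]   # background points
```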
Just download the test and train sets and unzip them.
Test set
Train set
YOU DON'T NEED TO BE A 3D OR POINT CLOUD EXPERT!!
We will give you everything you need to get started with point cloud data. Everything we provide is open source, and you are more than welcome to explore our source code and change it to suit your needs.
The code has been tested on Linux and (most of it) on Windows.
In the repo you will find the following (among other code):
In this file you will find the RotationTranslationData class, which can help with any affine transformation you might want to apply to the point cloud. We strongly recommend using this class for transformations.
In this file you will find the pc_show() function. This is our point cloud viewer for the challenge, based on the Panda3D graphics engine. Think of it as "matplotlib.pyplot.imshow()" for point clouds (after installing Panda3D). If you are familiar with Cython, you can dramatically accelerate the viewer. Install Cython and build our code by running this in a terminal:

```
cd .../datahack2018
python visualizations/setup.py build_ext --inplace
```
This directory holds two scripts: one for playing a point cloud video and one for aggregating point clouds. It is the best place to start your challenge.
We will evaluate your scores using this script. More on this in the Evaluation section below.
Evaluation is IOU based.
Let A be the set of ground-truth moving points (the yellow circle) and B the set of points predicted as moving (the orange circle).
The IOU score is (A ∩ B) / (A ∪ B).
Note: we only consider the IOU of the positive labels/predictions.
More information here
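A minimal sketch of the metric in Python (the official evaluation script in the repo is authoritative):

```python
import numpy as np

def positive_iou(gt, pred):
    """IOU over the positive (moving) class only."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    intersection = np.logical_and(gt, pred).sum()
    union = np.logical_or(gt, pred).sum()
    return intersection / union if union > 0 else 0.0

# Example: perfect recall but one false positive -> 2/3.
print(positive_iou(np.array([0, 1, 1, 0]), np.array([1, 1, 1, 0])))
```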
The evaluation script uses the directory and file name to identify the correct ground-truth file, so you need to keep the original directory tree.
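One way to keep the tree intact when writing predictions (a sketch with placeholder paths; predict() stands in for your model, and it assumes the test tree contains only point cloud files, so adapt the filtering to the actual layout):

```python
import os
import numpy as np

test_dir = "test"          # placeholder: root of the unzipped test set
pred_dir = "predictions"   # placeholder: output root, mirrors test_dir

def predict(pc_path):
    """Stand-in for your model: return one 0/1 label per point."""
    pc = np.loadtxt(pc_path, delimiter=",")
    return np.zeros(pc.shape[0], dtype=np.int64)  # all-background baseline

# Walk the test tree and write one prediction file per input file,
# preserving the relative path so the evaluation script can match gt.
for root, _, files in os.walk(test_dir):
    for name in files:
        in_path = os.path.join(root, name)
        rel = os.path.relpath(in_path, test_dir)
        out_path = os.path.join(pred_dir, rel)
        os.makedirs(os.path.dirname(out_path) or ".", exist_ok=True)
        np.savetxt(out_path, predict(in_path), fmt="%d")
```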
To submit your predictions for the test set, zip your prediction directory and send it to datahack2018@innoviz.tech. Each team can submit results at most 3 times throughout the hackathon.
Coming soon...