LidarDomainAdaptation

A resource list for domain adaptation on LiDAR data.

Survey Papers

Domain Invariant Data

Sensor-to-Sensor

Compares down- then up-sampled range images against the originals via pointwise, perceptual, and semantic losses. Applicable to both semantic segmentation and detection tasks; the semantic loss can be omitted.
Reproducibility: Hard / No Code
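
The pointwise comparison described above can be sketched in a few lines; this toy numpy example (the data and nearest-neighbour resampling are my assumptions, not the paper's pipeline) down- then up-samples a range image and scores the reconstruction:

```python
import numpy as np

def downsample_upsample(range_img, factor=2):
    """Simulate a lower-resolution sensor: keep every `factor`-th beam row,
    then repeat rows back to the original height (nearest neighbour)."""
    low = range_img[::factor, :]
    return np.repeat(low, factor, axis=0)[: range_img.shape[0], :]

def pointwise_loss(a, b, mask=None):
    """Mean absolute range error over valid pixels (mask=True where a return exists)."""
    if mask is None:
        mask = (a > 0) & (b > 0)
    return float(np.abs(a - b)[mask].mean())

# Toy 4x8 range image (metres); rows = beams, cols = azimuth bins.
rng = np.random.default_rng(0)
img = rng.uniform(1.0, 50.0, size=(4, 8))
recon = downsample_upsample(img, factor=2)
loss = pointwise_loss(img, recon)
```

The perceptual and semantic losses from the paper would replace `pointwise_loss` with distances in a network's feature or prediction space.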

Uses a U-Net with dropout on range images to upsample simulated scans from different sensors. Any super-resolution model would likely perform similarly; the paper discusses the effect of the dropout rate.
Reproducibility: Hard (Custom-Simulation Data) / Code

TBD

A 2D-to-3D label-transfer mechanism with cross-sensor tests; no specific adaptation mechanism is proposed.

Dataset-to-Dataset

Creates incomplete versions of the original LiDAR scans and trains a voxel-based sparse-convolution completion network.
Reproducibility: Hard / No Code
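
As a toy illustration of the degrade-then-complete setup (the voxel size and random point dropping here are illustrative assumptions, not the paper's recipe):

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    """Map (N, 3) points to integer voxel coordinates; return unique occupied voxels."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    return np.unique(coords, axis=0)

def drop_random_points(points, keep_ratio=0.5, seed=0):
    """Create an 'incomplete' scan by randomly discarding returns,
    a stand-in for the sensor-degradation step described above."""
    rng = np.random.default_rng(seed)
    keep = rng.random(len(points)) < keep_ratio
    return points[keep]

pts = np.random.default_rng(1).uniform(-10, 10, size=(1000, 3))
full_vox = voxelize(pts)                                   # completion target
sparse_vox = voxelize(drop_random_points(pts, keep_ratio=0.3))  # network input
```

The completion network then learns to map the sparse occupied-voxel set back to the full one.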

Domain Mapping

Dataset-to-Dataset & Sim-to-Real

Uses a GAN to make synthetic LiDAR BEV data more realistic, improving a BEV-YOLO detector.

Applies CycleGAN to BEV LiDAR images of synthetic data to improve the performance of YOLO-R on KITTI. [50] is essentially the same work by the same authors.
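
The BEV representation both of these works operate on can be sketched as a max-height map (grid ranges and resolution below are arbitrary choices):

```python
import numpy as np

def lidar_to_bev(points, x_range=(0, 40), y_range=(-20, 20), res=0.5):
    """Project (N, 3) LiDAR points into a bird's-eye-view max-height image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[keep], y[keep], z[keep]
    H = int((x_range[1] - x_range[0]) / res)
    W = int((y_range[1] - y_range[0]) / res)
    bev = np.full((H, W), -np.inf)
    xi = ((x - x_range[0]) / res).astype(int)
    yi = ((y - y_range[0]) / res).astype(int)
    np.maximum.at(bev, (xi, yi), z)   # keep the max z per cell
    bev[bev == -np.inf] = 0.0         # empty cells (sketch simplification)
    return bev

pts = np.random.default_rng(2).uniform([0, -20, -2], [40, 20, 2], size=(500, 3))
bev = lidar_to_bev(pts)
```

The GAN then translates such BEV images from the synthetic to the real style before they reach the detector.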

Sim-to-Real

A special 2D representation of LiDAR data that keeps (x, y, z) per pixel, unlike a range image. Applies noise and point removal to the original data, then uses a VAE and a GAN for reconstruction. Surprisingly good results with a VAE trained only on compressed embeddings rather than the noisy data itself.
Reproducibility: Easy / Code
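
A minimal sketch of such an (x, y, z)-preserving 2D projection, assuming a simple spherical grid (FOV and image size are arbitrary; out-of-FOV points are clamped to edge rows for brevity):

```python
import numpy as np

def xyz_image(points, H=16, W=64, fov_up=np.deg2rad(15), fov_down=np.deg2rad(-15)):
    """Spherical projection that stores raw (x, y, z) per pixel instead of range,
    so Cartesian structure survives the 2D representation."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                               # [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))
    u = ((yaw + np.pi) / (2 * np.pi) * W).astype(int) % W
    v = ((fov_up - pitch) / (fov_up - fov_down) * H).astype(int)
    v = np.clip(v, 0, H - 1)                             # clamp out-of-FOV beams
    img = np.zeros((H, W, 3))
    img[v, u] = points                                   # last write wins per pixel
    return img

pts = np.random.default_rng(3).normal(size=(200, 3)) * np.array([10.0, 10.0, 1.0])
img = xyz_image(pts)
```

Noise injection and point removal would then be applied to `img` before the VAE/GAN reconstructs it.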

Builds on Deep Generative Modeling of LiDAR Data as a baseline; a similar approach with more advanced data preparation, using a Gumbel-distribution-based mask on the confidence map to make point-drop modeling differentiable for backpropagation.
Impressive test results on recovery from manual perturbation. 🌟
Reproducibility: Easy / Code
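
The Gumbel-based masking idea can be sketched as a Gumbel-sigmoid relaxation (plain numpy, so only the sampling is shown; in the paper this sits inside an autograd graph, where the relaxation is what makes the discrete drop decision differentiable):

```python
import numpy as np

def gumbel_sigmoid_mask(logits, tau=0.5, seed=0):
    """Sample a soft keep/drop mask from per-point confidence logits using
    Gumbel noise; lower tau pushes the mask toward hard 0/1 decisions."""
    rng = np.random.default_rng(seed)
    # Gumbel(0, 1) noise: -log(-log(U)), U ~ Uniform(0, 1)
    g1 = -np.log(-np.log(rng.uniform(1e-9, 1.0, logits.shape)))
    g2 = -np.log(-np.log(rng.uniform(1e-9, 1.0, logits.shape)))
    return 1.0 / (1.0 + np.exp(-(logits + g1 - g2) / tau))

confidence = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])  # hypothetical per-point logits
mask = gumbel_sigmoid_mask(confidence)
```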

Combines modules for self-supervised dropout-noise rendering, statistics-invariant and spatially-adaptive feature alignment, and transferable segmentation learning.
Reproducibility: Hard

Dataset-to-Dataset

Combines statistics/feature normalization with MinEnt (entropy minimization) and achieves good results on inter-dataset adaptation. Simple tricks, but no code.
Reproducibility: Medium
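
MinEnt itself is a one-liner; a sketch of the entropy-minimization objective on per-point softmax outputs (toy logits, 3 classes assumed):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def entropy_loss(probs, eps=1e-8):
    """Shannon entropy of per-point class probabilities, averaged over points.
    Minimising this on unlabeled target data pushes predictions to be confident."""
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())

confident = softmax(np.array([[8.0, 0.0, 0.0], [0.0, 8.0, 0.0]]))
uncertain = softmax(np.zeros((2, 3)))   # uniform predictions, entropy = ln(3)
```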

Sensor-to-Sensor

From the creators of SemanticKITTI: transfers Velodyne HDL-64 scans to match scans from a Velodyne HDL-32, a sensor with lower resolution and a different FOV. Exploits the fusion of multiple scans of the source dataset and meshing into a denser map to sample virtual scans; note that this targets high-to-low-quality adaptation.
Reproducibility: Medium / Code

Domain Invariant Feature Learning

Sim-to-Real

Predecessor of ePointDA and follow-up to SqueezeSeg; uses GTA-LIDAR and combines learned intensity rendering, geodesic correlation alignment, and progressive domain calibration for adaptation.
Reproducibility: Easy / Code

Multiple Adaptation Settings

One of the few models using multi-modal (LiDAR + image) learning with 3D semantic labels. Mimicking between the modalities is achieved through KL divergence.
An architecture with separate main and mimicking heads disentangles segmentation from the cross-modal learning objective. Considers day-to-night, country-to-country, and dataset-to-dataset adaptation.
Reproducibility: Medium / Code
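
The cross-modal mimicking objective can be sketched with a symmetric KL term between the 2D and 3D branch predictions (a simplification; the paper's exact head/loss wiring differs):

```python
import numpy as np

def kl_div(p, q, eps=1e-8):
    """Mean KL(p || q): how much distribution q diverges from target p."""
    return float((p * (np.log(p + eps) - np.log(q + eps))).sum(axis=1).mean())

def mimicry_loss(probs_2d, probs_3d):
    """Symmetric cross-modal mimicking: each branch is pulled toward the other."""
    return kl_div(probs_2d, probs_3d) + kl_div(probs_3d, probs_2d)

# Toy per-point class probabilities from the image (2D) and LiDAR (3D) branches.
p2d = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
p3d = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]])
loss = mimicry_loss(p2d, p3d)
```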

Dataset-to-Dataset

A boundary-aware domain adaptation approach for semantic segmentation of LiDAR point clouds. Uses Gated-SCNN so the domain-shared feature extractor retains boundary information in the shared features, and uses the learned boundaries to refine the segmentation.
Reproducibility: Medium / Code

Sensor-to-Sensor

Cross-range (near/far) and cross-device (multi-dataset) adaptation using adversarial global adaptation and fine-grained local adaptation. Only the losses are described; no architecture or training details are provided. 🌟
Reproducibility: Hard

Dataset-to-Dataset

Pseudo-annotations, reversible scale transformations, and motion coherency of object size. Beats few-shot tuning with scale supervision alone. 🌟
Reproducibility: Hard / Code not released yet

Normalization Statistics & Others

Task-agnostic, multi-task approach combining detection and segmentation using a SECOND-like architecture.
Reproducibility: Hard

Uses wide/augmented multi-frame teachers for pseudo-labeling unlabeled data and trains students on those labels. Shows promising results, especially when the labeled-data ratio is low. 🌟
Reproducibility: Medium
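
Confidence-thresholded pseudo-labeling, the core of the teacher-student step, can be sketched as follows (the threshold value is an arbitrary choice):

```python
import numpy as np

def pseudo_labels(teacher_probs, threshold=0.9):
    """Keep only predictions the teacher is confident about; the student
    trains on these and ignores the rest (marked -1)."""
    conf = teacher_probs.max(axis=1)
    labels = teacher_probs.argmax(axis=1)
    labels[conf < threshold] = -1
    return labels

probs = np.array([
    [0.95, 0.03, 0.02],   # confident -> pseudo-label 0
    [0.50, 0.30, 0.20],   # uncertain -> ignored (-1)
    [0.05, 0.92, 0.03],   # confident -> pseudo-label 1
])
labels = pseudo_labels(probs)
```

The wide/augmented multi-frame teacher improves `teacher_probs`, which in turn raises the yield of usable pseudo-labels.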

Uses self-training with generated pseudo-labels, plus smoothing (online/offline), resizing, and extrapolation (dreaming). 🌟
Reproducibility: Hard

Random object-scale augmentation, an IoU loss for quality awareness, memory ensembling, and curriculum data augmentation for self-training (pseudo-label generation). 🌟🌟
Reproducibility: Easy / Code
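
Random object-scale augmentation can be sketched as scaling an object's points about its box centre (a toy two-point object here; real use would scale the points inside each ground-truth box):

```python
import numpy as np

def scale_object_points(points, center, scale):
    """Scale (N, 3) object points about the box centre, simulating the
    source-vs-target object-size gap (e.g. car sizes across countries)."""
    return (points - center) * scale + center

obj = np.array([[1.0, 0.0, 0.0], [3.0, 0.0, 0.0]])  # two points of one object
center = obj.mean(axis=0)                            # (2, 0, 0)
scaled = scale_object_points(obj, center, 0.5)       # shrink to half size
```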

Dataset Benchmark

Benchmark/dataset work covering self-supervised (contrastive/clustering), semi-supervised (pseudo-label, student/teacher), and unsupervised (SN-like) methods on their new dataset and across datasets, using SECOND as the baseline. 🌟🌟
Reproducibility: Easy / Code (only semi-supervised and unsupervised methods)

Table

(summary table image)