Compares downsampled-then-upsampled range images against the originals via pointwise, perceptual, and semantic losses. Applicable to both semantic and detection tasks, even without the semantic loss.
Reproducibility: Hard / No Code
Uses a U-Net with dropout on range images to upsample simulated environments captured with different sensors. Any super-resolution model would likely perform similarly; the effect of the dropout rate is discussed.
Reproducibility: Hard (Custom-Simulation Data) / Code
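As a rough illustration of the upsampling target (not the paper's U-Net), here is a naive nearest-neighbour row upsampler for a range image in plain Python; a learned super-resolution model predicts the in-between scan rings instead of just repeating them:

```python
def upsample_rows(range_image, factor=2):
    """Nearest-neighbour vertical upsampling of a range image
    (rows = scan rings, cols = azimuth bins): each ring is simply
    repeated `factor` times."""
    out = []
    for row in range_image:
        out.extend([list(row) for _ in range(factor)])
    return out

low_res = [[1.0, 2.0], [3.0, 4.0]]  # toy 2-ring range image
high_res = upsample_rows(low_res)   # 4 rings
```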
TBD
Analyzing the Cross-Sensor Portability of Neural Network Architectures for LiDAR-based Semantic Labeling
A 2D-to-3D label transfer mechanism evaluated in cross-sensor tests; no specific adaptation mechanism is proposed.
Creates incomplete versions of the original lidar scans and trains a voxel-based sparse-convolution completion network.
Reproducibility: Hard / No Code
Uses a GAN to make synthetic LiDAR BEV data more realistic, improving a BEV YOLO detector.
Applies CycleGAN to BEV LiDAR images of synthetic data to improve YOLO-R performance on KITTI. [50] is essentially the same work by the same authors.
A special 2D representation of LiDAR data that keeps (x, y, z) coordinates, unlike a range image. Applies noise and point removal to the original data, then uses a VAE and a GAN for reconstruction. Surprisingly good results with a VAE trained only on compressed embeddings rather than the noisy data itself.
Reproducibility: Easy / Code
Builds on the Deep Generative Modeling of LiDAR Data baseline with a similar approach but more advanced data preparation: a Gumbel-distribution-based mask on the confidence map makes point-drop modeling differentiable for backprop.
Impressive test results, recovering from manual perturbation. 🌟
Reproducibility: Easy / Code
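A minimal sketch of the Gumbel-sigmoid relaxation the notes refer to, assuming per-point keep confidences in (0, 1); the paper's exact parameterization may differ:

```python
import math
import random

def gumbel():
    # Gumbel(0, 1) sample via inverse CDF.
    u = random.random()
    return -math.log(-math.log(u + 1e-12) + 1e-12)

def soft_keep_mask(confidence, tau=0.5):
    """Relaxed (differentiable) keep/drop mask from per-point keep
    confidences: add Gumbel noise to the log-odds and squash with a
    temperature-scaled sigmoid, so values near 1 keep the point and
    values near 0 drop it."""
    mask = []
    for c in confidence:
        logit = math.log(c) - math.log(1.0 - c)  # log-odds
        noise = gumbel() - gumbel()              # difference ~ Logistic(0, 1)
        mask.append(1.0 / (1.0 + math.exp(-(logit + noise) / tau)))
    return mask

random.seed(0)
mask = soft_keep_mask([0.95, 0.5, 0.05])
```

As tau goes to 0 the mask approaches a hard 0/1 point-drop decision while gradients still flow through the confidences.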
ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework for LiDAR Point Cloud Segmentation
Combines three modules: self-supervised dropout-noise rendering, statistics-invariant and spatially-adaptive feature alignment, and transferable segmentation learning.
Reproducibility: Hard
Combines statistics/feature normalization with MinEnt and achieves good results on inter-dataset adaptation. Simple tricks, but no code.
Reproducibility: Medium
From the creators of SemanticKITTI: transfers Velodyne HDL-64 scans to match those of a Velodyne HDL-32, a sensor with lower resolution and a different FOV.
Exploits the fusion of multiple scans of the source dataset and meshing into a denser map to sample virtual scans; note this targets high-to-low-resolution adaptation.
Reproducibility: Medium / Code
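For contrast with the mesh-based virtual-scan sampling above, the naive high-to-low-resolution baseline of simply dropping scan rings can be sketched as:

```python
def drop_rings(range_image, keep_every=2):
    """Keep every `keep_every`-th scan ring (row) of a range image,
    e.g. 64 rings -> 32 rings. Ignores the FOV difference that the
    mesh-based virtual scans can handle."""
    return [list(row) for row in range_image[::keep_every]]

scan64 = [[float(ring)] * 8 for ring in range(64)]  # toy 64-ring scan
scan32 = drop_rings(scan64)
```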
A predecessor of ePointDA and follow-up to SqueezeSeg; uses GTA-LiDAR and combines learned intensity rendering, geodesic correlation alignment, and progressive domain calibration for adaptation.
Reproducibility: Easy / Code
One of the few models using multi-modality (lidar + image) learning with 3D semantic labels. Cross-modal mimicking is achieved through a KL-divergence loss.
The architecture has separate main and mimicking heads to disentangle segmentation from the cross-modal learning objective. Considers day-to-night, country-to-country, and dataset-to-dataset adaptation.
Reproducibility: Medium / Code
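A minimal sketch of cross-modal mimicking via KL divergence, with made-up per-point class logits for the 2D and 3D heads:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q):
    # KL(p || q) between discrete class distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Made-up per-point class logits from the 2D (image) and 3D (lidar) heads.
p_2d = softmax([2.0, 0.5, -1.0])
p_3d = softmax([1.5, 0.7, -0.8])
mimic_loss = kl_div(p_2d, p_3d)  # drives the two heads toward agreement
```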
A boundary-aware domain adaptation approach for semantic segmentation of lidar point clouds. Uses Gated-SCNN so the domain-shared feature extractor preserves boundary information in the shared features, and uses the learned boundaries to refine the segmentation.
Reproducibility: Medium / Code
Cross-range (near/far) and cross-device (multi-dataset) adaptation using adversarial global adaptation and fine-grained local adaptation. Only the losses are described; no architecture or training details are provided. 🌟
Reproducibility: Hard
Uses pseudo-annotations, reversible scale transformations, and motion coherency of object size. Beats few-shot tuning using only scale supervision. 🌟
Reproducibility: Hard / Code not released yet.
A task-agnostic, multi-task approach combining detection and segmentation with a SECOND-like architecture.
Reproducibility: Hard
Uses wide/augmented multi-frame teachers to pseudo-label unlabeled data and trains students on those labels. Shows promising results, especially when the labeled-data ratio is low. 🌟
Reproducibility: Medium
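The teacher-to-student hand-off can be sketched as confidence-threshold filtering of teacher predictions (the threshold value and prediction format below are assumptions for illustration):

```python
def select_pseudo_labels(teacher_preds, threshold=0.9):
    """Keep only teacher detections above a confidence threshold as
    pseudo-labels for student training; discard uncertain ones."""
    return [(box, cls) for box, cls, score in teacher_preds
            if score >= threshold]

teacher_preds = [
    ((0.0, 1.0, 2.0), "car", 0.97),
    ((5.0, 1.0, 0.0), "cyclist", 0.55),  # too uncertain, discarded
    ((2.0, 3.0, 1.0), "pedestrian", 0.92),
]
pseudo = select_pseudo_labels(teacher_preds)
```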
Uses self-training with generated pseudo-labels, refined via smoothing (online/offline), resizing, and extrapolation ("dreaming"). 🌟
Reproducibility: Hard
Random object-scale augmentation, an IoU loss for quality-aware memory ensembling, and curriculum data augmentation for self-training (pseudo-label generation). 🌟🌟
Reproducibility: Easy / Code
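A minimal sketch of random object-scale augmentation, scaling an object's points about its (assumed known) box center:

```python
import random

def scale_about_center(points, center, scale):
    """Scale an object's (x, y, z) points about its box center."""
    cx, cy, cz = center
    return [(cx + scale * (x - cx),
             cy + scale * (y - cy),
             cz + scale * (z - cz)) for x, y, z in points]

def random_scale(points, center, low=0.8, high=1.2):
    # Randomized variant used for augmentation; bounds are assumptions.
    return scale_about_center(points, center, random.uniform(low, high))

pts = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]
scaled = scale_about_center(pts, (0.0, 0.0, 0.0), 1.1)
```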
Benchmark/dataset work covering self-supervised (contrastive/clustering), semi-supervised (pseudo-label, student/teacher), and unsupervised (SN-like) methods on their new dataset and cross-dataset, using SECOND as the baseline. 🌟🌟
Reproducibility: Easy / Code (only semi-supervised and unsupervised methods)