awesome point cloud processing algorithms
WeChat official account: 3D视觉工坊 (3D Vision Workshop)
Main topics: 3D vision algorithms, SLAM, vSLAM, computer vision, deep learning, autonomous driving, image processing, and practical technical content.
About the operators and guests: the operators are algorithm engineers from leading Chinese companies working on 3D vision, vSLAM, computer vision, point cloud processing, deep learning, autonomous driving, image processing, 3D reconstruction, and related fields. Invited guests include PhD and master's students from well-known universities in China and abroad, as well as senior algorithm engineers at Megvii, SenseTime, Baidu, Alibaba, and other companies. Everyone is welcome to join the discussion and learn together.
Commercial software in this area is plentiful; Alibaba, Tencent, Baidu, and JD all run related businesses.
Traditional point cloud acquisition techniques fall into two categories, non-contact measurement and contact measurement; the main difference is whether the probe touches the surface of the workpiece during measurement.
Non-contact measurement collects data by optical means, for example structured light, range measurement, and interferometry. Its advantages are high measurement speed, high accuracy, and the ability to obtain dense point clouds, but the accuracy is easily disturbed by external factors, and light reflected from the object surface as well as ambient light also affect it to some extent.
Contact measurement, by contrast, brings the sensor on the probe into contact with the outer surface of the measured object and reads the 3D coordinates of surface points as the probe is moved. Its advantages are that the probe structure is relatively fixed and the results are not affected by the material or surface properties of the measured object. Its drawbacks are that the probe wears from prolonged contact with the surface, the measurement speed is slow, and it is not well suited to objects with complex geometry.
Reverse engineering, game character reconstruction, cultural heritage preservation, digital museums, medical assistance, 3D city modeling.
Different acquisition techniques produce different types of point cloud data; according to how the points are distributed, point clouds can be divided into the following four types.
Scattered point clouds: all data points are scattered in space with no topological connection between any two points. In general, point clouds obtained by laser point measurement systems, or by coordinate measuring machines in random scanning mode, are scattered point clouds.
Scan-line point clouds: the 3D data acquired by the measuring device consists of multiple straight lines or curves, with a certain topological connection between points. This type is typical of data produced by scanning systems.
Gridded point clouds: every point in the cloud corresponds to a vertex of a uniform grid in its parameter domain. Interpolating a scattered point cloud onto a grid yields a gridded point cloud.
Polygonal point clouds: the points lie in a set of mutually parallel planes, and connecting the nearest points within each plane forms planar polygons. This type is common in data from contour-line measurement, CT measurement, and the like.
Commonly used filters include bilateral filtering, Gaussian filtering, conditional filtering, pass-through filtering, RANSAC (random sample consensus) filtering, VoxelGrid filtering, and so on.
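As an illustration of the idea behind VoxelGrid filtering, here is a minimal NumPy sketch (the function name `voxel_downsample` is a placeholder, not the PCL API) that replaces all points falling into the same voxel with their centroid:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Approximate VoxelGrid filtering: average all points inside each voxel.

    points: (N, 3) array of XYZ coordinates; voxel_size: edge length of the cubic voxels.
    """
    # Integer voxel index of every point.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel index.
    _, inverse, counts = np.unique(voxel_idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)          # guard against NumPy versions returning a 2D inverse
    # Accumulate the coordinate sums per voxel, then divide by the counts.
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

# Example: downsample a random cloud with a 5 cm voxel grid.
cloud = np.random.rand(10000, 3)
print(voxel_downsample(cloud, 0.05).shape)
```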
Triangle-mesh-based denoising algorithms.
- 基于K-近邻点云去噪算法的研究与改进 (Research on and improvement of a K-nearest-neighbor point cloud denoising algorithm)
- Point cloud denoising based on tensor Tucker decomposition
- 3D Point Cloud Denoising using Graph Laplacian Regularization of a Low Dimensional Manifold Model
Isolated-point rejection, curve fitting, chord height difference, global energy, and filtering methods.
- Isolated-point rejection: inspect the point cloud and remove points that deviate strongly from the scan lines. It is simple and removes obvious outliers, but it only provides a coarse first pass and cannot filter out noise that is mixed in with the true data.
- Curve fitting: from the first and last points of a given point sequence, fit a curve (typically of order 3 or 4) by least squares or a similar method, then compute the distance from each intermediate point to that curve; points farther than a given threshold are treated as noise and deleted, while points within the threshold are kept (a sketch follows this list).
- Chord height difference: connect the first and last points of the point set to form a chord and compute the distance from each intermediate point to the chord; points within a given threshold are kept as normal points, and points beyond it are deleted as noise.
- Global energy: usually applied to gridded point clouds; it builds an energy function over the whole surface and minimizes it subject to constraints. This is a global optimization problem, and because the number of grid cells is large it consumes a lot of computation time and memory; since the constraints are defined over the entire grid, it also does not handle local shapes very well.
- Filtering: a common approach for ordered point clouds that applies signal-processing filters to the data; typical choices are Gaussian, mean, and median filtering.
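To make the curve-fitting method above concrete, here is a minimal sketch (assuming one ordered scan line stored as a NumPy array; `polyfit_denoise` is a hypothetical helper, and for simplicity it uses the vertical residual rather than the exact perpendicular distance to the curve):

```python
import numpy as np

def polyfit_denoise(scanline: np.ndarray, order: int = 3, threshold: float = 0.01) -> np.ndarray:
    """Remove noisy points from one ordered scan line by least-squares curve fitting.

    scanline: (N, 2) array of (x, z) samples along a single scan line.
    order: polynomial order (the text suggests 3rd or 4th order).
    threshold: maximum allowed residual from the fitted curve.
    """
    x, z = scanline[:, 0], scanline[:, 1]
    coeffs = np.polyfit(x, z, order)                 # least-squares polynomial fit
    residual = np.abs(z - np.polyval(coeffs, x))     # vertical distance to the curve
    return scanline[residual <= threshold]           # keep points close to the fitted curve

# Example: a noisy sine-shaped scan line with one injected outlier.
x = np.linspace(0, 1, 200)
line = np.column_stack([x, np.sin(2 * x) + 0.002 * np.random.randn(200)])
line[10, 1] += 0.1
print(len(polyfit_denoise(line)))                    # the outlier is dropped
```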
At present, denoising methods for scattered point clouds fall into two classes: methods based on a mesh model and methods that operate directly on the points.
Mesh-based methods first build a triangle mesh over the point cloud, then compute the aspect ratio of every triangle and the curvature at every vertex and compare these values with thresholds; points below the threshold are kept as normal points, and the rest are removed as noise. Because the point cloud must first be triangulated, these methods tend to be complex and computationally expensive.
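As one example of denoising the points directly, without building a mesh, the sketch below implements a simple k-nearest-neighbor statistical outlier filter (an illustrative choice, not the only direct method): points whose mean neighbor distance is far above the global average are treated as noise.

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_removal(points: np.ndarray, k: int = 16, std_ratio: float = 2.0) -> np.ndarray:
    """Drop points whose mean distance to their k nearest neighbors is abnormally large."""
    tree = cKDTree(points)
    # k + 1 because the nearest neighbor of each point is the point itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)            # per-point mean neighbor distance
    cutoff = mean_dist.mean() + std_ratio * mean_dist.std()
    return points[mean_dist <= cutoff]

# Example: a unit cube of points plus a few far-away outliers.
cloud = np.vstack([np.random.rand(5000, 3), np.random.rand(20, 3) + 5.0])
print(statistical_outlier_removal(cloud).shape)
```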
Point clouds obtained with 3D laser scanners are usually very dense, often containing tens of millions or even hundreds of millions of points, and even after denoising the number of points remains large. The raw data is therefore rarely used directly for surface reconstruction and similar tasks: doing so would make the subsequent processing slow and resource-hungry, and the reconstructed surface would not necessarily be more accurate and could even exhibit larger errors. Before surface reconstruction, a dense point cloud is therefore usually simplified. The goal of point cloud simplification is to remove as many redundant points as possible while preserving the shape and geometric features of the original cloud.
Current simplification methods for scattered point clouds fall into two classes: methods based on a triangle mesh model and methods that work directly on the data points.
Mesh-based simplification first triangulates the point cloud to build its mesh topology, then processes the mesh, merging triangles in regions where the shape changes little, and finally deletes the corresponding mesh vertices, thereby simplifying the data. Building the mesh is complex, storing it consumes a large amount of memory, and the approach is not robust to noise: for noisy data the constructed mesh can become distorted, so the surface reconstructed from the simplified cloud may differ substantially from the surface reconstructed from the original cloud. For these reasons, methods that work directly on the points have become the mainstream. They establish the topological relationships of the cloud from the spatial positions of the points, compute geometric feature information for every point from those relationships, and then simplify the cloud based on that information. Because no triangle mesh has to be computed or stored, direct methods are comparatively efficient; the discussion here therefore focuses on simplification algorithms that operate directly on the point data.
The main direct simplification methods include: the bounding-box (uniform grid) method, clustering-based methods, the normal-deviation method, curvature-based simplification, the average-point-distance method, and uniform grid subdivision.
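A minimal sketch of the uniform-grid family of direct simplification methods is given below; unlike the VoxelGrid averaging shown earlier, this version keeps one original point per occupied cell (the one closest to the cell centroid), so the retained coordinates are actual measurements. The helper name `grid_simplify` is illustrative.

```python
import numpy as np

def grid_simplify(points: np.ndarray, cell_size: float) -> np.ndarray:
    """Keep, for every occupied grid cell, the original point closest to that cell's centroid."""
    idx = np.floor(points / cell_size).astype(np.int64)
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)
    # Centroid of the points inside each cell.
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    centroids /= counts[:, None]
    # Distance of each point to its own cell centroid; keep the closest point per cell.
    d = np.linalg.norm(points - centroids[inverse], axis=1)
    order = np.argsort(d)                            # closest points first
    keep = np.zeros(counts.size, dtype=np.int64)
    seen = np.zeros(counts.size, dtype=bool)
    for i in order:                                  # first hit per cell wins
        c = inverse[i]
        if not seen[c]:
            seen[c] = True
            keep[c] = i
    return points[keep]

cloud = np.random.rand(20000, 3)
print(grid_simplify(cloud, 0.1).shape)
```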
- 点模型的几何图像简化法 (Geometry-image-based simplification of point models)
- 基于相似性的点模型简化算法 (A similarity-based simplification algorithm for point models)
- 基于最小曲面距离的快速点云精简算法 (A fast point cloud simplification algorithm based on minimum surface distance)
- 大规模点云选择及精简 (Selection and simplification of large-scale point clouds)
- 一种基于模糊聚类的海量测量数据简化方法 (A fuzzy-clustering-based simplification method for massive measurement data)
- 基于均值漂移聚类的点模型简化方法 (A mean-shift-clustering-based simplification method for point models)
- 基于局部曲面拟合的散乱点云简化方法 (A scattered point cloud simplification method based on local surface fitting)
Common 3D point cloud keypoint extraction algorithms include ISS3D, Harris3D, NARF, and SIFT3D. All of them are implemented in the PCL library; NARF is the most widely used.
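As a rough illustration of eigenvalue-based keypoint selection in the spirit of ISS3D (heavily simplified: no non-maximum suppression, arbitrary thresholds), the sketch below keeps points whose local covariance eigenvalue ratios indicate a salient, non-flat neighborhood:

```python
import numpy as np
from scipy.spatial import cKDTree

def iss_like_keypoints(points, radius=0.05, gamma21=0.8, gamma32=0.8, min_neighbors=10):
    """Simplified ISS-style saliency test on the eigenvalues of each local covariance matrix."""
    tree = cKDTree(points)
    keypoints = []
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < min_neighbors:
            continue
        cov = np.cov(points[idx].T)                     # 3x3 covariance of the neighborhood
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]      # eigenvalues, descending: l1 >= l2 >= l3
        if w[1] <= 0:                                   # skip degenerate neighborhoods
            continue
        if w[1] / w[0] < gamma21 and w[2] / w[1] < gamma32:
            keypoints.append(i)
    return np.asarray(keypoints)

cloud = np.random.rand(2000, 3) * 0.5
print(len(iss_like_keypoints(cloud)))
```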
To describe a 3D point cloud, the positions of the points alone are not enough; additional attributes usually have to be computed, such as normal directions, curvature, and texture features. Just as with image features, we need analogous ways to describe the features of a 3D point cloud.
Commonly used feature description methods include: normal and curvature computation, eigenvalue analysis, PFH, FPFH, SHOT, VFH, CVFH, 3D Shape Context, Spin Image, and others. PFH is the Point Feature Histogram descriptor and FPFH the Fast Point Feature Histogram descriptor; FPFH is a simplified form of PFH.
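Normal and curvature estimation is usually done with a PCA of each point's neighborhood. A minimal NumPy/SciPy sketch follows (the "surface variation" value λ0/(λ0+λ1+λ2) is used as a curvature proxy, and the normals are not consistently oriented):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points: np.ndarray, k: int = 20):
    """Per-point normal and surface-variation curvature from PCA of the k-NN neighborhood."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    curvature = np.empty(len(points))
    for i, nbr_idx in enumerate(idx):
        cov = np.cov(points[nbr_idx].T)
        w, v = np.linalg.eigh(cov)            # eigenvalues in ascending order
        normals[i] = v[:, 0]                  # eigenvector of the smallest eigenvalue
        curvature[i] = w[0] / w.sum()         # surface variation in [0, 1/3]
    return normals, curvature

cloud = np.random.rand(1000, 3)
normals, curv = estimate_normals(cloud)
print(normals.shape, curv.mean())
```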
Line fitting: RANSAC, least squares, plane intersection.
Curve fitting: Lagrange interpolation, least squares, Bezier curve fitting, B-spline curves (quadratic and cubic B-spline fitting).
Plane fitting: principal component analysis, least squares, gross-error detection, robust estimation.
Surface fitting: least squares (orthogonal least squares, moving least squares), NURBS, Bezier.
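For least-squares surface fitting, a common textbook formulation fits a quadratic height field z = f(x, y). The sketch below uses `numpy.linalg.lstsq` and assumes (a simplification) that the surface can be written as a single-valued function of x and y:

```python
import numpy as np

def fit_quadratic_surface(points: np.ndarray) -> np.ndarray:
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to an (N, 3) point set."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])   # design matrix
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)                    # least-squares solution
    return coeffs

# Example: noisy samples of z = 1 + 0.5*x^2 - 0.3*y^2.
xy = np.random.rand(500, 2)
z = 1 + 0.5 * xy[:, 0]**2 - 0.3 * xy[:, 1]**2 + 0.001 * np.random.randn(500)
print(fit_quadratic_surface(np.column_stack([xy, z])).round(3))
```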
- 三维激光扫描拟合平面自动提取算法 (An algorithm for automatically extracting fitted planes from 3D laser scans)
- 点云平面拟合新方法 (A new method for plane fitting of point clouds)
- 海量散乱点的曲面重建算法研究 (Research on surface reconstruction algorithms for massive scattered points)
- 一种稳健的点云数据平面拟合方法 (A robust plane fitting method for point cloud data)
- 迭代切片算法在点云曲面拟合中的应用 (Application of an iterative slicing algorithm to point cloud surface fitting)
- 基于最小二乘的点云叶面拟合算法研究 (Research on least-squares-based leaf surface fitting for point clouds)
- 点云曲面边界线的提取 (Extraction of boundary curves of point cloud surfaces)
For surfaces and planes, the area can be computed by integration or by meshing (triangulation).
A common approach: apply a triangulation algorithm (Delaunay, after meshing) to obtain the minimal convex hull made up of triangles; once the triangle set is available, compute the area of every triangle and sum them.
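Once a triangulation is available, the total area is simply the sum of the triangle areas. The sketch below computes it from a vertex array and a face-index array, using `scipy.spatial.ConvexHull` only to produce an example mesh:

```python
import numpy as np
from scipy.spatial import ConvexHull

def mesh_area(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Sum of triangle areas; each area is half the norm of the edge cross product."""
    a, b, c = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

# Example: the convex hull of points on a unit sphere (area should approach 4*pi).
pts = np.random.randn(5000, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
hull = ConvexHull(pts)
print(mesh_area(pts, hull.simplices), 4 * np.pi)
```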
Algorithms for computing the volume of a physical model from a 3D point cloud fall roughly into the following four categories.
1. Convex hull methods: approximate the irregular object with its convex hull, then either slice the hull and accumulate the slice volumes, or split the hull into upper and lower triangle-mesh surfaces and take the difference of their orthographically projected volumes. This works for convex models; for non-convex models the error is large (see the sketch after this list).
2. Model reconstruction methods: build a triangle-mesh model from the point cloud and compute its volume. The result is strongly affected by the point density, the number of generated triangles, and the point accuracy, and holes are easily produced.
3. Slicing methods: slice the point cloud along one coordinate axis, compute the areas of the upper and lower surfaces of each slice, and accumulate the slice volumes to obtain the total. The result depends on the slice thickness: thinner slices give higher accuracy but lower efficiency.
4. Projection methods: triangulate the projected points, then connect each projected triangle with its original points to form pentahedra and accumulate their volumes. This also tends to produce holes. For all of the above, whether the volume is obtained by first reconstructing a model or directly by geometric computation, errors grow and accuracy drops when the LiDAR point cloud has non-uniform density or the object contains transition zones or transition edges.
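A minimal example of the convex-hull approach (method 1 above), using `scipy.spatial.ConvexHull`, which exposes the hull volume directly; as noted above, this over-estimates the volume of non-convex shapes:

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_volume(points: np.ndarray) -> float:
    """Volume of the convex hull of a 3D point cloud (an upper bound for non-convex objects)."""
    return ConvexHull(points).volume

# Example: points filling a unit cube -> volume close to 1.
cloud = np.random.rand(50000, 3)
print(convex_hull_volume(cloud))
```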
Classification: point-based classification, segmentation-based classification, supervised and unsupervised classification.
In addition, classification can be based on descriptor vectors / keypoint descriptors.
- 3D ShapeNets: A Deep Representation for Volumetric Shapes
- PartNet: A Large-scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding
- Revisiting Point Cloud Classification: A New Benchmark Dataset and Classification Model on Real-World Data
- [ICCV2017] Escape from Cells: Deep Kd-Networks for the Recognition of 3D Point Cloud Models.
- [ICCV2017] Colored Point Cloud Registration Revisited.
- [ICRA2017] SegMatch: Segment based place recognition in 3D point clouds.
- [IROS2017] 3D object classification with point convolution network.
- [CVPR2018] Pointwise Convolutional Neural Networks.
- [CVPR2018] SO-Net: Self-Organizing Network for Point Cloud Analysis.
- [CVPR2018] PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition.
- [CVPR2018] PointGrid: A Deep Network for 3D Shape Understanding.
- [CVPR2019] Spherical Fractal Convolutional Neural Networks for Point Cloud Recognition.
- [MM] MMJN: Multi-Modal Joint Networks for 3D Shape Recognition.
Point cloud registration is analogous to registration of 2D images, except that 2D registration recovers affine parameters such as x, y, alpha, beta, whereas 3D point cloud registration models the motion and alignment of a point cloud in space: it yields a rotation matrix and a translation vector, usually written as a 3×4 matrix [R | t] (or a 4×4 homogeneous transform) in which the 3×3 block is the rotation and the 3×1 column is the translation. Strictly speaking there are six parameters, since the rotation matrix can also be converted into a 1×3 rotation vector via the Rodrigues transform.
Two families of registration algorithms are most commonly used: the Normal Distributions Transform (NDT) and the well-known ICP; many other algorithms exist, listed below (a minimal point-to-point ICP sketch follows this list):
ICP: robust ICP, point-to-plane ICP, point-to-line ICP, MBICP, GICP
NDT 3D, Multi-Layer NDT
FPCS, KFPSC, SAC-IA
Line Segment Matching, ICL
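The sketch below is a minimal point-to-point ICP (nearest neighbors via a k-d tree, rigid transform via the SVD/Kabsch solution); it ignores refinements such as outlier rejection, point-to-plane error metrics, and coarse initial alignment:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(source, target, iterations=30):
    """Point-to-point ICP: returns the 4x4 homogeneous transform aligning source to target."""
    tree = cKDTree(target)
    T = np.eye(4)
    current = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)              # closest target point for every source point
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T

# Example: recover a known rotation + translation.
target = np.random.rand(1000, 3)
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
source = (target - 0.2) @ R_true.T                # a transformed copy of the target
print(np.round(icp(source, target), 3))
```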
- An ICP variant using a point-to-line metric
- Generalized-ICP
- Linear Least-Squares Optimization for Point-to-Plane ICP Surface Registration
- Metric-Based Iterative Closest Point Scan Matching for Sensor Displacement Estimation
- NICP: Dense Normal Based Point Cloud Registration
- [CVPR2017] Efficient Global Point Cloud Alignment using Bayesian Nonparametric Mixtures.
- [CVPR2017] 3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions.
- [CVPR2018] Density Adaptive Point Set Registration.
- [CVPR2018] Inverse Composition Discriminative Optimization for Point Cloud Registration.
- [CVPR2018] PPFNet: Global Context Aware Local Features for Robust 3D Point Matching.
- [ECCV2018] Learning and Matching Multi-View Descriptors for Registration of Point Clouds.
- [ECCV2018] 3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration.
- [ECCV2018] Efficient Global Point Cloud Registration by Matching Rotation Invariant Features Through Translation Search.
- [IROS2018] Robust Generalized Point Cloud Registration with Expectation Maximization Considering Anisotropic Positional Uncertainties.
- [CVPR2019] PointNetLK: Point Cloud Registration using PointNet.
- [CVPR2019] SDRSAC: Semidefinite-Based Randomized Approach for Robust Point Cloud Registration without Correspondences.
- [CVPR2019] The Perfect Match: 3D Point Cloud Matching with Smoothed Densities.
- [CVPR] FilterReg: Robust and Efficient Probabilistic Point-Set Registration using Gaussian Filter and Twist Parameterization.
- [CVPR2019] 3D Local Features for Direct Pairwise Registration.
- [ICCV2019] DeepICP: An End-to-End Deep Neural Network for 3D Point Cloud Registration.
- [ICCV2019] Deep Closest Point: Learning Representations for Point Cloud Registration.
- [ICRA2019] 2D3D-MatchNet: Learning to Match Keypoints across 2D Image and 3D Point Cloud.
- [ICCV2019] Robust Variational Bayesian Point Set Registration.
- [ICRA2019] Robust low-overlap 3-D point cloud registration for outlier rejection.
- [CVPR2020] Learning Multiview 3D Point Cloud Registration.
- [IROS2017] Analyzing the quality of matched 3D point clouds of objects.
Point cloud segmentation is a large topic in its own right; the extra dimension raises many issues that do not arise with 2D images. It covers region extraction, line/plane extraction, semantic segmentation, clustering, and more. The field is too broad to summarize in a few sentences; the terms largely mean what they say, and specific problems have to be classified case by case. Generally speaking, point cloud segmentation is the basis for object recognition.
The main families of segmentation methods are: edge-based region segmentation, surface-based region segmentation, clustering-based region segmentation, hybrid region segmentation, and deep learning methods.
Typical techniques: region growing, RANSAC line/plane extraction, NDT-RANSAC, K-Means (spectral clustering), Normalized Cut, 3D Hough Transform (line/plane extraction), connected-component analysis.
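A minimal RANSAC plane-extraction sketch (pure NumPy, fixed iteration count, no refinement of the final plane) that separates the dominant plane's inliers from the rest of the cloud:

```python
import numpy as np

def ransac_plane(points: np.ndarray, threshold: float = 0.01, iterations: int = 500):
    """Find the plane with the most inliers; returns (plane coefficients (a, b, c, d), inlier mask)."""
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                           # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)         # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, np.append(normal, d)
    return best_plane, best_inliers

# Example: a noisy ground plane plus scattered clutter.
plane_pts = np.column_stack([np.random.rand(2000, 2), 0.002 * np.random.randn(2000)])
clutter = np.random.rand(500, 3)
plane, mask = ransac_plane(np.vstack([plane_pts, clutter]))
print(plane.round(3), mask.sum())
```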
- 基于局部表面凸性的散乱点云分割算法研究 (Research on scattered point cloud segmentation based on local surface convexity)
- 三维散乱点云分割技术综述 (A survey of 3D scattered point cloud segmentation techniques)
- 基于聚类方法的点云分割技术的研究 (Research on clustering-based point cloud segmentation)
- SceneEncoder: Scene-Aware Semantic Segmentation of Point Clouds with A Learnable Scene Descriptor
- From Planes to Corners: Multi-Purpose Primitive Detection in Unorganized 3D Point Clouds
- Learning and Memorizing Representative Prototypes for 3D Point Cloud Semantic and Instance Segmentation
- JSNet: Joint Instance and Semantic Segmentation of 3D Point Clouds
- PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
- PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
- [CVPR2017] SyncSpecCNN: Synchronized Spectral CNN for 3D Shape Segmentation.
- [ICRA2017] SegMatch: Segment based place recognition in 3D point clouds.
- [3DV2017] SEGCloud: Semantic Segmentation of 3D Point Clouds.
- [CVPR2018] Recurrent Slice Networks for 3D Segmentation of Point Clouds.
- [CVPR2018] SGPN: Similarity Group Proposal Network for 3D Point Cloud Instance Segmentation.
- [CVPR2018] Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs.
- [ECCV2018] 3D Recurrent Neural Networks with Context Fusion for Point Cloud Semantic Segmentation.
- [CVPR2019] JSIS3D: Joint Semantic-Instance Segmentation of 3D Point Clouds with Multi-Task Pointwise Networks and Multi-Value Conditional Random Fields.
- [CVPR2019] PartNet: A Recursive Part Decomposition Network for Fine-grained and Hierarchical Shape Segmentation.
- [ICCV2019] 3D Instance Segmentation via Multi-Task Metric Learning.
- [IROS2019] PASS3D: Precise and Accelerated Semantic Segmentation for 3D Point Cloud.
This is a rather application-oriented part of point cloud processing. Briefly, the Hausdorff distance is commonly used for object recognition and retrieval on depth maps, and many current 3D face recognition systems are built on this technique.
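SciPy ships a directed Hausdorff implementation, so a symmetric Hausdorff distance between two clouds (as used for retrieval and recognition) can be sketched as follows:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance: the larger of the two directed distances."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Example: two noisy samplings of the same shape are close, a shifted copy is not.
a = np.random.rand(2000, 3)
b = a + 0.001 * np.random.randn(2000, 3)
print(hausdorff_distance(a, b), hausdorff_distance(a, a + 1.0))
```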
3D retrieval methods mainly include: statistics-based, topology-based, spatial-map-based, local-feature-based, and view-based approaches.
Statistics-based methods: moments, volume, area.
Topology-based methods: skeletons, Reeb graphs.
Spatial-map-based methods: shape histograms, spherical harmonics, 3D Zernike moments, normal projection.
Local-feature-based methods: surface curvature.
View-based methods: light field descriptors, sketches.
The point cloud we acquire is a set of isolated points; obtaining a complete surface from those isolated points is the topic of 3D reconstruction.
When playing with KinectFusion, even without knowing the details, you can watch the surface gradually becoming smoother; that is the effect of the reconstruction algorithm iterating. The captured point cloud is full of noise and outliers, and reconstruction algorithms have to cope with this noise in order to produce a visually pleasing surface.
Commonly used 3D reconstruction algorithms and techniques include:
Poisson reconstruction and Delaunay triangulation.
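For roughly height-field-like data (e.g. terrain scans), a simple reconstruction is a 2.5D Delaunay triangulation of the XY coordinates with SciPy; this is a deliberate simplification of full surface reconstruction (Poisson reconstruction requires a dedicated library such as PCL or Open3D):

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_2_5d(points: np.ndarray) -> np.ndarray:
    """Triangulate the XY projection; each 2D triangle becomes a 3D face over the original points."""
    tri = Delaunay(points[:, :2])        # Delaunay triangulation in the XY plane
    return tri.simplices                 # (M, 3) vertex indices, valid for the 3D points too

# Example: mesh a sampled height field z = sin(x) * cos(y).
xy = np.random.rand(2000, 2) * 4
pts = np.column_stack([xy, np.sin(xy[:, 0]) * np.cos(xy[:, 1])])
faces = delaunay_2_5d(pts)
print(faces.shape)
```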
Applications: surface reconstruction, human body reconstruction, building reconstruction, 输入重建.
Real-time reconstruction: reconstructing a paper cup or the 4D growth dynamics of crops, human pose recognition, facial expression recognition.
- 改进的点云数据三维重建算法 (An improved 3D reconstruction algorithm for point cloud data)
- [CVPR2017] Scalable Surface Reconstruction from Point Clouds with Extreme Scale and Density Diversity.
- [ICCV2017] PolyFit: Polygonal Surface Reconstruction from Point Clouds.
- [ICCV2017] From Point Clouds to Mesh using Regression.
- [ECCV2018] Efficient Dense Point Cloud Object Reconstruction using Deformation Vector Fields.
- [ECCV2018] HGMR: Hierarchical Gaussian Mixtures for Adaptive 3D Registration.
- [AAAI2018] Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction.
- [CVPR2019] Robust Point Cloud Based Reconstruction of Large-Scale Outdoor Scenes.
- [AAAI2019] CAPNet: Continuous Approximation Projection For 3D Point Cloud Reconstruction Using 2D Supervision.
- [MM] L2G Auto-encoder: Understanding Point Clouds by Local-to-Global Reconstruction with Hierarchical Self-Attention.
- SurfNet: Generating 3D shape surfaces using deep residual networks
- [CVPR2018] Reflection Removal for Large-Scale 3D Point Clouds.
- [ICML2018] Learning Representations and Generative Models for 3D Point Clouds.
- [3DV] PCN: Point Completion Network.
- [CVPR2019] PartNet: A Large-scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding.
- [CVPR2019] ClusterNet: Deep Hierarchical Cluster Network with Rigorously Rotation-Invariant Representation for Point Cloud Analysis.
- [ICCV2019] LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and Environment Analysis.
- [ICRA2019] Speeding up Iterative Closest Point Using Stochastic Gradient Descent.
- [KITTI] The KITTI Vision Benchmark Suite.
- [ModelNet] The Princeton ModelNet.
- [ShapeNet] A collaborative dataset between researchers at Princeton, Stanford and TTIC.
- [PartNet] The PartNet dataset provides fine grained part annotation of objects in ShapeNetCore.
- [PartNet] PartNet benchmark from Nanjing University and National University of Defense Technology.
- [S3DIS] The Stanford Large-Scale 3D Indoor Spaces Dataset.
- [ScanNet] Richly-annotated 3D Reconstructions of Indoor Scenes.
- [Stanford 3D] The Stanford 3D Scanning Repository.
- [UWA Dataset]
- [Princeton Shape Benchmark] The Princeton Shape Benchmark.
- [SYDNEY URBAN OBJECTS DATASET] This dataset contains a variety of common urban road objects scanned with a Velodyne HDL-64E LIDAR, collected in the CBD of Sydney, Australia. There are 631 individual scans of objects across classes of vehicles, pedestrians, signs and trees.
- [ASL Datasets Repository(ETH)] This site is dedicated to provide datasets for the Robotics community with the aim to facilitate result evaluations and comparisons.
- [Large-Scale Point Cloud Classification Benchmark(ETH)] This benchmark closes the gap and provides a large labelled 3D point cloud data set of natural scenes with over 4 billion points in total.
- [Robotic 3D Scan Repository] The Canadian Planetary Emulation Terrain 3D Mapping Dataset is a collection of three-dimensional laser scans gathered at two unique planetary analogue rover test facilities in Canada.
- [Radish] The Robotics Data Set Repository (Radish for short) provides a collection of standard robotics data sets.
- [IQmulus & TerraMobilita Contest] The database contains 3D MLS data from a dense urban environment in Paris (France), composed of 300 million points. The acquisition was made in January 2013.
- [Oakland 3-D Point Cloud Dataset] This repository contains labeled 3-D point cloud laser data collected from a moving platform in a urban environment.
- [Robotic 3D Scan Repository] This repository provides 3D point clouds from robotic experiments,log files of robot runs and standard 3D data sets for the robotics community.
- [Ford Campus Vision and Lidar Data Set] The dataset is collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck.
- [The Stanford Track Collection] This dataset contains about 14,000 labeled tracks of objects as observed in natural street scenes by a Velodyne HDL-64E S2 LIDAR.
- [PASCAL3D+] Beyond PASCAL: A Benchmark for 3D Object Detection in the Wild.
- [3D MNIST] The aim of this dataset is to provide a simple way to get started with 3D computer vision problems such as 3D shape recognition.
- [WAD] [ApolloScape] The datasets are provided by Baidu Inc.
- [nuScenes] The nuScenes dataset is a large-scale autonomous driving dataset.
- [PreSIL] Depth information, semantic segmentation (images), point-wise segmentation (point clouds), ground point labels (point clouds), and detailed annotations for all vehicles and people. [paper]
- [3D Match] Keypoint Matching Benchmark, Geometric Registration Benchmark, RGB-D Reconstruction Datasets.
- [BLVD] (a) 3D detection, (b) 4D tracking, (c) 5D interactive event recognition and (d) 5D intention prediction. [ICRA 2019 paper]
- [PedX] 3D Pose Estimation of Pedestrians, more than 5,000 pairs of high-resolution (12MP) stereo images and LiDAR data along with providing 2D and 3D labels of pedestrians. [ICRA 2019 paper]
- [H3D] Full-surround 3D multi-object detection and tracking dataset. [ICRA 2019 paper]
- [Matterport3D] RGB-D: 10,800 panoramic views from 194,400 RGB-D images. Annotations: surface reconstructions, camera poses, and 2D and 3D semantic segmentations. Keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and scene classification. [3DV 2017 paper] [code] [blog]
- [SynthCity] SynthCity is a 367.9M point synthetic full colour Mobile Laser Scanning point cloud. Nine categories.
- [Lyft Level 5] Include high quality, human-labelled 3D bounding boxes of traffic agents, an underlying HD spatial semantic map.
- [SemanticKITTI] Sequential Semantic Segmentation, 28 classes, for autonomous driving. All sequences of KITTI odometry labeled. [ICCV 2019 paper]
- [NPM3D] The Paris-Lille-3D has been produced by a Mobile Laser System (MLS) in two different cities in France (Paris and Lille).
- [The Waymo Open Dataset] The Waymo Open Dataset is comprised of high resolution sensor data collected by Waymo self-driving cars in a wide variety of conditions.
- [A*3D: An Autonomous Driving Dataset in Challenging Environments] A*3D: An Autonomous Driving Dataset in Challenging Environments.
- [PointDA-10 Dataset] Domain Adaptation for point clouds.
- [Oxford Robotcar] The dataset captures many different combinations of weather, traffic and pedestrians.