point-attention


DAPnet: A Double Self-attention Convolutional Network for Point Cloud Semantic Labeling


Airborne Laser Scanning (ALS) point clouds have complex structures, and their 3D semantic labeling remains a challenging task. The task poses three main problems: (1) the difficulty of classifying points near the boundaries between objects of different classes, (2) the diversity of shapes within the same class, and (3) the scale differences between classes. In this study, we propose a novel double self-attention convolutional network called DAPnet. The double self-attention module is derived from the self-attention mechanism and consists of a point attention module (PAM) and a group attention module (GAM). The PAM effectively assigns different weights to points based on their relevance within adjacent areas, while the GAM strengthens the correlation between groups, i.e., grouped features within the same class, which reduces the effect of shape differences. To address the scale problem, we adopt multiscale radii to construct the groups and concatenate the extracted hierarchical features with the output of the corresponding upsampling process. In the experiments, DAPnet performs well in several semantic labeling contests. Ablation comparisons show that the PAM contributes more to the overall improvement of the model than the GAM, and that incorporating the double self-attention module improves per-class accuracy by an average of 7%. In addition, DAPnet requires a training time similar to models without the attention modules to converge. The experimental results demonstrate the effectiveness and efficiency of DAPnet for the semantic labeling of ALS point clouds.
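To make the two attention modules more concrete, below is a minimal PyTorch sketch of what a point attention module and a group attention module could look like, assuming a DANet-style self-attention applied to per-point features (PAM) and to grouped features (GAM). The class names, tensor layouts, and the learnable residual weight `gamma` are illustrative assumptions, not the repository's actual implementation.

```python
import torch
import torch.nn as nn


class PointAttentionModule(nn.Module):
    """Self-attention across points: each point's feature is refined by a
    weighted sum over all points, weighted by feature similarity."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv1d(channels, channels // 8, 1)
        self.key = nn.Conv1d(channels, channels // 8, 1)
        self.value = nn.Conv1d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight (assumed)

    def forward(self, x):                            # x: (B, C, N) per-point features
        q = self.query(x).permute(0, 2, 1)           # (B, N, C//8)
        k = self.key(x)                              # (B, C//8, N)
        attn = torch.softmax(torch.bmm(q, k), -1)    # (B, N, N) point-to-point weights
        v = self.value(x)                            # (B, C, N)
        out = torch.bmm(v, attn.permute(0, 2, 1))    # (B, C, N) re-weighted features
        return self.gamma * out + x                  # residual connection


class GroupAttentionModule(nn.Module):
    """Self-attention across grouped features: groups with similar features
    reinforce each other, reducing the effect of shape differences."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):                            # x: (B, G, D) grouped features
        attn = torch.softmax(
            torch.bmm(x, x.permute(0, 2, 1)), -1)    # (B, G, G) group-to-group weights
        out = torch.bmm(attn, x)                     # (B, G, D)
        return self.gamma * out + x
```

Usage under the same assumed tensor layout, e.g. a batch of two clouds with 1024 points and 64-dimensional features:

```python
pam = PointAttentionModule(channels=64)
feats = torch.randn(2, 64, 1024)
print(pam(feats).shape)  # torch.Size([2, 64, 1024])
```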