Subdataset experiment results
Hi @saltoricristiano,
Since I am not familiar with PyTorch Lightning (see #8 and #9), I re-implemented your code in plain PyTorch and used MinkUNet34 as the backbone network. From your code, I found that the backbone network is MinkUNet34, not 32C, and there is no 32C in the official repo.
I used the same SynLiDAR sub-dataset as PCT and replaced the SoftDICE loss with cross-entropy loss, since CE achieved better performance than SoftDICE. A minimal sketch of this setup is below, followed by the experiment results:
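For reference, a minimal sketch of my training setup, assuming the MinkUNet34 definition from the repo and MinkowskiEngine sparse tensors; the import path, class count, and ignore label are illustrative assumptions, not values from the repo:

```python
import torch
import MinkowskiEngine as ME
from utils.models.minkunet import MinkUNet34  # path assumed; backbone is 34, not 32C

NUM_CLASSES = 19    # e.g., remapped SemanticKITTI classes (illustrative)
IGNORE_LABEL = -1   # label for unannotated points (illustrative)

model = MinkUNet34(in_channels=1, out_channels=NUM_CLASSES)  # XYZ-only: 1 dummy feature
criterion = torch.nn.CrossEntropyLoss(ignore_index=IGNORE_LABEL)  # replaces SoftDICELoss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def train_step(coords, feats, labels):
    """One supervised step. coords: (N, 4) batched int coordinates, feats: (N, 1)."""
    stensor = ME.SparseTensor(features=feats, coordinates=coords)
    logits = model(stensor).F         # (N, NUM_CLASSES) per-point logits
    loss = criterion(logits, labels)  # plain CE instead of soft Dice
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```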
SynLiDAR -> SemanticKITTI experiment results:
SynLiDAR -> SemanticPOSS experiment results:
Sorry, my re-implementation may not achieve the best performance of CosMix. However, to compare my method fairly with CosMix and other UDA methods, I will report these results in my upcoming paper.
About #5: the intensity does not impact the final performance on the SynLiDAR -> SemanticKITTI task, but it degrades the performance on the SynLiDAR -> SemanticPOSS task.
In my recent experiments, I found that I cannot make self-training work directly with the source-only model, i.e., without CosMix. Thanks for your work, because it makes self-training work on this cross-domain UDA segmentation task.
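For clarity, by self-training I mean standard confidence-thresholded pseudo-labelling on the target data; a minimal, generic sketch (not the exact CosMix procedure, and the 0.9 threshold is an illustrative assumption):

```python
import torch

@torch.no_grad()
def pseudo_labels(teacher_logits: torch.Tensor, threshold: float = 0.9,
                  ignore_label: int = -1) -> torch.Tensor:
    """teacher_logits: (N, C) per-point logits from a (source-only) teacher.

    Points below the confidence threshold get ignore_label so they are
    skipped by a loss built with ignore_index=ignore_label.
    """
    probs = torch.softmax(teacher_logits, dim=1)
    conf, labels = probs.max(dim=1)
    labels[conf < threshold] = ignore_label  # drop low-confidence predictions
    return labels
```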
Hi @dream-toy,
thanks for your interest in our work and, most importantly, thanks for the amazing work you have done!
I have to apologise about the lower performance on SynLiDAR -> SemanticPOSS when using the intensity. All the experiments we did with the intensity were on SemanticKITTI where, as you also experienced, there's a negligible drop.
I also found it interesting that there's a huge performance drop on SemanticPOSS. Did you try using SoftDICELoss instead of the CELoss? Btw, feel free to use your numbers!
Looking forward to reading your paper!
Best regards,
@saltoricristiano
Thank you very much for your support and encouragement, @saltoricristiano.
- **About intensity.** As pointed out in SynLiDAR, the point-wise intensity of the SynLiDAR dataset is obtained by training a rendering model on a real-world dataset (i.e., SemanticKITTI). Thus, the intensity of SynLiDAR is substantially different from that of SemanticPOSS, while being similar to SemanticKITTI to a certain extent. So I sincerely hope that all work in this field directly uses XYZ as input features and avoids wasting a lot of computational resources.
- **SoftDICE loss.** In my experiments, the CELoss brings about +0.5 mIoU on the SynLiDAR -> SemanticKITTI task, which is a negligible improvement. I didn't use SoftDICELoss on the SynLiDAR -> SemanticPOSS task, so there is no result for it (the soft Dice formulation I compare against is sketched after this list).
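The kind of soft Dice loss I compare CE against, as a generic per-class formulation; it is not necessarily identical to the repo's SoftDICELoss:

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits: torch.Tensor, labels: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """logits: (N, C) per-point logits; labels: (N,) with values in [0, C)."""
    probs = F.softmax(logits, dim=1)                     # (N, C)
    one_hot = F.one_hot(labels, probs.shape[1]).float()  # (N, C)
    inter = (probs * one_hot).sum(dim=0)                 # per-class soft intersection
    union = probs.sum(dim=0) + one_hot.sum(dim=0)
    dice = (2 * inter + eps) / (union + eps)             # per-class soft Dice score
    return 1.0 - dice.mean()                             # average over classes
```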
Hey,
thanks for the info!
- I agree with you on the intensity discussion. Using the intensity can be useful for other tasks, but for semantic segmentation with these classes there is no real advantage in also using the intensity. Instead, it worsens the domain shift.
- Interesting! Does it take longer to converge with the CE loss? In my experience, SoftDICE makes the training a bit faster.
Hey, @saltoricristiano
- No, the intensity is important for segmentation, and this was proven by previous works (e.g., Cylinder3D and SalsaNext use intensity as an input feature). For a target-only model, using intensity as the input feature brings about +3% mIoU. However, for UDA segmentation we do not know which LiDAR sensor the target domain uses, so the intensity of SynLiDAR may not suit the current target domain; if the target-domain data is not collected by a Velodyne-64 LiDAR sensor, it will enlarge the domain gap. Moreover, if we train an intensity rendering model for the current target domain ourselves, it is hard to compare fairly. So I sincerely hope that all work in this field directly uses XYZ as input features (a sketch of both input options follows this list). Regarding the effect of intensity on the UDA segmentation task, we will show it in our upcoming work (if all goes well).
- The CE loss converges fast, too.
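To make the input-feature choice concrete, a minimal sketch of the two options, assuming MinkowskiEngine voxelization; the function name and voxel size are illustrative:

```python
import torch
import MinkowskiEngine as ME

def make_sparse_input(xyz: torch.Tensor, intensity: torch.Tensor,
                      use_intensity: bool, voxel_size: float = 0.05):
    """xyz: (N, 3) point coordinates; intensity: (N,) remission values."""
    coords = torch.floor(xyz / voxel_size).int()
    coords = ME.utils.batched_coordinates([coords])  # prepend batch index column
    if use_intensity:
        feats = intensity.view(-1, 1)                # sensor-dependent channel
    else:
        feats = torch.ones(xyz.shape[0], 1)          # XYZ-only: constant dummy feature
    return ME.SparseTensor(features=feats, coordinates=coords)
```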
Thank you for your quick reply; I fell asleep last night...