aboulch/ConvPoint

Incorrect measures caused by a wrong confusion matrix

Closed this issue · 1 comment

Hello Boulch,

I found an issue related to your code: [https://github.com/aboulch/ConvPoint/blob/master/examples/semantic3d/semantic3d_seg.py]

Line 346: `cm_ = confusion_matrix(target_np.ravel(), output_np.ravel(), labels=list(range(N_CLASSES)))`

The `labels` argument here causes a problem in my environment (see below): part of the confusion matrix is omitted. For example, with 3 classes, a confusion matrix that should be [[1,2,3],[4,5,6],[7,8,9]] becomes [[0,0,0],[0,1,2],[0,7,8]] with the current code. This leads to wrong values for the measures (OA, AA, IoU).

Solution: simply omit the `labels` argument to avoid the issue.
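To make the symptom concrete, here is a minimal sketch (with made-up toy arrays, not the script's actual data) of how scikit-learn's `confusion_matrix` behaves when the class values in the data do not line up with the `labels` argument:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

N_CLASSES = 3
# Toy example: both targets and predictions encoded 1..3 instead of 0..2.
target = np.array([1, 2, 3, 1, 2, 3])
pred = np.array([1, 2, 3, 1, 2, 2])

# With labels=[0, 1, 2], samples whose true or predicted class is 3 are
# silently dropped, and class 0 (absent from the data) contributes an
# all-zero row and column.
cm_wrong = confusion_matrix(target, pred, labels=list(range(N_CLASSES)))

# Without the labels argument, sklearn infers the label set from the data,
# so every sample is counted.
cm_ok = confusion_matrix(target, pred)
```

Any measure (OA, AA, IoU) computed from `cm_wrong` is then based on an incomplete count.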

Environment:
scikit-learn 0.23.1
python 3.7.7

Hi Boulch,

It is my fault.

Your design is actually better: without the `labels` argument, `cm_` may not have the same size as `cm` when a tested batch does not contain all the classes. I ran into the problem because I was adding a validation script to your script for the Semantic3D benchmark. To compare against the target data, I added 1 to the predicted values (which start from 0) so that their range matched the target data (which starts from 1). In that case, passing `labels=list(range(N_CLASSES))` to `confusion_matrix` produces the error I described.

A better way to solve this is to subtract one from the target data instead of adding one to the predicted data.

I am just posting this here for others' reference.

With many best wishes,
Tianyang