Using ignore_index in ConfusionMatrix returns 0 IoU in Train and 100 IoU in Validation
For some datasets, we have to use the ignore_index parameter in ConfusionMatrix and in the loss.
In train_one_epoch (line 336 of main.py) and in validate (line 391 of main.py), the confusion matrix is created as:

    cm = ConfusionMatrix(num_classes=cfg.num_classes, ignore_index=cfg.ignore_index)
It seems that in TRAIN mode the IoU for the ignore label is 0, while in VAL mode it is 100, which introduces a gap between the TRAIN mIoU and the VAL mIoU.
Example for S3DIS (if we set ignore_index = 0), we get these IoUs:
- TRAIN mode : [ 0. 23.93 17.28 7.50 1.58 13.90 20.28 17.11 14.70 2.13 4.27 6.03 10.24 ] // mIoU : 10.69
- VAL mode : [100. 69.72 0.07 0.07 0. 0.37 0.11 32.82 21.51 1.62 0.34 0. 9.83] // mIoU : 18.19
Example for Semantic3D (in this dataset, ignore_index is also set to 0):
- TRAIN mode : [ 0. 0.04 0.03 1.00 8.88 25.67 0.18 0.10 1.41 ] // mIoU : 4.15
- VAL mode : [100. 0. 0. 0. 8.41 0. 0. 0. 0. ] // mIoU : 12.05
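The 0-vs-100 artifact can be reproduced with the epsilon-based formula used in get_mious: for a class that never occurs at all (tp = union = 0, as presumably happens for the ignored class in validation, where its points are filtered out), the ratio of the two epsilons evaluates to ~100%, while a class with tp = 0 but union > 0 (the ignored class during training) evaluates to ~0%. A minimal sketch:

```python
import torch

# Class 0 plays the role of the ignored class; class 1 is a normal class.
tp = torch.tensor([0.0, 50.0])     # true positives per class
union = torch.tensor([0.0, 80.0])  # union (tp + fp + fn) per class

# The original formula in get_mious: epsilons avoid division by zero,
# but when tp and union are BOTH zero the ratio becomes 1e-10 / 1e-10 = 1.
iou_per_cls = (tp + 1e-10) / (union + 1e-10) * 100
print(iou_per_cls)  # tensor([100.0000,  62.5000])
```

So the ignored class silently contributes 100 to the VAL mIoU and 0 to the TRAIN mIoU, which matches the gap reported above.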
Any ideas? Thanks!
@hpc100 Did you find a solution for this? I am also currently looking into excluding the ignored class from the loss and metric calculations. I might give an update when I've looked into it.
Update:
I have the following 4 classes in my custom dataset:
['background', 'utility_pipes', 'main_pipe', 'ignore']
In order to make PointNeXt not include the ignore class (which has index 3 in my GT labels), I do the following two things:
- Modify the __init__ method of the ConfusionMatrix class (openpoints/utils/metrics.py):
Original:
def __init__(self, num_classes, ignore_index=None):
    self.value = 0
    self.num_classes = num_classes
    self.virtual_num_classes = num_classes + 1 if ignore_index is not None else num_classes
    self.ignore_index = ignore_index
Change to:
def __init__(self, num_classes, ignore_index=None):
    self.value = 0
    self.num_classes = num_classes - 1 if ignore_index is not None else num_classes
    self.virtual_num_classes = num_classes
    self.ignore_index = ignore_index
- Add an ignore_index attribute to the default.yaml cfg file of my dataset (e.g. cfgs/[datasetname]/default.yaml):
criterion_args:
  NAME: CrossEntropy
  label_smoothing: 0.2
  ignore_index: 3  # <---- add the id of the class that should be ignored
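The effect of the patched __init__ can be illustrated with a minimal, self-contained sketch (not the actual openpoints code; it assumes, as in the steps above, that the ignore class is the last index in the GT labels): the matrix is accumulated over all bins, and the ignore row/column is dropped before computing metrics.

```python
import torch

num_classes = 4                  # total classes in the GT, including 'ignore' at index 3
ignore_index = 3
real_classes = num_classes - 1   # what the patched __init__ stores as self.num_classes

pred = torch.tensor([0, 1, 2, 2, 1])
true = torch.tensor([0, 1, 2, 3, 3])  # the last two points carry the ignore label

# Build a bincount-style confusion matrix over all bins, as openpoints does,
# then drop the row/column belonging to the ignore class.
unified = true * num_classes + pred
mat = torch.bincount(unified, minlength=num_classes ** 2).view(num_classes, num_classes)
mat = mat[:real_classes, :real_classes]

tp = torch.diag(mat)
print(tp)  # tensor([1, 1, 1]), the ignored points no longer contribute
</code>
```

With the matrix trimmed this way, the ignored class never produces a tp = union = 0 row, so it cannot inflate (or deflate) the mIoU.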
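On the loss side, assuming the CrossEntropy criterion above forwards its kwargs to torch.nn.CrossEntropyLoss (a hedged assumption about the openpoints wrapper), ignore_index already excludes the ignored points from the loss entirely:

```python
import torch

criterion = torch.nn.CrossEntropyLoss(ignore_index=3, label_smoothing=0.2)
logits = torch.randn(5, 4)              # 5 points, 4 classes
labels = torch.tensor([0, 1, 2, 3, 3])  # the last two points are 'ignore'

loss_all = criterion(logits, labels)
# With reduction='mean' (the default), this is identical to computing the
# loss on the non-ignored points only:
loss_kept = criterion(logits[:3], labels[:3])
print(torch.allclose(loss_all, loss_kept))  # True
```

So only the metric code needs patching; the loss is handled by PyTorch itself.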
Thank you both! I also updated the following code in "metrics.py" to avoid the 100 IoU for the ignored class on the validation dataset:
def get_mious(tp, union, count):
    # iou_per_cls = (tp + 1e-10) / (union + 1e-10) * 100
    # acc_per_cls = (tp + 1e-10) / (count + 1e-10) * 100
    iou_per_cls = torch.where(union + tp == 0, torch.tensor([0.0], device=tp.device), (tp + 1e-10) / (union + 1e-10)) * 100
    acc_per_cls = torch.where(count + tp == 0, torch.tensor([0.0], device=tp.device), (tp + 1e-10) / (count + 1e-10)) * 100
    over_all_acc = tp.sum() / count.sum() * 100
    miou = torch.mean(iou_per_cls)
    macc = torch.mean(acc_per_cls)  # class accuracy
    return miou.item(), macc.item(), over_all_acc.item(), iou_per_cls.cpu().numpy(), acc_per_cls.cpu().numpy()
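A quick check of the torch.where guard above on a toy input: a class with no true positives and an empty union now scores 0 instead of ~100, while non-empty classes are unchanged.

```python
import torch

tp = torch.tensor([0.0, 3.0])
union = torch.tensor([0.0, 4.0])

# Original epsilon-only formula vs. the guarded one from the patch above.
old = (tp + 1e-10) / (union + 1e-10) * 100
new = torch.where(union + tp == 0,
                  torch.tensor([0.0], device=tp.device),
                  (tp + 1e-10) / (union + 1e-10)) * 100
print(old)  # tensor([100.,  75.])
print(new)  # tensor([ 0., 75.])
```

Note that the absent class still enters torch.mean with a 0, so the mIoU is pulled down rather than up; excluding the ignored class from the mean entirely (as in the __init__ patch above) avoids both biases.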