VDIGPKU/CBNetV2

Questions about fused_semantic_head

JR-Wang opened this issue · 1 comment

I tried to train my model as follows:
python tools/train.py configs/cbnet/htc_cbv2_swin_large_patch4_window7_mstrain_400-1400_giou_4conv1f_adamw_1x_coco.py

But when it calculated fused_semantic_loss, the following error occurred:
1only batches of spatial targets supported (3D tensors) but got targets of size: : [1, 100, 148, 3]

Your code is written as: loss_semantic_seg = self.criterion(mask_pred, labels)
I notice you use a cross-entropy loss here, but mask_pred and labels are both of size B x C x H x W. Is this reasonable?
What should I do to solve this problem?
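
For reference, here is a minimal sketch (not from the repository; the class count of 183 is an assumption) of the shapes torch.nn.CrossEntropyLoss accepts for dense prediction: the prediction is a 4D (N, C, H, W) tensor of logits, but the target must be a 3D (N, H, W) tensor of class indices. A 4D target such as [1, 100, 148, 3], i.e. an annotation image loaded with its three color channels, triggers exactly the error above.

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# Prediction: (N, C, H, W) logits; 183 stuff classes is an assumed value.
mask_pred = torch.randn(1, 183, 100, 148)

# Valid target: (N, H, W) tensor of integer class indices.
labels_ok = torch.randint(0, 183, (1, 100, 148))
loss = criterion(mask_pred, labels_ok)  # works

# Invalid target: a color annotation kept as (N, H, W, 3).
labels_bad = torch.randint(0, 183, (1, 100, 148, 3))
# criterion(mask_pred, labels_bad)  # raises the "spatial targets" RuntimeError above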

Converting the color images of the stuff annotations to grayscale solves this problem.
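
A minimal conversion sketch, assuming the stuff annotation maps store the class id identically in all three channels; the directory below is an example path, not a required location:

import glob
import os

from PIL import Image

ann_dir = 'data/coco/stuffthingmaps/train2017'  # example path, adjust to your dataset

for path in glob.glob(os.path.join(ann_dir, '*.png')):
    img = Image.open(path)
    if img.mode != 'L':
        # For R = G = B the ITU-R 601-2 grayscale conversion keeps the pixel
        # value unchanged, so the class ids are preserved.
        img.convert('L').save(path)

After the conversion the annotations load as single-channel (H, W) label maps, so the target passed to the cross-entropy loss becomes the expected 3D tensor.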