Validation confusion
zx-explorer opened this issue · 5 comments
The pre-change and post-change segmentation results are each divided into 7 categories, which would give 7 × 7 = 49 possible changes. Why, then, is the parameter passed into SCDD_eval_all set to 37?
```
Traceback (most recent call last):
  File "/home/dmx_bs/MambaCD2/MambaCD/changedetection/script/train_MambaSCD.py", line 259, in <module>
    main()
  File "/home/dmx_bs/MambaCD2/MambaCD/changedetection/script/train_MambaSCD.py", line 255, in main
    trainer.training()
  File "/home/dmx_bs/MambaCD2/MambaCD/changedetection/script/train_MambaSCD.py", line 145, in training
    kappa_n0, Fscd, IoU_mean, Sek, oa = self.validation()
  File "/home/dmx_bs/MambaCD2/MambaCD/changedetection/script/train_MambaSCD.py", line 203, in validation
    kappa_n0, Fscd, IoU_mean, Sek = SCDD_eval_all(preds_all, labels_all, 37)
  File "/home/dmx_bs/MambaCD/changedetection/utils_func/mcd_utils.py", line 209, in SCDD_eval_all
    assert unique_set.issubset(set([x for x in range(num_class)])), f"unrecognized label number, {unique_set}, {set([x for x in range(num_class)])}"
AssertionError: unrecognized label number, {0, 8, 9, 10, 11, 12, 14, 15, 16, 17, 18, 20, 21, 23, 24, 26, 27, 32, 33, 35, 36, -4, -3}, {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36}
```
Hi,
Thank you for your question. The SECOND dataset has six semantic categories plus one unchanged category, so the final number of classes is 6 × 6 + 1 = 37. Please take a look at the introduction of this dataset: https://captain-whu.github.io/SCD/
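For illustration, one way to flatten a (pre, post) semantic pair into a single change-class index is shown below. This is a hypothetical sketch, not code from the repo: `encode_scd_label` and the exact index convention are assumptions, chosen only so that the 6 × 6 + 1 = 37 count is visible.

```python
import numpy as np

def encode_scd_label(pre, post, num_sem=6):
    # Hypothetical encoding: class 0 means unchanged; for changed pixels,
    # pre and post each carry a semantic label in 1..num_sem, and the pair
    # maps to an index in 1..num_sem*num_sem. Total: num_sem**2 + 1 = 37.
    pre = np.asarray(pre)
    post = np.asarray(post)
    changed = (pre != 0) & (post != 0)
    label = np.zeros_like(pre)
    label[changed] = (pre[changed] - 1) * num_sem + post[changed]
    return label
```

With this convention the largest index is (6 − 1) × 6 + 6 = 36, so valid labels span 0..36, exactly the range the assertion in `SCDD_eval_all` checks against.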
Best,
In the original SECOND paper, the authors state that 30 change categories and one unchanged class are considered. Why are there 37 in total?
The ground truth contains only 30 change categories (same-class pairs never occur in the annotations), but a model can mispredict any of the 36 possible (pre, post) pairs, so 37 classes must be considered for the accuracy assessment.
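In other words, the label space of the annotations is smaller than the label space a model can produce. A quick sanity check of the counts, assuming SECOND's 6 semantic classes plus one unchanged class:

```python
num_sem = 6                                      # semantic classes in SECOND
gt_classes = 1 + (num_sem * num_sem - num_sem)   # 30 change pairs + unchanged = 31
eval_classes = 1 + num_sem * num_sem             # 36 possible pairs + unchanged = 37
print(gt_classes, eval_classes)                  # prints: 31 37
```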
Thank you for your patient answer. But why is the assertion in SCDD_eval_all always satisfied? The preds can be negative according to the calculation of preds_csd in the file "train_MambaSCD.py".
```python
import math

import numpy as np
from scipy import stats

def SCDD_eval_all(preds, labels, num_class):
    # Accumulate a num_class x num_class confusion histogram over all images
    hist = np.zeros((num_class, num_class))
    for pred, label in zip(preds, labels):
        infer_array = np.array(pred)
        unique_set = set(np.unique(infer_array))
        assert unique_set.issubset(set(range(num_class))), "unrecognized label number"
        label_array = np.array(label)
        assert infer_array.shape == label_array.shape, "The size of prediction and target must be the same"
        hist += get_hist(infer_array, label_array, num_class)
    # Foreground (changed) part of the confusion histogram
    hist_fg = hist[1:, 1:]
    # Collapse to a binary change / no-change confusion matrix
    c2hist = np.zeros((2, 2))
    c2hist[0][0] = hist[0][0]
    c2hist[0][1] = hist.sum(1)[0] - hist[0][0]
    c2hist[1][0] = hist.sum(0)[0] - hist[0][0]
    c2hist[1][1] = hist_fg.sum()
    # Kappa with the (unchanged, unchanged) cell zeroed out
    hist_n0 = hist.copy()
    hist_n0[0][0] = 0
    kappa_n0 = cal_kappa(hist_n0)
    # Binary IoU for the no-change and change classes
    iu = np.diag(c2hist) / (c2hist.sum(1) + c2hist.sum(0) - np.diag(c2hist))
    IoU_fg = iu[1]
    IoU_mean = (iu[0] + iu[1]) / 2
    Sek = (kappa_n0 * math.exp(IoU_fg)) / math.e
    # F1 over the semantic change classes (harmonic mean of P and R)
    pixel_sum = hist.sum()
    change_pred_sum = pixel_sum - hist.sum(1)[0].sum()
    change_label_sum = pixel_sum - hist.sum(0)[0].sum()
    change_ratio = change_label_sum / pixel_sum
    SC_TP = np.diag(hist[1:, 1:]).sum()
    SC_Precision = SC_TP / change_pred_sum
    SC_Recall = SC_TP / change_label_sum
    Fscd = stats.hmean([SC_Precision, SC_Recall])
    return kappa_n0, Fscd, IoU_mean, Sek
```
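If the predictions can indeed go out of range, one option is to clip them into the valid label range before evaluation. This is a minimal, hypothetical guard, not code from the repo (`sanitize_preds` is an assumed name, and clipping does alter the scores for the affected pixels rather than fixing the underlying prediction bug):

```python
import numpy as np

def sanitize_preds(preds, num_class=37):
    """Clip predicted change labels into [0, num_class - 1] so the
    issubset assertion in SCDD_eval_all cannot fire on stray values."""
    return [np.clip(np.asarray(p), 0, num_class - 1) for p in preds]
```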
Hi, Can you explain the question in more detail?