MIC-DKFZ/nnUNet

Resampling during preprocessing makes the originally consistent image parameters (data.shape, spacing) inconsistent

Opened this issue · 1 comment

Hi, I am currently reproducing the MedNeXt work. My dataset comes from several different organisations. When I run the pretraining command, it fails with: ValueError: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 240 and the array at index 1 has size 87

I resampled, cropped and normalised the data before feeding it to the pretraining pipeline, and at that point data.shape and spacing were uniform across cases. Why does nnU-Net's preprocessing turn these uniform parameters into different ones? Below are a few examples I captured. I would like to know what I should do for this dataset. Looking forward to your reply!
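The ValueError above is NumPy's standard complaint when arrays with different shapes are stacked. A minimal reproduction (the shapes 240 and 87 are taken from the error message; the arrays themselves are just placeholders):

```python
import numpy as np

# Two cases whose sizes along dimension 1 differ (240 vs 87),
# mirroring the shapes in the error message above.
a = np.zeros((2, 240, 240))
b = np.zeros((2, 87, 87))

try:
    # Concatenating along axis 0 requires all OTHER axes to match exactly.
    np.concatenate([a, b], axis=0)
except ValueError as e:
    print(e)
```

This is why cases that end up with different shapes after preprocessing cannot be stacked into one batch/array.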

no separate z, order 3
before: {'spacing': array([2., 2., 2.]), 'spacing_transposed': array([2., 2., 2.]), 'data.shape (data is transposed)': (2, 140, 140, 140)}
after: {'spacing': array([1.38869009, 1.38869009, 1.38869009]), 'data.shape (data is resampled)': (2, 202, 202, 202)}
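The "after" shape follows directly from resampling to the target spacing in the plans file: each axis is scaled by old_spacing / target_spacing and rounded, so a 140-voxel axis at 2 mm becomes 202 voxels at ~1.389 mm. A quick check of the arithmetic (this reproduces the numbers above, not nnU-Net's actual code):

```python
import numpy as np

old_spacing = np.array([2.0, 2.0, 2.0])          # from the "before" log line
target_spacing = np.array([1.38869009] * 3)      # from the "after" log line
old_shape = np.array([140, 140, 140])

# New voxel count per axis = physical extent / target spacing.
new_shape = np.round(old_shape * old_spacing / target_spacing).astype(int)
print(new_shape)  # → [202 202 202]
```

Because the target spacing is derived from the dataset fingerprint (not from each case), even cases that were already uniform among themselves get resampled if their spacing differs from the plan's target.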

1 860
2 10000
saving: /media/liu/3685e1a0-9021-4bac-a1b6-5e72e0b08b0a/hh_yxh/hhhhh/0-dataset-hh/Dataset-nnunetv1/nnUNet_preprocessed/Task801_hecktor_resample_crop_norm/nnUNetData_plans_v2.1_trgSp_1x1x1_stage1/hecktor_388_MDA066.npz
no separate z, order 3
no separate z, order 3
no separate z, order 3
no separate z, order 3
no separate z, order 3
no separate z, order 1
before: {'spacing': array([2., 2., 2.]), 'spacing_transposed': array([2., 2., 2.]), 'data.shape (data is transposed)': (2, 140, 140, 140)}
after: {'spacing': array([1., 1., 1.]), 'data.shape (data is resampled)': (2, 280, 280, 280)}
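The repeated "no separate z" lines come from nnU-Net's anisotropy check: when the largest spacing is at most 3× the smallest, all axes are resampled together with the same interpolation order, and the low-resolution axis is only handled separately for strongly anisotropic data. A sketch of that decision (the threshold of 3 matches nnU-Net's default; the function name here is illustrative):

```python
import numpy as np

ANISO_THRESHOLD = 3  # nnU-Net's default anisotropy threshold

def should_separate_z(spacing):
    """Resample the low-resolution axis separately only when the
    spacing is strongly anisotropic (max/min ratio above threshold)."""
    spacing = np.asarray(spacing, dtype=float)
    return bool(spacing.max() / spacing.min() > ANISO_THRESHOLD)

print(should_separate_z([2.0, 2.0, 2.0]))  # isotropic → False ("no separate z")
print(should_separate_z([0.7, 0.7, 5.0]))  # anisotropic → True
```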

normalisation...
normalisation done
1 736
2 6888
saving: /media/liu/3685e1a0-9021-4bac-a1b6-5e72e0b08b0a/hh_yxh/hhhhh/0-dataset-hh/Dataset-nnunetv1/nnUNet_preprocessed/Task801_hecktor_resample_crop_norm/nnUNetData2D_plans_v2.1_trgSp_1x1

After preprocessing, I noticed that training takes very long with the recommended command. Have you experienced similarly long training times when using MedNeXt? Thank you!

2024-09-24 21:29:45.962188:
epoch: 1
2024-09-25 03:54:18.427683: train loss : -0.3505
2024-09-25 04:18:11.552264: validation loss: -0.4416
2024-09-25 04:18:11.552535: Average global foreground Dice: [np.float32(0.682), np.float32(0.6089)]
2024-09-25 04:18:11.552575: (interpret this as an estimate for the Dice of the different classes. This is not exact.)
2024-09-25 04:18:11.796514: lr: 0.000998
2024-09-25 04:18:11.797650: saving checkpoint...
2024-09-25 04:18:11.866219: done, saving took 0.07 seconds
2024-09-25 04:18:11.877072: This epoch took 24505.914864 s

2024-09-25 04:18:11.877121:
epoch: 2
2024-09-25 10:42:24.207050: train loss : -0.4211
2024-09-25 11:06:42.930458: validation loss: -0.5107
2024-09-25 11:06:42.930737: Average global foreground Dice: [np.float32(0.7205), np.float32(0.5943)]
2024-09-25 11:06:42.930787: (interpret this as an estimate for the Dice of the different classes. This is not exact.)
2024-09-25 11:06:43.180424: lr: 0.000997
2024-09-25 11:06:43.181376: saving checkpoint...
2024-09-25 11:06:43.621577: done, saving took 0.44 seconds
2024-09-25 11:06:43.761684: This epoch took 24511.884540 s
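For scale, the timings above imply roughly 6.8 hours per epoch; assuming nnU-Net's default of 1000 epochs, that projects to the better part of a year, which is far outside normal behaviour and often points at a data-loading/CPU bottleneck or an oversized patch rather than expected GPU time. The arithmetic from the logged epoch duration:

```python
epoch_seconds = 24505.9  # from the log: "This epoch took 24505.914864 s"

hours_per_epoch = epoch_seconds / 3600
total_days = epoch_seconds * 1000 / 86400  # assuming the default 1000 epochs

print(f"{hours_per_epoch:.1f} h/epoch")          # → 6.8 h/epoch
print(f"{total_days:.0f} days for 1000 epochs")  # → 284 days for 1000 epochs
```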