Dootmaan/MT-UNet

errors

long123524 opened this issue · 4 comments

Great work, and thank you for the release. I used my own dataset, but this error appeared during evaluation and I'm confused. My image shape is [3, 224, 224] and the label is [224, 224]; my task is binary segmentation, but the dimension mismatch below occurred. Why is the code `for ind in range(imag.shape)` used? Doesn't that turn the input into [1, 1, 224, 224]? That doesn't match my training input shape of [12, 3, 224, 224], so could the predictions be wrong because of it?

Hi @long123524, thank you for your question. The function `test_single_volume()` is borrowed directly from https://github.com/Beckschen/TransUNet/blob/main/utils.py, and the loop `for ind in range(image.shape[0])` performs 2D segmentation on each slice of a 3D volumetric medical image (because MT-UNet is a 2D network trained on 2D slices). I'm sorry that this function can only process ordinary single-channel 3D images right now.

If you would like to apply MT-UNet to other 2D datasets, I suggest that you use your own evaluation code. For example, to evaluate the Jaccard coefficient between the output and your label, simply use:

```python
import torch
from medpy.metric import jc

def test_2d(model, image, label):
    # image: your (1, 3, 224, 224) input tensor; label: the matching 2D mask
    model.eval()
    with torch.no_grad():
        output = model(image)  # make sure num_classes is set to 1 (BCE)
    pred = output.squeeze(0)   # squeeze(0) if you have to
    return jc(pred.cpu().numpy(), label)
```
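If `medpy` is not installed, the Jaccard coefficient for binary masks is just intersection over union, so a minimal NumPy equivalent (a sketch, not medpy's implementation) is:

```python
import numpy as np

def jaccard(pred, ref):
    # Binarize both masks and compute intersection-over-union.
    pred = np.asarray(pred).astype(bool)
    ref = np.asarray(ref).astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union if union else 1.0  # two empty masks agree perfectly
```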

Thank you for your reply! I have another question: MT-UNet doesn't seem to adapt to images of other sizes, such as 256×256. How should I modify it so that it works with images of any size?

Hi @long123524, thank you for your further interest. MT-UNet is specifically designed for 224×224 input so that it can be compared with previous SOTA methods. To make MT-UNet accept 256×256 input, you have to modify the network structure and a few other things.

The modifications needed to make MT-UNet accept 256×256 input include, but are not limited to:

  • position_embedding
  • args.img_size
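For the first item, a common trick is to resize the learned positional embedding to the new token-grid size instead of retraining it from scratch. The sketch below uses nearest-neighbour resampling in NumPy for clarity (the function name and shapes are illustrative assumptions, not MT-UNet's actual variables; in practice bilinear interpolation, e.g. torch's `F.interpolate`, is smoother):

```python
import numpy as np

def resize_pos_embed(pos_embed, new_hw):
    # pos_embed: (H*W, C) learned positional embedding for a square H×W token grid.
    n, c = pos_embed.shape
    h = w = int(round(n ** 0.5))          # recover the original grid side
    grid = pos_embed.reshape(h, w, c)     # (H, W, C)
    nh, nw = new_hw
    rows = (np.arange(nh) * h / nh).astype(int)  # nearest-neighbour row indices
    cols = (np.arange(nw) * w / nw).astype(int)  # nearest-neighbour col indices
    return grid[rows][:, cols].reshape(nh * nw, c)
```

With 256×256 input and the same patch stride, the token grid grows accordingly, so `args.img_size` and this embedding must change together.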

This issue has been closed since there has been no further activity for a while.