askerlee/segtran

Baseline comparison for 2D dataset

wshi8 opened this issue · 6 comments

wshi8 commented

Hi Dr. Lee, do you have the code you used to compare against the various baselines in Section 5.2 (the list of other models) for the 2D datasets? Is there a similar comparison for the 3D datasets?

askerlee commented

Yes, all the baseline code is included in this repo; you just need to pass different --net arguments. For example, use --net unet-scratch for a vanilla U-Net, or --net unet for a U-Net with pretrained encoders (add --bb eff-b4 to use EfficientNet-B4 as the encoder).

For 3D images, only UNet-3D and V-Net are included in the code. The paper has two more 3D baselines, whose results were taken from their respective papers.

wshi8 commented

Thanks a lot! I tried to run a test using V-Net:

python3 test3d.py --task [mydataset] --split all --bs 1 --ds 2020valid --net vnet --attractors 1024 --translayers 2

but got the following error:

Error: Given groups=1, weight of size [16, 1, 3, 3, 3], expected input[1, 2, 112, 112, 96] to have 1 channels, but got 2 channels instead

askerlee commented

I see. In test3d.py, the input channel count of V-Net / UNet-3D is hardcoded as 1. You could try changing it to 2 and see what you get. Note that at a later stage of development I no longer used these models, so their API calls may contain bugs.
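To illustrate why this error occurs, here is a minimal, dependency-free sketch of the channel check a 3D convolution performs. The helper function is hypothetical (not from the repo); the shapes mirror the error message above.

```python
# Conv3d weights have shape (out_channels, in_channels // groups, kD, kH, kW),
# and 3D inputs have shape (N, C, D, H, W). The convolution requires
# C == (in_channels // groups) * groups.

def conv3d_channels_match(weight_shape, input_shape, groups=1):
    """Return True if a Conv3d with this weight shape accepts the input shape."""
    in_ch_per_group = weight_shape[1]   # second dim of the weight tensor
    c = input_shape[1]                  # channel dim of the input volume
    return c == in_ch_per_group * groups

# The hardcoded single-channel first conv (weight [16, 1, 3, 3, 3])
# rejects a 2-channel volume [1, 2, 112, 112, 96]:
print(conv3d_channels_match((16, 1, 3, 3, 3), (1, 2, 112, 112, 96)))  # False

# Setting the network's input channels to 2 gives weight [16, 2, 3, 3, 3],
# which matches the 2-channel input:
print(conv3d_channels_match((16, 2, 3, 3, 3), (1, 2, 112, 112, 96)))  # True
```

So the one-line fix is to construct the model with 2 input channels, which changes the first conv's weight shape accordingly.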

wshi8 commented

Thanks! Is there a checkpoint available, or do I need to provide one for UNet-3D / V-Net testing?

askerlee commented

Do you mean pretrained checkpoints? I don't have them at hand; you have to train first, then run evaluation.