bowang-lab/U-Mamba

problem in testing

gumayusi3 opened this issue · 4 comments

I can only get a high Dice when training with "fold all", but it doesn't work when testing. What should I do to find the best configuration? Just changing "(0,1,2,3,4)" to "all" in "find_best_configuration.py" and running it with Python doesn't work.
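For anyone hitting the same thing: find_best_configuration is built around the 5-fold cross-validation results, which don't exist when only the "all" fold was trained, so one workaround is to skip it, run inference with the fold-all checkpoint directly, and score the predictions yourself on a held-out set. Below is a minimal sketch, not something from this repo: it assumes the nnU-Net v2 CLI (nnUNetv2_predict) is installed, that your nnU-Net version accepts "-f all" (check `nnUNetv2_predict -h`), and that the dataset ID, trainer name, and paths are placeholders to replace with your own.

```python
# Minimal sketch (assumptions, not from this repo): run inference with the
# fold-all checkpoint via the nnU-Net v2 CLI instead of find_best_configuration.
# ASSUMPTIONS: nnUNetv2_predict is on PATH, "-f all" is accepted by your
# nnU-Net version, and the dataset ID, trainer, and paths are placeholders.
import subprocess

subprocess.run(
    [
        "nnUNetv2_predict",
        "-i", "/path/to/imagesTs",        # held-out test images
        "-o", "/path/to/predictions",     # where predictions are written
        "-d", "701",                      # dataset ID (placeholder)
        "-c", "3d_fullres",               # configuration you trained
        "-tr", "nnUNetTrainerUMambaBot",  # trainer used for training (example)
        "-f", "all",                      # use the fold-all checkpoint
    ],
    check=True,
)
```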

Hi @gumayusi3 ,

We did not train that many folds.

Did you test this function with nnU-Net? If it works for nnU-Net, it should also work for U-Mamba.

Thank you, I have solved this problem. The reason for the low Dice is that the dataset is too small. I only got a high Dice because, when training with "fold all", the validation set is the same as the training set.
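For others who land here: with "fold all" there is no held-out validation split, so the Dice reported during training is measured on cases the network has already seen and will be optimistic, exactly as described above. A quick way to get an honest number is to predict on cases that were excluded from training and compute Dice against their reference labels. A minimal sketch, assuming predictions and ground-truth labels are NIfTI files with matching file names, that label 1 is the foreground class, and that nibabel is available; the paths and label value are placeholders, not anything defined by this repo.

```python
# Minimal sketch: score predictions on truly held-out cases to avoid the
# inflated Dice that "fold all" produces during training.
# ASSUMPTIONS: matching file names in both folders, nibabel + numpy installed,
# label 1 is the foreground class; paths are placeholders.
from pathlib import Path

import nibabel as nib
import numpy as np

PRED_DIR = Path("/path/to/predictions")  # output of inference
GT_DIR = Path("/path/to/labelsTs")       # held-out ground truth

def dice(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Dice coefficient for one foreground label."""
    p = pred == label
    g = gt == label
    denom = p.sum() + g.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom

scores = []
for pred_file in sorted(PRED_DIR.glob("*.nii.gz")):
    gt_file = GT_DIR / pred_file.name
    pred = np.asarray(nib.load(str(pred_file)).dataobj)
    gt = np.asarray(nib.load(str(gt_file)).dataobj)
    scores.append(dice(pred, gt, label=1))
    print(f"{pred_file.name}: Dice = {scores[-1]:.3f}")

print(f"mean Dice over {len(scores)} cases: {np.mean(scores):.3f}")
```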

Hi, I'm having the same problem.
May I ask, how many images did your dataset have before and after augmentation, and what Dice values did you get after augmentation?
Thank you very much!

The dataset had about 50 images before augmentation; after augmenting it to about 1000 images, Dice increased from about 0.3 to about 0.7, but it is still far from the theoretical optimum.
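For completeness, here is a rough illustration of the kind of offline expansion described above, using only flips and 90-degree rotations applied identically to image and label. This is not the augmentation the poster used, and nnU-Net-based pipelines such as U-Mamba already apply heavy augmentation on the fly during training, so offline copies add limited new information. The directory layout, matching file names, and the use of nibabel are all assumptions for the sketch.

```python
# Minimal sketch of offline dataset expansion with flips and 90-degree
# rotations applied identically to image and label (illustrative only).
# ASSUMPTIONS: paired NIfTI image/label files with matching names, nibabel +
# numpy installed, axes 0 and 1 are in-plane; paths are placeholders.
from pathlib import Path

import nibabel as nib
import numpy as np

IMG_DIR = Path("/path/to/images")
LBL_DIR = Path("/path/to/labels")
OUT_IMG = Path("/path/to/images_aug")
OUT_LBL = Path("/path/to/labels_aug")
OUT_IMG.mkdir(parents=True, exist_ok=True)
OUT_LBL.mkdir(parents=True, exist_ok=True)

rng = np.random.default_rng(0)

for img_file in sorted(IMG_DIR.glob("*.nii.gz")):
    lbl_file = LBL_DIR / img_file.name
    img_nii, lbl_nii = nib.load(str(img_file)), nib.load(str(lbl_file))
    img = np.asarray(img_nii.dataobj)
    lbl = np.asarray(lbl_nii.dataobj)

    for k in range(5):  # write 5 augmented copies per case
        aug_img, aug_lbl = img, lbl
        if rng.random() < 0.5:  # random flip along the first axis
            aug_img, aug_lbl = np.flip(aug_img, 0), np.flip(aug_lbl, 0)
        rot = int(rng.integers(0, 4))  # random 90-degree in-plane rotation
        aug_img = np.rot90(aug_img, rot, axes=(0, 1))
        aug_lbl = np.rot90(aug_lbl, rot, axes=(0, 1))

        # The original affine is reused unchanged, which is acceptable for a
        # quick expansion but means orientation metadata is no longer exact.
        name = img_file.name.replace(".nii.gz", f"_aug{k}.nii.gz")
        nib.save(nib.Nifti1Image(np.ascontiguousarray(aug_img), img_nii.affine),
                 str(OUT_IMG / name))
        nib.save(nib.Nifti1Image(np.ascontiguousarray(aug_lbl), lbl_nii.affine),
                 str(OUT_LBL / name))
```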