kiharalab/ACC-UNet

About the dice of the model on the GlaS dataset

Closed this issue · 5 comments

Hello!
First of all, thank you for coming up with such a great model!

However, I ran the code you provided (with no modifications), but the dice score on the GlaS dataset only reached 0.82, not the 0.88 reported in your paper.

Could you please tell me the reason for this? Or how should I modify my code?

Thank you very much! Wish you all the best!

Hi, thank you for your interest in our project.

We expect that, due to randomness across different hardware systems, a 1-2% fluctuation may occur. I tested on a few other machines and the score didn't fluctuate by more than 1%. However, this 6% drop is too much 😕

Could you please mention how many epochs the model was trained for and what the validation dice score was?

Thank you for reporting this issue. We are actually in the process of developing a more efficient and stable version of the model and I will keep your results in mind.

Hello! Thank you very much for your reply!

I trained ACC-UNet several times.

For the first run, training stopped at epoch 273, and the validation dice score reached 0.8819.

At this point, the dice score on the test set is 0.82 and the IoU is 0.72.

I trained again last night, and training stopped at epoch 237. The validation dice score reached 0.8856.

This time the results on the test set are: dice 0.81, IoU 0.70.
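For completeness, this is roughly how I compute dice and IoU from binary masks (a minimal sketch of the standard formulas; the repo's actual evaluation code may differ):

```python
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    """Compute Dice and IoU for binary masks (nonzero = foreground)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

# Example: two 2x2 masks overlapping in one pixel
d, i = dice_iou([[1, 1], [0, 0]], [[1, 0], [0, 0]])
# d ≈ 0.667, i = 0.5
```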

Thank you again for your reply, and I look forward to your new version!
Wish you all the best!

Thank you for sharing these logs. I am wondering, what is your validation data size?

I quickly checked our training logs, and our model was trained for much longer. It appears that your batch size is smaller, which is probably affecting training. Unfortunately, our current implementation requires a lot of GPU memory to run properly (due to the torch concat operations). We hope to release an updated lightweight model that can run on a 12 GB GPU without much sacrifice in performance.

Hello!
The training set for GlaS contains 85 images, so I chose 17 of them as the validation set.
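For reference, the split was done along these lines (a sketch with a hypothetical fixed seed; integer indices stand in for the actual image filenames):

```python
import random

# Hypothetical split of the 85 GlaS training images: 68 train / 17 validation.
image_ids = list(range(85))  # stand-in for the actual image filenames
random.seed(42)              # fixed seed so the split is reproducible
random.shuffle(image_ids)
val_ids = image_ids[:17]
train_ids = image_ids[17:]
```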

Thank you for your reply; I think my hardware is what limits the training.
Due to GPU limitations, I can't choose a large batch size.

I'm looking forward to the lightweight model you release next!

Finally, thank you again for your patient response and wish you all the best.

Thank you for all the clarification. We are urgently working on developing a lighter yet competent version of our model. Once we release it, I will notify you.

Best wishes for you as well.