YashSharma/C2C

About accuracy of validation phase.


ing907 commented

Hello, thank you for the nice work and for open-sourcing your project.

Recently, I attempted to train the C2C model on my own dataset and monitored its accuracy through TensorBoard. The model appeared to train properly; however, I encountered an issue with the validation accuracy. During the validation phase, the model always outputs 'positive', regardless of the input data.

Upon further investigation, I noticed a significant disparity between the output distribution of the ResNet backbone during the validation phase and during the training phase. After enabling train mode with model.train() in the evaluation code, the model's outputs appear correct again.
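For reference, this is the kind of workaround I have in mind. It is only a sketch: since model.train() also re-enables dropout, a more targeted variant would put the whole model in eval mode and then switch only the BatchNorm layers back to train mode, so they normalize with per-batch statistics instead of the (possibly mismatched) running statistics. The helper name eval_with_batch_stats is mine, not from the C2C codebase:

```python
import torch.nn as nn

def eval_with_batch_stats(model: nn.Module) -> nn.Module:
    """Put the model in eval mode, but keep BatchNorm layers in train mode.

    Hypothetical workaround: BatchNorm then normalizes with the current
    batch's statistics rather than the running mean/variance accumulated
    during training, which may diverge on a different dataset.
    """
    model.eval()  # disables dropout and other train-only behavior
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.train()  # use per-batch statistics for normalization
    return model
```

Note that with this approach the validation outputs depend on the batch composition, which is its own caveat when reporting accuracy.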

I have a few questions regarding this phenomenon:

  1. Are you aware of the possible reasons behind this occurrence and any potential solutions to address it?
  2. If there is no direct resolution to this issue, would it be acceptable to measure the accuracy of C2C with model.train() enabled?
  3. Have you encountered a similar phenomenon in your experimental environment?

Although I have not yet tried the CAMELYON16 dataset, I would like to gain a clear understanding of the exact evaluation setup before conducting any experiments.

Thank you for your time and consideration.