JuliaWolleb/diffusion-anomaly

testing problems

max0312 opened this issue · 10 comments

Hi, thanks for your great work.
I have some questions after training and testing on the BraTS dataset:

  1. Did you normalize the seg part (data/brats/training/000001/brats_train_001_seg_080_w.nii.gz) during preprocessing? I see that the training samples you provided have not been normalized. Will this affect the cross_entropy calculation during classifier training?

  2. I found that the results generated with brats2update010000.pt and modelbratsclass010000.pt, saved at the beginning of training, are better than those from checkpoints saved later in training. What could be the reason for this? (Which specific checkpoints did you use for testing?)

[screenshot]

  3. When testing images one by one, the Dice coefficient fluctuated greatly, but the AUC was always high (0.98 to 0.99). Could you please tell me why?
    [screenshot]

Hi

  1. The ground truth segmentation is not used during training; it is only used to compute the Dice score during evaluation. All class labels > 0 are mapped to class label 1 ("anomalous"), as indicated in line 34 of scripts/evaluation_metrics.py (see the sketch after this list). All MR slices of the BRATS dataset are normalized to values between 0 and 1.
  2. There seems to be a mistake in the example image you gave for brats2update050000.pt and modelbratsclass070000.pt. Are you comparing the right input with the right output?
  3. I would check in which cases the Dice score was so bad. Were these the images where the tumor was really small?
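As a reference, here is a minimal sketch of the evaluation logic described in point 1 (this is illustrative, not the exact code from scripts/evaluation_metrics.py): all ground truth labels > 0 are binarized to 1, the anomaly map is thresholded for the Dice score, and the pixel-wise AUC is computed on the raw anomaly scores.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dice_score(pred, target, eps=1e-8):
    """Dice overlap between two binary masks."""
    intersection = np.sum(pred * target)
    return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

def evaluate_slice(anomaly_map, seg, threshold=0.5):
    """Evaluate one slice: binarize the labels, then compute Dice and pixel-wise AUC."""
    gt = (seg > 0).astype(np.uint8)                 # any tumor label -> 1 ("anomalous")
    pred = (anomaly_map > threshold).astype(np.uint8)
    dice = dice_score(pred, gt)
    # AUC is only defined if the slice contains both classes
    auc = roc_auc_score(gt.ravel(), anomaly_map.ravel()) if gt.any() else float("nan")
    return dice, auc
```

Note that the AUC is computed over all pixels of the slice, while the Dice score only looks at the predicted and ground truth foreground, so the two metrics can behave quite differently on slices with very small tumors.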

Hi, thank you for publishing your work. I have been trying to reproduce it for a week now and couldn't figure out what's wrong with my reproduction. After I trained the classifier and the diffusion model, I ran classifier_sample_known.py with the checkpoints brats2update050000.pt and modelbratsclass020000.pt. However, I got the results below. The sampled output is just a black and white image. I didn't change anything in your code. Could you tell me what I'm doing wrong here? Thank you.
[screenshot]
[screenshot]

Can you run the script classifier_sample_known.py with a classifier_scale of 0? Then the output should show the same image as the input.
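For context, a rough sketch of why this is a useful sanity check (the helper below is illustrative, in the style of guided-diffusion classifier guidance, not necessarily the repo's exact code): the classifier only enters sampling through a gradient term scaled by classifier_scale, so with a scale of 0 the guidance vanishes and the encode/decode round trip should return approximately the input image.

```python
import torch

def cond_fn(x, t, y, classifier, classifier_scale):
    """Classifier guidance term: gradient of log p(y | x_t), scaled by classifier_scale."""
    with torch.enable_grad():
        x_in = x.detach().requires_grad_(True)
        logits = classifier(x_in, t)
        log_probs = torch.log_softmax(logits, dim=-1)
        selected = log_probs[torch.arange(len(logits)), y]
        grad = torch.autograd.grad(selected.sum(), x_in)[0]
    # With classifier_scale = 0 this term is zero, so sampling is unconditional
    # and the output should match the (re-noised and denoised) input.
    return classifier_scale * grad
```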

I still get the output as a black and white image. What do you think is happening here? The input is clearly correct, but the sampled output is wrong.

Do you mind uploading your checkpoints?

Thanks for your advice. I rechecked the results:
2. I found that the results generated by the checkpoints saved at the beginning of training are still better than those of the models saved later in training.
3. After checking the cases with a low Dice score, I found that the original masks were very small.
But I want to know: why is the AUC still so high when the Dice score is low?

[screenshot]
[screenshot]

Thank you very much for sharing, but I encountered a similar problem.
I used the checkpoints from your link above, but the output is still a black and white image, and there is no obvious noise in the reversesample image. As for the code, I only modified bratsloader.py.
Can you give me some advice?
[screenshot]

> I still get the output as a black and white image. What do you think is happening here? The input is clearly correct, but the sampled output is wrong.

Hello, have you solved this problem after using the uploaded checkpoints?

There is something going wrong with your noise encoding and decoding. How large did you choose L? The image "reversesample" should show a noisy image at noise level L. If you set s = 0, the final images should be the same as the input images. Could you check the function "ddim_sample_loop_known"?
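As a quick check of what "reversesample" should look like, here is a minimal sketch of the standard forward-noising step to level L (this assumes the usual DDPM forward process; the function name is illustrative, not the repo's exact code). If the saved reversesample does not look like the input with clearly visible Gaussian noise, the encoding step is probably not being applied.

```python
import torch

def noise_to_level_L(x0, L, alphas_cumprod):
    """Forward-noise x0 to timestep L: x_L = sqrt(a_L) * x0 + sqrt(1 - a_L) * eps."""
    a_L = alphas_cumprod[L]
    eps = torch.randn_like(x0)
    return a_L.sqrt() * x0 + (1.0 - a_L).sqrt() * eps
```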