I got bad results with my datasets
hpbzxxxx opened this issue · 9 comments
Hi @hpbzxxxx,
This phenomenon usually happens when the background intensity of your input images is not equal to zero, or when the input images are not normalized to [0, 1].
If you resize/interpolate the images (e.g., with scipy or scikit-image), please make sure the background intensity remains zero afterward. Otherwise, you may want to switch to the masked NCC similarity function when measuring the distance between images during training.
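As a minimal sketch of what I mean (the function names and the toy volume are mine, not from the repo), a quick way to verify both properties before training:

```python
import numpy as np

def normalize_to_unit_range(img):
    """Min-max normalize a volume so intensities lie in [0, 1]."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min())

def background_is_zero(img, margin=4):
    """Heuristic check: the corner patches of a skull-stripped brain
    volume should contain only zero-valued (background) voxels."""
    m = margin
    corners = [img[:m, :m, :m], img[-m:, -m:, -m:]]
    return all(np.all(c == 0) for c in corners)

# Toy volume standing in for a real scan: zero background, bright "brain".
vol = np.zeros((32, 32, 32), dtype=np.float32)
vol[8:24, 8:24, 8:24] = np.random.uniform(50, 255, (16, 16, 16))
vol = normalize_to_unit_range(vol)
```

If `background_is_zero` returns False after your resize step, that is exactly the situation where the masked NCC loss is the safer choice.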
Could you share a few samples of the training dataset and your training code with me?
I used OASIS2. I ran FreeSurfer's `recon-all -autorecon1` and got `brainmask.mgz`, then normalized the volumes and cropped them to 160×192×144.
OAS2_0002_MR1.nii.gz
OAS2_0009_MR1.nii.gz
OAS2_0010_MR1.nii.gz
OAS2_0012_MR1.nii.gz
The training code was copied from yours; I only changed the data path and sorted the '*.nii.gz' file list.
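Roughly, my normalize-and-crop step looks like the sketch below (the center-crop offsets and the zero stand-in volume are illustrative; loading `brainmask.mgz` is only shown in a comment because it needs nibabel):

```python
import numpy as np

def center_crop(vol, target=(160, 192, 144)):
    """Center-crop a 3-D volume to the target shape."""
    starts = [(s - t) // 2 for s, t in zip(vol.shape, target)]
    return vol[tuple(slice(st, st + t) for st, t in zip(starts, target))]

# The real volume would be loaded with nibabel, e.g.:
#   vol = nib.load("brainmask.mgz").get_fdata()
# FreeSurfer conformed volumes are 256^3, so use a zero stand-in here.
vol = np.zeros((256, 256, 256), dtype=np.float32)
vol_norm = (vol - vol.min()) / max(vol.max() - vol.min(), 1e-8)
cropped = center_crop(vol_norm)
```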
Hi @hpbzxxxx ,
Your preprocessed data look fine to me. I will try to reproduce your error by training a new model with your preprocessed data, and I will get back to you very soon.
If possible, could you please send me your exact modified Train_cLapIRN.py file, so that I can minimize the discrepancy between your training code and mine?
Here are all the data and the code I used for training. Thank you again!
code.zip
OAS2_0002_MR1.nii.gz
OAS2_0009_MR1.nii.gz
OAS2_0010_MR1.nii.gz
OAS2_0012_MR1.nii.gz
OAS2_0013_MR1.nii.gz
OAS2_0016_MR1.nii.gz
OAS2_0027_MR1.nii.gz
OAS2_0030_MR1.nii.gz
OAS2_0052_MR1.nii.gz
OAS2_0061_MR1.nii.gz
OAS2_0067_MR1.nii.gz
OAS2_0077_MR1.nii.gz
OAS2_0081_MR1.nii.gz
OAS2_0087_MR1.nii.gz
OAS2_0089_MR1.nii.gz
OAS2_0098_MR1.nii.gz
OAS2_0102_MR1.nii.gz
OAS2_0105_MR1.nii.gz
OAS2_0111_MR1.nii.gz
OAS2_0124_MR1.nii.gz
OAS2_0126_MR1.nii.gz
OAS2_0143_MR1.nii.gz
OAS2_0146_MR1.nii.gz
OAS2_0147_MR1.nii.gz
OAS2_0156_MR1.nii.gz
OAS2_0158_MR1.nii.gz
OAS2_0169_MR1.nii.gz
OAS2_0174_MR1.nii.gz
Hi @hpbzxxxx,
I have tried your preprocessed dataset with my method. Unfortunately, I got the same results as you did, and I have not been able to locate the bug.
Yet, I want to share the debugging process with you.
- **Potential problem:** A PyTorch version issue, or the instance normalization making training unstable.
  **Result:** Tested with both the latest and an older version of PyTorch, and removed the instance normalization. The problem persisted.
- **Potential problem:** The contrast of your data is significantly lower than that of my OASIS training dataset.
  **Result:** Improved the contrast with a windowing technique, i.e., `np.clip(img, 0, 0.65)`. The problem persisted.
- **Potential problem:** Your training dataset is too small, i.e., 28 image scans.
  **Result:** Sampled 28 image scans from my OASIS training dataset. Training worked, which implies our method can be trained on a small dataset.
- **Potential problem:** The conditional module is somehow incompatible with your preprocessed dataset.
  **Result:** Trained LapIRN without the conditional modules on your preprocessed dataset. The model still collapsed at the first stage.
Conclusion: Something is wrong with either your preprocessed dataset or my code, but I cannot tell exactly what.
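For reference, the windowing step above is just clipping followed by rescaling back to [0, 1] (a sketch; the 0.65 threshold is the value I used, and it may need tuning for other data):

```python
import numpy as np

def window_and_rescale(img, lo=0.0, hi=0.65):
    """Clip intensities to [lo, hi], then rescale the result to [0, 1]."""
    img = np.clip(img.astype(np.float32), lo, hi)
    return (img - lo) / (hi - lo)

# Toy intensities spanning [0, 1]; everything >= 0.65 saturates to 1.0.
intensities = np.linspace(0.0, 1.0, 11, dtype=np.float32)
windowed = window_and_rescale(intensities)
```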
Suggestions:
- Try another image registration method, such as VoxelMorph, to see if the problem persists. If it does, revisit the preprocessing pipeline of your training data. If another deep-learning-based method solves the problem, try downsampling the data instead of cropping it when using our method, because the gradient at the image boundary is not well defined in deep-learning-based methods.
- Use the preprocessed OASIS dataset provided by Adrian Dalca in link.
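A minimal sketch of the downsample-instead-of-crop suggestion, assuming scipy is available (`order=1` gives trilinear interpolation; the target shape mirrors the crop size above):

```python
import numpy as np
from scipy.ndimage import zoom

def downsample_to(vol, target=(160, 192, 144)):
    """Resample a 3-D volume to the target shape with trilinear
    interpolation, keeping the full field of view (no cropping)."""
    factors = [t / s for t, s in zip(target, vol.shape)]
    return zoom(vol.astype(np.float32), factors, order=1)

# Random stand-in for a 256^3 FreeSurfer-conformed volume.
vol = np.random.rand(256, 256, 256).astype(np.float32)
small = downsample_to(vol)
```

Unlike cropping, this keeps the whole brain inside the field of view, so the boundary gradient issue mentioned above does not arise.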
Would you mind letting me know if you make any progress? I apologize that I cannot locate the error.
I'm truly grateful for your help!
Now I'm training LapIRN with these data to see if it works. If it doesn't, I will try the data in the link next.
As for VoxelMorph, I tried it before and it did work. The differences between these data and the data I used for VoxelMorph are that I didn't normalize the VoxelMorph data and trained at another size (160×192×224). I will try downsampling instead of cropping, as you suggested.
I will let you know if I make any progress.
And thank you again!