Model training
wshi8 opened this issue · 4 comments
The loss curve doesn't seem to decrease much... Have you preprocessed the images using brats_processing.py?
I attached my training log for your reference.
Thank you so much! I did preprocess the data. One quick question: how was the patch size [128, 144, 80] selected? Is this the optimal one?
Do you mean output_size in brats_processing.py? There is actually a comment explaining that this setting doesn't really matter:
```python
# output_size is only used when do_localization=True.
# By default, do_localization is disabled. So the value here doesn't matter.
```
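For illustration only, here is a minimal sketch of how such a flag might gate the use of output_size; the helper name maybe_resize and the use of scipy are my own assumptions, not the actual code in brats_processing.py:

```python
import numpy as np
from scipy.ndimage import zoom

def maybe_resize(volume, do_localization=False, output_size=(128, 144, 80)):
    # Hypothetical illustration: output_size is consulted only when
    # do_localization is enabled; otherwise the volume passes through unchanged.
    if not do_localization:
        return volume
    factors = [t / s for t, s in zip(output_size, volume.shape)]
    return zoom(volume, factors, order=1)  # trilinear resize to output_size

vol = np.zeros((240, 240, 155), dtype=np.float32)
print(maybe_resize(vol).shape)                        # (240, 240, 155) -- untouched
print(maybe_resize(vol, do_localization=True).shape)  # (128, 144, 80)
```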
In train3d.py, I cropped the image to [112, 112, 96], which is somewhat important. For this size, I actually tried quite a few settings before deciding on it. The biggest limitation is RAM, so I can only choose H, W, D values that are around 100. H, W, and D have to be divisible by 16, and BraTS data are isotropic (same resolution along H, W, and D), so it's better to have similar H, W, and D. Thus I chose [112, 112, 96].
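As a rough illustration of the crop described above, here is a minimal NumPy sketch of a center crop to [112, 112, 96] with the divisibility-by-16 check; the helper name center_crop_3d is hypothetical, and train3d.py may well use random crops with augmentation instead:

```python
import numpy as np

def center_crop_3d(volume, crop_size=(112, 112, 96)):
    """Center-crop a 3D volume with trailing dims (H, W, D) to crop_size."""
    assert all(s % 16 == 0 for s in crop_size), "H, W, D must be divisible by 16"
    h, w, d = volume.shape[-3:]
    ch, cw, cd = crop_size
    h0 = (h - ch) // 2
    w0 = (w - cw) // 2
    d0 = (d - cd) // 2
    return volume[..., h0:h0 + ch, w0:w0 + cw, d0:d0 + cd]

# Example: a BraTS volume is 240 x 240 x 155 after standard preprocessing.
vol = np.zeros((240, 240, 155), dtype=np.float32)
print(center_crop_3d(vol).shape)  # (112, 112, 96)
```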
Thank you!