Better model selection
Closed this issue · 5 comments
wolny commented
When trying all available models, we should only run 2D or 3D models on matching data. The models can be selected automatically based on the patch_size or the input size. Currently, if the patch size is 3D, the 2D models just fail, and vice versa.
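A minimal sketch of the strict selection described above. The `[z, y, x]` patch convention (with `z == 1` meaning a single slice) and all names here are illustrative assumptions, not the actual PlantSeg API:

```python
def patch_dimensionality(patch_size):
    """Classify a patch as '2D' or '3D' from its shape.

    Assumed convention: patch_size is [z, y, x]; a 2-element patch
    or a z-extent of 1 counts as 2D.
    """
    if len(patch_size) == 2 or patch_size[0] == 1:
        return '2D'
    return '3D'
```

With this, the runner could skip any model whose dimensionality does not match `patch_dimensionality(patch_size)` instead of letting it fail.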
lorenzocerrone commented
I think it makes sense to only try 2D models on 2D inputs, but sometimes it might be useful to run 2D models on 3D inputs (e.g. if you have very coarse resolution in z, not enough z-slices, or a 2D+t stack).
wolny commented
Yeah, I agree with that. Then we should add some simple logic to handle it.
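The relaxed rule agreed above could look like this: 2D models are always offered (they can be applied slice-by-slice to a 3D stack), while 3D models are only offered for genuinely 3D patches. The `[z, y, x]` convention and every name below are assumptions for illustration:

```python
def allowed_model_dimensionalities(patch_size):
    """Return the set of model dimensionalities worth trying on this patch.

    2D models are always allowed (slice-wise application on 3D data);
    3D models only make sense when the patch spans multiple z-slices.
    """
    is_2d = len(patch_size) == 2 or patch_size[0] == 1
    return {'2D'} if is_2d else {'2D', '3D'}
```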
lorenzocerrone commented
Let's add more metadata to the model_zoo.yaml, something like:
```yaml
generic_confocal_3D_unet:
  model_url: https://zenodo.org/record/7768223/files/unet3d-arabidopsis-ovules-confocal-ds2x.pytorch
  resolution: [0.235, 0.150, 0.150]
  description: "Unet trained on confocal images of Arabidopsis Ovules on 1/2-resolution in XY with BCEDiceLoss."
  dimensionality: "3D"
  model_name: "UNet3D"
  modality: "confocal"
  recommended_patch_size: [40, 160, 160]
  output: "boundaries"
```
PS: we need to be careful with this, as it might break backward compatibility.
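One way to keep backward compatibility would be to treat entries without the new `dimensionality` key as legacy models and fall back to a default. This is a hedged sketch over an already-parsed zoo dict mirroring the YAML above; the `'3D'` fallback and all names are assumptions, not confirmed behaviour:

```python
def filter_by_dimensionality(zoo, wanted):
    """Return the subset of zoo entries matching `wanted` ('2D' or '3D').

    Entries missing the proposed 'dimensionality' key are assumed to be
    legacy 3D models so that existing zoo files keep working.
    """
    def dim(meta):
        return meta.get('dimensionality', '3D')  # assumed legacy default
    return {name: meta for name, meta in zoo.items() if dim(meta) == wanted}
```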
lorenzocerrone commented
TODOs:
- checkboxes for dimensionality
- dropdown for modality
- remove parsing of patch_size from the config
- dropdown for output
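The dropdown and checkbox choices in the TODO list could be derived from the zoo metadata instead of being hard-coded. A hypothetical helper (not the actual implementation in #149):

```python
def unique_values(zoo, field):
    """Collect sorted unique values of a metadata field across all models.

    E.g. unique_values(zoo, 'modality') yields the choices for the
    modality dropdown; unique_values(zoo, 'output') for the output one.
    """
    return sorted({meta[field] for meta in zoo.values() if field in meta})
```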
lorenzocerrone commented
fixed in #149