kreshuklab/plant-seg

LiftedMulticut error

Closed this issue · 5 comments

Hi all! First of all, congratulations on this amazing and user-friendly tool.

I have been trying the lifted_multicut protocol as described in the wiki:
https://github.com/hci-unihd/plant-seg/blob/master/plantseg/resources/nuclei_predictions_example.yaml
https://github.com/hci-unihd/plant-seg/blob/master/plantseg/resources/lifted_multicut_example.yaml

but when the program executes the segmentation module of the second step, it crashes with "Unsupported algorithm name LiftedMulticut".

Do I have to download or add this segmentation algorithm to the preinstalled PlantSeg program?

Thank you in advance.

Pedro

Hi Pedro, Thanks a lot!

Thanks for letting us know. We only recently added lifted_multicut, and there was a small bug in the pipeline-building script. The bug is fixed now, but we are working on some issues with the newest release.

I will give you an update when the new version is online.

Best,

Lorenzo

Hi Pedro,

The bug should be fixed now. You can test the workflow on the newest version of PlantSeg (1.1.8).

Best,

Lorenzo

Thank you so much Lorenzo!

I'll try it as soon as possible and provide my feedback in a few days. Thanks again. Let's keep in touch.

Best,
Pedro

Hi guys! I am really sorry for this delay. Days turned into weeks due to a high workload.

Anyway, I have tried the LiftedMulticut protocol and just realized that it seems only possible to run this pipeline on lightsheet microscopy images, but not on confocal images (my case). I tried mixing a lightsheet model for the nuclei with a generic confocal model for the membranes in this protocol, but I didn't get any output, i.e., no LiftedMulticut folder was produced. So that trick didn't work.

Do you have any idea where I am failing? (attached .yaml files as .txt)
liftedMulticut_1stRound_nuclei.txt
liftedMulticut_2ndRound_membranes.txt

Is there any pretrained nuclei model that would enable using LiftedMulticut on confocal images? Thanks in advance.

wolny commented

Hi @pedgomgal1,

apologies for the delayed response on this one, 2021 has been quite busy so far...

Anyway, I have tried the LiftedMulticut protocol and I have just realized that is only possible to execute this pipeline in lightsheet microscopy images, but do not in confocal images (my case)

Yes, the current pre-trained model for 3d nuclei was trained on lightsheet data, and one should not expect it to give high accuracy when applied to confocal stacks. However, it should still give you some prediction, so the final output from the LiftedMulticut should not be empty.

I've looked at your configs and found the following issues:

  • (MINOR) liftedMulticut_1stRound_nuclei.txt: everything after cnn_prediction should be disabled, so the state for cnn_postprocessing, segmentation, and segmentation_postprocessing should all be set to False. In your config, cnn_postprocessing is set to True.
  • (MAJOR) liftedMulticut_2ndRound_membranes.txt: the path attribute in the 1st line has to point to the raw files containing the boundary staining. In your case it points to the nuclei files /media/pedro/6TB/GPU_segmentation/plantSeg/Sqh-RNAi/newImages4_dapi+c2. The rest of the config looks correct!
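
The two fixes above can be sketched as config fragments. The keys follow the structure of the linked lifted_multicut_example.yaml; the membrane path below is a placeholder, not your actual directory:

```yaml
# --- liftedMulticut_1stRound_nuclei: first pass, nuclei predictions only ---
cnn_prediction:
  state: True          # keep the nuclei prediction step on
cnn_postprocessing:
  state: False         # was True in the attached config: disable it
segmentation:
  state: False         # everything after cnn_prediction stays off
segmentation_postprocessing:
  state: False

# --- liftedMulticut_2ndRound_membranes: path must point to the membranes ---
path: /path/to/membrane_raw_stacks   # placeholder: the boundary-staining
                                     # raw files, NOT the nuclei directory
```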

does it exist any pretrained nuclei model enabling to use liftedMulticut in confocal images?

we don't have any pre-trained model for 3d nuclei with DAPI staining, but if you have some ground-truth segmentation for your data, we're happy to guide you through the process of training your own network (see e.g. https://github.com/wolny/pytorch-3dunet#train).
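
For orientation, a rough sketch of what a pytorch-3dunet training config looks like; the key names follow the example configs shipped with that repo and may differ between versions, and all paths and hyperparameters here are illustrative:

```yaml
# Minimal sketch of a pytorch-3dunet train config (check the repo's
# example configs for the authoritative schema)
model:
  name: UNet3D
  in_channels: 1       # single-channel DAPI raw
  out_channels: 1      # foreground probability map for the nuclei
  f_maps: 32
  final_sigmoid: true
loaders:
  train:
    file_paths:
      - /path/to/train_h5    # HDF5 stacks with 'raw' and 'label' datasets
  val:
    file_paths:
      - /path/to/val_h5
```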

Good luck with your experiments!