andreped/NoCodeSeg

generic multi-class support?

andreped opened this issue · 4 comments

The current importTiles script does not seem to support importing patches with multiple labels. Adding multi-class support to this one is quite annoying, but it is something we could try to do early next week. Let us schedule a session.
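For context, the crux of multi-class import is that the script needs a mapping from the label values in the imported patches to QuPath classes, instead of assigning everything to a single hardcoded class. A minimal sketch of that idea in a QuPath Groovy script (the class names and label values below are placeholders for illustration, not anything defined in the repo):

```groovy
// Sketch only: a multi-class import needs a label-value -> PathClass mapping
// rather than assuming all foreground pixels belong to one class.
// The class names and label values below are placeholders.
def classMap = [
    1: getPathClass("Tumor"),
    2: getPathClass("Stroma"),
    3: getPathClass("Immune cells")
]
```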

Multi-class variants have been implemented for the other scripts (i.e., exportTiles and importPyramidalTIFF), but these have not been tested in various scenarios. Currently, they are split into separate scripts, one for single-class and one for multi-class, but I think the multi-class scripts might actually handle the single-class case as well. Could you run some checks in your pipeline to see whether the multi-class scripts are sufficient?
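On the export side at least, the single-class case should be the degenerate case of the multi-class one, since the label map is just the list of classes handed to QuPath's LabeledImageServer before tiles are written with TileExporter. A rough sketch of what such an export looks like, based on QuPath's standard API; the tile size, downsample, extension, and class names are placeholder values rather than the exact settings of the repo scripts:

```groovy
import qupath.lib.common.ColorTools
import qupath.lib.common.GeneralTools
import qupath.lib.images.servers.LabeledImageServer
import qupath.lib.images.writers.TileExporter

// Sketch only: all values below are placeholders, not the repo's exact settings.
def imageData = getCurrentImageData()
def name = GeneralTools.getNameWithoutExtension(imageData.getServer().getMetadata().getName())
def pathOutput = buildFilePath(PROJECT_BASE_DIR, 'tiles', name)
mkdirs(pathOutput)

double downsample = 4.0  // export resolution relative to full resolution

// Label image rendered from the (multi-class) annotations
def labelServer = new LabeledImageServer.Builder(imageData)
    .backgroundLabel(0, ColorTools.WHITE)  // pixels outside any annotation
    .downsample(downsample)
    .addLabel('Tumor', 1)        // with only this line, you get the single-class case
    .addLabel('Stroma', 2)       // add more lines for additional classes
    .multichannelOutput(false)   // one label value per class in a single channel
    .build()

// Write paired image/label tiles that can be used for training
new TileExporter(imageData)
    .downsample(downsample)
    .imageExtension('.png')
    .tileSize(512)
    .labeledServer(labelServer)
    .annotatedTilesOnly(true)    // skip tiles without any annotation
    .writeTiles(pathOutput)

print 'Done!'
```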

Have you had the time to test the two scripts yet, @SahPet? We just need to verify that they work for both single- and multi-class scenarios, preferably in your workflow.

@pr4deepr I just made a script for importing predictions from MIB into QuPath directly, which simplifies the active learning pipeline. FP is then not needed, but for the final model I would still encourage using FP for deployment, as it is better suited for larger images, supports a broader range of architectures, and can handle a wider range of tasks, even simultaneously (multi-task models).

For you, that means you can annotate multiple classes in QuPath, export these as tiles (using this script), import them in MIB and train a multi-class model, then run predictions on full WSIs in MIB, which produces stitched prediction images (somewhat similar to what is done in FP, but not pyramidal), and finally import these predictions into QuPath (using this script). If you do the FP workaround instead, you can import predictions into QuPath (using this script).
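Once the stitched predictions have been imported back into QuPath, each class behaves like a regular QuPath annotation class, so you can review or measure them per class as usual. A small sketch of such a post-import sanity check (the class names are placeholders; how classes are actually assigned depends on the import script):

```groovy
// Sketch only: count imported prediction annotations per class.
// The class names below are placeholders.
def annotations = getAnnotationObjects()
['Tumor', 'Stroma'].each { className ->
    def n = annotations.count { it.getPathClass() == getPathClass(className) }
    println "${className}: ${n} imported objects"
}
```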

@SahPet is going to make a tutorial video for the multi-class scenario that showcases the full pipeline. I will let you know when that video is published :] @SahPet could perhaps add a comment here with the URL when it is published?

But overall, that means the pipeline is ready to be used. It should just work. If you are experiencing any issues, please let us know and we will do our best to support you.

Thanks for this @andreped. This is super useful!
It will speed up a lot of things. We are still in the data generation process. Will let you know how we go...
Again, I appreciate your support and how responsive you guys have been. I wish more open-source software were like this!

> We are still in the data generation process

@pr4deepr Remember that data generation, preprocessing, and cleaning are 90% of ML, so that is to be expected ;)