comic/evalutils

Add models to the algorithm templates

jmsmkn opened this issue · 4 comments

Request from Erdi:

it would also be super cool if I could upload a tensorflow/pytorch model directly?

Maybe we can do something with ONNX?

I don't have experience with ONNX, but my experiences with MMDNN have not been great. I think supporting TensorFlow and PyTorch directly would be fairly easy.

I think the first step would be to add support in evalutils by providing a place to drop your model in the templated repo. Then we can see how it works in practice and integrate it if it holds up.

@jmsmkn That's a good suggestion. As you say, it should be easy to support just PyTorch and TensorFlow models for now. I could create a quick mockup for PyTorch when I find some time. I guess we have to think carefully about the interface for the template.
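One possible shape for that template interface, sketched below: the templated repo reserves a directory where the user drops a single model file, and the algorithm picks it up at startup. The directory path, class name, and supported extensions here are all hypothetical, just to make the idea concrete.

```python
from pathlib import Path


class ModelAlgorithm:
    """Hypothetical sketch: an algorithm template that discovers a
    dropped-in model file at a fixed location in the repo."""

    # Hypothetical location where the template tells users to put their model
    MODEL_DIR = Path("/opt/algorithm/model")

    def __init__(self):
        self.model_path = self._find_model()

    def _find_model(self) -> Path:
        # Accept exactly one model file with a recognised extension
        candidates = (
            sorted(self.MODEL_DIR.iterdir()) if self.MODEL_DIR.exists() else []
        )
        supported = [p for p in candidates if p.suffix in {".onnx", ".pt", ".pb"}]
        if len(supported) != 1:
            raise RuntimeError(
                f"Expected exactly one model file in {self.MODEL_DIR}, "
                f"found {len(supported)}"
            )
        return supported[0]
```

The single-file convention keeps the template framework-agnostic: the extension alone tells the loader whether to hand the file to ONNX Runtime, PyTorch, or TensorFlow.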

It would be really good to know whether ONNX could be used, so that we do not have to maintain support for every framework separately: https://onnx.ai/supported-tools.html#buildModel

Ok, let me do some research on ONNX first.

We now use ONNX Runtime (CPU only) for bodyct-multiview-nodule-detection and this works fine. Model/weights conversion from PyTorch to ONNX format is also very easy to do (probably very similar for TensorFlow). We haven't tested it on GPUs yet.

The only caveat I found for using ONNX Runtime (CPU mode) on grand-challenge is that you must explicitly specify the CPU affinities, since it has no permissions for automatic affinity resolution there. See the following code for creating an onnxruntime.InferenceSession from an ONNX model file, which accounts for this:
https://github.com/DIAGNijmegen/bodyct-multiview-nodule-detection/blob/7a6fd7e0590eeeeecf4cee2afa032e0bdeeaeeff/packages/onnxruntime_utils.py