lescientifik/open_brats2020

How to add in new metrics to judge performance while training?

Harsh-Gill opened this issue · 6 comments

UPDATED QUESTION:

How can I log the Hausdorff distance together with the Dice score while validating on the folds? It only logs Dice, and I am not sure how to log both Hausdorff and Dice for the three labels.

The 'step' function in train.py has a metric argument, which is a callable in charge of computing the metric you want. I would start here, using the directed_hausdorff function from SciPy as a base.

As a side note, keep in mind that the Hausdorff distance takes quite some time to compute, which is why at some point I removed it from my training loop.

Hope it helps.

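Putting that advice together, a metric callable along these lines could log both scores per label. This is a minimal sketch under assumptions: the function name `dice_and_hausdorff` is made up, and the exact shapes `step` passes to its metric argument (assumed here to be binary arrays of shape (batch, n_labels, ...)) should be checked against train.py.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_and_hausdorff(preds, targets):
    """Per-label Dice and symmetric Hausdorff distance.

    Assumes binary arrays of shape (batch, n_labels, ...); torch
    tensors would first need .detach().cpu().numpy().
    """
    preds = np.asarray(preds, dtype=bool)
    targets = np.asarray(targets, dtype=bool)
    scores = []
    for b in range(preds.shape[0]):            # batch
        for c in range(preds.shape[1]):        # label channel (e.g. ET/TC/WT)
            p, t = preds[b, c], targets[b, c]
            inter = np.logical_and(p, t).sum()
            dice = 2.0 * inter / (p.sum() + t.sum() + 1e-7)
            p_coords = np.argwhere(p)          # masks -> point coordinates
            t_coords = np.argwhere(t)
            if len(p_coords) and len(t_coords):
                # directed_hausdorff is one-sided; the max over both
                # directions gives the usual symmetric distance.
                hd = max(directed_hausdorff(p_coords, t_coords)[0],
                         directed_hausdorff(t_coords, p_coords)[0])
            else:
                hd = np.nan                    # undefined for empty masks
            scores.append({"label": c, "dice": dice, "hausdorff": hd})
    return scores
```

The NaN fallback matters in BraTS-style data, where a label can be entirely absent from a patient.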

Thank you very much and great code base! Very much appreciate your response.

I figured the step function's metric argument was it, but had some trouble making Hausdorff a part of it. Still, I'll try to add other metrics.

I want to ask: when you removed its computation from your training loop, did you decide it is not a useful metric to visualize for your model, or is Dice alone enough in your opinion?

Thank you again!

I left it only in the final evaluation after the training is complete.

Of course it is important, but I felt experimenting more in the short timespan of the competition was a better trade-off. In the end, I focused too much on the Dice and not enough on the Hausdorff...

I've been trying to implement the Hausdorff distance all day, but still without success.

I am now trying to add the Hausdorff computation into this part:

if not model.training:
metric_ = metric(segs, targets)

but using
from scipy.spatial.distance import directed_hausdorff

metric_2 = directed_hausdorff(segs, targets)

This doesn't work, but is my approach impossible?

That's not how you use the scipy function.

There is an example usage in my utils.py file.

Most relevant part for you:

preds_coords = np.argwhere(preds[i])
targets_coords = np.argwhere(targets[i])
haussdorf_dist = directed_hausdorff(preds_coords, targets_coords)[0]
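For reference, directed_hausdorff operates on (n_points, n_dims) coordinate arrays, not on the masks themselves, which is why passing segs and targets directly fails. A toy sketch with made-up masks, showing the mask-to-coordinates step and the symmetric combination of the two one-sided distances:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Two toy binary masks standing in for one label channel of preds/targets.
pred = np.zeros((8, 8), dtype=bool)
target = np.zeros((8, 8), dtype=bool)
pred[2:5, 2:5] = True
target[3:6, 3:6] = True

# Convert masks to (n_points, n_dims) coordinate arrays first.
pred_coords = np.argwhere(pred)
target_coords = np.argwhere(target)

# directed_hausdorff is asymmetric; take the max of both directions
# for the usual symmetric Hausdorff distance.
d_pt = directed_hausdorff(pred_coords, target_coords)[0]
d_tp = directed_hausdorff(target_coords, pred_coords)[0]
hd = max(d_pt, d_tp)
```

Here the two squares are offset by one pixel on each axis, so both directed distances, and hence hd, equal sqrt(2).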

@Harsh-Gill
Have you resolved this issue since then? If so, please let me know the steps you applied to get the HD scores.