open-mmlab/mmengine

[Feature] Log metrics in test mode


What is the feature?

I just noticed that when running mmseg with a pretrained model on a test set to evaluate its performance, the final metrics are not logged to my vis backend (MLflow).

I was exploring the source and noticed that the LoggerHook class is the one in charge of dumping metrics during training and eval.

I was wondering whether there is any reason why runner.visualizer.add_scalars() is not called in after_test_epoch.

Any other context?

No response

@HAOCHENYE any news? Just trying to figure out if I should patch this locally or send a PR here

@HAOCHENYE would it be possible to get some feedback on this?

I also need an update on this!

Sorry for the late response. The reason add_scalars is not called in after_test_epoch is that the test set typically does not have ground truth, so we usually only calculate metrics and statistics on the validation set.

That is a somewhat valid point, but if I'm explicitly running a test.py script I would expect the test metrics to be logged.

I see... So is the plan to assume that the test set does not have ground truth, or should we find a way to compute and log the metrics when ground truth is present?

If ground truth is not present, do we even have metrics? I'm using this with MMagic, and in generation we also don't have ground truth, but we mainly compute metrics by comparing test features (generated samples) with train features... I believe that whenever metrics are calculated during testing they should also be added to the visualizer.

Visualizer is a globally accessible variable: you can get it anywhere with visualizer = Visualizer.get_current_instance() and then call interfaces like visualizer.add_scalar() to record whatever information you want. You can implement this in a custom hook, or in any other place you like (maybe model.xxx, metric.xxx ...).
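
Here is a minimal sketch of such a custom hook, assuming it is registered through MMEngine's HOOKS registry and enabled via custom_hooks in the config; the hook name LogTestMetricsHook and the choice of runner.iter as the step are just illustrative, not an official API:

```python
# Minimal sketch of a custom hook that forwards test metrics to the
# visualizer (and therefore to every configured vis backend, e.g. MLflow).
from typing import Dict, Optional

from mmengine.hooks import Hook
from mmengine.registry import HOOKS


@HOOKS.register_module()
class LogTestMetricsHook(Hook):
    """Record the metrics computed at the end of testing via the visualizer."""

    def after_test_epoch(self, runner,
                         metrics: Optional[Dict[str, float]] = None) -> None:
        if not metrics:
            return
        # Keep only plain int/float values; skip any non-scalar entries
        # an evaluator might return.
        scalars = {k: v for k, v in metrics.items()
                   if isinstance(v, (int, float))}
        if scalars:
            runner.visualizer.add_scalars(scalars, step=runner.iter)
```

Enabling it should then just be a matter of adding custom_hooks = [dict(type='LogTestMetricsHook')] to the config used by test.py.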