pangeo-data/climpred

Implementing mean error as a verification metric?

Closed this issue · 1 comment

Is your feature request related to a problem? Please describe.
First, thanks for putting together this package! It has been very useful. One metric I have not seen implemented is the plain mean error (not normalized or absolute) between an ensemble mean of forecasts (or individual ensemble members) and the obs/verification. Maybe I missed something...?

Describe the solution you'd like
xskillscore has a "me" function for mean error, but it isn't imported into metrics.py. Adding this metric to the package might be useful. Or do you have other quick suggestions on how to do that?

Describe alternatives you've considered
One could manually code the error difference, but for large ensembles and multiple initializations that becomes a little cumbersome (see the sketch below). For the time being, I modified metrics.py to add '__me' as a metric, along with importing it from xskillscore. You may have nicer ways to do it, but I am trying this workaround for now.
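
For reference, the manual computation is just a subtraction and a mean. A minimal sketch with plain xarray and xskillscore; all variable and dimension names here are illustrative:

```python
# Mean error (bias) between the ensemble-mean forecast and observations,
# computed by hand and via xskillscore. Names/dims are toy assumptions.
import numpy as np
import xarray as xr
import xskillscore as xs

forecast = xr.DataArray(
    np.random.randn(10, 4, 50),  # toy data: (init, member, space)
    dims=["init", "member", "space"],
)
obs = xr.DataArray(np.random.randn(10, 50), dims=["init", "space"])

# By hand: mean error = mean(forecast - obs) over the chosen dimension.
me_manual = (forecast.mean("member") - obs).mean("init")

# Equivalently with xskillscore's built-in mean-error function.
me_xs = xs.me(forecast.mean("member"), obs, dim="init")
```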

Thanks again for everything!

True. Forgot to implement. Happy to receive a PR on that.

In the meantime you can wrap xs.me yourself; see user-defined metrics in https://climpred.readthedocs.io/en/stable/metrics.html
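
A minimal sketch of that wrapping, following the user-defined metrics pattern in the linked docs. The Metric keyword arguments and the verify() call shown here are assumptions based on that pattern, so check the docs for the current signature:

```python
# Wrap xskillscore's mean error as a climpred user-defined metric.
import xskillscore as xs
from climpred.metrics import Metric

def _me(forecast, verif, dim=None, **metric_kwargs):
    """Mean error (bias): mean(forecast - verif) over `dim`."""
    return xs.me(forecast, verif, dim=dim, **metric_kwargs)

me = Metric(
    name="me",
    function=_me,
    positive=False,       # error metric: a perfect score is 0
    probabilistic=False,  # deterministic, applied to the ensemble mean
    unit_power=1,         # result keeps the units of the variable
    long_name="Mean Error",
    aliases=["bias"],
)

# Then pass the Metric object to the verification call, e.g.:
# hindcast.verify(metric=me, comparison="e2o", dim="init",
#                 alignment="same_verifs")
```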