RobustiPy/robustipy

Different OOS evaluation metrics, user specified

Opened this issue · 3 comments

Pseudo-R2 is the natural candidate, as for binary outcomes it applies to predicted probabilities rather than predicted categories, and it is nicely interpretable. Ideally we'd offer a suite of metrics that work for both binary and continuous DVs, but we could have a flag that checks the specified option, warns, and reverts to a default if the option isn't amenable to the outcome type.
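The check-warn-revert flag could look something like the sketch below. All names here (`resolve_metric`, the metric keys, the default choices) are hypothetical and not RobustiPy's actual API:

```python
import warnings

import numpy as np

# Hypothetical metric keys for illustration; the real option names may differ.
BINARY_METRICS = {"pseudo_r2", "rmse", "cross_entropy"}
CONTINUOUS_METRICS = {"rmse"}


def resolve_metric(metric, y):
    """Check that the requested OOS metric suits the DV; warn and
    fall back to a sensible default if it does not."""
    is_binary = set(np.unique(y)).issubset({0, 1})
    allowed = BINARY_METRICS if is_binary else CONTINUOUS_METRICS
    if metric not in allowed:
        default = "pseudo_r2" if is_binary else "rmse"
        warnings.warn(
            f"Metric '{metric}' is not suitable for this outcome type; "
            f"reverting to '{default}'."
        )
        return default
    return metric
```

This keeps the user-facing behaviour forgiving: an unsuitable choice produces a warning rather than an error, and the run still completes with the default.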

Note to self: "OOS" here means Out-of-Sample Evaluation Metric.

Metrics to include:

  1. Default to pseudo-R2.
  2. Include RMSE.
  3. Include cross-entropy loss.

Include in the results object a string that can be used as the x-axis label of the out-of-sample plot (top right).
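One lightweight way to do this is a mapping from metric key to a plot-ready label, with the resolved string stored on the results object; the keys and labels below are placeholders, not RobustiPy's actual names:

```python
# Hypothetical metric-key -> axis-label mapping; the plotting routine
# would read the resolved string off the results object.
OOS_METRIC_LABELS = {
    "pseudo_r2": "Out-of-sample pseudo-$R^2$",
    "rmse": "Out-of-sample RMSE",
    "cross_entropy": "Out-of-sample cross-entropy loss",
}


def oos_axis_label(metric):
    """Fall back to the raw metric key for metrics without a label yet."""
    return OOS_METRIC_LABELS.get(metric, metric)
```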

I've updated the code in b97259d to fix the x-axis tick label of subfigure 'c.', but I'd like to leave this open if possible, because I think we still need more than two or three metric choices (I would like to get the IMV into this, for example).