Scoring and evaluation for continuous outcomes
shaddyab commented
Q1)
Given that, for a continuous outcome, the theoretical max (i.e., q1_) and practical max (i.e., q2_) curves are not well defined and will not be correct, only the following six metrics can be used to evaluate the model (see the usage sketch after this list). Is this correct?
- Q_cgains
- Q_aqini
- Q_qini
- max_cgains
- max_aqini
- max_qini
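
For reference, a minimal sketch of how those six scores could be pulled out with get_scores, matching the call shape shown in the _score snippet under Q2. The dummy arrays, and the assumption that the 'Q_' and 'max_' variants appear as keys in the returned dict, are mine rather than from the library docs:

```python
import numpy as np
from pylift.eval import get_scores  # assumed import path, mirroring base.py

rng = np.random.RandomState(0)
treatment = rng.binomial(1, 0.5, size=1000)   # binary treatment indicator
outcome = rng.normal(size=1000)               # continuous outcome
prediction = rng.uniform(size=1000)           # predicted uplift scores
p = np.full(1000, 0.5)                        # treatment probability

for method in ('cgains', 'aqini', 'qini'):
    scores = get_scores(treatment, outcome, prediction, p,
                        scoring_range=(0, 1), plot_type=method)
    # Only the 'Q_' and 'max_' variants should be trusted here, since the
    # 'q1_'/'q2_' curves are not well defined for a continuous outcome.
    print('Q_' + method, scores['Q_' + method])
    print('max_' + method, scores['max_' + method])
```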
Q2)
Based on line 205 of base.py:

```python
score_name = 'q1_'+method
```

and the _score function in base.py:
```python
def _score(self, y_true, y_pred, method, plot_type, score_name):
    """Scoring function to be passed to make_scorer."""
    treatment_true, outcome_true, p = self.untransform(y_true)
    scores = get_scores(treatment_true, outcome_true, y_pred, p,
                        scoring_range=(0, self.scoring_cutoff[method]),
                        plot_type=plot_type)
    return scores[score_name]
```
it appears that three of the scoring methods available for grid search ('q1_qini', 'q1_cgains', 'q1_aqini') should not be used with continuous outcomes. If this is indeed the case, I would suggest fixing the issue using the continuous_outcome argument that is already available: when the outcome is continuous, the 'q1_' scores could be replaced with the corresponding 'Q_' scores. A sketch of this follows.
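
A hedged sketch of the suggested change around line 205; the attribute name self.continuous_outcome is my assumption for how the existing constructor argument would be stored on the instance:

```python
# Sketch only: assumes the continuous_outcome argument is stored on the
# instance as self.continuous_outcome.
if getattr(self, 'continuous_outcome', False):
    # 'q1_' curves are undefined for continuous outcomes, so fall back
    # to the corresponding 'Q_' score.
    score_name = 'Q_' + method
else:
    score_name = 'q1_' + method
```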