Add probability calibration to the classifier outputs
sabinthomas opened this issue · 1 comment
sabinthomas commented
classify() needs to implement a threshold mechanism for classification errors. An error is a condition where the labels and probabilities are inconclusive, and a reliable match cannot be obtained.
One way around this is to compute a priorProbabilities classification up front, and then compare every getClassification result against the value of this priorProbabilities.
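A minimal sketch of that idea, assuming the classifier exposes per-label posterior probabilities (the function and parameter names here are hypothetical, not part of this library's API): compare the winning label's posterior against its empirical prior and abstain when the difference is below a margin.

```python
from collections import Counter

def priors(labels):
    """Empirical prior probability of each label in the training set."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def classify_with_reject(posteriors, prior_probs, margin=0.05):
    """Return the best label, or None (an inconclusive 'error') when its
    posterior does not beat the label's prior by at least `margin`."""
    best_label, best_p = max(posteriors.items(), key=lambda kv: kv[1])
    if best_p < prior_probs.get(best_label, 0.0) + margin:
        return None  # posterior barely above (or below) the prior: abstain
    return best_label
```

With priors computed from `['spam', 'ham', 'ham', 'ham']`, a posterior of `{'spam': 0.9, 'ham': 0.1}` comfortably beats the spam prior of 0.25 and is accepted, while `{'spam': 0.2, 'ham': 0.8}` with a 0.1 margin fails to clear the ham prior of 0.75 and is rejected.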
DrDub commented
What you describe seems more in line with application code; it is beyond what a classifier is or does.
But working out confidence levels on the predictions is a direction ML packages are moving towards: http://scikit-learn.org/stable/modules/calibration.html
I'm retitling this and labeling it a feature enhancement.