JCPC: A Jeffrey's Rule-Based Approach to Calibrating Classifier Probabilities
Abstract
In many critical applications, machine learning models must not only predict the class label accurately, but also provide the probability that the prediction is correct. This probability determines whether or not the prediction can be trusted. In this paper, we present a novel approach for calibrating the probabilities of machine learning models via a post-processing step. The starting point of this work is the observation that calibration tends to be better on a small number of categories, i.e., subsets of classes, than on a large number of classes. Our proposed calibration approach, named JCPC, is based on probabilistic belief update and calibrates the predicted probabilities on classes using the predicted probabilities on categories. Our experimental study on many datasets and machine learning models shows very promising results.
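The abstract does not spell out the update step, but the idea of revising class probabilities from calibrated category probabilities corresponds to Jeffrey's rule of conditioning. The sketch below is a minimal illustration under the assumption that the classes are partitioned into categories and that calibrated category probabilities are already available (e.g., from a standard calibrator applied to the coarser task); the function and variable names (`jeffrey_update`, `class_to_cat`, `q_cat`) are illustrative and not taken from the paper.

```python
import numpy as np

def jeffrey_update(p_class, class_to_cat, q_cat):
    """Jeffrey's rule on a class partition: P'(c) = q(cat(c)) * P(c) / P(cat(c)).

    p_class      : model's predicted class probabilities
    class_to_cat : index of the category each class belongs to
    q_cat        : calibrated probabilities on the categories
    """
    p_class = np.asarray(p_class, dtype=float)
    class_to_cat = np.asarray(class_to_cat)
    q_cat = np.asarray(q_cat, dtype=float)
    # Mass of each category under the model's original class probabilities.
    p_cat = np.array([p_class[class_to_cat == k].sum() for k in range(len(q_cat))])
    # Scale each class by the ratio of calibrated to original category mass.
    ratio = np.where(p_cat > 0, q_cat / np.clip(p_cat, 1e-12, None), 0.0)
    return p_class * ratio[class_to_cat]

# Toy example: 4 classes grouped into 2 categories.
p = [0.50, 0.20, 0.20, 0.10]          # predicted class probabilities
groups = [0, 0, 1, 1]                 # class -> category mapping
q = [0.60, 0.40]                      # calibrated category probabilities
print(jeffrey_update(p, groups, q))   # sums to 1; within-category ratios preserved
```

Note that this update keeps the relative probabilities of classes within a category unchanged and only redistributes mass across categories, which is the defining property of Jeffrey's rule; how JCPC obtains the calibrated category probabilities is described in the body of the paper.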