Deep Conformal Prediction for Robust Models
Abstract
Deep networks, like other learning models, can assign high confidence to unreliable predictions.
Making these models robust and reliable is therefore essential, especially for critical decisions.
This experimental paper shows that the conformal prediction approach of [Hechtlinger et al.
(2018)] offers a convincing solution to this challenge. Conformal prediction consists in
predicting a set of classes that covers the true class with a user-specified frequency. For
atypical examples, the conformal predictor outputs the empty set. Experiments demonstrate the
good behavior of the conformal approach, especially on noisy data.
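As a rough illustration of the set-valued mechanism described above, the Python sketch below implements a generic class-conditional conformal classifier. It is a simplified sketch, not the exact procedure of Hechtlinger et al. (2018); the use of softmax scores, the function name conformal_prediction_sets, and all variable names are illustrative assumptions.

    import numpy as np

    def conformal_prediction_sets(cal_scores, cal_labels, test_scores, alpha=0.1):
        """Return one prediction set (a list of class indices) per test example.

        cal_scores:  (n_cal, n_classes) conformity scores (e.g. softmax outputs)
                     on held-out calibration data
        cal_labels:  (n_cal,) true labels of the calibration examples
        test_scores: (n_test, n_classes) scores on the test examples
        alpha:       miscoverage level; sets aim to cover the true class with
                     frequency at least 1 - alpha
        """
        n_classes = cal_scores.shape[1]
        prediction_sets = []
        for s in test_scores:
            kept = []
            for c in range(n_classes):
                # Class-conditional p-value: rank the test score for class c
                # among calibration scores of examples whose true label is c.
                ref = cal_scores[cal_labels == c, c]
                p_value = (np.sum(ref <= s[c]) + 1) / (len(ref) + 1)
                if p_value > alpha:
                    kept.append(c)
            # If every class is rejected, the set is empty: this is how
            # atypical examples are flagged.
            prediction_sets.append(kept)
        return prediction_sets

    # Toy usage with random softmax-like scores over 3 classes.
    rng = np.random.default_rng(0)
    cal_scores = rng.dirichlet(np.ones(3), size=100)
    cal_labels = rng.integers(0, 3, size=100)
    test_scores = rng.dirichlet(np.ones(3), size=5)
    print(conformal_prediction_sets(cal_scores, cal_labels, test_scores, alpha=0.1))

A test example whose score for every class ranks below the alpha-quantile of the corresponding calibration scores receives the empty set, matching the behavior on atypical examples described in the abstract.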