Rectifier pour mieux distiller (Rectify to Distill Better)
In EGC 2024, vol. RNTI-E-40, pp. 59-70
Abstract
In this paper we present an approach to distilling boosted trees into decision trees. Such a distillation process aims to derive an ML model offering an acceptable trade-off between accuracy and interpretability. We explain how a recently introduced approach to correcting binary classifiers, called rectification, can be used to implement such a distillation process. We show empirically that this approach yields interesting results compared to distillation by re-training.
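The abstract mentions distillation by re-training as the baseline the proposed approach is compared against. A minimal sketch of that baseline follows: a boosted-tree teacher is distilled into a single decision tree by re-training the tree on the teacher's predicted labels rather than the ground truth. All concrete choices here (scikit-learn estimators, synthetic dataset, tree depth) are illustrative assumptions, not the paper's exact experimental setup.

```python
# Distillation by re-training: fit an interpretable student tree on the
# predictions of a boosted-tree teacher. Illustrative sketch only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Teacher: an accurate but hard-to-interpret boosted-tree ensemble.
teacher = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Student: a shallow decision tree trained to mimic the teacher by
# fitting the teacher's predictions instead of the true labels.
student = DecisionTreeClassifier(max_depth=5, random_state=0)
student.fit(X_train, teacher.predict(X_train))

# Fidelity: how often the student agrees with the teacher on held-out data.
fidelity = accuracy_score(teacher.predict(X_test), student.predict(X_test))
accuracy = accuracy_score(y_test, student.predict(X_test))
print(f"fidelity to teacher: {fidelity:.2f}, test accuracy: {accuracy:.2f}")
```

The trade-off the abstract refers to is visible here: the student is a single shallow tree (readable as a small set of rules), and distillation quality can be measured both by fidelity to the teacher and by accuracy on the true labels.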