Stratégies coalitionnelles pour une explication efficace des prédictions individuelles (Coalitional strategies for an efficient explanation of individual predictions)
In EGC 2022, vol. RNTI-E-38, pp. 395-402
Abstract
As Machine Learning (ML) is now widely applied in many domains, in both research and industry, the need to understand how black-box algorithms work has grown, especially among non-experts. Several approaches have thus been developed to provide clear insights into a model's prediction for a particular observation, but at the cost of long computation times or restrictive hypotheses that do not fully account for interactions between attributes. This paper presents methods based on the detection of relevant groups of attributes (named coalitions) that influence a prediction. Our results show that these coalitional methods outperform existing ones such as SHapley Additive exPlanations (SHAP): computation time is shortened while an acceptable accuracy of individual prediction explanations is preserved. This enables wider practical use of explanation methods, increasing trust between developed ML models, end-users, and anyone impacted by a decision in which these models played a role.
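The abstract contrasts coalition-based explanations with SHAP, which attributes a prediction to individual features via Shapley values: each feature's average marginal contribution over all coalitions of the remaining features. As a minimal illustrative sketch (not the paper's method), the exact Shapley computation for a hypothetical toy model with an interaction between two attributes looks like this:

```python
from itertools import combinations
from math import factorial

def shapley_values(value, features):
    """Exact Shapley attribution: average each feature's marginal
    contribution value(S ∪ {f}) - value(S) over all coalitions S
    of the other features, with the usual combinatorial weights."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coal in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(coal) | {f}) - value(set(coal)))
        phi[f] = total
    return phi

# Hypothetical toy "model": the prediction is the sum of active feature
# effects, plus an interaction bonus when x1 and x2 appear together --
# exactly the kind of attribute interaction the abstract mentions.
effects = {"x1": 2.0, "x2": 1.0, "x3": 0.5}

def value(coalition):
    v = sum(effects[f] for f in coalition)
    if {"x1", "x2"} <= coalition:
        v += 1.0  # interaction term, shared equally between x1 and x2
    return v

phi = shapley_values(value, ["x1", "x2", "x3"])
# The interaction bonus is split between x1 and x2, and the attributions
# sum to the full prediction (the Shapley "efficiency" property).
```

The exhaustive loop over coalitions is exponential in the number of attributes, which is exactly the cost that sampling-based SHAP and the coalition-detection strategies described here aim to reduce.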