Représentation des poids d'auto-attention sous forme de graphe pour l'évaluation des Transformers (Graph-based representation of self-attention weights for evaluating Transformers)
In EGC 2025, vol. RNTI-E-41, pp. 351-358
Abstract
Transformers have revolutionized sequential data processing but lack explainability, which is particularly problematic in regulated fields such as healthcare. We introduce a graph-based visualization of attention learning, together with a metric for validating learned connections against ground truth. Experiments on Behrt, a diagnosis-prediction model, show how the method reveals inter-diagnosis relationships and dataset biases.
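The abstract describes two ingredients: turning self-attention weights into a graph over tokens (here, diagnosis codes) and scoring the learned edges against known relations. The following minimal sketch illustrates the general idea only; the thresholding rule, the Jaccard-style overlap score, and all code names are illustrative assumptions, not the paper's actual definitions.

```python
# Hypothetical sketch: build a graph from a self-attention weight matrix
# and score its edges against a ground-truth relation set.
# Threshold, symmetrisation, and the overlap metric are illustrative choices.

def attention_to_edges(weights, labels, threshold=0.2):
    """Keep token pairs whose symmetrised attention weight exceeds a threshold."""
    edges = set()
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            w = (weights[i][j] + weights[j][i]) / 2  # symmetrise the two directions
            if w >= threshold:
                edges.add((labels[i], labels[j]))
    return edges

def edge_overlap_score(predicted, ground_truth):
    """Jaccard overlap between learned and reference edge sets (a stand-in metric)."""
    if not predicted and not ground_truth:
        return 1.0
    return len(predicted & ground_truth) / len(predicted | ground_truth)

# Toy example: three diagnosis codes and a made-up attention matrix.
codes = ["E11", "I10", "J45"]  # diabetes, hypertension, asthma (illustrative)
attn = [[0.6, 0.3, 0.1],
        [0.4, 0.5, 0.1],
        [0.1, 0.1, 0.8]]
learned = attention_to_edges(attn, codes)
truth = {("E11", "I10")}  # hypothetical known comorbidity link
print(learned)                             # {('E11', 'I10')}
print(edge_overlap_score(learned, truth))  # 1.0
```

A score near 1 would indicate that the attention graph recovers the reference relations; systematic extra edges could instead point to dataset biases of the kind the abstract mentions.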