Fine-Tuning Large Language Models (LLMs) for Entity Alignment in Knowledge Graphs (KGs)
In EGC 2025, vol. RNTI-E-41, pp. 311-318
Abstract
Finding similar entities across diverse and heterogeneous data sources in knowledge graphs (KGs) remains a major challenge. The emergence of LLMs has opened new research opportunities, and fine-tuning LLMs has been rapidly adopted thanks to their ability to specialize in specific tasks. The core challenge lies in capturing subtle linguistic, syntactic, and semantic similarities between entities. In this paper, we propose a fine-tuning approach for GPT-2 and BERT that addresses the generalization of entity alignment (EA) across multiple datasets with a single model. Additionally, we introduce a protocol based on the Kolmogorov-Arnold Network (KAN) to overcome the limitations of LLMs regarding interpretability, redundancy, and computational cost. Our evaluations demonstrate that the fine-tuned GPT-2 model significantly outperforms BERT and KAN on entity alignment tasks, offering better performance and reliability.
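The abstract does not specify how entity pairs are encoded for fine-tuning. As an illustration only, the minimal sketch below assumes entity alignment is framed as binary classification over pairs of textual entity descriptions, using the Hugging Face transformers API with BERT; the toy pairs and all hyperparameters are placeholders, not the paper's setup.

```python
# Minimal sketch (assumption, not the paper's method): entity alignment as
# binary pair classification with a fine-tuned BERT encoder. Entities are
# serialized as short text descriptions; aligned pairs get label 1, else 0.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # GPT-2 fits the same API but needs a pad token set

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy training pairs: (description from KG1, description from KG2, label)
pairs = [
    ("Paris, capital of France", "Paris (ville), France", 1),
    ("Paris, capital of France", "Berlin, capital of Germany", 0),
]

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for desc1, desc2, label in pairs:
    # Encode the pair jointly so self-attention can compare both descriptions
    inputs = tokenizer(desc1, desc2, return_tensors="pt", truncation=True)
    loss = model(**inputs, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

At inference time, the same model would score candidate pairs and the highest-scoring match per entity would be taken as its alignment; this pairwise framing is one common way to cast EA for sequence classifiers, chosen here purely for illustration.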