A Study of the Use of Language Models for Natural-Language Question Answering over Knowledge Graphs
Abstract
In this article, we present the results of an in-depth study of the performance of large language models (LLMs) on question answering over knowledge graphs (KGQA). The experimental methodology compared two approaches: generation of SPARQL queries and direct question answering. On the QALD-10 benchmark, the first approach performed very poorly, while the second achieved fair results, with substantial variation across question and answer types.
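To make the two approaches concrete, the following minimal Python sketch (not the authors' code) contrasts them. The helper ask_llm is hypothetical and stands in for any chat-completion call; the assumption that queries target Wikidata follows from QALD-10 being a Wikidata-based benchmark.

    def ask_llm(prompt: str) -> str:
        """Hypothetical wrapper around an LLM completion endpoint."""
        raise NotImplementedError("plug in an LLM provider here")

    def sparql_generation(question: str) -> str:
        """Approach 1: ask the LLM to translate the question into a
        SPARQL query, to be executed against the knowledge graph."""
        prompt = (
            "Translate the following question into a SPARQL query "
            f"over Wikidata:\n{question}"
        )
        return ask_llm(prompt)  # the returned query is then run on the KG

    def direct_answering(question: str) -> str:
        """Approach 2: ask the LLM for the answer directly, relying on
        its parametric knowledge rather than the knowledge graph."""
        return ask_llm(f"Answer the following question concisely:\n{question}")

In the first approach, the quality of the final answer depends on the generated query being syntactically valid and using the correct graph vocabulary; in the second, no query is produced and the knowledge graph serves only as the gold standard for evaluation.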