Abstract: Large language models (LLMs) such as Claude, Mistral, and GPT-4 excel at NLP tasks but lack structured knowledge, which leads to factual inconsistencies. We address this by integrating Knowledge Graphs (KGs) via KG-BERT to strengthen grounding and reasoning. Experiments show significant gains on knowledge-intensive tasks such as question answering and entity linking. This approach improves factual reliability and enables more context-aware, next-generation LLMs.
* This paper was accepted and scheduled for inclusion in the ICALT 2025 proceedings, and appears in the official program booklet, but was ultimately not published because it was not presented at the conference. Conference: 2025 IEEE International Conference on Advanced Learning Technologies (ICALT)
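The abstract names KG-BERT as the integration mechanism but does not detail it. Below is a minimal, hedged sketch of KG-BERT-style triple scoring following the original formulation (Yao et al., 2019), not this paper's exact setup: a triple (head, relation, tail) is linearized into a text sequence and classified by BERT for plausibility. The model name `bert-base-uncased`, the `score_triple` helper, and the example triple are illustrative assumptions; in practice the classifier must first be fine-tuned on labeled KG triples for its scores to be meaningful.

```python
# Sketch of KG-BERT-style triple plausibility scoring (assumptions noted above;
# this is not the paper's released implementation).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Binary head: label 1 = plausible triple, label 0 = implausible.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
model.eval()

def score_triple(head: str, relation: str, tail: str) -> float:
    """Return P(plausible) for a (head, relation, tail) triple.

    KG-BERT linearizes the triple as
        [CLS] head [SEP] relation [SEP] tail [SEP]
    and classifies it with BERT. Here the three-segment input is
    approximated by inserting a [SEP] token into the second segment.
    """
    encoded = tokenizer(
        head,
        f"{relation} {tokenizer.sep_token} {tail}",
        return_tensors="pt",
        truncation=True,
    )
    with torch.no_grad():
        logits = model(**encoded).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Example call (untrained classification head, so the score is not
# yet meaningful until the model is fine-tuned on KG triples):
print(score_triple("Paris", "capital of", "France"))
```

Scores from such a classifier can then be used to filter or re-rank candidate facts before they are supplied to the LLM, which is one common way a KG-grounding component improves factual reliability on tasks like question answering and entity linking.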