
RADAR: Reasoning as Discrimination with Aligned Representations for LLM-based Knowledge Graph Reasoning


Bo Xue, Yuan Jin, Luoyi Fu, Jiaxin Ding, Xinbing Wang

arXiv:2602.21951v1 Announce Type: new Abstract: Knowledge graph reasoning (KGR) infers missing facts, with recent advances increasingly harnessing the semantic priors and reasoning abilities of Large Language Models (LLMs). However, prevailing generative paradigms are prone to memorizing surface-level co-occurrences rather than learning genuine relational semantics, limiting out-of-distribution generalization. To address this, we propose RADAR, which reformulates KGR from generative pattern matching to discriminative relational reasoning. We recast KGR as discriminative entity selection, where reinforcement learning enforces relative entity separability beyond token-likelihood imitation. Leveraging this separability, inference operates directly in representation space, ensuring consistency with the discriminative optimization and bypassing generation-induced hallucinations. Across four benchmarks, RADAR achieves 5-6% relative gains on link prediction and triple classification over strong LLM baselines, while increasing task-relevant mutual information in intermediate representations by 62.9%, indicating more robust and transferable relational reasoning.

Executive Summary

The article proposes RADAR, a discriminative approach to knowledge graph reasoning (KGR). By reformulating KGR as discriminative entity selection and using reinforcement learning to enforce entity separability, RADAR improves out-of-distribution generalization and yields more robust relational reasoning. The method reports relative gains of 5-6% on link prediction and triple classification over strong LLM baselines, while increasing task-relevant mutual information in intermediate representations by 62.9%. These results suggest that a discriminative paradigm can address the limitations of prevailing generative paradigms, which tend to memorize surface-level co-occurrences rather than learn genuine relational semantics, and point to real-world applications in areas such as data integration and knowledge discovery.
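The reported 5-6% relative gains refer to standard KGR evaluation metrics. As background, a minimal sketch of how rank-based link-prediction metrics such as Hits@1 and reciprocal rank are typically computed (the function name and toy scores are illustrative, not taken from the paper):

```python
import numpy as np

def link_prediction_metrics(scores, true_index):
    """Rank candidate tail entities by model score and report rank-based metrics.

    scores: 1-D array of scores, one per candidate entity.
    true_index: position of the gold entity among the candidates.
    Returns (hits_at_1, reciprocal_rank).
    """
    # Rank = 1 + number of candidates scored strictly higher than the gold entity.
    rank = 1 + int(np.sum(scores > scores[true_index]))
    return float(rank == 1), 1.0 / rank

# Toy query: four candidate entities, gold entity at index 2 has the top score.
hits1, rr = link_prediction_metrics(np.array([0.1, 0.4, 0.9, 0.3]), true_index=2)
```

Averaging the reciprocal rank over all test queries gives the MRR commonly reported alongside Hits@k.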

Key Points

  • RADAR reformulates KGR from generative pattern matching to discriminative relational reasoning
  • Reinforcement learning enforces relative entity separability beyond token-likelihood imitation
  • Inference operates directly in representation space, ensuring consistency with discriminative optimization
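The third point, inference directly in representation space, can be pictured as scoring each candidate entity's representation against the query representation and selecting the best match. A toy sketch of this idea, assuming cosine similarity as the scoring function (the paper may use a different score or a learned head):

```python
import numpy as np

def select_entity(query_repr, candidate_reprs):
    """Discriminative entity selection in representation space: score each
    candidate by cosine similarity to the (head, relation) query representation
    and return the index of the best-scoring candidate."""
    q = query_repr / np.linalg.norm(query_repr)
    C = candidate_reprs / np.linalg.norm(candidate_reprs, axis=1, keepdims=True)
    scores = C @ q  # one similarity score per candidate
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
query = rng.normal(size=8)
cands = rng.normal(size=(5, 8))
cands[3] = query + 0.01 * rng.normal(size=8)  # candidate 3 is aligned with the query
best, _ = select_entity(query, cands)
```

Because selection is an argmax over scores rather than free-form text generation, the model cannot "hallucinate" an entity outside the candidate set, which is the consistency property the abstract highlights.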

Merits

Strength in addressing limitations of generative paradigms

RADAR effectively addresses the limitations of prevailing generative paradigms, which are prone to memorizing surface-level co-occurrences rather than learning genuine relational semantics.

Improved out-of-distribution generalization

RADAR achieves improved out-of-distribution generalization, which is critical for real-world applications of KGR.

Robust relational reasoning

RADAR enables more robust and transferable relational reasoning, as evident from the increased task-relevant mutual information in intermediate representations.
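The 62.9% figure concerns task-relevant mutual information between intermediate representations and the prediction target. For intuition, a simple plug-in estimator over discretized observations, e.g. cluster IDs of representations paired with task labels (the discretization scheme is an assumption; the paper may use a neural estimator instead):

```python
import numpy as np
from collections import Counter

def mutual_information(x_labels, y_labels):
    """Plug-in estimate of I(X; Y) in nats from paired discrete observations,
    e.g. cluster IDs of intermediate representations (X) vs. task labels (Y)."""
    n = len(x_labels)
    pxy = Counter(zip(x_labels, y_labels))
    px = Counter(x_labels)
    py = Counter(y_labels)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint * n * n / (px[x] * py[y]) equals p(x,y) / (p(x) * p(y)).
        mi += p_joint * np.log(p_joint * n * n / (px[x] * py[y]))
    return mi

# Representations perfectly aligned with labels carry log(2) nats of information.
mi = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
```

Higher mutual information between intermediate representations and the task signal indicates the representations encode relational structure rather than incidental co-occurrence statistics.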

Demerits

High computational requirements

RADAR's reinforcement learning framework may require significant computational resources, which can be a limitation in certain applications.

Dependence on high-quality training data

The effectiveness of RADAR relies on the availability of high-quality training data, which can be a challenge in certain domains.

Expert Commentary

RADAR marks a notable shift in knowledge graph reasoning. By reformulating KGR as discriminative entity selection and using reinforcement learning to enforce entity separability, it addresses the core weakness of generative paradigms and supports more robust, transferable relational reasoning. The reported results suggest RADAR can improve the accuracy and efficiency of KGR in real-world applications. That said, the computational cost of reinforcement learning and the reliance on high-quality training data remain open concerns; further work is needed to evaluate RADAR across domains and to develop more efficient, scalable implementations.

Recommendations

  • Develop more efficient and scalable implementations of RADAR to reduce computational requirements
  • Invest in the development of high-quality training data for RADAR to improve its effectiveness
