
Mnemis: Dual-Route Retrieval on Hierarchical Graphs for Long-Term LLM Memory

arXiv:2602.15313v1 Announce Type: new Abstract: AI Memory, specifically how models organize and retrieve historical messages, is becoming increasingly valuable to Large Language Models (LLMs), yet existing methods (RAG and Graph-RAG) primarily retrieve memory through similarity-based mechanisms. While efficient, such System-1-style retrieval struggles with scenarios that require global reasoning or comprehensive coverage of all relevant information. In this work, we propose Mnemis, a novel memory framework that integrates System-1 similarity search with a complementary System-2 mechanism, termed Global Selection. Mnemis organizes memory into a base graph for similarity retrieval and a hierarchical graph that enables top-down, deliberate traversal over semantic hierarchies. By combining the complementary strengths of both retrieval routes, Mnemis retrieves memory items that are both semantically and structurally relevant. Mnemis achieves state-of-the-art performance across all compared methods on long-term memory benchmarks, scoring 93.9 on LoCoMo and 91.6 on LongMemEval-S using GPT-4.1-mini.

Executive Summary

This article presents Mnemis, a memory framework for Large Language Models (LLMs) that integrates System-1 similarity search with a complementary System-2 mechanism, Global Selection. Mnemis organizes memory into a base graph for similarity retrieval and a hierarchical graph for top-down, deliberate traversal over semantic hierarchies. Combining the two routes lets Mnemis retrieve memory items that are both semantically and structurally relevant, addressing a limitation of existing methods, which rely primarily on similarity-based mechanisms. The framework achieves state-of-the-art performance on long-term memory benchmarks, scoring 93.9 on LoCoMo and 91.6 on LongMemEval-S using GPT-4.1-mini.

Key Points

  • Mnemis integrates System-1 similarity search with a complementary System-2 mechanism, Global Selection
  • Mnemis organizes memory into a base graph for similarity retrieval and a hierarchical graph for top-down traversal
  • Mnemis achieves state-of-the-art performance on long-term memory benchmarks
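The abstract does not give implementation details, but the dual-route idea above can be sketched in miniature. In this hypothetical Python sketch, System-1 is a flat nearest-neighbor search over item embeddings, System-2 greedily descends a hierarchy of summary nodes to select a whole branch, and the two candidate lists are merged with deduplication. All names (`system1_retrieve`, `system2_retrieve`, `dual_route`) and the toy data are illustrative assumptions, not the paper's actual API.

```python
import math

def cosine(a, b):
    # Plain cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def system1_retrieve(query_vec, memory, k=2):
    # System-1 route: fast, flat similarity search over all memory items.
    ranked = sorted(memory, key=lambda m: cosine(query_vec, m["vec"]), reverse=True)
    return [m["id"] for m in ranked[:k]]

def system2_retrieve(query_vec, node):
    # System-2 route (stand-in for Global Selection): descend the semantic
    # hierarchy top-down, at each level following the child whose summary
    # vector best matches the query, then return that branch's leaf items.
    while node.get("children"):
        node = max(node["children"], key=lambda c: cosine(query_vec, c["vec"]))
    return node["items"]

def dual_route(query_vec, memory, hierarchy, k=2):
    # Merge both routes, keeping retrieval order and removing duplicates.
    hits = system1_retrieve(query_vec, memory, k) + system2_retrieve(query_vec, hierarchy)
    seen, merged = set(), []
    for mid in hits:
        if mid not in seen:
            seen.add(mid)
            merged.append(mid)
    return merged

# Toy memory: three items with 2-d embeddings.
memory = [
    {"id": "m1", "vec": [1.0, 0.0]},
    {"id": "m2", "vec": [0.9, 0.1]},
    {"id": "m3", "vec": [0.0, 1.0]},
]
# Toy hierarchy: a root with two topic branches over the same items.
hierarchy = {"vec": [0.5, 0.5], "children": [
    {"vec": [1.0, 0.0], "items": ["m1", "m2"]},
    {"vec": [0.0, 1.0], "items": ["m3"]},
]}
```

For a query leaning toward the second topic, e.g. `dual_route([0.6, 0.8], memory, hierarchy)`, System-1 surfaces the individually similar items while System-2 contributes the whole best-matching branch; the union is why retrieved items can be both semantically and structurally relevant.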

Merits

Strength in Handling Complex Scenarios

Mnemis is capable of handling scenarios that require global reasoning or comprehensive coverage of all relevant information, which existing methods struggle with.

Improved Retrieval Accuracy

Mnemis retrieves memory items that are both semantically and structurally relevant, outperforming existing methods on long-term memory benchmarks.

Demerits

Computational Complexity

The use of a hierarchical graph may introduce additional computational complexity, which could impact the performance of Mnemis in resource-constrained environments.

Training Requirements

The effectiveness of Mnemis may depend on the availability of high-quality training data, which could be a limitation in certain applications.

Expert Commentary

Mnemis is a significant contribution to the field of AI Memory: its combination of System-1 and System-2 mechanisms offers a novel approach to memory retrieval, and the hierarchical graph together with Global Selection provides a powerful tool for addressing the limitations of existing similarity-based methods. However, the computational complexity and data requirements of Mnemis may pose challenges in certain applications. More broadly, frameworks that accumulate long-term user memory raise privacy and governance questions that warrant careful consideration by policymakers and regulatory agencies.

Recommendations

  • Further research is needed to investigate the computational complexity and training requirements of Mnemis in different applications.
  • The development of Mnemis should be accompanied by careful consideration of the potential implications for data privacy, security, and bias.
