Retrievit: In-context Retrieval Capabilities of Transformers, State Space Models, and Hybrid Architectures

arXiv:2603.02874v1 Announce Type: new Abstract: Transformers excel at in-context retrieval but suffer from quadratic complexity with sequence length, while State Space Models (SSMs) offer efficient linear-time processing but have limited retrieval capabilities. We investigate whether hybrid architectures combining Transformers and SSMs can achieve the best of both worlds on two synthetic in-context retrieval tasks. The first task, n-gram retrieval, requires the model to identify and reproduce an n-gram that succeeds the query within the input sequence. The second task, position retrieval, presents the model with a single query token and requires it to perform a two-hop associative lookup: first locating the corresponding element in the sequence, and then outputting its positional index. Under controlled experimental conditions, we assess data efficiency, length generalization, robustness to out-of-domain training examples, and learned representations across Transformers, SSMs, and hybrid architectures. We find that hybrid models outperform SSMs and match or exceed Transformers in data efficiency and extrapolation for information-dense context retrieval. However, Transformers maintain superiority in position retrieval tasks. Through representation analysis, we discover that SSM-based models develop locality-aware embeddings where tokens representing adjacent positions become neighbors in embedding space, forming interpretable structures. This emergent property, absent in Transformers, explains both the strengths and limitations of SSMs and hybrids for different retrieval tasks. Our findings provide principled guidance for architecture selection based on task requirements and reveal fundamental differences in how Transformers, SSMs, and hybrid models learn positional associations.
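The two synthetic tasks are easy to picture as data generators. The sketch below is a hypothetical reconstruction from the abstract's descriptions alone (the paper's actual vocabulary sizes, sequence lengths, and sampling scheme are not given here); function names and default parameters are illustrative assumptions.

```python
import random

def make_ngram_retrieval_example(vocab_size=64, seq_len=32, n=3):
    """Hypothetical generator for the n-gram retrieval task: given a
    random token sequence and a query n-gram drawn from it, the model
    must reproduce the n tokens that immediately succeed that query."""
    seq = [random.randrange(vocab_size) for _ in range(seq_len)]
    # Pick the query far enough from the end to leave a full-length target.
    start = random.randrange(seq_len - 2 * n)
    query = seq[start:start + n]
    target = seq[start + n:start + 2 * n]
    return seq, query, target

def make_position_retrieval_example(vocab_size=64, seq_len=32):
    """Hypothetical generator for the position retrieval task: given a
    single query token, the model performs a two-hop lookup, locating
    the token in the sequence and outputting its positional index."""
    # Sample tokens without replacement so the query's position is unique.
    seq = random.sample(range(vocab_size), seq_len)
    idx = random.randrange(seq_len)
    query = seq[idx]
    return seq, query, idx
```

Note one design point the sketch glosses over: in n-gram retrieval the query n-gram can recur in the sequence, so a real benchmark would either deduplicate or define the target as the continuation of a specified occurrence.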

Executive Summary

This study investigates the in-context retrieval capabilities of Transformers, State Space Models (SSMs), and hybrid architectures on two synthetic retrieval tasks: n-gram retrieval and position retrieval. The results show that hybrid models outperform SSMs, and match or exceed Transformers, in data efficiency and extrapolation for information-dense context retrieval, while Transformers maintain superiority in position retrieval. A key finding is the emergent property of locality-aware embeddings in SSM-based models, which forms interpretable structures and explains both the strengths and limitations of SSMs and hybrids across the two tasks. The study provides principled guidance for architecture selection based on task requirements and reveals fundamental differences in how Transformers, SSMs, and hybrid models learn positional associations.

Key Points

  • Hybrid models outperform SSMs in data efficiency and extrapolation for information-dense context retrieval.
  • Transformers maintain superiority in position retrieval tasks.
  • SSM-based models develop locality-aware embeddings, forming interpretable structures.

Merits

Strength

The study provides a comprehensive comparison of different architectures for in-context retrieval tasks, offering principled guidance for architecture selection based on task requirements.

Demerits

Limitation

The study is limited to synthetic retrieval tasks, and it is unclear whether the findings will generalize to real-world applications.

Expert Commentary

The study provides a rigorous comparison of different architectures for in-context retrieval tasks and offers valuable insights into the strengths and limitations of each approach. The emergent property of locality-aware embeddings in SSM-based models is particularly noteworthy, as it highlights the potential for SSMs to learn interpretable structures that can inform retrieval tasks. However, the study's limitations, including its focus on synthetic retrieval tasks, suggest that further research is needed to fully understand the implications of these findings. Nevertheless, the study provides a valuable contribution to the field of retrieval models and offers a framework for future research in this area.
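The locality-aware-embedding finding lends itself to a simple probe: if adjacent positions really become neighbors in embedding space, then each position embedding's nearest neighbor (by cosine similarity) should be an adjacent position. The sketch below is an illustrative probe of this kind, not the paper's analysis method; the function name and scoring rule are assumptions.

```python
import numpy as np

def adjacency_locality_score(pos_embeddings):
    """Fraction of positions whose nearest neighbor in embedding space
    (cosine similarity, self excluded) is an adjacent position (i-1 or
    i+1). Scores near 1.0 indicate a locality-aware embedding layout."""
    E = np.asarray(pos_embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalize rows
    sims = E @ E.T                                    # pairwise cosine sims
    np.fill_diagonal(sims, -np.inf)                   # exclude self-matches
    nearest = sims.argmax(axis=1)
    hits = [abs(int(nearest[i]) - i) == 1 for i in range(len(E))]
    return sum(hits) / len(hits)
```

On embeddings laid out along a curve (each position's angle shifting monotonically), the score is 1.0; on random embeddings it is typically low, which is the contrast one would expect between SSM-based and Transformer models under the paper's finding.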

Recommendations

  • Future studies should investigate the generalizability of the findings to real-world applications.
  • Researchers should explore the potential of SSM-based models to learn interpretable structures in other domains.
