Routing Absorption in Sparse Attention: Why Random Gates Are Hard to Beat

Keston Aquino-Michaels

arXiv:2603.02227v1 Abstract: Can a transformer learn which attention entries matter during training? In principle, yes: attention distributions are highly concentrated, and a small gate network can identify the important entries post-hoc with near-perfect accuracy. In practice, barely. When sparse attention is trained end-to-end, the model's Q/K/V projections co-adapt to whatever mask is imposed, absorbing the routing signal until learned gates perform little better than frozen random gates. We call this routing absorption and present four independent lines of evidence for it in a controlled 31M-parameter transformer: (1) differentiable soft gating converges to nearly the same perplexity whether the gate is learned or random (48.73 ± 0.60 vs. 49.83 ± 0.04 over 3 seeds); (2) hard top-k gating receives exactly zero gradient through the mask; (3) a gate distilled onto co-adapted Q/K/V achieves high F1 against oracle masks but catastrophic perplexity when deployed (601.6 vs. 48.6 on mask-agnostic Q/K/V); and (4) stochastic mask randomization during training fails to prevent co-adaptation (78.2 ppl deployed dense vs. 37.3 baseline). We connect routing absorption to the same phenomenon in Mixture-of-Experts, where random routing matches learned routing because experts co-adapt to any router, but show that attention exhibits a structurally more severe form: shared Q/K/V parameters enable cross-layer compensation pathways absent in MoE, where experts are self-contained modules. The implication is that end-to-end sparse attention methods employing per-query token-level gating face absorption pressure proportional to the parameter asymmetry between the gate and the model, and that post-hoc approaches, which decouple representation learning from sparsification, sidestep this entirely.
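
Evidence point (2) is worth unpacking: top-k selection returns integer indices, which autograd cannot differentiate through, so a gate that only influences the loss via a hard mask receives no learning signal at all. Below is a minimal PyTorch sketch of this (not the authors' code; all names and shapes are illustrative), with a soft-gating contrast that does receive gradient:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
T, d, k = 8, 16, 3  # sequence length, head dim, entries kept per query

scores = torch.randn(T, T)       # stand-in for Q K^T / sqrt(d)
values = torch.randn(T, d)
gate_logits = torch.randn(T, T, requires_grad=True)  # a small gate's output

# Hard top-k gating: keep the k entries with the highest gate score per query.
idx = gate_logits.topk(k, dim=-1).indices  # integer indices break the graph
mask = torch.zeros(T, T).scatter(-1, idx, 1.0)

attn = F.softmax(scores.masked_fill(mask == 0, float("-inf")), dim=-1)
(attn @ values).pow(2).sum().backward()

# Autograd records no differentiable path from the loss back to the gate,
# so the gate receives exactly zero learning signal through the hard mask.
print(gate_logits.grad)  # None

# Soft gating, for contrast: a differentiable gate does receive gradient.
# The paper's point is that Q/K/V then co-adapt to whatever gate is used,
# learned or random, which is why the two perplexities end up so close.
soft_logits = torch.randn(T, T, requires_grad=True)
attn_soft = F.softmax(scores + F.logsigmoid(soft_logits), dim=-1)
(attn_soft @ values).pow(2).sum().backward()
print(soft_logits.grad.abs().sum() > 0)  # tensor(True)
```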

Executive Summary

This article analyzes a fundamental limitation of end-to-end sparse attention in transformer models. The authors identify a phenomenon they call routing absorption: the model's Q/K/V projections co-adapt to whatever attention mask is imposed, absorbing the routing signal until learned gates perform little better than frozen random gates. The study presents four independent lines of evidence for routing absorption in a controlled 31M-parameter transformer and connects it to the analogous effect in Mixture-of-Experts, where experts co-adapt to any router. The implication is that end-to-end sparse attention methods face absorption pressure proportional to the parameter asymmetry between gate and model, while post-hoc approaches, which decouple representation learning from sparsification, sidestep the issue entirely.

Key Points

  • Routing absorption: when sparse attention is trained end-to-end, the Q/K/V projections co-adapt to the imposed mask, leaving learned gates little better than frozen random ones.
  • Differentiable soft gating converges to nearly the same perplexity whether the gate is learned or random (48.73 ± 0.60 vs. 49.83 ± 0.04 over 3 seeds).
  • Hard top-k gating receives exactly zero gradient through the mask, so the gate gets no learning signal from it.
  • A gate distilled onto co-adapted Q/K/V achieves high F1 against oracle masks but catastrophic perplexity when deployed (601.6 vs. 48.6 on mask-agnostic Q/K/V); see the sketch after this list.
  • Stochastic mask randomization during training fails to prevent co-adaptation (78.2 ppl deployed dense vs. 37.3 baseline).
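
The distillation point deserves a concrete picture. The sketch below is a deliberately toy setup with hypothetical names: per-entry logits memorize a single attention map, whereas the paper trains a gate network over representations. It shows why high F1 against oracle masks is the easy half; the paper's finding is that deployed perplexity can still be catastrophic, because the oracle masks come from co-adapted Q/K/V.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
T, k = 16, 4  # sequence length, entries kept per query

# Oracle mask: the top-k entries of the trained model's own attention map.
attn = F.softmax(torch.randn(T, T), dim=-1)  # stand-in for co-adapted attention
oracle = torch.zeros(T, T).scatter(-1, attn.topk(k, dim=-1).indices, 1.0)

# Distill a gate to imitate the oracle with binary cross-entropy.
gate_logits = torch.zeros(T, T, requires_grad=True)
opt = torch.optim.Adam([gate_logits], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    F.binary_cross_entropy_with_logits(gate_logits, oracle).backward()
    opt.step()

# F1 of the gate's top-k picks against the oracle: near-perfect is easy.
pred = torch.zeros(T, T).scatter(-1, gate_logits.topk(k, dim=-1).indices, 1.0)
tp = (pred * oracle).sum()
f1 = 2 * tp / (pred.sum() + oracle.sum())
print(f"F1 = {f1.item():.3f}")  # ~1.000
```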

Merits

Theoretical insight

The study goes beyond reporting that learned gates fail to beat random ones: it names the mechanism, co-adaptation of the shared Q/K/V projections, and explains why attention exhibits a structurally more severe form of absorption than Mixture-of-Experts.

Methodological rigor

The authors employ a controlled 31M-parameter transformer and present four independent lines of evidence for routing absorption, ensuring the robustness of their findings.

Demerits

Limited scope

The study is limited to a single 31M-parameter transformer and to per-query token-level gating; whether routing absorption is equally severe at larger scales or under other sparse attention architectures remains untested.

Lack of experimental comparisons

While the paper argues that post-hoc approaches sidestep routing absorption, the article offers no head-to-head comparison between post-hoc and end-to-end methods at matched sparsity, making the practical size of the gap harder to judge.

Expert Commentary

The article presents a careful, well-evidenced analysis of why end-to-end sparse attention underperforms expectations. The routing-absorption framing is useful: it explains the otherwise puzzling result that random gates match learned ones, and the connection to Mixture-of-Experts routing situates the finding within a broader pattern of gate/model co-adaptation. The main caveats are scale (a 31M-parameter testbed) and the focus on per-query token-level gating, so the severity of absorption in production-scale models remains an open question. Even so, the work makes a valuable contribution by naming a failure mode that designers of sparse attention methods need to account for.

Recommendations

  • Future research should focus on gating schemes that resist routing absorption, for example by reducing the parameter asymmetry between the gate and the model or by limiting cross-layer compensation pathways.
  • Researchers should consider post-hoc approaches, which decouple representation learning from sparsification and thereby sidestep routing absorption entirely; a minimal sketch follows this list.
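
As a concrete illustration of the second recommendation, here is a minimal sketch of a post-hoc approach (hypothetical code, not from the paper): the model is trained fully dense, and sparsity is imposed only at inference by keeping the top-k attention scores per query, so representation learning is never exposed to a mask.

```python
import torch
import torch.nn.functional as F

def posthoc_topk_attention(q, k, v, keep: int):
    """Dense-trained attention, sparsified only at inference time."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    # Keep the `keep` largest scores per query; drop the rest before softmax.
    idx = scores.topk(keep, dim=-1).indices
    sparse = torch.full_like(scores, float("-inf"))
    sparse = sparse.scatter(-1, idx, scores.gather(-1, idx))
    return F.softmax(sparse, dim=-1) @ v

# Usage: frozen, densely trained projections; sparsity imposed after the fact.
T, d = 32, 64
q, k, v = (torch.randn(T, d) for _ in range(3))
out = posthoc_topk_attention(q, k, v, keep=8)
print(out.shape)  # torch.Size([32, 64])
```

Because the Q/K/V projections never see the mask during training, there is no routing signal for them to absorb; this is the decoupling the paper credits post-hoc methods with.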

Sources

  • arXiv:2603.02227v1, "Routing Absorption in Sparse Attention: Why Random Gates Are Hard to Beat"