Linear Predictability of Attention Heads in Large Language Models
arXiv:2603.13314v1. Abstract: Large language model (LLM) inference is increasingly bottlenecked by the Key-Value (KV) cache, yet the fine-grained structure of attention-head activations remains poorly understood. We show that pretrained Transformers exhibit a pervasive inter-head linear structure: for a given token, the Query, Key, and Value (QKV) vectors of an attention head can often be reconstructed as a linear combination of a small number of peer heads, typically within the same layer. Across Llama-3.1-8B, Falcon3-10B, OLMo-2-7B, and Qwen3-32B, just 2-5 reference heads recover many target heads with high fidelity (e.g., mean R^2 approx 0.76 for Keys on C4 with five references, and frequently R^2 > 0.85 on GSM8K). This predictability is learned rather than architectural: it is largely absent at random initialization, rises rapidly during pretraining as we track through OLMo-2 checkpoints, and is supported by a theoretical lower bound showing high mean-squared error for linear prediction at initialization. We further connect this emergence to increasing intra-layer alignment of Key projection subspaces. Finally, we exploit this redundancy for efficiency by caching only reference-head KV states and reconstructing the remaining heads on the fly via lightweight linear maps, achieving 2x KV-cache reduction with model-dependent accuracy trade-offs (4.5-5.5 percentage point average drop on Falcon3-10B and Qwen3-32B across five benchmarks, and larger drops on Llama-3.1-8B), and we find that reconstructing Keys is substantially less harmful than reconstructing Values.
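To make the notion of "linear predictability" concrete, the sketch below fits a single least-squares map from a handful of reference heads' Key vectors to a target head's Keys and reports the R^2 of the reconstruction, mirroring the kind of measurement the abstract describes. It is a minimal illustration on synthetic stand-in activations, not the authors' code: the concatenated-reference parameterization, the head counts, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, head_dim, n_refs = 4096, 128, 5

# Stand-in activations: in practice these would be per-token Key vectors
# collected from a pretrained model on a corpus such as C4.
ref_keys = [rng.standard_normal((n_tokens, head_dim)) for _ in range(n_refs)]
target_key = sum(0.3 * K for K in ref_keys) + 0.1 * rng.standard_normal((n_tokens, head_dim))

# Stack the reference heads into one design matrix and fit a single
# least-squares linear map that predicts the target head's Keys.
X = np.concatenate(ref_keys, axis=1)              # (n_tokens, n_refs * head_dim)
W, *_ = np.linalg.lstsq(X, target_key, rcond=None)
pred = X @ W

# Coefficient of determination over all token/dimension entries.
ss_res = np.sum((target_key - pred) ** 2)
ss_tot = np.sum((target_key - target_key.mean(axis=0)) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 of linear reconstruction: {r2:.3f}")
```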
Executive Summary
This article reports a notable empirical finding about large language models (LLMs): attention heads in pretrained Transformers are linearly predictable. For a given token, the Query, Key, and Value vectors of an attention head can often be reconstructed as a linear combination of a small number of peer heads, typically within the same layer. The authors show that this structure is largely absent at random initialization and emerges during pretraining, and they support this with a theoretical lower bound on the error of linear prediction at initialization. They then exploit the redundancy for efficiency by caching only reference-head KV states and reconstructing the remaining heads on the fly via lightweight linear maps, achieving a 2x KV-cache reduction at a model-dependent accuracy cost (a 4.5-5.5 percentage point average drop on Falcon3-10B and Qwen3-32B across five benchmarks, with larger drops on Llama-3.1-8B). If the structure proves general, it could inform both the analysis and the efficiency-oriented design of LLMs. A rough sketch of how the cache-and-reconstruct idea could look at inference time is given after this paragraph.
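The sketch below assumes linear maps have been fitted offline (for example by least squares on held-out activations) and shows how K vectors for non-cached heads could be rebuilt on the fly from the cached reference heads. It is not the authors' implementation; the map shapes, head counts, and function names are illustrative assumptions.

```python
import numpy as np

head_dim, n_refs, n_recon = 128, 2, 2   # keep half the heads -> roughly 2x smaller KV cache

rng = np.random.default_rng(0)

# Assumed to be fitted offline: one linear map per reconstructed head,
# reading the concatenated reference-head Keys for the same token.
recon_maps = [rng.standard_normal((n_refs * head_dim, head_dim)) * 0.05 for _ in range(n_recon)]

kv_cache = []  # per token, only the reference heads' Key vectors are stored


def append_token(ref_keys):
    """Cache the Key vectors of the reference heads for one new token."""
    kv_cache.append(np.concatenate(ref_keys))        # shape (n_refs * head_dim,)


def keys_for_token(t):
    """Return reference plus reconstructed Key vectors for cached token t."""
    cached = kv_cache[t]
    reconstructed = [cached @ W for W in recon_maps]  # each (head_dim,)
    return list(cached.reshape(n_refs, head_dim)) + reconstructed


append_token([rng.standard_normal(head_dim) for _ in range(n_refs)])
print(f"{len(keys_for_token(0))} head Keys served from {n_refs} cached heads")
```

The same idea would apply to Values, although the abstract notes that reconstructing Values is substantially more harmful to accuracy than reconstructing Keys.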
Key Points
- ▸ Pre-trained Transformers exhibit a pervasive inter-head linear structure in attention-head activations.
- ▸ The Query, Key, and Value vectors of an attention head can be reconstructed as a linear combination of a small number of peer heads.
- ▸ The linear predictability of attention heads is learned during pretraining: it is largely absent at random initialization, and a theoretical lower bound shows that linear prediction incurs high mean-squared error at initialization.
Merits
Strength
The article presents a novel and insightful finding that sheds light on the fine-grained structure of attention-head activations in LLMs. The proposed method for exploiting this redundancy is practical, halving the KV cache with modest, model-dependent accuracy trade-offs.
Demerits
Limitation
The article treats the linear predictability of attention heads as a general property of pretrained Transformers, but the evidence covers four dense models (Llama-3.1-8B, Falcon3-10B, OLMo-2-7B, and Qwen3-32B); it is unclear whether the finding extends to other architectures and scales. Additionally, the article does not analyze the impact of the phenomenon on downstream tasks and applications beyond the five benchmarks used to evaluate the KV-cache reduction.
Expert Commentary
The discovery of linear structure among attention heads is a meaningful contribution: it offers a new lens on redundancy in pretrained Transformers, and the cache-and-reconstruct scheme shows that this structure can be turned into concrete efficiency gains. The main open questions are how general the phenomenon is across architectures and scales, and how reconstruction error propagates to downstream tasks, particularly given the larger accuracy drops observed on Llama-3.1-8B and the finding that reconstructing Values is more harmful than reconstructing Keys. With those caveats, the work is a significant advance and could lead to practical efficiency gains and better-understood trade-offs in LLM inference.
Recommendations
- ✓ Further research is needed to confirm whether the linear predictability of attention heads is a general property of pre-trained Transformers or only holds for a specific subset of LLMs.
- ✓ A comprehensive analysis of the impact of this phenomenon on downstream tasks and applications should be conducted to fully understand its implications.