
Do LLMs and VLMs Share Neurons for Inference? Evidence and Mechanisms of Cross-Modal Transfer

arXiv:2602.19058v1 · Announce Type: new

Abstract: Large vision-language models (LVLMs) have rapidly advanced across various domains, yet they still lag behind strong text-only large language models (LLMs) on tasks that require multi-step inference and compositional decision-making. Motivated by their shared transformer architectures, we investigate whether the two model families rely on common internal computation for such inference. At the neuron level, we uncover a surprisingly large overlap: more than half of the top-activated units during multi-step inference are shared between representative LLMs and LVLMs, revealing a modality-invariant inference subspace. Through causal probing via activation amplification, we further show that these shared neurons encode consistent and interpretable concept-level effects, demonstrating their functional contribution to inference. Building on this insight, we propose Shared Neuron Low-Rank Fusion (SNRF), a parameter-efficient framework that transfers mature inference circuitry from LLMs to LVLMs. SNRF profiles cross-model activations to identify shared neurons, computes a low-rank approximation of inter-model weight differences, and injects these updates selectively within the shared-neuron subspace. This mechanism strengthens multimodal inference performance with minimal parameter changes and requires no large-scale multimodal fine-tuning. Across diverse mathematics and perception benchmarks, SNRF consistently enhances LVLM inference performance while preserving perceptual capabilities. Our results demonstrate that shared neurons form an interpretable bridge between LLMs and LVLMs, enabling low-cost transfer of inference ability into multimodal models. Our code is available at [https://github.com/chenhangcuisg-code/Do-LLMs-VLMs-Share-Neurons](https://github.com/chenhangcuisg-code/Do-LLMs-VLMs-Share-Neurons).
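The abstract's neuron-overlap finding can be illustrated with a minimal sketch. This is not the paper's actual profiling pipeline; it assumes activations are summarized per neuron by mean absolute value over a probe set, with the "top-activated" set taken as the top-k neurons by that score. The function names and the toy data are hypothetical.

```python
import numpy as np

def top_activated(acts, k):
    """Indices of the k neurons with the largest mean |activation|.

    acts: (n_prompts, n_neurons) activations collected over a probe set.
    """
    scores = np.abs(acts).mean(axis=0)
    return set(np.argsort(scores)[-k:].tolist())

def shared_neuron_overlap(llm_acts, lvlm_acts, k):
    """Shared top-k neurons and the fraction of the top-k that overlap."""
    shared = top_activated(llm_acts, k) & top_activated(lvlm_acts, k)
    return shared, len(shared) / k

# Toy demo: both models' activations share a common underlying signal,
# mimicking a modality-invariant inference subspace.
rng = np.random.default_rng(0)
signal = rng.normal(size=(32, 1024))
llm_acts = signal + 0.1 * rng.normal(size=(32, 1024))
lvlm_acts = signal + 0.1 * rng.normal(size=(32, 1024))
shared, frac = shared_neuron_overlap(llm_acts, lvlm_acts, k=100)
```

In this toy setting the overlap fraction is high because the two activation matrices share a dominant common signal; the paper reports an analogous result (over half of top-activated units shared) for real LLM/LVLM pairs.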

Executive Summary

This article examines whether large language models (LLMs) and large vision-language models (LVLMs) share neurons for inference tasks. The study finds that more than half of the top-activated neurons during multi-step inference are shared between the two model families, indicating a modality-invariant inference subspace. The authors propose a framework called Shared Neuron Low-Rank Fusion (SNRF) that transfers inference ability from LLMs to LVLMs, improving multimodal inference performance with minimal parameter changes and no large-scale multimodal fine-tuning.

Key Points

  • More than half of the top-activated neurons during multi-step inference are shared between LLMs and LVLMs
  • The shared neurons encode consistent and interpretable concept-level effects
  • SNRF framework enables low-cost transfer of inference ability into multimodal models

Merits

Efficient Knowledge Transfer

The SNRF framework enables efficient transfer of inference ability from LLMs to LVLMs, removing the need for large-scale multimodal fine-tuning.
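The transfer step behind this efficiency, as described in the abstract (low-rank approximation of inter-model weight differences, injected only within the shared-neuron subspace), can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes neurons index the rows of a weight matrix, and the function name and toy data are hypothetical.

```python
import numpy as np

def snrf_update(w_llm, w_lvlm, shared_rows, rank):
    """Inject a rank-`rank` approximation of the LLM-LVLM weight
    difference, restricted to the rows of shared neurons."""
    delta = w_llm - w_lvlm
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    delta_r = (u[:, :rank] * s[:rank]) @ vt[:rank]   # best rank-r approximation
    mask = np.zeros(w_lvlm.shape[0], dtype=bool)
    mask[list(shared_rows)] = True
    # Non-shared rows receive no update, preserving perceptual weights.
    return w_lvlm + np.where(mask[:, None], delta_r, 0.0)

# Toy demo: the true weight difference is exactly rank 4, so shared rows
# are matched to the LLM while all other rows stay untouched.
rng = np.random.default_rng(1)
w_lvlm = rng.normal(size=(16, 16))
w_llm = w_lvlm + rng.normal(size=(16, 4)) @ rng.normal(size=(4, 16))
fused = snrf_update(w_llm, w_lvlm, shared_rows={0, 3, 7}, rank=4)
```

The masking step is what keeps the update parameter-efficient and selective: only the shared-neuron subspace is modified, which matches the paper's claim that perceptual capabilities are preserved.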

Demerits

Limited Generalizability

The study focuses on specific model architectures and tasks, which may limit how well the findings generalize to other domains and models.

Expert Commentary

The article makes a notable contribution to multimodal learning by showing that shared neurons can serve as a bridge for knowledge transfer between LLMs and LVLMs. The proposed SNRF framework delivers promising results with potential applications across domains, though further work is needed to establish how well the approach generalizes beyond the architectures and tasks studied. These findings matter for building more effective and efficient multimodal models, and they underscore the value of continued research in this area.

Recommendations

  • Further investigation into the generalizability of the SNRF framework to other model architectures and tasks
  • Exploration of the potential applications of the SNRF framework in real-world scenarios
