
Hierarchical Embedding Fusion for Retrieval-Augmented Code Generation


Nikita Sorokin, Ivan Sedykh, Valentin Malykh

arXiv:2603.06593v1 · Abstract: Retrieval-augmented code generation often conditions the decoder on large retrieved code snippets. This ties online inference cost to repository size and introduces noise from long contexts. We present Hierarchical Embedding Fusion (HEF), a two-stage approach to repository representation for code completion. First, an offline cache compresses repository chunks into a reusable hierarchy of dense vectors using a small fuser model. Second, an online interface maps a small number of retrieved vectors into learned pseudo-tokens that are consumed by the code generator. This replaces thousands of retrieved tokens with a fixed pseudo-token budget while preserving access to repository-level information. On RepoBench and RepoEval, HEF with a 1.8B-parameter pipeline achieves exact-match accuracy comparable to snippet-based retrieval baselines, while operating at sub-second median latency on a single A100 GPU. Compared to graph-based and iterative retrieval systems in our experimental setup, HEF reduces median end-to-end latency by 13 to 26 times. We also introduce a utility-weighted likelihood signal for filtering training contexts and report ablation studies on pseudo-token budget, embedding models, and robustness to harmful retrieval. Overall, these results indicate that hierarchical dense caching is an effective mechanism for low-latency, repository-aware code completion.
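The two-stage pipeline described in the abstract can be sketched in miniature. Everything below is a hypothetical illustration: the paper's fuser and pseudo-token projection are learned models, which this sketch replaces with a toy deterministic embedding, mean-pooling, and an identity map; all function names are invented for the example.

```python
import math

def embed(chunk, dim=8):
    # Toy deterministic "embedding": bucket each token by character sum,
    # then L2-normalise. A stand-in for a real dense encoder.
    v = [0.0] * dim
    for tok in chunk.split():
        v[sum(ord(ch) for ch in tok) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def build_cache(chunks, fanout=2):
    # Offline stage: leaf vectors per chunk, plus parent vectors formed by
    # mean-pooling groups of `fanout` children (stand-in for the fuser).
    leaves = [embed(c) for c in chunks]
    parents = []
    for i in range(0, len(leaves), fanout):
        group = leaves[i:i + fanout]
        parents.append([sum(col) / len(group) for col in zip(*group)])
    return {"leaves": leaves, "parents": parents}

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))  # inputs are unit-normalised

def retrieve_pseudo_tokens(cache, query_vec, budget=2):
    # Online stage: select the `budget` closest leaf vectors and hand them
    # to the generator as pseudo-tokens (identity map here; learned in HEF).
    ranked = sorted(cache["leaves"], key=lambda v: -cosine(v, query_vec))
    return ranked[:budget]  # fixed pseudo-token budget, not raw snippets
```

The key structural point survives even in this toy form: the expensive per-chunk encoding happens once offline, and the online step touches only a fixed number of vectors regardless of repository size.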

Executive Summary

The article presents Hierarchical Embedding Fusion (HEF), a novel approach to repository representation for code completion. HEF compresses repository chunks into a reusable hierarchy of dense vectors, reducing online inference cost and noise from long contexts. The approach achieves comparable accuracy to snippet-based retrieval baselines while operating at sub-second median latency on a single A100 GPU. HEF reduces median end-to-end latency by 13 to 26 times compared to graph-based and iterative retrieval systems, making it an effective mechanism for low-latency, repository-aware code completion.
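A back-of-envelope calculation (with illustrative numbers, not figures from the paper) shows why a fixed pseudo-token budget decouples decoding cost from retrieval volume: each decoding step attends over the whole prompt, so prompt length drives per-step attention cost roughly linearly.

```python
def attention_cost(prompt_tokens, generated_tokens):
    # Sum of context lengths attended to across all generation steps
    # (a crude linear proxy for decoder attention work).
    return sum(prompt_tokens + i for i in range(generated_tokens))

# Hypothetical sizes: 200 prompt tokens, 64 generated tokens, and either
# ~2000 tokens of retrieved snippets or a 16-vector pseudo-token budget.
snippet_cost = attention_cost(prompt_tokens=200 + 2000, generated_tokens=64)
hef_cost = attention_cost(prompt_tokens=200 + 16, generated_tokens=64)
speedup = snippet_cost / hef_cost  # roughly 9x under these assumptions
```

Under these made-up numbers the pseudo-token prompt is attended to about nine times more cheaply; the paper's reported 13-to-26-times end-to-end gains additionally reflect avoided retrieval and re-ranking work.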

Key Points

  • Hierarchical Embedding Fusion (HEF) for repository representation
  • Two-stage approach: offline cache and online interface
  • Reduced online inference cost and noise from long contexts

Merits

Efficient Repository Representation

HEF's hierarchical dense caching enables efficient repository representation: large retrieved code snippets, and the noise their long contexts introduce, are replaced by a compact, reusable hierarchy of dense vectors.

Demerits

Limited Contextual Understanding

Because the online interface compresses retrieved context into a fixed pseudo-token budget, the approach may struggle with complex, nuanced code that requires deeper contextual understanding, potentially limiting its applicability in certain scenarios.

Expert Commentary

The article's presentation of HEF as a viable solution for low-latency, repository-aware code completion is noteworthy. By leveraging hierarchical dense caching, HEF addresses a significant pain point in code generation and retrieval. However, further research is needed to fully explore the approach's limitations and potential applications in diverse software development contexts. The introduction of a utility-weighted likelihood signal for filtering training contexts is also a valuable contribution, highlighting the importance of careful context selection in code generation tasks.
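The abstract does not spell out the form of the utility-weighted likelihood signal, but a plausible minimal reading is: score each candidate training context by the likelihood gain it gives the target completion over a context-free baseline, and drop contexts that do not help. The sketch below uses a toy token-overlap proxy in place of a real language model's log-probabilities; all names are invented for the example.

```python
def log_likelihood(target, context):
    # Toy proxy for a model's score of `target` given `context`: fraction
    # of target tokens that also appear in the context. A real system
    # would query the code generator's token log-probabilities instead.
    tgt = target.split()
    ctx = set(context.split())
    return sum(1 for t in tgt if t in ctx) / max(len(tgt), 1)

def utility(target, context):
    # Likelihood gain over the context-free baseline; negative values
    # indicate a harmful retrieval.
    return log_likelihood(target, context) - log_likelihood(target, "")

def filter_contexts(target, contexts, threshold=0.0):
    # Keep only contexts whose utility clears the threshold, so neutral
    # or harmful retrievals are excluded from training.
    return [c for c in contexts if utility(target, c) > threshold]
```

Even this crude version captures the idea the commentary highlights: context selection is supervised by whether a context actually makes the target more likely, not by retrieval similarity alone.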

Recommendations

  • Further evaluation of HEF in diverse software development contexts to assess its robustness and applicability
  • Exploration of potential applications in related areas, such as natural language processing and information retrieval
