H2LooP Spark Preview: Continual Pretraining of Large Language Models for Low-Level Embedded Systems Code
arXiv:2603.11139v1 Announce Type: new Abstract: Large language models (LLMs) demonstrate strong code generation abilities in general-purpose programming languages but remain limited in specialized domains such as low-level embedded systems programming. This domain involves hardware register manipulation, vendor-specific SDKs, real-time operating system APIs, and hardware abstraction layers that are underrepresented in standard pretraining corpora. We introduce H2LooP Spark Preview, a continual pretraining (CPT) pipeline that adapts OLMo-3-7B, a fully open language model, to the embedded systems domain using BF16 LoRA with rank-stabilized scaling on 8 NVIDIA H100 GPUs. Our training corpus is constructed from repository-datasheet pairs covering 100B tokens of raw embedded systems data across 117 manufacturers, processed using the hierarchical datasheet-to-code mapping approach proposed in SpecMap (Nipane et al., 2026). The resulting curated dataset split contains 23.5B tokens across 13 embedded domains. Continual pretraining with high-rank LoRA (r=512) yields substantial gains, reducing in-domain perplexity by 70.4% and held-out repository perplexity by 66.1%. On generative code completion benchmarks spanning 13 embedded domains, our 7B model outperforms Claude Opus 4.6 and Qwen3-Coder-30B on 8 categories in token accuracy, showing that targeted continual pretraining enables smaller open-weight models to rival frontier systems on specialized technical tasks. We release the production training checkpoint on Hugging Face as an open-source artifact.
Executive Summary
The article introduces H2LooP Spark Preview, a continual pretraining pipeline that adapts large language models to low-level embedded systems programming. The pipeline applies high-rank LoRA (r=512) with rank-stabilized scaling to OLMo-3-7B over a curated dataset of 23.5B tokens spanning 13 embedded domains, reducing in-domain perplexity by 70.4% and held-out repository perplexity by 66.1%. On generative code completion benchmarks, the 7B model outperforms Claude Opus 4.6 and Qwen3-Coder-30B in token accuracy on 8 of 13 categories, demonstrating the effectiveness of targeted continual pretraining for specialized technical tasks.
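The rank-stabilized scaling mentioned above changes how the low-rank update is weighted relative to the frozen base model. As a minimal sketch (not the paper's implementation; alpha values here are illustrative), standard LoRA scales the update BA by alpha/r, which vanishes as rank grows, while rank-stabilized LoRA (rsLoRA) scales by alpha/sqrt(r), keeping high ranks such as r=512 effective:

```python
import math

def lora_scaling(alpha: float, r: int, rank_stabilized: bool) -> float:
    """Multiplier applied to the low-rank update before it is added to the
    frozen base weight: W' = W + scaling * (B @ A)."""
    # Standard LoRA: alpha / r  -> update shrinks linearly as rank grows.
    # rsLoRA:        alpha / sqrt(r) -> update stays useful at high rank.
    return alpha / math.sqrt(r) if rank_stabilized else alpha / r

# With illustrative alpha = r = 512 (not values reported in the paper):
standard = lora_scaling(512, 512, rank_stabilized=False)    # 1.0
stabilized = lora_scaling(512, 512, rank_stabilized=True)   # ~22.6
```

The sqrt(r) denominator is the entire difference between the two schemes; in practice it is typically enabled via a single flag in LoRA training libraries rather than implemented by hand.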
Key Points
- ▸ Continual pretraining pipeline for adapting large language models to low-level embedded systems programming
- ▸ Utilization of the OLMo-3-7B model and a curated dataset of 23.5B tokens across 13 embedded domains
- ▸ Substantial perplexity reductions: 70.4% in-domain and 66.1% on held-out repositories
Merits
Improved Performance
The H2LooP Spark Preview pipeline reduces in-domain perplexity by 70.4% and held-out repository perplexity by 66.1%, demonstrating its effectiveness in adapting large language models to specialized domains.
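The reported percentages are relative perplexity reductions. A small sketch of the computation (the absolute before/after perplexity values below are hypothetical, chosen only to reproduce the reported 70.4% figure; the paper does not state them):

```python
def perplexity_reduction(before: float, after: float) -> float:
    """Relative reduction in percent: 100 * (before - after) / before."""
    return 100.0 * (before - after) / before

# Hypothetical values: a drop from 10.0 to 2.96 corresponds to the
# reported 70.4% in-domain reduction.
print(round(perplexity_reduction(10.0, 2.96), 1))  # 70.4
```

Note that a large relative reduction is easier to achieve when the base model's starting perplexity on the domain is high, which is typical for underrepresented corpora such as embedded systems code.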
Demerits
Limited Generalizability
The pipeline's performance may be limited to the specific domain of low-level embedded systems programming, and its generalizability to other domains is uncertain.
Expert Commentary
The H2LooP Spark Preview pipeline represents a significant advancement in the development of large language models for specialized domains. The use of continual pretraining and a curated dataset demonstrates the importance of domain-specific adaptation in achieving state-of-the-art performance. However, further research is needed to address concerns about explainability, transparency, and generalizability. The implications of this research are far-reaching, with potential applications in a range of fields, from autonomous systems to cybersecurity.
Recommendations
- ✓ Further research into the development of explainable and transparent large language models for specialized domains
- ✓ The establishment of regulatory frameworks and standards for the development and deployment of large language models in critical infrastructure and systems