PRECEPT: Planning Resilience via Experience, Context Engineering & Probing Trajectories A Unified Framework for Test-Time Adaptation with Compositional Rule Learning and Pareto-Guided Prompt Evolution
arXiv:2603.09641v1 Announce Type: new Abstract: LLM agents that store knowledge as natural language suffer steep retrieval degradation as condition count grows, often struggle to compose learned rules reliably, and typically lack explicit mechanisms to detect stale or adversarial knowledge. We introduce PRECEPT, a unified framework for test-time adaptation with three tightly coupled components: (1) deterministic exact-match rule retrieval over structured condition keys, (2) conflict-aware memory with Bayesian source reliability and threshold-based rule invalidation, and (3) COMPASS, a Pareto-guided prompt-evolution outer loop. Exact retrieval eliminates partial-match interpretation errors on the deterministic path (0% by construction, vs 94.4% under Theorem~B.6's independence model at N=10) and supports compositional stacking through a semantic tier hierarchy; conflict-aware memory resolves static--dynamic disagreements and supports drift adaptation; COMPASS evaluates prompts through the same end-to-end execution pipeline. Results (9--10 seeds): PRECEPT achieves a +41.1pp first-try advantage over Full Reflexion (d>1.9), +33.3pp compositional generalization (d=1.55), 100% $P_1$ on 2-way logistics compositions (d=2.64), +40--55pp continuous learning gains, strong eventual robustness under adversarial static knowledge (100% logistics with adversarial SK active; partial recovery on integration), +55.0pp drift recovery (d=0.95, p=0.031), and 61% fewer steps. Core comparisons are statistically significant, often at p<0.001.
Executive Summary
This article introduces PRECEPT, a unified framework for test-time adaptation in Large Language Model (LLM) agents. PRECEPT addresses limitations of agents that store knowledge as natural language by combining three tightly coupled components: deterministic exact-match rule retrieval, conflict-aware memory, and COMPASS, a Pareto-guided prompt-evolution outer loop. Together these eliminate partial-match interpretation errors on the deterministic path, resolve static--dynamic knowledge disagreements, and support adaptation to drift. Across a series of experiments, PRECEPT delivers statistically significant gains over existing methods in first-try accuracy, compositional generalization, and drift recovery.
Key Points
- ▸ PRECEPT introduces a unified framework for test-time adaptation in LLM agents.
- ▸ The framework consists of three components: exact-match rule retrieval, conflict-aware memory, and COMPASS.
- ▸ PRECEPT achieves significant improvements in first-try accuracy, compositional generalization, and drift adaptation.
Merits
Strength in Addressing Partial-Match Interpretation Errors
PRECEPT's exact-match rule retrieval over structured condition keys eliminates partial-match interpretation errors on the deterministic path (0% by construction, versus 94.4% under the paper's independence model at N=10), a failure mode common in agents that retrieve knowledge stored as free-form natural language.
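The idea of deterministic retrieval over structured condition keys can be sketched as follows. This is a minimal illustration, not PRECEPT's actual implementation: the `RuleStore` API, the key encoding, and the example conditions are all assumptions.

```python
class RuleStore:
    """Hypothetical sketch: rules indexed by an exact, order-independent
    condition key, so retrieval is all-or-nothing by construction."""

    def __init__(self):
        self._rules = {}

    @staticmethod
    def _key(conditions: dict) -> frozenset:
        # A frozenset of (name, value) pairs is a canonical, hashable key:
        # condition order does not matter, but every condition must match.
        return frozenset(conditions.items())

    def add(self, conditions: dict, action: str) -> None:
        self._rules[self._key(conditions)] = action

    def retrieve(self, conditions: dict):
        # Exact-match lookup: a partial overlap of conditions returns None
        # rather than a near-miss rule open to misinterpretation.
        return self._rules.get(self._key(conditions))


store = RuleStore()
store.add({"region": "EU", "priority": "high"}, "route_via_hub_A")

exact = store.retrieve({"priority": "high", "region": "EU"})  # all conditions match
partial = store.retrieve({"region": "EU"})                    # partial match -> None
```

The design choice worth noting is that a dictionary lookup either succeeds completely or fails cleanly, which is what makes the 0%-by-construction claim possible on the deterministic path; fuzzy or embedding-based retrieval would instead return a "closest" rule that the agent must interpret.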
Compositional Stacking through Semantic Tier Hierarchy
PRECEPT supports compositional stacking through a semantic tier hierarchy, enabling learned rules to be combined in a structured, reliable order rather than merged through free-form natural-language interpretation.
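One way tier-ordered stacking could work is sketched below. The tier names (`general`, `domain`, `task`) and the ordering scheme are illustrative assumptions, not PRECEPT's published schema.

```python
# Hypothetical tier hierarchy: lower index = broader scope, applied first,
# so the most specific tier is stacked last and can refine earlier tiers.
TIER_ORDER = {"general": 0, "domain": 1, "task": 2}

def stack_rules(matched):
    """Sort matched (tier, rule) pairs into hierarchy order for stacking."""
    return [rule for _, rule in sorted(
        (TIER_ORDER[tier], rule) for tier, rule in matched)]


matched = [
    ("task", "prefer_express_courier"),
    ("general", "verify_address_format"),
    ("domain", "check_customs_forms"),
]
print(stack_rules(matched))
# → ['verify_address_format', 'check_customs_forms', 'prefer_express_courier']
```

Because each rule carries an explicit tier, stacking is deterministic: the same set of matched rules always composes in the same order, which is what makes multi-way compositions (such as the 2-way logistics compositions reported in the abstract) reproducible.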
Adaptation to Drift through Conflict-Aware Memory
PRECEPT's conflict-aware memory resolves disagreements between static and dynamically learned knowledge using Bayesian source reliability and threshold-based rule invalidation, enabling the framework to adapt when the environment drifts.
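A minimal sketch of Bayesian source reliability with threshold-based invalidation is shown below. The Beta-Bernoulli update, the Beta(1, 1) prior, and the 0.3 threshold are all assumptions for illustration; the paper's actual parameters are not given in this summary.

```python
class SourceReliability:
    """Hedged sketch: track each knowledge source's reliability as the
    posterior mean of a Beta-Bernoulli model, and invalidate rules from
    sources that fall below a fixed threshold."""

    def __init__(self, threshold: float = 0.3):
        # Beta(1, 1) prior: every source starts at 0.5 reliability.
        self.alpha = {}
        self.beta = {}
        self.threshold = threshold

    def update(self, source: str, success: bool) -> None:
        self.alpha.setdefault(source, 1.0)
        self.beta.setdefault(source, 1.0)
        if success:
            self.alpha[source] += 1.0   # observed confirmation
        else:
            self.beta[source] += 1.0    # observed contradiction

    def reliability(self, source: str) -> float:
        a = self.alpha.get(source, 1.0)
        b = self.beta.get(source, 1.0)
        return a / (a + b)              # posterior mean of Beta(a, b)

    def is_valid(self, source: str) -> bool:
        # Rules from a source are invalidated once its reliability drops
        # below the threshold, e.g. after drift or adversarial static
        # knowledge produces repeated contradictions.
        return self.reliability(source) >= self.threshold


tracker = SourceReliability()
for outcome in [False, False, False, False, False, True]:
    tracker.update("static_kb", outcome)
print(round(tracker.reliability("static_kb"), 3), tracker.is_valid("static_kb"))
# → 0.25 False
```

A soft, evidence-weighted score like this degrades a stale or adversarial source gradually rather than on a single contradiction, while the hard threshold still gives the deterministic cut-off that rule invalidation needs.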
Demerits
Complexity of the Framework
The PRECEPT framework consists of multiple tightly coupled components, which may complicate implementation, tuning, and maintenance compared with simpler reflection-based baselines.
Limited Evaluation on Adversarial Knowledge
While PRECEPT shows strong eventual robustness under adversarial static knowledge (full recovery on logistics, partial recovery on integration), the evaluation is limited to this specific scenario and may not generalize to other types of adversarial knowledge.
Expert Commentary
The PRECEPT framework presents a significant advancement in the field of LLM agents, addressing several limitations of existing methods. Its performance is impressive, with substantial improvements in first-try accuracy, compositional generalization, and drift adaptation. However, the complexity of the framework and the limited evaluation on adversarial knowledge are notable limitations, and further research is needed to fully explore PRECEPT's potential and address them. The implications are far-reaching, with potential applications in a wide range of fields, including natural language processing, computer vision, and robotics.
Recommendations
- ✓ Further research is needed to explore the potential of PRECEPT and to address its limitations.
- ✓ The development and deployment of PRECEPT-like frameworks may require updates to existing policies and regulations related to LLMs.