In-Context Planning with Latent Temporal Abstractions
arXiv:2602.18694v1 Announce Type: new Abstract: Planning-based reinforcement learning for continuous control is bottlenecked by two practical issues: planning at primitive time scales leads to prohibitive branching and long horizons, while real environments are frequently partially observable and exhibit regime shifts that invalidate stationary, fully observed dynamics assumptions. We introduce I-TAP (In-Context Latent Temporal-Abstraction Planner), an offline RL framework that unifies in-context adaptation with online planning in a learned discrete temporal-abstraction space. From offline trajectories, I-TAP learns an observation-conditioned residual-quantization VAE that compresses each observation-macro-action segment into a coarse-to-fine stack of discrete residual tokens, and a temporal Transformer that autoregressively predicts these token stacks from a short recent history. The resulting sequence model acts simultaneously as a context-conditioned prior over abstract actions and a latent dynamics model. At test time, I-TAP performs Monte Carlo Tree Search directly in token space, using short histories for implicit adaptation without gradient update, and decodes selected token stacks into executable actions. Across deterministic MuJoCo, stochastic MuJoCo with per-episode latent dynamics regimes, and high-dimensional Adroit manipulation, including partially observable variants, I-TAP consistently matches or outperforms strong model-free and model-based offline baselines, demonstrating efficient and robust in-context planning under stochastic dynamics and partial observability.
Executive Summary
This study introduces I-TAP, an offline reinforcement learning framework that addresses two major challenges in planning-based reinforcement learning for continuous control: the prohibitive branching and long horizons of planning at primitive time scales, and partial observability with regime shifts that break stationary, fully observed dynamics assumptions. I-TAP couples in-context adaptation with online planning in a learned discrete temporal-abstraction space: a residual-quantization VAE compresses observation-macro-action segments into coarse-to-fine stacks of discrete tokens, and a temporal Transformer over these tokens serves as both a context-conditioned prior over abstract actions and a latent dynamics model, enabling Monte Carlo Tree Search directly in token space. Evaluated on deterministic MuJoCo, stochastic MuJoCo with per-episode latent dynamics regimes, high-dimensional Adroit manipulation, and partially observable variants, I-TAP consistently matches or outperforms strong model-free and model-based offline baselines, demonstrating efficient and robust in-context planning under stochastic dynamics and partial observability.
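The residual-quantization step can be illustrated with a minimal sketch: each latent is quantized by a stack of codebooks, where every level encodes the residual left over by the previous one, yielding a coarse-to-fine token stack. The codebook sizes, number of levels, and greedy nearest-neighbor rule below are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 residual levels, 8 codes per level, 4-dim latents.
NUM_LEVELS, CODEBOOK_SIZE, DIM = 3, 8, 4
codebooks = rng.normal(size=(NUM_LEVELS, CODEBOOK_SIZE, DIM))

def rq_encode(z):
    """Quantize latent z into a coarse-to-fine stack of discrete tokens."""
    residual, tokens = z.copy(), []
    for level in range(NUM_LEVELS):
        # Pick the codebook entry nearest to the current residual.
        dists = np.linalg.norm(codebooks[level] - residual, axis=1)
        idx = int(np.argmin(dists))
        tokens.append(idx)
        residual = residual - codebooks[level][idx]
    return tokens

def rq_decode(tokens):
    """Sum the selected codebook vectors to reconstruct the latent."""
    return sum(codebooks[level][idx] for level, idx in enumerate(tokens))

z = rng.normal(size=DIM)
tokens = rq_encode(z)
z_hat = rq_decode(tokens)
```

In I-TAP the encoder and codebooks are learned jointly (conditioned on observations), but the coarse-to-fine structure is the same: early tokens carry the bulk of the macro-action, later tokens refine it.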
Key Points
- ▸ I-TAP unifies in-context adaptation with online planning in a learned discrete temporal-abstraction space, decoding selected token stacks into executable actions.
- ▸ A residual-quantization VAE and a temporal Transformer together yield a sequence model that acts as both a context-conditioned prior over abstract actions and a latent dynamics model, so Monte Carlo Tree Search runs directly in token space.
- ▸ Evaluation spans deterministic MuJoCo, stochastic MuJoCo with per-episode latent dynamics regimes, high-dimensional Adroit manipulation, and partially observable variants.
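Searching in token space can be sketched as standard PUCT-style tree search over the discrete macro-action vocabulary, with the Transformer prior guiding expansion and a learned model scoring leaves. The `prior` and `rollout_value` stubs below stand in for those learned components and are illustrative assumptions, not the paper's implementation.

```python
import math

NUM_TOKENS = 4      # hypothetical macro-action vocabulary size
HORIZON = 3         # search depth in abstract (macro-action) steps
SIMULATIONS = 200

def prior(history):
    """Stub for the Transformer prior p(token | history): uniform here."""
    return [1.0 / NUM_TOKENS] * NUM_TOKENS

def rollout_value(history):
    """Stub for a learned value model; toy objective favors large tokens."""
    return sum(history) / (NUM_TOKENS * max(len(history), 1))

class Node:
    def __init__(self, history):
        self.history = history
        self.children = {}           # token -> child Node
        self.visits = 0
        self.value_sum = 0.0
        self.priors = prior(history)

def select_token(node, c_puct=1.5):
    """PUCT rule: balance value estimate, prior, and visit counts."""
    def score(tok):
        child = node.children.get(tok)
        q = child.value_sum / child.visits if child and child.visits else 0.0
        n = child.visits if child else 0
        u = c_puct * node.priors[tok] * math.sqrt(node.visits + 1) / (1 + n)
        return q + u
    return max(range(NUM_TOKENS), key=score)

def simulate(root):
    path, node = [root], root
    while len(node.history) - len(root.history) < HORIZON:
        tok = select_token(node)
        if tok not in node.children:
            node.children[tok] = Node(node.history + [tok])
        node = node.children[tok]
        path.append(node)
    value = rollout_value(node.history)
    for n in path:                   # backpropagate the leaf value
        n.visits += 1
        n.value_sum += value

root = Node(history=[])
for _ in range(SIMULATIONS):
    simulate(root)

# Act with the most-visited first token, as in standard MCTS.
best_token = max(root.children, key=lambda t: root.children[t].visits)
```

Because each node is a short token history, the same history that conditions the prior also carries the implicit in-context adaptation signal; no gradient updates are needed at test time.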
Merits
Strength in Handling Partial Observability
I-TAP handles partial observability by conditioning its token-space prior and latent dynamics model on a short recent history, adapting in context without gradient updates; this is a significant strength for real-world settings where the full state is rarely observable.
Efficient Planning under Stochastic Dynamics
Planning over coarse-to-fine token stacks rather than primitive actions shortens effective horizons and reduces branching, and the results on stochastic MuJoCo with per-episode latent dynamics regimes indicate that this efficiency persists when dynamics are uncertain.
Demerits
Limited Evaluation on Complex Environments
The study's evaluation on complex environments, such as those with multiple regime shifts or highly nonlinear dynamics, is limited, and I-TAP's performance in such cases is unclear.
Potential Overfitting to Training Data
Like many offline reinforcement learning methods, I-TAP may be prone to overfitting to the training data, potentially leading to poor performance on unseen environments.
Expert Commentary
While I-TAP demonstrates impressive performance across the reported environments, its limitations, particularly the untested behavior in more complex environments and the risk of overfitting to the offline training data, must be weighed carefully. At the same time, the demonstration that short histories alone can drive test-time adaptation, with no gradient updates, has significant implications for the broader reinforcement learning community. Future work should address these limitations and probe I-TAP's applicability to more complex and uncertain environments.
Recommendations
- ✓ Future research should focus on evaluating I-TAP's performance on complex environments with multiple regime shifts or highly nonlinear dynamics.
- ✓ Developing methods to prevent overfitting to training data, such as data augmentation or regularization techniques, is essential for improving I-TAP's robustness and generalizability.