
Adaptive RAN Slicing Control via Reward-Free Self-Finetuning Agents

arXiv:2603.10564v1 — Abstract: The integration of Generative AI models into AI-native network systems offers a transformative path toward achieving autonomous and adaptive control. However, the application of such models to continuous control tasks is impeded by intrinsic architectural limitations, including finite context windows, the lack of explicit reward signals, and long-context degradation. This paper posits that the key to unlocking robust continuous control is enabling agents to internalize experience by distilling it into their parameters, rather than relying on prompt-based memory. To this end, we propose a novel self-finetuning framework that enables agentic systems to learn continuously through direct interaction with the environment, bypassing the need for handcrafted rewards. Our framework implements a bi-perspective reflection mechanism that generates autonomous linguistic feedback to construct preference datasets from interaction history. A subsequent preference-based fine-tuning process distills long-horizon experiences into the model's parameters. We evaluate our approach on a dynamic Radio Access Network (RAN) slicing task, a challenging multi-objective control problem that requires the resolution of acute trade-offs between spectrum efficiency, service quality, and reconfiguration stability under volatile network conditions. Experimental results show that our framework outperforms standard Reinforcement Learning (RL) baselines and existing Large Language Model (LLM)-based agents in sample efficiency, stability, and multi-metric optimization. These findings demonstrate the potential of self-improving generative agents for continuous control tasks, paving the way for future AI-native network infrastructure.
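To make the control problem concrete, the trade-off the abstract describes can be sketched as a scoring of one slice allocation step. All names, weights, and metric definitions below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of the multi-objective trade-off in RAN slicing:
# an allocation of physical resource blocks (PRBs) across slices is scored
# on spectrum efficiency, service quality, and reconfiguration stability.
# Metric definitions here are illustrative, not the paper's.

def slice_metrics(alloc, demand, prev_alloc, total_prbs=100):
    """Return (efficiency, quality, stability) for one allocation step."""
    used = sum(min(a, d) for a, d in zip(alloc, demand))
    efficiency = used / total_prbs  # fraction of PRBs carrying real demand
    # Service quality: how well the worst-served slice has its demand met.
    quality = min(min(a / d, 1.0) for a, d in zip(alloc, demand) if d > 0)
    # Stability: penalize how many PRBs were moved since the last step.
    churn = sum(abs(a - p) for a, p in zip(alloc, prev_alloc))
    stability = 1.0 - churn / (2 * total_prbs)
    return efficiency, quality, stability

# Example: three slices (e.g. eMBB, URLLC, mMTC) under shifting demand.
eff, qual, stab = slice_metrics(alloc=[50, 35, 15],
                                demand=[55, 35, 10],
                                prev_alloc=[60, 25, 15])
```

Any controller must juggle all three numbers at once: serving demand pushes churn up, while freezing the allocation lets quality drift as demand shifts.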

Executive Summary

This article presents a novel framework for adaptive RAN slicing control using reward-free self-finetuning agents, leveraging generative AI to enable continuous learning without explicit rewards. The framework introduces a bi-perspective reflection mechanism that distills interaction history into preference datasets, enabling autonomous fine-tuning of agent parameters. Evaluated on a complex, multi-objective RAN slicing task, the proposed method outperforms conventional RL baselines and LLM-driven agents in sample efficiency, stability, and multi-metric optimization. The work advances the discourse on AI-native network systems by demonstrating the viability of self-improving agents for continuous control, particularly in dynamic environments.

Key Points

  • Introduction of a reward-free self-finetuning framework
  • Utilization of a bi-perspective reflection mechanism for autonomous feedback generation
  • Demonstrated gains over RL baselines and LLM-based agents in sample efficiency, stability, and multi-metric optimization
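The bi-perspective reflection step above can be sketched as follows. The paper describes the mechanism only at a high level, so the two perspectives, the field names, and the pairing rule here are all hypothetical:

```python
# Illustrative sketch (all names hypothetical): turning interaction history
# into preference pairs via two reflection "perspectives", one judging
# outcomes and one judging the behavior itself.

def reflect_outcome(record):
    """Outcome perspective: did the step improve the tracked KPI?"""
    return record["kpi_after"] - record["kpi_before"]

def reflect_behavior(record):
    """Behavior perspective: penalize aggressive reconfiguration."""
    return -record["reconfig_magnitude"]

def build_preference_pairs(history):
    """Pair consecutive records; prefer the action with the higher
    combined reflection score. Ties are skipped."""
    pairs = []
    for a, b in zip(history, history[1:]):
        score_a = reflect_outcome(a) + reflect_behavior(a)
        score_b = reflect_outcome(b) + reflect_behavior(b)
        if score_a == score_b:
            continue
        chosen, rejected = (a, b) if score_a > score_b else (b, a)
        pairs.append({"prompt": chosen["state"],
                      "chosen": chosen["action"],
                      "rejected": rejected["action"]})
    return pairs

history = [
    {"state": "high eMBB load", "action": "shift 10 PRBs to eMBB",
     "kpi_before": 0.70, "kpi_after": 0.85, "reconfig_magnitude": 0.10},
    {"state": "high eMBB load", "action": "no change",
     "kpi_before": 0.70, "kpi_after": 0.65, "reconfig_magnitude": 0.0},
]
pairs = build_preference_pairs(history)
```

The point of the construction is that no numeric reward is handcrafted: the preference labels emerge from the agent's own reflections on logged interactions.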

Merits

Innovation

The framework introduces a novel method to distill long-horizon experiences into model parameters without reliance on handcrafted rewards, offering a scalable solution for continuous control.
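The "distillation into parameters" step is a preference-based fine-tuning pass over such pairs. The paper does not specify the objective, so a DPO-style loss is assumed here purely for illustration:

```python
import math

# Sketch of a DPO-style preference loss. The paper says only
# "preference-based fine-tuning"; DPO is an assumption on our part.
def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Negative log-sigmoid of the reward margin between the chosen and
    rejected responses, each measured relative to a frozen reference
    policy's log-probabilities."""
    margin = beta * ((logp_chosen - ref_chosen)
                     - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If the policy already favors the chosen response more strongly than the
# reference does, the margin is positive and the loss drops below log(2).
loss = dpo_loss(logp_chosen=-5.0, logp_rejected=-9.0,
                ref_chosen=-6.0, ref_rejected=-8.0)
```

Minimizing this loss nudges the policy's log-probabilities toward the reflected preferences, which is how long-horizon experience ends up in the weights rather than in the prompt.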

Demerits

Validation Scope

While promising, the evaluation is confined to a specific RAN slicing context; broader applicability across diverse network domains remains unverified.

Expert Commentary

The article represents a significant step forward in the evolution of AI-native network systems by proposing a self-sustaining learning mechanism that bypasses the constraints of traditional reward-based frameworks. The bi-perspective reflection model is particularly compelling: it transforms interaction data into structured preference datasets, enabling continuous adaptation without external intervention. The authors rightly identify finite context windows and the absence of explicit rewards as critical barriers, and their solution aligns with emerging trends in generative AI integration. However, the practical scalability of this approach warrants further scrutiny, particularly in heterogeneous networks or under extreme volatility. The reported performance gains are substantial, yet the long-term stability and generalizability of the fine-tuning process remain open questions. Overall, this work bridges a foundational gap between generative AI and operational autonomy in network control, and warrants careful consideration by both academic and industry stakeholders.

Recommendations

  1. Conduct longitudinal studies to assess the long-term stability and adaptability of the self-finetuning process across diverse network scenarios.
  2. Expand evaluation to heterogeneous multi-vendor RAN environments to validate generalizability beyond controlled experimental conditions.