Influencing LLM Multi-Agent Dialogue via Policy-Parameterized Prompts

Hongbo Bo, Jingyu Hu, Weiru Liu

arXiv:2603.09890v1 — Abstract: Large Language Models (LLMs) have emerged as a new paradigm for multi-agent systems. However, existing research on the behaviour of LLM-based multi-agents relies on ad hoc prompts and lacks a principled policy perspective. Different from reinforcement learning, we investigate whether prompt-as-action can be parameterized so as to construct a lightweight policy which consists of a sequence of state-action pairs to influence conversational behaviours without training. Our framework regards prompts as actions executed by LLMs, and dynamically constructs prompts through five components based on the current state of the agent. To test the effectiveness of parameterized control, we evaluated the dialogue flow based on five indicators: responsiveness, rebuttal, evidence usage, non-repetition, and stance shift. We conduct experiments using different LLM-driven agents in two discussion scenarios related to the general public and show that prompt parameterization can influence the dialogue dynamics. This result shows that policy-parameterised prompts offer a simple and effective mechanism to influence the dialogue process, which will help the research of multi-agent systems in the direction of social simulation.
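The abstract measures dialogue flow along five indicators, one of which is non-repetition. As an illustration only (the paper's actual metric definitions are not given here), one plausible way to operationalize non-repetition is lexical overlap between a new turn and the agent's prior turns:

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two utterances."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def non_repetition(turn: str, history: list[str]) -> float:
    """1.0 = entirely novel relative to prior turns; 0.0 = verbatim repeat.

    A toy proxy for the paper's non-repetition indicator, not its definition.
    """
    if not history:
        return 1.0
    return 1.0 - max(jaccard(turn, prev) for prev in history)
```

A repeated turn scores 0.0, while a turn sharing no tokens with the history scores 1.0; real evaluations would likely use embedding- or n-gram-based similarity instead.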

Executive Summary

This study presents a novel approach to influencing Large Language Model (LLM) multi-agent dialogue through policy-parameterized prompts. By treating prompts as actions executed by LLMs, the authors develop a framework that dynamically constructs prompts from five components based on the current state of the agent, and evaluate it in two discussion scenarios related to the general public. The results show that prompt parameterization can influence dialogue dynamics, highlighting the potential of policy-parameterized prompts in social simulation research. The findings contribute to the growing body of work on LLM-based multi-agent systems for social simulation.

Key Points

  • The study proposes a policy-parameterized prompt framework for influencing LLM multi-agent dialogue.
  • The framework treats prompts as actions executed by LLMs and dynamically constructs prompts based on the current state of the agent.
  • The results show that prompt parameterization can influence dialogue dynamics in two discussion scenarios related to the general public.
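The core idea, prompts as parameterized actions assembled from components conditioned on agent state, can be sketched as follows. All names here (the state fields, action flags, and component phrasing) are illustrative assumptions; the paper's five components are not named in this summary:

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Minimal, hypothetical representation of an agent's dialogue state."""
    stance: str
    last_opponent_turn: str
    evidence_pool: list[str] = field(default_factory=list)

def build_prompt(state: AgentState, action: dict) -> str:
    """Assemble a prompt (the 'action') from parameterized components.

    Each boolean in `action` toggles one prompt component, so a policy is
    just a sequence of (state, action) pairs — no model training needed.
    """
    parts = [
        f"You argue the position: {state.stance}.",
        f'Respond to: "{state.last_opponent_turn}"',
    ]
    if action.get("rebut"):
        parts.append("Directly rebut the previous claim.")
    if action.get("cite_evidence") and state.evidence_pool:
        parts.append(f"Support your point with: {state.evidence_pool[0]}")
    if action.get("allow_stance_shift"):
        parts.append("You may revise your stance if persuaded.")
    return "\n".join(parts)
```

A controller would update `AgentState` after each turn and pick the next action, yielding the lightweight, training-free policy the abstract describes.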

Merits

Strength

The study presents a novel approach to influencing LLM multi-agent dialogue, replacing ad hoc prompting with a principled policy perspective that is valuable for social simulation research.

Methodological rigor

The study employs a clear evaluation protocol, measuring dialogue flow along five indicators (responsiveness, rebuttal, evidence usage, non-repetition, and stance shift) across different LLM-driven agents in two discussion scenarios.

Demerits

Limitation

The study's findings may be specific to the two discussion scenarios used, limiting the generalizability of the results to other contexts.

Scalability

The framework's ability to handle more complex and dynamic dialogue scenarios remains unclear, which may limit its practical applications.

Expert Commentary

While the study makes a meaningful contribution to social simulation research, several limitations and directions for further work emerge. The findings may be specific to the two discussion scenarios tested, and it remains unclear how the framework scales to more complex, dynamic dialogues. The focus on policy-parameterized prompts may also overlook other factors shaping dialogue dynamics, such as contextual information and human judgment. Nevertheless, the training-free, prompt-as-action approach and its systematic evaluation demonstrate that policy-parameterized prompts offer a simple, practical lever for steering multi-agent dialogue.

Recommendations

  • Future studies should investigate the generalizability of the study's findings to other contexts and dialogue scenarios.
  • Researchers should explore the integration of policy-parameterized prompts with other influential factors, such as contextual information and human judgment, to develop more comprehensive models of dialogue dynamics.
