Academic

Do LLM-Driven Agents Exhibit Engagement Mechanisms? Controlled Tests of Information Load, Descriptive Norms, and Popularity Cues

arXiv:2603.20911v1 Announce Type: new Abstract: Large language models make agent-based simulation more behaviorally expressive, but they also sharpen a basic methodological tension: fluent, human-like output is not, by itself, evidence for theory. We evaluate what an LLM-driven simulation can credibly support using information engagement on social media as a test case. In a Weibo-like environment, we manipulate information load and descriptive norms, while allowing popularity cues (cumulative likes and Sina Weibo-style cumulative reshares) to evolve endogenously. We then ask whether simulated behavior changes in theoretically interpretable ways under these controlled variations, rather than merely producing plausible-looking traces. Engagement responds systematically to information load and descriptive norms, and sensitivity to popularity cues varies across contexts, indicating conditionality rather than rigid prompt compliance. We discuss methodological implications for simulation-based communication research, including multi-condition stress tests, explicit no-norm baselines because default prompts are not blank controls, and design choices that preserve endogenous feedback loops when studying bandwagon dynamics.

Executive Summary

This article presents a controlled test of large language model (LLM)-driven simulation, using information engagement on social media as the test case. The study manipulates information load and descriptive norms while allowing popularity cues to evolve endogenously. Engagement responds systematically to both manipulated factors, and sensitivity to popularity cues varies across contexts. The findings carry methodological implications for simulation-based communication research: multi-condition stress tests, explicit no-norm baselines (since default prompts are not blank controls), and design choices that preserve endogenous feedback loops. The study deepens our understanding of LLMs in agent-based simulation and underscores the importance of evaluating model behavior in theoretically interpretable terms.

Key Points

  • The study uses a Weibo-like environment to test the engagement mechanisms of LLM-driven agents.
  • The results indicate that engagement responds systematically to information load and descriptive norms.
  • Sensitivity to popularity cues varies across contexts, suggesting conditionality rather than rigid prompt compliance.
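The design described above can be sketched as a toy factorial experiment: two manipulated factors (information load and descriptive norm, including an explicit no-norm baseline) are crossed, while popularity cues accumulate endogenously from agent decisions instead of being scripted. Everything here is an illustrative assumption — the condition names, the probabilistic decision rule, and the parameter values are not from the paper, which uses LLM agents rather than a hand-coded policy.

```python
import random

# Hypothetical sketch of the factorial design: two manipulated factors
# crossed, with popularity cues (likes) updated endogenously by agent
# decisions. The toy decision rule stands in for an LLM agent's choice
# and is an illustrative assumption, not the authors' implementation.

LOADS = ["low", "high"]                    # manipulated: information load
NORMS = ["none", "engage", "abstain"]      # manipulated: descriptive norm
                                           # "none" = explicit no-norm baseline

def agent_decides(load, norm, likes, rng):
    """Toy stand-in for an agent's engage/skip choice."""
    p = 0.5
    p -= 0.2 if load == "high" else 0.0    # heavier load suppresses engagement
    p += {"none": 0.0, "engage": 0.2, "abstain": -0.2}[norm]
    p += min(likes, 50) / 500              # weak bandwagon pull from cues
    return rng.random() < p

def run_condition(load, norm, n_agents=200, seed=0):
    rng = random.Random(seed)
    likes = 0                              # endogenous: updated by agents,
    for _ in range(n_agents):              # never reset or scripted
        if agent_decides(load, norm, likes, rng):
            likes += 1                     # feedback loop preserved
    return likes

results = {(l, n): run_condition(l, n) for l in LOADS for n in NORMS}
for (load, norm), likes in sorted(results.items()):
    print(f"load={load:4s} norm={norm:7s} -> {likes} engagements")
```

The point of the sketch is structural, not numerical: because likes feed back into the decision rule, removing that loop (e.g., fixing `likes` per condition) would change what the experiment can say about bandwagon dynamics — which is the design choice the paper argues must be preserved.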

Merits

Methodological Rigor

The study employs a well-designed experiment with controlled variations and rigorous analysis, providing a high level of methodological rigor.

Theoretical Contributions

The study sheds light on the engagement mechanisms of LLM-driven agents and highlights the importance of evaluating model behavior in a theoretically interpretable manner.

Demerits

Limited Generalizability

The study's findings may be specific to the Weibo-like environment used in the experiment and may not generalize to other social media platforms or contexts.

Need for Further Research

The study suggests that further research is needed to fully understand the role of LLMs in agent-based simulation and to explore the implications of these findings for real-world applications.

Expert Commentary

This study makes a significant contribution to agent-based simulation and social media engagement research. The controlled experiment and rigorous analysis lend strong methodological credibility and illuminate the engagement mechanisms of LLM-driven agents. The findings matter for developers of LLM-driven simulation models and for policymakers and regulators seeking to understand and mitigate the spread of misinformation on social media. That said, the study acknowledges its limitations: further research is needed to fully understand the role of LLMs in agent-based simulation and to translate these findings into real-world applications.

Recommendations

  • Future research should explore the implications of the study's findings for real-world applications, including the development of LLM-driven simulation models and the regulation of social media platforms.
  • Researchers should consider more nuanced approaches to regulation that account for how information load, descriptive norms, and popularity cues jointly shape engagement behavior.

Sources

Original: arXiv - cs.AI