LLM-Agent-based Social Simulation for Attitude Diffusion
arXiv:2604.03898v1 Announce Type: new Abstract: This paper introduces discourse_simulator, an open-source framework that combines LLMs with agent-based modelling. It offers a new way to simulate how public attitudes toward immigration change over time in response to salient events like protests, controversies, or policy debates. Large language models (LLMs) are used to generate social media posts, interpret opinions, and model how ideas spread through social networks. Unlike traditional agent-based models that rely on fixed, rule-based opinion updates and cannot generate natural language or consider current events, this approach integrates multidimensional sociological belief structures and real-world event timelines. This framework is wrapped into an open-source Python package that integrates generative agents into a small-world network topology and a live news retrieval system. discourse_sim is purpose-built as a social science research instrument specifically for studying attitude dynamics, polarisation, and belief evolution following real-world critical events. Unlike other LLM agent-swarm frameworks, which treat simulation as a predictive black box, discourse_sim treats it as a theory-testing instrument, a fundamentally different epistemological stance for studying social science problems. The paper further demonstrates the framework by modelling the Dublin anti-immigration march on April 26, 2025, with N=100 agents over a 15-day simulation. Package link: https://pypi.org/project/discourse-sim/
Executive Summary
The paper presents *discourse_simulator*, a novel open-source framework integrating Large Language Models (LLMs) with agent-based modeling to simulate the diffusion of public attitudes toward immigration in response to real-world events. Unlike traditional agent-based models, which rely on static rules, this framework incorporates multidimensional sociological belief structures, real-time news retrieval, and natural language generation to simulate social media interactions within a small-world network. The authors demonstrate its utility by simulating a 15-day period surrounding an anti-immigration march in Dublin (April 26, 2025), with 100 agents. The framework is positioned as a theory-testing instrument rather than a predictive black box, offering a distinct epistemological approach for social science research. By bridging computational social science and generative AI, the paper advances methodological innovation in studying attitude polarization and belief evolution.
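The small-world topology mentioned above can be illustrated with a minimal sketch. This is not the package's actual construction; the neighbourhood size `k` and rewiring probability `p` below are assumptions, with only the agent count (N=100) taken from the paper. A Watts-Strogatz construction, the standard recipe for small-world graphs, can be built with the standard library alone:

```python
import random

def watts_strogatz(n: int, k: int, p: float, seed: int = 0) -> dict:
    """Build a small-world graph: ring lattice with k neighbours per node,
    then rewire each lattice edge with probability p (Watts-Strogatz)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    # Ring lattice: connect each node to its k/2 nearest neighbours per side.
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    # Rewire: replace each forward lattice edge (i, i+j) with a random
    # long-range edge with probability p, preserving the edge count.
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old = (i + j) % n
                candidates = [x for x in range(n) if x != i and x not in adj[i]]
                if candidates and old in adj[i]:
                    new = rng.choice(candidates)
                    adj[i].discard(old); adj[old].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

# N=100 agents as in the paper; k=6 and p=0.1 are assumed parameters.
network = watts_strogatz(n=100, k=6, p=0.1)
edges = sum(len(neigh) for neigh in network.values()) // 2
print(f"{len(network)} agents, {edges} edges")
```

The rewiring step is what gives the graph its small-world character: mostly local clustering, plus a few long-range shortcuts that let an opinion expressed in one neighbourhood reach distant agents in a handful of hops.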
Key Points
- ▸ LLM-Agent Integration: Combines generative AI with agent-based modeling to simulate social attitude diffusion, enabling natural language generation and dynamic opinion updates.
- ▸ Real-World Event Integration: Incorporates live news retrieval and event timelines to model how salient events (e.g., protests, controversies) influence public opinion.
- ▸ Epistemological Shift: Positions the framework as a theory-testing tool rather than a predictive model, aligning with social science research goals.
- ▸ Open-Source Accessibility: Released as an open-source Python package (*discourse_sim*), enabling reproducibility and broader adoption in computational social science.
- ▸ Case Study Validation: Demonstrates utility through a simulation of a real-world anti-immigration march in Dublin, showcasing the framework’s capability to model polarization and attitude dynamics.
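The "multidimensional sociological belief structures" and "dynamic opinion updates" in the points above can be sketched as a toy model. Everything here is a hypothetical illustration rather than the discourse_sim API: the belief dimensions, the `Agent` class, `read_post`, `stub_llm_stance`, and the susceptibility and salience values are all assumptions, and the LLM that would generate and interpret posts is replaced by a stub.

```python
from dataclasses import dataclass
import random

DIMENSIONS = ("economic", "cultural", "security")  # assumed belief axes

@dataclass
class Agent:
    beliefs: dict                 # dimension -> stance in [-1, 1]
    susceptibility: float = 0.2   # assumed weight given to peer influence

    def read_post(self, post_stance: dict, event_salience: float) -> None:
        """Nudge each belief dimension toward a neighbour's expressed stance,
        scaled by susceptibility and the salience of the current event."""
        for dim in DIMENSIONS:
            delta = post_stance[dim] - self.beliefs[dim]
            self.beliefs[dim] += self.susceptibility * event_salience * delta
            self.beliefs[dim] = max(-1.0, min(1.0, self.beliefs[dim]))

def stub_llm_stance(agent: Agent) -> dict:
    """Stand-in for an LLM that writes a post which is then re-interpreted
    into a stance vector; here it simply echoes the agent's own beliefs."""
    return dict(agent.beliefs)

rng = random.Random(42)
agents = [Agent({d: rng.uniform(-1, 1) for d in DIMENSIONS}) for _ in range(5)]
salience = 0.8  # assumed salience of a critical event on this simulated day

# One interaction round on a toy line topology: each agent reads the
# post of the agent before it and updates its beliefs accordingly.
for author, reader in zip(agents, agents[1:]):
    reader.read_post(stub_llm_stance(author), salience)

print([round(a.beliefs["economic"], 2) for a in agents])
```

In the real framework the stance vector would come from an LLM generating a post and a second interpretation step mapping that text back onto the belief dimensions; the stub collapses those two steps so the update rule itself is visible.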
Merits
Methodological Innovation
The integration of LLMs with agent-based modeling represents a significant advancement over traditional rule-based systems, enabling more nuanced and dynamic simulations of social behavior.
Real-World Relevance
The framework’s incorporation of live news and event timelines ensures simulations are grounded in contemporary social contexts, enhancing their empirical validity.
Epistemological Clarity
By framing the framework as a theory-testing instrument, the authors address critiques of black-box AI models in social science, emphasizing transparency and interpretability.
Open-Source Democratization
The release of *discourse_sim* as an open-source tool lowers barriers to entry for researchers, fostering collaboration and accelerating progress in computational social science.
Demerits
Computational Complexity and Cost
The use of LLMs for agent-based simulations may incur significant computational costs, particularly for large-scale or long-duration simulations, potentially limiting accessibility for some researchers.
Validation Challenges
While the framework is demonstrated through a case study, broader validation across diverse social contexts and events is needed to assess generalizability and robustness.
Ethical and Bias Concerns
LLMs may inherit or amplify biases present in their training data, which could skew simulation outcomes and introduce ethical risks in modeling public attitudes.
Epistemological Assumptions
The claim that the framework is fundamentally a theory-testing instrument, rather than a predictive model, may not fully address concerns about the reliability of LLM-generated outputs in social science research.
Expert Commentary
The *discourse_simulator* framework represents a paradigm shift in computational social science by merging the generative capabilities of LLMs with the structural rigor of agent-based modeling. Its most compelling contribution lies in its epistemological stance: rather than attempting to predict social phenomena with opaque black-box models, the authors position it as a theory-testing instrument. This aligns with the long-standing critique of AI in social science, where predictive accuracy often comes at the expense of interpretability and theoretical grounding. The integration of real-time news retrieval and multidimensional belief structures further enhances the framework’s fidelity to real-world dynamics. However, the reliance on LLMs introduces inherent risks, including bias propagation and computational inefficiency, which must be carefully managed. The open-source release is commendable and democratizes access, but the framework’s long-term impact will depend on its adoption by the social science community and rigorous validation across diverse contexts. If successful, *discourse_sim* could redefine how we study and understand the diffusion of attitudes in an era of generative AI.
Recommendations
- ✓ Conduct extensive validation studies across multiple social contexts and events to assess the framework’s robustness and generalizability beyond the Dublin case study.
- ✓ Develop robust bias mitigation strategies for LLM-generated outputs, including bias audits and the incorporation of adversarial debiasing techniques to ensure fair and accurate simulations.
- ✓ Expand the framework’s scalability by optimizing computational efficiency, such as through model distillation or parallelization, to accommodate larger agent populations and longer simulation durations.
- ✓ Establish interdisciplinary collaborations between computational social scientists, ethicists, and policymakers to explore the ethical implications of AI-driven social simulations and develop governance frameworks for their use.
Sources
Original: arXiv - cs.AI