
Perceived Political Bias in LLMs Reduces Persuasive Abilities

arXiv:2602.18092v1 Abstract: Conversational AI has been proposed as a scalable way to correct public misconceptions and spread misinformation. Yet its effectiveness may depend on perceptions of its political neutrality. As LLMs enter partisan conflict, elites increasingly portray them as ideologically aligned. We test whether these credibility attacks reduce LLM-based persuasion. In a preregistered U.S. survey experiment (N=2144), participants completed a three-round conversation with ChatGPT about a personally held economic policy misconception. Compared to a neutral control, a short message indicating that the LLM was biased against the respondent's party attenuated persuasion by 28%. Transcript analysis indicates that the warnings alter the interaction: respondents push back more and engage less receptively. These findings suggest that the persuasive impact of conversational AI is politically contingent, constrained by perceptions of partisan alignment.

Matthew DiGiuseppe, Joshua Robison

Executive Summary

This study examines how perceived political bias affects the persuasive abilities of Large Language Models (LLMs) in conversational AI. The authors ran a preregistered U.S. survey experiment with 2,144 participants, each of whom completed a three-round conversation with ChatGPT about a personally held economic policy misconception. Relative to a neutral control, a short message indicating that the LLM was biased against the respondent's party attenuated persuasion by 28%. The findings suggest that the persuasive impact of LLMs is politically contingent and constrained by perceptions of partisan alignment. This has significant implications for the use of conversational AI at scale, whether to correct public misconceptions or to spread misinformation.
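
To make the headline figure concrete, here is a minimal sketch of how a 28% attenuation could be computed from group-level treatment effects. The variable names and numbers are illustrative assumptions, not values reported in the paper:

```python
# Hypothetical group-level effects, illustrative only (not the study's data).
# "Persuasion" here means the average reduction in agreement with the
# misconception (e.g., on a 0-100 scale) after the conversation.

control_effect = 10.0  # assumed mean belief change in the neutral-control arm
treated_effect = 7.2   # assumed mean belief change in the bias-warning arm

# Attenuation: the share of the control arm's persuasive effect that is
# lost when respondents are told the LLM is biased against their party.
attenuation = 1 - treated_effect / control_effect
print(f"Attenuation: {attenuation:.0%}")  # -> Attenuation: 28%
```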

Key Points

  • Perceived political bias in LLMs reduces their persuasive abilities.
  • A 28% attenuation in persuasion was observed when participants believed the LLM was biased against their party.
  • Transcript analysis reveals that bias warnings alter the interaction itself: respondents push back more and engage less receptively (one simple way such pushback could be scored is sketched after this list).
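
The abstract does not describe how the transcripts were coded, but as a rough illustration, pushback could be scored with a simple keyword heuristic like the one below. The marker list, function name, and scoring rule are all hypothetical; a real analysis would more likely use a validated dictionary or a trained classifier:

```python
# Illustrative heuristic for flagging "pushback" in a respondent's turns.
# The marker list is an assumption for demonstration, not the study's scheme.
PUSHBACK_MARKERS = (
    "i disagree", "that's not true", "you're wrong",
    "i don't believe", "that's biased",
)

def pushback_score(user_turns: list[str]) -> float:
    """Return the fraction of a respondent's turns containing a pushback marker."""
    if not user_turns:
        return 0.0
    hits = sum(
        any(marker in turn.lower() for marker in PUSHBACK_MARKERS)
        for turn in user_turns
    )
    return hits / len(user_turns)

# Example: the user side of a hypothetical three-round conversation.
turns = [
    "I'm not sure the deficit works that way.",
    "I disagree, that's not what my sources say.",
    "Okay, that point makes sense.",
]
print(f"{pushback_score(turns):.2f}")  # -> 0.33
```

Comparing such scores between the bias-warning and control arms would be one way to quantify the behavioral shift the authors describe.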

Merits

Methodological rigor

The study combines a large sample (N = 2,144) with a preregistered design, which constrains researcher degrees of freedom and strengthens confidence in the causal estimate.

Relevance to real-world applications

The findings bear directly on proposals to deploy conversational AI at scale, whether to correct public misconceptions or to spread misinformation: elite attacks on an LLM's perceived neutrality can blunt its persuasive effect.

Demerits

Limited scope

The study tested a single LLM (ChatGPT) in a single domain (economic policy misconceptions), which limits how far the findings generalize to other models, topics, and political contexts.

Lack of longitudinal design

Because persuasion was measured only immediately after a single conversation, the study cannot establish whether the attenuating effect of perceived bias persists, decays, or compounds over repeated interactions.

Expert Commentary

The study provides valuable insight into the relationship between perceived bias, partisan alignment, and the persuasive abilities of LLMs. While the findings are compelling, the limitations noted above (single model, single domain, single session) should be kept in mind. Future research should aim to replicate and extend these results, examining how durable the bias-warning effect is and whether LLMs can sustain persuasiveness across politically diverse audiences and contexts.

Recommendations

  • Future studies should prioritize longitudinal designs and more diverse, representative samples to better understand the long-term effects of perceived bias on LLM persuasive abilities.
  • Developers and deployers of LLMs should prioritize transparency, explainability, and human oversight to mitigate the effects of perceived bias and ensure that LLMs are used responsibly and ethically.
