Can AI Debias the News? LLM Interventions Improve Cross-Partisan Receptivity but LLMs Overestimate Their Own Effectiveness
Abstract: Partisan news media erode cross-partisan trust, but large language models (LLMs) offer a potential means of debiasing such content at scale. Across two pre-registered experiments, we tested whether LLM-generated debiasing of liberal news headlines could improve conservative readers’ trust-relevant judgments.
Study 1 found that subtle lexical debiasing (replacing emotive words with more moderate synonyms) had no effect on any outcome. Study 2 found that a more substantive reframing intervention significantly increased conservatives’ perceived trustworthiness, completeness, and willingness to engage with liberal news headlines, without producing a backfire effect among a sample of liberals.
In Study 1, the intervention produced robust effects among LLM-simulated silicon participants, whereas it had no impact on human readers. In Study 2, the intervention’s effects among silicon participants aligned directionally with human responses but were significantly larger in magnitude for some outcomes. Moderation analyses revealed that the model’s implicit theory of who responds to debiasing diverged from the psychological profile that actually predicted human responsiveness.
These findings demonstrate that LLM-based debiasing can improve cross-partisan receptivity when targeting ideological framing rather than surface-level language, but that current models lack both the quantitative accuracy and qualitative psychological fidelity to evaluate their own interventions without human oversight.