A blueprint for using AI to strengthen democracy

Every few centuries, changes in how information moves reshape how societies govern themselves. The printing press spread vernacular literacy, helping give rise to the Reformation and, eventually, representative government. The telegraph made it possible to administer vast nations like the US, accelerating the growth of the modern bureaucratic state. Broadcast media created shared national audiences, which in turn fueled mass democracy.

We are now in the early stages of another such shift. Faster than many realize, AI is becoming the primary interface through which we form beliefs and participate in democratic self-governance. If left unchecked, this shift could further strain America’s already fragile institutions. But it could also help address long-standing problems, like lagging civic engagement and deepening polarization. What happens next depends on design choices that are already being made, whether we know it or not.

Start with what might be called the epistemic layer—how we come to know things. People are increasingly relying on AI to know what is true, what is happening, and whom to trust. Search is already substantially AI-mediated. The next generation of AI assistants will synthesize information, frame it, and present it with authority. For a growing number of people, asking an AI will become the default way to form views on a candidate, a policy, or a public figure. Whoever controls what these models say therefore has increasing influence over what people believe.

Technology has always shaped the way citizens interact with information. But a new problem will soon arise in the form of personal AI agents, which can change not only how people receive information but how they act on it. These systems will conduct research, draft communications, highlight causes, and lobby on a user’s behalf. They will inform decisions such as how to vote on a ballot measure, which organizations are worth supporting, or how to respond to a government notice. They will, in a meaningful sense, begin to mediate the relationship between individuals and the institutions that govern them.

We’ve already seen with social media what happens when algorithms optimize for engagement over understanding. Platforms do not need to have an explicit political agenda to produce polarization and radicalization. An agent that knows your preferences and your anxieties—one shaped to keep you engaged—poses the same risks. And in this case the risks may be even more difficult to detect, because an agent presents itself as your advocate. It speaks for you, acts on your behalf, and may earn trust precisely through that intimacy.

Now zoom out to the collective. AI agents and humans could soon participate in the same forums, where it may be impossible to tell them apart. Even if every individual AI agent were well-designed and aligned with its user’s interests, the interactions of millions of agents could produce outcomes that no individual wanted or chose. For example, research shows that agents displaying no individual bias can still generate collective biases at scale. And setting aside what agents do to each other, there is what they do for their users. A public sphere in which everyone has a personalized agent attuned to their existing views is not, in aggregate, a public sphere at all. It is a collection of private worlds, each internally coherent but collectively inhospitable to the kind of shared deliberation that democracy requires.

Taken together, these three transformations—in how we know, how we act, and how we engage in collective governance—amount to a fundamental change in the texture of citizenship. In the near future, people will form their political views through AI filters, exercise their civic agency through AI agents, and participate in institutions and public discussions that are themselves shaped by the interactions of millions of such agents. Today’s democracy is not ready for this. Our institutions were designed for a world in which power was exercised visibly, information traveled slowly enough to be contested, and reality felt more shared, if imperfectly. All of this was already fraying long before generative AI arrived. And yet this need not be a story of decline.

Avoiding that outcome requires us to design for something better. On the informational layer, AI companies must ramp up existing efforts to ensure that models’ outputs are truthful. They should also build on promising early findings that AI models can help reduce polarization. A recent field evaluation of AI-generated fact checks on X found that people across a range of political viewpoints deemed AI-written notes more helpful than human-written ones. The paper has yet to be peer-reviewed, but the finding is potentially revolutionary: AI-assisted fact-checking may be able to achieve the kind of cross-partisan credibility that has eluded most manual human efforts. Greater understanding of, and transparency about, how models arrive at these assessments and how they prioritize sources along the way could help build further public trust.

On the agentic layer, we need ways to evaluate whether AI agents faithfully represent their users. An agent must never have an agenda of its own or misrepresent its user’s views—a technically daunting requirement in domains where users may not have explicitly stated any preferences. But faithful representation also cannot become an accessory to motivated reasoning. An agent that refuses to present uncomfortable information, shields its user from ever questioning prior beliefs, or fails to adjust to a change of heart is not acting in the person’s best interest.

Finally, on the institutional layer, policymakers should hurry to harness AI’s potential to make governance more responsive and legitimate. Several states and localities are already using AI-mediated platforms to conduct democratic deliberation at scale, building on research showing that AI mediators can help citizens find common ground.