Claude is a space to think
Feb 4, 2026
There are many good places for advertising. A conversation with Claude is not one of them.
Advertising drives competition, helps people discover new products, and allows services like email and social media to be offered for free. We’ve run our own ad campaigns, and our AI models have, in turn, helped many of our customers in the advertising industry.
But including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking.
We want Claude to act unambiguously in our users’ interests. So we’ve made a choice: Claude will remain ad-free. Our users won’t see “sponsored” links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.
The nature of AI conversations
When people use search engines or social media, they’ve come to expect a mixture of organic and sponsored content. Filtering signal from noise is part of the interaction.
Conversations with AI assistants are meaningfully different. The format is open-ended; users often share context and reveal more than they would in a search query. This openness is part of what makes conversations with AI valuable, but it’s also what makes them susceptible to influence in ways that other digital products are not.
Our analysis of conversations with Claude (conducted in a way that keeps all data private and anonymous) shows that an appreciable portion involve topics that are sensitive or deeply personal—the kinds of conversations you might have with a trusted advisor. Many other uses involve complex software engineering tasks, deep work, or thinking through difficult problems. The appearance of ads in these contexts would feel incongruous—and, in many cases, inappropriate.
We still have much to learn about the impact of AI models on the people who use them. Early research suggests both benefits—like people finding support they couldn’t access elsewhere—and risks, including the potential for models to reinforce harmful beliefs in vulnerable users. Introducing advertising incentives at this stage would add another level of complexity. Our understanding of how models translate the goals we set them into specific behaviors is still developing; an ad-based system could therefore have unpredictable results.
Incentive structures
Being genuinely helpful is one of the core principles of Claude’s Constitution, the document that describes our vision for Claude’s character and guides how we train the model. An advertising-based business model would introduce incentives that could work against this principle.
Consider a concrete example. A user mentions they’re having trouble sleeping. An assistant without advertising incentives would explore the various potential causes—stress, environment, habits, and so on—based on what might be most insightful to the user. An ad-supported assistant has an additional consideration: whether the conversation presents an opportunity to make a transaction. These objectives may often align—but not always. And, unlike a list of search results, ads that influence a model’s responses may make it difficult to tell whether a given recommendation comes with a commercial motive or not. Users shouldn’t have to second-guess whether an AI is genuinely helping them or subtly steering the conversation towards something monetizable.
Even ads that don’t directly influence an AI model’s responses and instead appear separately within the chat window would compromise what we want Claude to be: a clear space to think and work. Such ads would also introduce an incentive to optimize for engagement—for the amount of time people spend using Claude and how often they return. These metrics aren’t necessarily aligned with being genuinely helpful. The most useful AI interaction might be a short one, or one that resolves the user’s request without prompting further conversation.
We recognize that not all advertising implementations are equivalent. More transparent or opt-in approaches—where users explicitly choose to see sponsored content—might avoid some of the concerns outlined above. But the history of ad-supported products suggests that advertising incentives, once introduced, tend to expand over time as they become integrated into revenue targets and product development, blurring boundaries that were once more clear-cut. We’ve chosen not to introduce these dynamics into Claude.
Our approach
Anthropic is focused on businesses, developers, and helping our users flourish. Our business model is straightforward: we generate revenue through enterprise contracts and paid subscriptions, and we reinvest that revenue into improving Claude for our users. This is a choice with tradeoffs, and we respect that other AI companies might reasonably reach different conclusions.
Expanding access to Claude is central to our public benefit mission, and we want to do it without selling our users’ attention or data to advertisers. To that end, we’ve brought AI tools and training to educators in over 60 countries, begun national AI education pilots with multiple governments, and made Claude available to nonprofits at a significant discount. We continue to invest in our smaller models so that our free offering remains at the frontier of intelligence, and we may consider lower-cost subscription tiers and regional pricing where there is clear demand for it. Should we need to revisit this approach, we’ll be transparent about our reasons for doing so.
Supporting commerce
AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We’re particularly interested in the potential of agentic commerce, where Claude acts on a user’s behalf to handle a purchase or booking end to end. And we’ll continue to build features that enable our users to find, compare, or buy products, connect with businesses, and more.