I Am Begging AI Companies to Stop Naming Features After Human Processes
Anthropic just announced a new feature called “dreaming” at the company’s developer conference in San Francisco. It’s part of the company’s recently launched AI agent infrastructure, designed to help users manage and deploy tools that automate software processes. The “dreaming” feature sorts through the transcripts of an agent’s recently completed tasks and attempts to glean insights that improve the agent’s performance.
Folks using AI agents often send them on multistep journeys, like visiting a few websites or reading multiple files, to complete online tasks. This new “dreaming” feature allows agents to look for patterns in their activity logs and improve their abilities based on those insights.
The feature’s name immediately calls to mind Philip K. Dick’s seminal sci-fi novel, Do Androids Dream of Electric Sheep?, which explores the qualities that truly separate humans from powerful machines. While our current generative AI tools come nowhere close to the machines in the book, I’m ready to draw the line right here, right now: No more generative AI features with names that rip off human cognitive processes.
“Together, memory and dreaming form a robust memory system for self-improving agents,” reads Anthropic’s blog post about the launch of this research preview for developers. “Memory lets each agent capture what it learns as it works. Dreaming refines that memory between sessions, pulling shared learnings across agents and keeping it up-to-date.”
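Based on that description, the loop might look something like the sketch below. To be clear, this is my own toy illustration, not Anthropic’s code or API; every name in it is invented, and the “refinement” step is reduced to simple deduplication where the real feature presumably leans on a model.

```python
# Purely illustrative sketch of the "memory + dreaming" idea described in
# Anthropic's blog post. None of these names come from Anthropic's actual
# API; the data structures and consolidation logic are assumptions.

from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    # "Memory": learnings an agent captures as it works
    notes: list[str] = field(default_factory=list)

    def capture(self, note: str) -> None:
        self.notes.append(note)

def dream(agents: list[AgentMemory]) -> list[str]:
    # "Dreaming": between sessions, pool every agent's notes, drop
    # duplicates, and return one refined shared list. A real system would
    # presumably summarize with a model; deduplication stands in here.
    seen: set[str] = set()
    shared: list[str] = []
    for agent in agents:
        for note in agent.notes:
            if note not in seen:
                seen.add(note)
                shared.append(note)
    return shared

# Two agents record overlapping learnings; "dreaming" merges them.
a, b = AgentMemory(), AgentMemory()
a.capture("site X requires login before search")
b.capture("site X requires login before search")
b.capture("CSV exports live under the Reports tab")
print(dream([a, b]))
```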
Since the spark of the chatbot revolution in 2022, leaders at AI companies have gone full tilt into naming aspects of generative AI tools after what goes on in the human brain. OpenAI released its first “reasoning” model in 2024, one that required “thinking” time before answering. The company described the release at the time as “a new series of AI models designed to spend more time thinking before they respond.” Numerous startups also refer to their chatbots as having “memories” about the user. Rather than the fast storage typically referred to as a computer’s “memory,” these are much more humanlike nuggets of information: He lives in San Francisco, enjoys afternoon baseball games, and hates eating cantaloupe.
It’s a consistent marketing approach used by AI leaders, who have continued to lean into branding that blurs the line between what humans do and what machines can do. Even the way these companies develop chatbots like Claude, with distinct “personalities,” can make users feel as if they are talking with something that has the potential for a deep inner life, something that might have dreams even when my laptop is closed.
At Anthropic, this anthropomorphizing runs deeper than just marketing strategies. “We also discuss Claude in terms normally reserved for humans (e.g., ‘virtue,’ ‘wisdom’),” reads a portion of Anthropic’s constitution describing how it wants Claude to behave. “We do this because we expect Claude’s reasoning to draw on human concepts by default, given the role of human text in Claude’s training; and we think encouraging Claude to embrace certain humanlike qualities may be actively desirable.” The company even employs a resident philosopher to try to make sense of the bot’s “values.”
And this isn’t just me being nitpicky about wording. How we talk about these machines shapes what we think they can achieve. “As a fallacy, anthropomorphism is shown to distort moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust,” reads a research paper published in the journal AI and Ethics. Without more distanced language about these bots, users risk overly trusting the tools and projecting qualities onto them that aren’t really there.
Much like our AI overlords need to spend more time actually watching the sci-fi movies they allude to, I think the powerful people leading these companies should spend more time reading these classic sci-fi novels as well.
Near the end of Dick’s book, the protagonist returns to his apartment with a rare toad he is convinced is a living animal, until his wife proves it’s just a machine by flipping open the control panel. “Crestfallen, he gazed mutely at the false animal; he took it back from her, fiddled with the legs as if baffled—he did not seem quite to understand,” reads a passage from the novel. Similarly, tech leaders seem to be unable, or at least unwilling, to accept the limitations of their own inhuman tools.