Anthropic’s Claude Managed Agents can now “dream,” sort of
SAN FRANCISCO—At its Code with Claude developers’ conference, Anthropic has introduced what it calls “dreaming” to Claude Managed Agents. Dreaming, in this case, is a process of going over recent events and identifying specific things that are worth storing in “memory” to inform future tasks and interactions.
Dreaming is currently in research preview and limited to Managed Agents on the Claude Platform. Managed Agents, which Anthropic describes as a “pre-built, configurable agent harness that runs in managed infrastructure,” are a higher-level alternative to building directly on the Messages API. They’re intended for situations where you want multiple agents working on a task or project toward some end point over several minutes or hours.
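To make that shape concrete, here is a purely illustrative Python sketch of what configuring such a harness could look like. Every field and name below is hypothetical, inferred only from Anthropic’s description; it is not the actual Managed Agents API.

```python
from dataclasses import dataclass

@dataclass
class ManagedAgentConfig:
    # Hypothetical fields inferred from Anthropic's description;
    # this is not the real Managed Agents API.
    name: str
    model: str
    instructions: str
    tools: list[str]
    max_runtime_minutes: int  # long-running: several minutes to hours

config = ManagedAgentConfig(
    name="release-notes-writer",        # hypothetical agent
    model="claude-sonnet-4-5",          # placeholder model string
    instructions="Draft release notes from merged pull requests.",
    tools=["repo_read", "web_search"],  # hypothetical tool names
    max_runtime_minutes=120,
)
```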
Anthropic describes dreaming as a scheduled process in which sessions and memory stores are reviewed and specific memories are curated. This matters because LLM context windows are limited, and key information can be lost over lengthy projects. On the chat side of things, many models use a process called compaction, whereby lengthy conversations are periodically analyzed and the model attempts to strip irrelevant information from the context window while keeping what’s actually important for the ongoing conversation, project, or task.
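As a rough illustration of the compaction idea (not Anthropic’s actual implementation), the sketch below replaces older turns with a single summary once a conversation outgrows a token budget. The summarize() helper stands in for a model call, and every name here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    content: str

def estimate_tokens(messages: list[Message]) -> int:
    # Crude heuristic: roughly four characters per token.
    return sum(len(m.content) for m in messages) // 4

def summarize(messages: list[Message]) -> str:
    # Stand-in for a model call that distills what actually matters
    # for the ongoing conversation, project, or task.
    return f"Summary of {len(messages)} earlier turns: ..."

def compact(messages: list[Message], budget: int = 100_000,
            keep_recent: int = 10) -> list[Message]:
    """Replace older turns with one summary message once the
    conversation outgrows its context budget."""
    if estimate_tokens(messages) <= budget or len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    return [Message(role="user", content=summarize(older)), *recent]
```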
However, compaction as I described it is usually limited to a specific conversation with a single agent. “Dreaming” is a recurring process in which past sessions and memory stores can be analyzed across agents, with important patterns identified and saved to memory for the future. Users will be able to either let the process run automatically or review changes to memory directly.
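Here is a loose sketch of that loop, under the assumption that it runs as a scheduled job over a shared memory store. All of the names below (Session, MemoryStore, extract_patterns, dream) are hypothetical stand-ins, not Anthropic’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    agent_id: str
    transcript: str

@dataclass
class MemoryStore:
    entries: list[str] = field(default_factory=list)

def extract_patterns(sessions: list[Session]) -> list[str]:
    # Stand-in for a model pass that looks across agents for
    # recurring mistakes, converged workflows, and shared preferences.
    agents = {s.agent_id for s in sessions}
    return [f"pattern seen across {len(agents)} agents"]

def dream(sessions: list[Session], store: MemoryStore,
          auto_apply: bool = True) -> list[str]:
    """One scheduled 'dreaming' pass: curate cross-agent memories.

    In automatic mode, curated entries go straight into the store;
    otherwise they are returned for a human to review first.
    """
    candidates = extract_patterns(sessions)
    if auto_apply:
        store.entries.extend(candidates)
        return []
    return candidates
```

The auto_apply flag mirrors the choice Anthropic describes: an automatic process or direct review of memory changes.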
Says Anthropic: “Dreaming surfaces patterns that a single agent can’t see on its own, including recurring mistakes, workflows that agents converge on, and preferences shared across a team. It also restructures memory so it stays high-signal as it evolves. This is especially useful for long-running work and multiagent orchestration.” Dreaming is in research preview and not available to all developers; developers can request access.
Anthropic 表示:“做梦”能够浮现出单个智能体自身无法察觉的模式,包括重复出现的错误、智能体趋向的工作流程以及团队共享的偏好。它还会重构记忆,使其在演进过程中保持高价值信息。这对于长期运行的工作和多智能体编排尤为有用。“做梦”目前处于研究预览阶段,并未向所有开发者开放;开发者可以申请访问权限。
Anthropic additionally announced that two previously revealed research preview features—outcomes and multi-agent orchestration—have become more widely available. Further, Anthropic will double the five-hour usage limits for subscribers to its Pro and Max plans, a response to widespread user frustration as the company’s compute infrastructure has struggled to keep up with demand.