LLMs corrupt your documents when you delegate

Abstract: Large Language Models (LLMs) are poised to disrupt knowledge work, with delegated work emerging as a new interaction paradigm (e.g., vibe coding). Delegation requires trust: the expectation that the LLM will faithfully execute the task without introducing errors into documents.

We introduce DELEGATE-52 to study the readiness of AI systems in delegated workflows. DELEGATE-52 simulates long delegated workflows that require in-depth document editing across 52 professional domains, such as coding, crystallography, and music notation.

Our large-scale experiment with 19 LLMs reveals that current models degrade documents during delegation: even frontier models (Gemini 3.1 Pro, Claude 4.6 Opus, GPT 5.4) corrupt an average of 25% of document content by the end of long workflows, with other models failing more severely.
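The 25% figure implies a document-level corruption metric. The abstract does not specify the benchmark's actual scoring rule, so the following is only a minimal sketch of one plausible measure (the `corruption_fraction` helper is hypothetical): the fraction of the original document's lines that no longer survive intact after the delegated edits.

```python
import difflib

def corruption_fraction(reference: str, edited: str) -> float:
    """Hypothetical metric: fraction of reference lines that do not
    survive intact in the edited document. This is an illustrative
    assumption, not the benchmark's published scoring rule."""
    ref_lines = reference.splitlines()
    matcher = difflib.SequenceMatcher(a=ref_lines, b=edited.splitlines())
    # Sum the sizes of all maximal matching line blocks (the preserved content).
    preserved = sum(block.size for block in matcher.get_matching_blocks())
    return 1.0 - preserved / max(len(ref_lines), 1)

doc = "alpha\nbeta\ngamma\ndelta\n"
damaged = "alpha\nbeta\nGAMMA??\ndelta\n"
print(corruption_fraction(doc, damaged))  # → 0.25
```

A line-level diff is a coarse proxy; a real evaluation would likely need domain-aware checks (e.g., whether code still compiles or music notation still parses), but it conveys how "25% of document content corrupted" could be quantified.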

Additional experiments reveal that agentic tool use does not improve performance on DELEGATE-52, and that degradation is exacerbated by larger documents, longer interactions, and the presence of distractor files.

Our analysis shows that current LLMs are unreliable delegates: they introduce sparse but severe errors that silently corrupt documents, compounding over long interactions.
