AI Needs RNA, Not Just Weights

There is a creature at the bottom of the ocean that solves intelligence differently from every other animal on Earth. The octopus has no centralized command-and-control architecture. Two-thirds of its five hundred million neurons live not in its brain, but distributed across eight semi-autonomous arms — each capable of local decision-making, sensation, and response without a round trip to headquarters.

More remarkably, the octopus edits its own RNA in real time, reconfiguring the proteins that govern how its neurons fire depending on water temperature, prey, threat, and experience. It does not reboot. It does not retrain. It edits its expression of what it already knows.

We are building AI systems that share almost none of these properties. This article is not a claim that AI literally needs ribonucleic acid. It is a proposal for a biologically inspired architecture principle — one grounded in the gap between how living intelligence actually works and how our current AI systems are engineered. The argument is simple: we have built very good DNA. We have not yet built the RNA.

Part I — The Frozen Model Problem

A large language model is trained once. Over weeks or months, on hardware consuming megawatts of power, billions of parameters are adjusted by gradient descent until the model can predict the next token in a sequence with remarkable accuracy. Then training ends. The weights are frozen. The model is deployed. From that moment forward, the model is static.
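The train-then-freeze lifecycle described above can be sketched in a few lines. This is a toy illustration, not a real LLM: `TinyModel` and its two-parameter "network" are invented for the example.

```python
# Minimal sketch (illustrative only): parameters are adjusted during a
# training phase, then locked at deployment. Inference reads the weights
# but can never write them.

class TinyModel:
    def __init__(self):
        self.weights = [0.0, 0.0]   # stand-in for billions of parameters
        self.frozen = False

    def train_step(self, grad, lr=0.1):
        # Gradient descent: nudge each weight against its gradient.
        if self.frozen:
            raise RuntimeError("weights are frozen at deployment")
        self.weights = [w - lr * g for w, g in zip(self.weights, grad)]

    def freeze(self):
        # Deployment: from this point on, the model is static.
        self.frozen = True

    def predict(self, x):
        # Inference only reads the weights; it never modifies them.
        return sum(w * xi for w, xi in zip(self.weights, x))

model = TinyModel()
model.train_step([1.0, -2.0])          # training adjusts parameters
model.freeze()                         # training ends; weights locked
prediction = model.predict([1.0, 1.0]) # responds to new inputs, unchanged
```

Any further call to `train_step` now fails: the model can answer, but it cannot adapt.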

It can respond to new prompts, but it cannot truly adapt to them. Its knowledge is bounded by its training cutoff. Its personality is fixed by its alignment fine-tune. Its competencies are whatever emerged from the pre-training distribution. Asking a deployed LLM to learn something new is like asking a photograph to move. “We have created extraordinarily capable static systems and mistaken their fluency for adaptability.”

The workarounds we reach for reveal the depth of the problem. Context windows provide temporary, session-scoped information — but they are ephemeral. Once cleared, the model reverts entirely. Retrieval-augmented generation (RAG) pipes external knowledge into the prompt — but the model does not actually learn from it; it merely reads it. Fine-tuning provides genuine adaptation, but at costs measured in time, compute, and the constant risk of catastrophic forgetting: the phenomenon where adapting to new information overwrites prior knowledge in ways that cannot be predicted or controlled.
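The RAG workaround can be made concrete with a toy sketch. Everything here is illustrative: the keyword-overlap retriever is a stand-in for a real vector search, and the assembled prompt would be handed to a frozen model whose weights never change.

```python
# Hedged sketch of the RAG pattern: external knowledge is retrieved and
# pasted into the prompt string. The model reads it; it does not learn it.

def retrieve(query, documents, k=1):
    # Toy retriever: rank documents by word overlap with the query.
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:k]

def build_prompt(query, documents):
    context = "\n".join(retrieve(query, documents))
    # The retrieved text lives only in this prompt string; nothing about
    # the model itself is updated.
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Octopus RNA editing was described in Cell in 2017.",
    "Gradient descent adjusts parameters during training.",
]
prompt = build_prompt("When was octopus RNA editing described?", docs)
```

Clear the session and `prompt` is gone; the model reverts entirely, exactly as the paragraph above describes.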

Prompt engineering — the art of coaxing behavior through carefully structured inputs — is our most widely used adaptation mechanism. It is also the most revealing limitation. The fact that we have built an entire subdiscipline around phrasing instructions differently to get different behavior from a model that cannot actually change is a sign that something fundamental is missing from the architecture.

The Core Constraints

  • Frozen weights — parameters locked at deployment; no modification during inference
  • Ephemeral context — session memory evaporates; nothing persists across interactions
  • Expensive adaptation — fine-tuning requires significant compute and risks stability
  • Monolithic architecture — one model serves all tasks, contexts, and users identically
  • No runtime self-modification — the model cannot change itself in response to what it encounters

None of these constraints are fundamental laws of computation. They are engineering choices — choices shaped by what was tractable, measurable, and deployable at scale. But if we look at how biological intelligence solves the same problems, it becomes clear we may have optimized for the wrong layer of the stack.

Part II — What the Octopus Figured Out

The study that first drew widespread attention to octopus RNA editing was published in Cell in 2017. Researchers at the Marine Biological Laboratory found that the octopus, unlike virtually all other animals, edits the majority of its RNA transcripts — the working copies of genetic instructions used to build proteins. Where humans edit perhaps one or two percent of protein-coding transcripts, the octopus edits approximately sixty percent.

To understand why this matters, a brief detour into molecular biology is warranted.

DNA, RNA, and the Difference Between Blueprint and Production

DNA is the master blueprint of a living cell. It encodes the instructions for building every protein the organism will ever need. But DNA does not directly build proteins — it is transcribed into RNA first. RNA is the working copy: a temporary, single-stranded molecule that carries the genetic message from the nucleus to the ribosomes where proteins are assembled.
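The blueprint-versus-working-copy relationship can be shown with a toy transcription function. This is a deliberate simplification: real transcription is enzymatic, strand-specific, and heavily regulated, but the sketch captures the one fact the argument needs, that the copy is a distinct, temporary object.

```python
# Toy transcription step: DNA blueprint -> RNA working copy.
# In the RNA copy, thymine (T) is replaced by uracil (U); the rest of
# the message is carried over unchanged. The blueprint itself is never
# touched by this operation.

def transcribe(dna_coding_strand):
    return dna_coding_strand.replace("T", "U")

gene = "ATGGCT"               # blueprint: permanent, stays in the nucleus
transcript = transcribe(gene) # working copy: temporary, consumed downstream
```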

In most organisms, this process is relatively faithful. The RNA copy closely matches the DNA template. RNA editing changes this. Specific enzymes called ADARs (adenosine deaminases acting on RNA) can chemically alter individual nucleotides in the RNA transcript after it has been copied from the DNA but before it has been translated into protein. A single nucleotide change can alter which amino acid gets incorporated into the resulting protein — changing its shape, its electrical properties, its function.
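A toy sketch makes the recoding step concrete. ADAR deaminates adenosine (A) to inosine, which the translation machinery reads as guanosine (G); that is modeled here as a single A-to-G substitution, against a deliberately tiny subset of the standard codon table.

```python
# Toy model of ADAR-style A-to-I editing: one edited nucleotide in the
# transcript changes which amino acid the ribosome incorporates.

CODON_TABLE = {"AAA": "Lys", "AGA": "Arg", "AUG": "Met", "GUG": "Val"}

def adar_edit(rna, position):
    # ADAR acts on adenosine; the resulting inosine is read as G, so we
    # model the edit as A -> G at the given position.
    assert rna[position] == "A", "ADAR acts only on adenosine"
    return rna[:position] + "G" + rna[position + 1:]

def translate_codon(codon):
    return CODON_TABLE[codon]

codon = "AAA"                # encodes lysine
edited = adar_edit(codon, 1) # now "AGA", which encodes arginine
# The DNA behind this transcript is untouched; only the working copy changed.
```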

The DNA is untouched. The gene itself is unchanged. But the protein that gets built is different. The octopus edits the expression of what it already knows — without altering the underlying source code.

In the octopus, this mechanism is used to tune neural proteins in real time. As water temperature changes, the octopus edits RNA transcripts for ion channel proteins in its neurons — keeping its nervous system functional across a temperature range that would otherwise cause it to either seize or shut down. It is not evolving. It is not retraining. It is performing a targeted, reversible modification of its own neural hardware, at the molecular level, in response to its immediate environment. The octopus trades evolutionary flexibility for operational flexibility.
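Translated into software terms, the principle might look something like the sketch below: a frozen base of parameters plus a small, reversible "expression" layer that responds to the environment. `ExpressionLayer` and its temperature rule are invented for illustration, not a proposal for a specific API.

```python
# Hedged architectural sketch: keep the "DNA" (frozen base values)
# untouched, and adapt by applying targeted, reversible edits at runtime,
# the way the octopus tunes ion-channel transcripts to water temperature.

class ExpressionLayer:
    def __init__(self):
        self.base = {"gain": 1.0}  # frozen: never modified after deploy
        self.edits = {}            # runtime edits: targeted, reversible

    def sense(self, temperature_c):
        # Environment-driven edit: damp the effective gain in cold water.
        if temperature_c < 10:
            self.edits["gain"] = 0.5
        else:
            self.edits.pop("gain", None)  # reversal: drop the edit

    def effective(self, name):
        # Expression = base value modulated by any active edit.
        return self.base[name] * self.edits.get(name, 1.0)

layer = ExpressionLayer()
layer.sense(4)   # cold water: effective gain is edited down
layer.sense(20)  # warm water: the edit is reverted
# layer.base is unchanged throughout -- the "source code" stays intact.
```

The edit is cheap, local, and undoable; nothing about the base is retrained or overwritten, which is exactly the property the frozen-model workarounds lack.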