Elon Musk’s lawsuit is putting OpenAI’s safety record under the microscope

Elon Musk’s legal effort to dismantle OpenAI may hinge on whether its for-profit subsidiary advances or undermines the frontier lab’s founding mission of ensuring that humanity benefits from artificial general intelligence.

On Thursday, a federal court in Oakland, California, heard a former employee and a former board member testify that the company’s push to bring AI products to market compromised its commitment to AI safety. Rosie Campbell joined the company’s AGI readiness team in 2021, and she left OpenAI in 2024 after her team was disbanded. Another safety-focused team, the Superalignment team, was shut down around the same time.

“When I joined, it was very research-focused and common for people to talk about AGI and safety issues,” she testified. “Over time it became more like a product-focused organization.” Under cross-examination, Campbell acknowledged that significant funding was likely necessary for the lab’s goal of building AGI but said creating a super-intelligent computer model without the right safety measures in place wouldn’t fit with the mission of the organization she originally joined.

Campbell pointed to an incident in which Microsoft deployed a version of OpenAI’s GPT-4 model in India through its Bing search engine before the model had been evaluated by OpenAI’s Deployment Safety Board (DSB). The model itself did not present a huge risk, she said, but the company needed “to set strong precedents as the technology gets more powerful. We want to have good safety processes in place we know are being followed reliably.”

OpenAI’s attorneys also had Campbell admit that in her “speculative opinion,” OpenAI’s safety approach is superior to that at xAI, the AI company Musk founded, which was acquired by SpaceX earlier this year.

OpenAI releases evaluations of its models and shares a safety framework publicly, but the company declined to comment on its current approach to AGI alignment. Dylan Scandinaro, its current head of preparedness, was hired from Anthropic in February. Altman said the hire would let him “sleep better tonight.”

The deployment of GPT-4 in India, however, was one of the red flags that led OpenAI’s non-profit board to briefly fire CEO Sam Altman in 2023. The firing came after employees, including then-chief scientist Ilya Sutskever and then-CTO Mira Murati, complained about Altman’s conflict-averse management style.

Tasha McCauley, a member of the board at the time, testified about concerns that Altman was not forthcoming enough with the board for its unusual structure to function. McCauley also discussed a widely reported pattern of Altman misleading the board. Notably, Altman lied to another board member, claiming that McCauley intended to remove Helen Toner, a third board member who had published a white paper containing implied criticism of OpenAI’s safety policy.

Altman also failed to inform the board about the decision to launch ChatGPT publicly, and members were concerned about his failure to disclose potential conflicts of interest. “We are a non-profit board and our mandate was to be able to oversee the for-profit underneath us,” McCauley told the court. “Our primary way to do that was being called into question. We did not have a high degree of confidence at all to trust that the information being conveyed to us allowed us to make decisions in an informed way.”

However, the decision to boot Altman came at the same time as a tender offer to the company’s employees. McCauley said that when OpenAI’s staff began to side with Altman and Microsoft worked to restore the status quo, the board ultimately reversed course, and the members who had opposed Altman stepped down.

The apparent failure of the non-profit board to influence the for-profit organization goes directly to Musk’s case that OpenAI’s transformation from a research organization into one of the largest private companies in the world broke the implicit agreement among its founders.

David Schizer, a former dean of Columbia Law School who is being paid by Musk’s team to act as an expert witness, echoed McCauley’s concerns. “OpenAI has emphasized that a key part of its mission is safety and they are going to prioritize safety over profits,” Schizer said. “Part of that is taking safety rules seriously; if something needs to be subject to safety review, it needs to happen. What matters is the process issue.”

With AI already deeply embedded in for-profit companies, the issue goes far beyond a single lab. McCauley said the failures of internal governance at OpenAI should be a reason to embrace stronger government regulation of advanced AI: “[if] it all comes down to one CEO making those decisions, and we have the public good at stake, that’s very suboptimal.”