Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows
Using AI chatbots for even just 10 minutes may have a shockingly negative impact on people’s ability to think and solve problems, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA.
Researchers tasked people with solving various problems, including simple fractions and reading comprehension, through an online platform that paid them for their work. They conducted three experiments, each involving several hundred people. Some participants were given access to an AI assistant capable of solving the problem autonomously. When the AI helper was suddenly taken away, these people were significantly more likely to give up on the problem or flub their answers. The study suggests that widespread use of AI might boost productivity at the expense of developing foundational problem-solving skills.
“The takeaway is not that we should ban AI in education or workplaces,” says Michiel Bakker, an assistant professor at MIT involved with the study. “AI can clearly help people perform better in the moment, and that can be valuable. But we should be more careful about what kind of help AI provides, and when.”
I recently met up with Bakker, who has chaotic hair and a wide grin, on MIT’s campus. Originally from the Netherlands, he previously worked at Google DeepMind in London. He told me that a well-known essay on the way AI may disempower humans over time inspired him to think about how the technology could already be eroding people’s abilities. The essay makes for slightly bleak reading, because it suggests that disempowerment is inevitable. That said, perhaps figuring out how AI can help people develop their own mental capabilities should be part of how models are aligned with human values.
“It is fundamentally a cognitive question—about persistence, learning, and how people respond to difficulty,” Bakker tells me. “We wanted to take these broader concerns about long-term human-AI interaction and study them in a controlled experimental setting.”
The study’s findings seem particularly concerning, Bakker says, because a person’s willingness to persist with problem-solving is crucial to acquiring new skills and also predicts their capacity to learn over time.
Bakker says it may be necessary to rethink how AI tools work so that—like a good human teacher—models sometimes prioritize a person’s learning over solving a problem for them. “Systems that give direct answers may have very different long-term effects from systems that scaffold, coach, or challenge the user,” Bakker says. He admits, however, that balancing this kind of “paternalistic” approach could be tricky.
AI companies do already think about the more subtle effects that their models can have on users. The sycophancy of some models—or how likely they are to agree with and flatter users—is something that OpenAI has sought to tone down with newer releases of GPT.
Putting too much faith in AI seems especially problematic when the tools may not behave as you expect. Agentic AI systems are particularly unpredictable because they carry out complex chores independently and can introduce odd errors. It makes you wonder what Claude Code and Codex are doing to the skills of coders, who may sometimes need to fix the bugs those tools introduce.
I recently got a lesson myself in the danger of offloading critical thinking to AI. I’ve been using OpenClaw (with Codex inside) as a daily helper, and I’ve found it to be remarkably good at solving configuration issues on Linux. Recently, however, after my Wi-Fi connection kept dropping, my AI assistant suggested running a series of commands to tweak the driver talking to the Wi-Fi card. The result was a machine that refused to boot no matter what I did.
Perhaps, instead of simply trying to solve the problem for me, OpenClaw should have paused to teach me how to fix the issue for myself. I might have a more capable computer—and brain—as a result.
This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.