Hackers Hate AI Slop Even More Than You Do

The complaint sounds familiar. “I’m disappointed that you are working to incorporate AI garbage into the site,” one annoyed person, posting anonymously, said in an online message. “No-one is asking for this—we want you to improve the site, stop charging for new features.”

Only, this is not a regular internet user moaning about AI being forced into their favorite app. Instead, they are complaining about a cybercrime forum’s plans to introduce more generative AI. Like millions of others, scammers, grifters, and low-level hackers are getting annoyed about AI encroaching into their lives and the rise of low-quality AI slop being posted in their online communities.

“People don’t like it,” says Ben Collier, a security researcher and senior lecturer at the University of Edinburgh. As part of a recent study into how low-level cybercriminals are using AI, Collier and fellow researchers spotted an increasing pushback over the use of generative AI in underground cybercrime forums and hacking groups.

During the generative AI boom and hype cycles of the past couple of years, some people posting on hacking forums have moved from being positive about how AI can help hacking to a greater skepticism about the technology, according to the study, which also involved researchers from the University of Cambridge and the University of Strathclyde.

The researchers analyzed 97,895 AI-related conversations on cybercrime forums, spanning from the launch of ChatGPT in 2022 until the end of last year. They found complaints about people dumping “bullet-pointed explainers” of basic cybersecurity concepts, moaning about the number of low-quality posts, and concerns about Google’s AI search overviews driving down the number of visitors to the forums.

For decades cybercrime message boards and marketplaces, often Russian in origin, have allowed scammers to do business together. They are places where stolen data can be traded, hacking jobs are advertised, and fraudsters shitpost about their rivals. While scammers often try to scam each other, the forums also have a sense of community. For example, users build up reputations for being reliable, and forum owners hold writing competitions.

“These are essentially social spaces. They really hate other people using [AI] on the forums,” Collier says. He says the social dynamic of the groups can be messed up by potential cybercriminals trying to gain a better reputation by posting AI-generated hacking explainers. “I think a lot of them are a bit ambivalent about AI because it undermines their claim to be a skilled person.”

Posts reviewed by WIRED on Hack Forums, a self-styled space for those interested in talking about hacking and sharing techniques, show an irritation caused by people creating posts with AI. “I see a lot of members using AI for making their threads/posts and it pisses me off since they don’t even take the time to write a simple sentence or two,” one poster wrote. Another put it more bluntly: “Stop posting AI shit.”

In several instances, Collier says, users across multiple forums appear to be irritated by AI posts because they come to the forums to socialize. “If I wanted to talk to an AI chatbot, there are many websites for me to do so … I come here for human interaction,” one post cited in the research says.

Since ChatGPT emerged toward the end of 2022, there has been significant interest in AI-hacking capabilities and how the technology can transform online crime. Both sophisticated hackers and those less capable have been trying to use AI in their attacks. While some organized fraudsters have boosted their operations with ever-more realistic AI face-swapping technology and social engineering messages translated using AI, a lot of attention has been on generative AI’s capabilities to write malicious code and discover vulnerabilities.

“More sophisticated threat actors are aware of the shortfalls of commercial models that have guardrails, and they know ways to jailbreak those prompts,” says Ian Gray, vice president of intelligence at the security company Flashpoint, referring to the safety mechanisms put in place by OpenAI, Anthropic, and Google. “They’re also cautious of AI-generated projects in forums or marketplaces—there are weaknesses and vulnerabilities, sometimes exposing the underlying infrastructure,” Gray says.

Flashpoint has seen hackers recently talking about the potential capabilities of Claude Mythos Preview, Anthropic’s latest frontier AI model, which has thrown some in the cybersecurity industry into a panic. Some cybercriminals have also disparaged others for allegedly using AI in their hacking operations—“all they can do is use AI,” one group said, according to Flashpoint’s analysis.

Collier says that so far, among the lower-level cybercriminals that his study tracked—not sophisticated or nation-state-backed hackers—there have not been any obvious signs of “real disruption” caused by AI. “It has not significantly reduced the skill barrier to entry, nor has it led to serious disruptions to established business models or practices,” the study says. “Instead, its main impact has been on already highly automated areas such as SEO fraud, social media bots, and some forms of romance scam.”

Despite the frosty reception AI has received on cybercrime forums, others see potential. Some posters on Hack Forums have said they would perhaps welcome an AI assistant that would “help” them structure their posts and improve grammar, but they draw the line at an AI that can post for them entirely. “An AI generator for posts would turn this into a clanker forum of AI’s talking to each other,” one person wrote.

Meanwhile, Flashpoint researchers have spotted hackers discussing the idea of building an “AI-enhanced” cybercrime market, which was touted as a way to help people to buy stolen data and online accounts more quickly. Not everyone was on board. As one person wrote, “IT’S A STUPID FUCKING IDEA TO PUT AI INTO YOUR MARKET.”
