RIP social media. What comes next is messy.


Last fall, we featured an extensive interview with Petter Törnberg of the University of Amsterdam, who studies the underlying mechanisms of social media that give rise to its worst aspects: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. He wasn’t optimistic about social media’s future.

Törnberg’s research showed that, while numerous platform-level intervention strategies have been proposed to combat these issues, none are likely to be effective. And it’s not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we’re probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.

Törnberg has been very busy since then, producing two new papers and one new preprint building on this realization that social media is structured quite differently than the physical world, with unexpected downstream consequences. The first new paper, published in PLoS ONE, focused specifically on the echo chamber effect, using the same approach as before: combining standard agent-based modeling with large language models (LLMs)—essentially creating little AI personas to simulate online social media behavior. Those simulated users were randomly programmed to hold either an opinion or its opposite and then interact with randomly selected members of a simulated online community. And if the proportion of community members who disagreed with those simulated users exceeded a given threshold, those agents were programmed to leave and join a different online community.
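The dynamics described above can be sketched in a few dozen lines. This is a deliberately stripped-down toy, not the paper's model: it swaps the LLM personas for binary opinions, and every parameter (community count, feed sample size, disagreement threshold) is an illustrative assumption.

```python
import random

def simulate(n_agents=200, n_communities=4, sample_size=10,
             threshold=0.6, steps=2000, seed=0):
    """Toy version of the threshold-migration model: agents hold an
    opinion (+1) or its opposite (-1), sample peers from their home
    community, and relocate if too many of them disagree."""
    rng = random.Random(seed)
    opinions = [rng.choice([-1, 1]) for _ in range(n_agents)]
    homes = [rng.randrange(n_communities) for _ in range(n_agents)]

    for _ in range(steps):
        i = rng.randrange(n_agents)
        # Interact with randomly selected members of the agent's community
        peers = [j for j in range(n_agents)
                 if homes[j] == homes[i] and j != i]
        if len(peers) < sample_size:
            continue
        feed = rng.sample(peers, sample_size)
        disagree = sum(opinions[j] != opinions[i] for j in feed) / sample_size
        # Leave for a different community if disagreement exceeds threshold
        if disagree > threshold:
            homes[i] = rng.choice([c for c in range(n_communities)
                                   if c != homes[i]])
    return opinions, homes

def segregation(opinions, homes, n_communities=4):
    """Mean absolute opinion imbalance per community:
    0.0 = perfectly mixed, 1.0 = fully homogeneous."""
    scores = []
    for c in range(n_communities):
        members = [opinions[i] for i in range(len(homes)) if homes[i] == c]
        if members:
            scores.append(abs(sum(members)) / len(members))
    return sum(scores) / len(scores)
```

Run over enough steps, the relocation rule is the only mechanism in play—there is no recommendation algorithm at all—which is the point Törnberg makes below: segregation can emerge from the interaction structure alone.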

Filter bubbles: Not a culprit, but a cure

Consistent with last year’s results, echo chambers emerge naturally from the basic architecture of social media platforms. “One surprising finding is the fact that we get echo chambers even without any filter bubbles, even if people really love being in diverse spaces,” said Törnberg. “You don’t need an algorithmic nudge. You can still get these highly segregated spaces. The other surprising finding is that filter bubbles, which have been blamed for homogeneity, can be a cure.”

It doesn’t take much to destabilize or stabilize the system, Törnberg found. Even if the threshold for disagreement was quite low, disagreements were amplified to the point that each random interaction was increasingly likely to exceed the threshold. More and more users were pushed to relocate until what was once a community with a solid diversity of opinion rapidly became polarized and/or overly homogenous. Conversely, if just 10 percent of users in a given social media community largely agree with your stances, you will be more tolerant toward diverse opinions that contradict your own.

“There’s a certain chance that some users will end up in communities where it’s very homogenous and 99 percent of users are disagreeing with them,” said Törnberg. “That will cause them to leave, and you get this feedback effect just because of the structure of interaction. But if you have a filter bubble effect, where everyone is shown 10 percent of their own type, that creates a possibility for you to find the people who you agree with within the community. And that stabilizes the entire dynamics so it doesn’t tip over to one side or the other and become extreme or overly homogenous.”
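The stabilizing mechanism Törnberg describes—guaranteeing each user a small share of like-minded voices in their feed—can be sketched as a sampling rule. Everything here is illustrative (function names, the 10-user feed, the 10 percent floor), assuming binary opinions rather than the paper's LLM agents.

```python
import random

def sample_feed(i, opinions, members, rng, size=10, bubble=0.1):
    """Sample a feed of `size` peers for agent i, guaranteeing at least
    a `bubble` fraction of same-opinion peers when any exist. This is
    the 'everyone is shown 10 percent of their own type' rule."""
    same = [j for j in members if j != i and opinions[j] == opinions[i]]
    n_same = min(int(round(bubble * size)), len(same))
    feed = rng.sample(same, n_same)
    # Fill the rest of the feed from everyone else, bubble agents excluded
    pool = [j for j in members if j != i and j not in feed]
    feed += rng.sample(pool, min(size - n_same, len(pool)))
    return feed

def perceived_disagreement(i, opinions, members, rng, **kw):
    feed = sample_feed(i, opinions, members, rng, **kw)
    return sum(opinions[j] != opinions[i] for j in feed) / len(feed)
```

In a 5-versus-95 community, the bubble guarantees a minority agent at least one ally per 10-user feed, so perceived disagreement is capped at 90 percent and can never hit the “99 percent disagree” extreme that triggers an exodus—which is how a filter bubble can stabilize rather than segregate.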

Törnberg found some confirmation of those dynamics when he analyzed an actual online echo chamber: the subreddit r/MensRights. He found that members of the subreddit were more likely to leave if their posts diverged too far, linguistically, from the community’s center of gravity. “Who are the users leaving the community?” said Törnberg. “The users that are more ideologically distant are more likely to leave. So it captures the same mechanism of feedback dynamics, where the community becomes more homogenous and more extreme because users leave—[and they leave] because they feel it’s becoming too homogenous and extreme. Eventually it tips over to one direction. And of course, as the community becomes more extreme, there’s this boiling the frog effect where the users who stay are influenced by the community and become more extreme.”
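One simple way to operationalize “linguistic distance from the community’s center of gravity” is cosine distance between word-frequency vectors. The paper's actual measure may differ; this is only a hypothetical sketch of the general technique.

```python
import math
from collections import Counter

def freq_vector(texts):
    """Normalized bag-of-words frequencies over a list of posts."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine_distance(a, b):
    """1 - cosine similarity between two sparse frequency vectors."""
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

def divergence_from_community(user_posts, community_posts):
    """Distance between one user's language and the pooled community
    language—the 'center of gravity' in this toy formulation."""
    return cosine_distance(freq_vector(user_posts),
                           freq_vector(community_posts))
```

Under this kind of measure, a user whose vocabulary matches the pooled community scores near 0, while one with entirely disjoint vocabulary scores 1.0; the finding above corresponds to higher scorers being more likely to leave.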

In principle, it could be possible to exploit these feedback effects to preserve viewpoint diversity—but there are caveats. “Ultimately, it’s about changing the fundamental rules of what people are seeing and being mindful of the feedback effects that always play out in any complex system,” said Törnberg. “That being said, do I want to tell [Mark] Zuckerberg to implement more filter bubbles on Facebook? I think I’d want a little bit more evidence before going that far. But it does highlight that we need to have a little more humility when it comes to our design of these systems and what the downstream consequences are. We tend to maybe think one step ahead, but miss the fact that these are highly complex systems, full of feedback effects that often do the exact opposite of what you intend.”

The “botification” of social media

For his second new paper, published in the Journal of Quantitative Description: Digital Media (JQD:DM), Törnberg relied on nationally representative data from the 2020 and 2024 American National Election Studies surveys, covering US citizens from all 50 states and Washington, DC. The objective was to learn more about shifting trends in how people were using (or not using) social media across all platforms, demographics, and political affiliations. Törnberg found that visits and posting activity on Facebook, YouTube, and Twitter/X—what one might consider legacy social media platforms—showed marked declines.