
RIP social media. What comes next is messy.
Advanced computer models now show that the most divisive features of social media arise from its basic structure, not from algorithms or user behavior that could be adjusted. Researchers at the University of Amsterdam have tested this idea through repeated simulations that combine standard agent-based modeling with large language models acting as realistic online users. Their latest work confirms earlier conclusions while adding fresh detail on how echo chambers form and persist even when platforms try to intervene.
Core Design Creates Persistent Divides
The new studies focus on three recurring problems: tight partisan groups that rarely interact, influence concentrated among a small number of accounts, and the steady rise of extreme voices. These patterns appear in the simulations regardless of changes to feed order or content ranking rules. The models treat each user as an independent agent that responds to visible posts, likes, and replies in ways that mirror real platform data. Over thousands of simulated interactions, the same clusters and imbalances emerge every time.
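The dynamic described above, agents repeatedly reacting to visible posts until stable clusters appear, can be illustrated with a minimal toy model. This is not the researchers' actual code; the opinion scale, the steep homophily curve, and all parameter values are assumptions chosen only to show how clustering can emerge from local follow decisions.

```python
import random

def simulate_follows(n_agents=100, steps=4000, seed=0):
    """Minimal agent-based sketch of echo-chamber formation.

    Each agent holds a fixed opinion in [-1, 1]. At every step one agent
    sees a random other agent's post and follows the author with a
    probability that rises steeply with opinion similarity (homophily);
    an unconvincing exposure breaks any existing follow link. Returns
    the counts of within-camp and cross-camp follow links at the end.
    """
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n_agents)]
    follows = {i: set() for i in range(n_agents)}
    for _ in range(steps):
        a, b = rng.sample(range(n_agents), 2)
        similarity = 1 - abs(opinions[a] - opinions[b]) / 2  # in [0, 1]
        if rng.random() < similarity ** 3:  # steep homophily curve (assumed)
            follows[a].add(b)
        else:
            follows[a].discard(b)
    same = cross = 0
    for a, outs in follows.items():
        for b in outs:
            if (opinions[a] > 0) == (opinions[b] > 0):
                same += 1
            else:
                cross += 1
    return same, cross
```

Even in this stripped-down sketch, no global rule enforces segregation; the imbalance between within-camp and cross-camp links falls out of the local follow decisions alone, which is the kind of structural effect the studies describe.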
Earlier research had already ruled out simple explanations such as negativity bias or chronological versus algorithmic feeds. The new papers go further by showing that the same outcomes occur across different network sizes and topic areas. This consistency points to deeper rules about how attention flows and how connections form online.
Testing Interventions Through AI Personas
One paper, published in PLoS ONE, examined echo chambers specifically. Researchers created thousands of AI personas, each with its own set of beliefs and response patterns drawn from real social media studies. They then ran controlled experiments that introduced common fixes, such as showing users more diverse content or limiting the visibility of highly shared posts. None of these changes broke the clusters that formed early in the simulations.
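The experimental setup, running the same simulated population with and without a platform intervention and comparing outcomes, can be sketched as a small self-contained harness. The `diverse_share` knob, the opinion model, and all numbers here are hypothetical stand-ins for the paper's actual persona and intervention designs.

```python
import random

def echo_share(n=100, steps=4000, diverse_share=0.0, seed=1):
    """Fraction of follow links that stay inside one opinion camp.

    diverse_share is the probability that the simulated platform forces
    an exposure to come from the opposite camp ("diverse content"
    injection); 0.0 is the untreated baseline condition.
    """
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n)]
    follows = {i: set() for i in range(n)}
    for _ in range(steps):
        a = rng.randrange(n)
        if rng.random() < diverse_share:
            pool = [j for j in range(n)
                    if (opinions[j] > 0) != (opinions[a] > 0)]
        else:
            pool = [j for j in range(n) if j != a]
        if not pool:  # degenerate seed: everyone landed in one camp
            pool = [j for j in range(n) if j != a]
        b = rng.choice(pool)
        similarity = 1 - abs(opinions[a] - opinions[b]) / 2
        if rng.random() < similarity ** 3:  # homophily still decides follows
            follows[a].add(b)
        else:
            follows[a].discard(b)
    links = [(a, b) for a, outs in follows.items() for b in outs]
    if not links:
        return 0.0
    same = sum((opinions[a] > 0) == (opinions[b] > 0) for a, b in links)
    return same / len(links)
```

Comparing `echo_share()` against `echo_share(diverse_share=0.5)` is the A/B structure of such an experiment: the intervention changes what agents are shown, but the follow decision itself is untouched, which is why exposure-side fixes can leave the resulting clusters largely intact.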
A second paper and an accompanying preprint extend the same method to attention inequality and the amplification of extreme content. In each case, the structural features of the network, rather than any single rule, kept producing the unwanted results. The approach lets researchers observe long-term dynamics that would be difficult to track on live platforms without running into ethical or practical barriers.
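Attention inequality of the kind these papers measure can arise from a bare rich-get-richer feedback loop. The sketch below is an illustrative Pólya-urn-style toy, not the studies' model: engagement is assumed to flow to accounts in proportion to the attention they already have, and the result is summarized with a Gini coefficient.

```python
import random

def attention_gini(n_accounts=200, interactions=20000,
                   rich_get_richer=True, seed=2):
    """Toy rich-get-richer model of attention concentration.

    Each interaction, a simulated user engages with one account. With
    rich_get_richer=True the pick is weighted by attention already
    received (plus a starting count of 1 so new accounts are visible);
    with False the pick is uniform. Returns the Gini coefficient of the
    final attention distribution: 0 means perfectly equal, values near
    1 mean a few accounts dominate.
    """
    rng = random.Random(seed)
    attention = [1] * n_accounts
    for _ in range(interactions):
        if rich_get_richer:
            target = rng.choices(range(n_accounts), weights=attention, k=1)[0]
        else:
            target = rng.randrange(n_accounts)
        attention[target] += 1
    # Gini via the sorted-values formula: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    xs = sorted(attention)
    n, total = len(xs), sum(xs)
    cum = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * cum) / (n * total) - (n + 1) / n
```

The uniform-choice baseline stays close to equality, while the weighted version concentrates attention sharply; nothing about the agents differs between the two runs, only the structural feedback rule, which mirrors the papers' point that concentration need not come from any single platform rule.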
Comparison of Common Assumptions and Model Results
| Common Assumption | Simulation Outcome |
|---|---|
| Changing feed algorithms reduces polarization | Clusters form at similar rates under multiple feed rules |
| Users simply prefer negative or extreme content | Extreme voices rise even when neutral content is equally available |
| Platform rules can spread influence more evenly | A small group of agents still dominates attention in every run |
What Remains Unknown and What Comes Next
The models highlight clear limits to current platform strategies, yet they also leave open the possibility that entirely new architectures could alter the dynamics. No tested intervention has succeeded so far, but the researchers note that their simulations cover only a subset of possible designs. Real-world platforms add layers of moderation, advertising incentives, and user tools that the current models simplify.
Future work will need to explore whether changes at the level of how connections are formed, rather than how content is ranked, could produce different long-term patterns. Until such redesigns are tested at scale, the same feedback loops are expected to continue shaping online discussion.
Implications for Platforms and Users
These findings suggest that incremental tweaks will not resolve the core issues that have drawn criticism to social media. Policymakers and developers may need to consider more fundamental shifts in how networks are built and governed. At the same time, the research underscores that human tendencies alone do not explain the scale of division observed today; the environment in which those tendencies operate plays a decisive role.
Continued modeling work can help identify which changes are most likely to matter, but the path to healthier online spaces remains uncertain and will require sustained experimentation beyond existing platform frameworks.