Sensemaking for a plural world

Perspective Map

Algorithmic Recommendation and Radicalization: What Each Position Is Protecting

March 2026

In February 2018, a former YouTube software engineer named Guillaume Chaslot published a series of tweets documenting something he had helped build: the recommendation algorithm that selects which video plays next. Chaslot had left Google in 2013. He told reporters that while he was there, the algorithm had been explicitly optimized to maximize "watch time" — the total number of minutes a user spent on the platform. He said he had raised concerns internally about the content trajectories this optimization produced and that those concerns had been dismissed. After leaving, he built a tool called AlgoTransparency to audit what YouTube's algorithm was actually recommending at scale. What he found, and what Zeynep Tufekci would popularize in her March 2018 New York Times op-ed "YouTube, the Great Radicalizer," was a system that reliably pushed users from mainstream content toward progressively more extreme content — because extreme content, it turned out, kept people watching longer.

That account became one of the defining narratives of the platform era. It has also become one of the most contested. The empirical record on whether recommendation algorithms cause radicalization — and what "radicalization" even means — is genuinely uncertain, actively disputed, and entangled with interests on every side. The companies that build recommendation systems have financial reasons to minimize the harm claims. Researchers who study those systems have publication incentives to find significant effects. Regulators have political incentives to respond to moral panics. Critics of content moderation have ideological reasons to cast doubt on harm claims. None of this makes the empirical questions less important. It makes them harder to answer cleanly.

This map is not about social media's effects on democracy broadly — that's the terrain of the Social Media and Democracy map. It is not primarily about teen mental health — that's covered in Social Media and Teen Mental Health. This map is specifically about the recommendation engine debate: what happens when a machine decides what you see next, optimized not for your long-term wellbeing or the epistemic health of the polity but for the behavioral metric that makes the platform money. What are the different camps in this debate actually trying to protect? And what would it mean to take each of them seriously?

Some numbers first. YouTube reports that over 500 hours of video are uploaded to its platform every minute. The company has said that its recommendation algorithm drives roughly 70% of all watch time — meaning that the vast majority of what people watch on YouTube was not searched for but served to them by a machine. Facebook's News Feed, Instagram's Explore, TikTok's For You Page, and X's algorithmic timeline operate on analogous principles: a behavioral model of the user generates predictions about what content will produce engagement — likes, shares, comments, time-on-screen — and surfaces content accordingly. These are not passive pipes. They are active editorial systems that make hundreds of millions of decisions per day about whose ideas reach whom. The debate over algorithmic recommendation is, among other things, a debate about whether that editorial power carries editorial responsibility.
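
To make that mechanism concrete, here is a minimal sketch of engagement-ranked recommendation. The candidate features, the single scoring term, and the toy numbers are illustrative assumptions, not any platform's actual implementation; production systems combine thousands of signals with learned weights. What the sketch preserves is the property that matters for this debate: whichever item scores highest on predicted engagement gets surfaced, whatever that engagement consists of.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    p_click: float           # predicted probability the user clicks (assumed given)
    expected_minutes: float  # predicted watch time if clicked (assumed given)

def engagement_score(c: Candidate) -> float:
    """Illustrative objective: expected watch time contributed by this item.

    Real ranking systems combine many predicted signals (clicks, watch time,
    likes, shares) with learned weights; this single term is a stand-in.
    """
    return c.p_click * c.expected_minutes

def rank(candidates: list[Candidate], k: int = 10) -> list[Candidate]:
    """Surface the k candidates with the highest predicted engagement."""
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

# Toy usage: the item with the highest *predicted* engagement wins,
# regardless of what kind of content produces that engagement.
slate = rank([
    Candidate("calm-explainer", p_click=0.30, expected_minutes=6.0),
    Candidate("outrage-clip",   p_click=0.55, expected_minutes=9.0),
    Candidate("news-summary",   p_click=0.40, expected_minutes=4.0),
])
print([c.video_id for c in slate])
```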

What platform engineers and optimization defenders are protecting

The argument that recommendation algorithms are preference aggregators, not preference creators — that they reflect what users actually want rather than manufacturing desires they didn't have — and that the paternalism implicit in "radicalization" claims treats users as incapable of evaluating content for themselves. The core defense of recommendation-by-engagement is that engagement is a revealed preference: if people keep watching, they want to watch. A system that recommends content users find compelling is not manipulating them; it is serving them. The alternative — editorial curation by humans who decide what is good for users — reintroduces the gatekeeping that the internet was supposed to dismantle. Eli Pariser's "filter bubble" critique assumed that people's authentic preferences were different from what algorithms showed them; defenders of recommendation engines invert this: the filter bubble is the world before the algorithm, when elite gatekeepers decided what was worth seeing. What platform engineers are protecting, in this framing, is user autonomy and the democratization of information access.

The private governance argument: that platforms have the right — and in some readings the constitutional protection — to make editorial decisions about content amplification without government interference, and that demands for algorithmic transparency and mandatory ranking changes represent an unprecedented incursion of state power into private editorial decisions. The Supreme Court has not definitively resolved whether the First Amendment protects platforms' algorithmic choices, but the cases decided in 2024 (NetChoice v. Paxton, Moody v. NetChoice) indicated significant judicial skepticism about state laws that would require platforms to carry or promote content against their editorial judgment. What this position is protecting is a vision of the internet as a space of private editorial discretion — platforms as publishers, not utilities — whose curatorial choices should be as legally protected as a newspaper editor's decision about placement.

The technical complexity argument: that algorithmic recommendation systems are sufficiently intricate that the confident causal claims made about their effects vastly outrun the actual evidence, and that policy interventions based on uncertain science risk large unintended costs. YouTube made significant changes to its recommendation algorithm in 2019, reducing recommended content from what it called "borderline" channels — a category it defined internally. These changes reduced some measurable radicalization metrics while also, critics argued, reducing the visibility of legitimate heterodox political commentary. Whether the pre-2019 algorithm actually caused radicalization — or whether the changes worked as intended — remains empirically unclear. What this position is protecting is appropriate epistemic caution: the demand that policy interventions be proportionate to the evidence for the harm they claim to address.

What radicalization researchers are protecting

The empirical case that recommendation systems systematically direct users toward progressively more extreme content because extremity drives engagement — and that this is not a bug but a predictable consequence of optimizing for behavioral metrics without regard for content trajectories. The foundational academic study is Ribeiro et al.'s "Auditing Radicalization Pathways on YouTube" (2020), which analyzed 330,925 videos across 349 YouTube channels and found statistically significant pathways from mainstream conservative media to "alt-lite" channels (anti-feminist, anti-immigration) to "alt-right" channels. The paper documented not just that these channels existed but that the recommendation graph connected them: users watching mainstream conservative content were being served progressively more extreme alternatives. Chaslot's AlgoTransparency audits produced complementary evidence at scale, finding that YouTube's recommendation engine disproportionately surfaced a small number of high-engagement channels — many of which produced conspiratorial or extremist content — regardless of the content a user had been watching. What radicalization researchers are protecting is the evidentiary record: the claim that something was actually happening, not merely that critics were concerned something might be.
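
The auditing approach behind this evidence can be illustrated with a stylized sketch: follow recommendations outward from a starting channel and tally where the walks end up. The channels, edges, and category labels below are invented for illustration; the published audits crawl live recommendations from real channels and classify them by hand.

```python
import random
from collections import Counter

# Hypothetical recommendation graph: each channel maps to the channels its
# videos most often recommend. All names and edges are invented; a real
# audit builds this graph by crawling the live platform.
RECOMMENDS = {
    "mainstream_news": ["mainstream_news", "commentary_a", "commentary_b"],
    "commentary_a":    ["commentary_a", "commentary_b", "edgy_channel"],
    "commentary_b":    ["mainstream_news", "commentary_a", "edgy_channel"],
    "edgy_channel":    ["edgy_channel", "extreme_channel", "commentary_a"],
    "extreme_channel": ["extreme_channel", "edgy_channel"],
}

# Hand-assigned category labels, standing in for the manual channel
# classification the published studies perform.
CATEGORY = {
    "mainstream_news": "mainstream",
    "commentary_a":    "mainstream",
    "commentary_b":    "mainstream",
    "edgy_channel":    "borderline",
    "extreme_channel": "extreme",
}

def random_walk(start: str, steps: int) -> str:
    """Follow recommendations for `steps` hops and report the final category."""
    node = start
    for _ in range(steps):
        node = random.choice(RECOMMENDS[node])
    return CATEGORY[node]

def audit(start: str, steps: int = 5, walks: int = 10_000) -> Counter:
    """Estimate where walks beginning at `start` tend to land."""
    return Counter(random_walk(start, steps) for _ in range(walks))

print(audit("mainstream_news"))
```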

The accountability argument: that internal platform research has repeatedly documented harms that platforms chose not to act on — and that the gap between what platforms knew and what they disclosed is itself a harm that requires redress. Frances Haugen's 2021 disclosure of internal Facebook documents included research showing that Facebook's own integrity team had identified specific ways the News Feed's engagement-optimization was amplifying divisive and outrage-inducing content — and that recommendations to change the ranking algorithm had been rejected or deprioritized. An earlier internal presentation, from 2018 and later reported by the Wall Street Journal, had warned that without intervention the algorithm would serve users "more and more divisive content in an effort to gain user attention and increase time on the platform." Facebook disputed the characterization and framing of these documents. What the accountability argument is protecting is the principle that companies that understand the harms their systems cause — and document that understanding internally — cannot claim ignorance when those harms materialize. The issue is not merely what recommendation systems do to users. It is what companies knew and when, and what obligations follow.

The public health framing: that the question is not whether every individual user who watches extreme content becomes radicalized, but whether systems that expose millions of people to progressively extreme content produce population-level shifts in beliefs, behavior, and political violence — and whether "small effect sizes" in individual-level studies are large effects in aggregate. Max Fisher's The Chaos Machine (2022) marshals reporting from around the world — Germany, India, Ethiopia, Myanmar — documenting correlations between the spread of Facebook and increases in ethnic violence, disinformation, and mob attacks. The Sri Lanka anti-Muslim riots of 2018, the Indian lynching wave tied to WhatsApp forwards, the Rohingya genocide — in each case, the viral spread of inflammatory content on Facebook-owned platforms was implicated in real-world violence. Researchers who work in the public health tradition argue that population-level harms do not require that every individual be strongly affected; they require only that a large system push a large population slightly in a harmful direction. What they are protecting is the distinction between whether the algorithm caused your radicalization specifically and whether algorithmic systems made political violence more likely at scale.

What radicalization thesis skeptics are protecting

Methodological rigor: the argument that the most widely cited evidence for algorithmic radicalization is weaker than its reception implies, and that the policy consensus has outrun the science. In 2020, Mark Ledwich and Anna Zaitsev published "Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization," directly challenging the Ribeiro et al. findings. Their audit of YouTube recommendations found that the algorithm actually underrepresented alt-right channels relative to their presence in the platform ecosystem — that the recommendation system was, if anything, directing users away from the most extreme content rather than toward it. In 2023, a set of studies published in Nature and Science, conducted by researchers with access to Facebook's internal data (through an academic-platform collaboration), found that reducing the share of like-minded news in users' feeds had minimal effect on political polarization, and that exposure to cross-partisan content did not reliably reduce polarization either. These findings did not disprove that algorithmic amplification has ever caused harm. They did suggest that the mechanistic story — algorithm feeds you extreme content, you become radicalized — is considerably more complicated than the popular narrative allows. What skeptics are protecting is the norm that claims be proportionate to the evidence, that regulatory interventions be calibrated to the strength of the science, and that "we should do something" is not a sufficient evidentiary standard for restructuring the information architecture of the internet.

The concern that "radicalization" is a politically applied label — that the content moderation and de-recommendation regimes built to prevent radicalization have disproportionately targeted conservative, heterodox, and dissident voices under the framing of preventing extremism — and that the cure may be worse than the disease. When YouTube announced its 2019 changes to reduce "borderline" content recommendations, independent audits found that the changes reduced the visibility of conservative political commentary more than progressive commentary — including content from mainstream Republican politicians alongside genuinely extreme channels. Critics from the right — including Senator Ted Cruz, who has made platform bias a recurring legislative focus — argue that "radicalization prevention" is a cover for viewpoint discrimination. Critics from outside the traditional right make a different version of the argument: that the epistemic categories used to define "extreme" content encode a particular set of institutional assumptions about which ideas are acceptable, and that this encoding has been deployed against labor organizers, climate activists, and Palestinian rights advocates alongside actual neo-Nazis. These are not the same concern. But they share a structure: that the definitional power embedded in anti-radicalization systems is itself a form of ideological authority — and that who wields it matters enormously.

The media effects skepticism: the argument that people are not nearly as susceptible to algorithmic manipulation as the "rabbit hole" narrative implies — that prior beliefs, social environments, and real-world grievances drive political radicalization far more powerfully than recommendation engines — and that overemphasizing platform effects displaces attention from the structural conditions that actually produce extremism. Political scientists studying radicalization — Brendan Nyhan, Joshua Tucker, and others — have consistently found that the most politically engaged, most partisan users consume the most news and the most partisan content, but that this correlation reflects selection, not causation: people who are already politically activated seek out more extreme content. The radicalization thesis, in this reading, mistakes the messenger for the cause. Researchers who study far-right movements — including J.M. Berger, who has traced extremist recruitment patterns that long predate social media — note that radicalization in offline contexts follows pathways very similar to online ones: grievance, identity threat, social belonging, incremental commitment. Algorithms may accelerate and scale these pathways. They may not be their origin. What this position protects is the demand to look upstream: at economic anxiety, status threat, political alienation — the conditions that make people susceptible to extremist appeals before any algorithm enters the picture.

What algorithmic accountability advocates are protecting

The right to understand and contest the systems that shape you — the argument that algorithmic recommendation is not a natural process but a designed one, and that users, researchers, and regulators are entitled to know how it works. The opacity of recommendation systems is a choice, not a technical inevitability. Platforms have made detailed public commitments about the values their algorithms embody — relevance, quality, safety — while keeping the actual implementation proprietary. This means users cannot verify whether the system works as described, cannot diagnose why they are being shown particular content, and cannot meaningfully contest recommendations they find harmful. Researchers cannot conduct independent audits without either scraping data (a practice platforms have threatened with legal action) or working through access arrangements that compromise independence. What accountability advocates are protecting is the epistemic infrastructure of democratic accountability: the principle that systems of this social consequence must be auditable by someone other than the companies that profit from them. The EU's Digital Services Act (2022) established limited audit rights for vetted researchers and significant transparency requirements for very large platforms — the first major legal framework to operationalize algorithmic accountability at scale.

User agency: the argument that individuals should be able to meaningfully shape the systems that shape them — to understand what signals drive recommendations, to opt out of particular optimizations, and to choose between different recommendation paradigms rather than accepting whichever one the platform has decided is default. Current platforms offer users limited choices: mute, block, "not interested" signals, and in some cases the option to use a chronological feed. But these are surface controls over a black box. Users cannot, for example, tell a platform to optimize for "content I'm likely to agree with" versus "content likely to change my mind" versus "content most popular with my demographic." The design of the recommendation environment is the platform's choice — not the user's — and it is made to serve the platform's business model. What user agency advocates are protecting is the principle that people who use these systems should have genuine control over the epistemic environment they inhabit — not the illusion of control that comes from clicking thumbs up or thumbs down.

What attention economy critics are protecting

The structural argument: that the problem is not bad algorithms but good ones — systems that are working exactly as designed, maximizing engagement by surfacing content that triggers strong emotional responses, and that the externalities this produces (outrage amplification, epistemic polarization, addictive use patterns) are not incidental side effects but predictable consequences of an optimization target that was always wrong. Tristan Harris, former design ethicist at Google and co-founder of the Center for Humane Technology, has argued for years that the core issue is not any particular piece of content but the business model that determines which content gets amplified. Advertising-supported platforms make money on attention. The most attention-capturing content tends to be emotionally activating: outrage, fear, novelty, moral disgust. These emotions are not incidentally produced by extreme content; they are the mechanism by which all high-engagement content operates. A recommendation system optimized for engagement will therefore structurally favor content that activates these responses — not because the engineers wanted radicalization but because radicalization-adjacent content happens to be highly engaging. What attention economy critics are protecting is the principle that you cannot solve this problem by moderating individual pieces of content or adjusting individual weights while leaving the underlying optimization intact. Shoshana Zuboff's The Age of Surveillance Capitalism (2019) provides the theoretical scaffolding: platforms do not just reflect human behavior; they modify it as a byproduct of predicting it. The product being sold to advertisers is not eyeballs. It is behavioral surplus — the predictive value extracted from observing what people do at scale. What this means for recommendation systems: a platform that can predict that outrage content keeps you watching has a financial incentive to serve outrage content. The problem is structural before it is algorithmic.

The labor argument: that platform recommendation systems extract economic value from creators while creating a competitive environment that rewards the content most likely to trigger engagement — generating perverse incentives for creators to produce increasingly sensational, divisive, or extreme material regardless of their own values or intentions. YouTube's Partner Program pays creators a share of the advertising revenue their videos generate, which scales with viewership. The recommendation algorithm determines which creators get views. This creates a feedback loop: creators who understand the algorithm produce content optimized for it; content optimized for engagement tends toward sensationalism; sensational content drives more recommendations; creators are rewarded for escalation. Guillaume Chaslot has described the result as an outrage machine in which individual creators are not choosing to be extreme but are responding rationally to an incentive structure they did not design and cannot escape. What this position is protecting is the recognition that both users and creators are responding to structures — not simply expressing preferences — and that holding individual actors responsible while leaving the structure intact produces accountability without change.
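
The feedback loop can be made concrete with a toy simulation. It assumes, purely for illustration, that more sensational content earns a higher engagement score and that creators gradually drift toward whatever the recommended cohort is doing; under those assumptions, average sensationalism climbs even though no individual creator sets out to escalate.

```python
import random

# Assumed relationship (an illustrative assumption, not a measured fact):
# more sensational content earns a higher engagement score on average.
def engagement(sensationalism: float) -> float:
    return sensationalism + random.gauss(0, 0.05)

def simulate(creators: int = 100, rounds: int = 30, imitation: float = 0.2) -> list[float]:
    """Track mean 'sensationalism' as creators imitate whoever gets recommended."""
    levels = [random.random() * 0.3 for _ in range(creators)]  # start mostly mild
    averages = []
    for _ in range(rounds):
        scores = [engagement(s) for s in levels]
        # The "algorithm" recommends the top-scoring decile of creators.
        top = sorted(range(creators), key=lambda i: scores[i], reverse=True)[:creators // 10]
        target = sum(levels[i] for i in top) / len(top)
        # Creators drift toward the style of the recommended cohort.
        levels = [s + imitation * (target - s) for s in levels]
        averages.append(sum(levels) / creators)
    return averages

trajectory = simulate()
print(f"mean sensationalism: round 1 = {trajectory[0]:.2f}, round 30 = {trajectory[-1]:.2f}")
```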

Structural tensions in this debate

  • The measurement problem. "Radicalization" has no settled definition in this literature. Some researchers define it as exposure to extreme content; others require attitude change; others require behavioral change (violence, organizational membership). Studies that find "radicalization effects" and studies that find "no effect" are often measuring different things. The policy debate has proceeded as if the concept were agreed when it is in fact one of the primary contested variables.
  • The counterfactual problem. The most important question in the radicalization debate — would people who were radicalized online have been radicalized through other means? — is also the hardest to answer. Algorithmic systems do not run in a vacuum; they operate on populations who have other information sources, social relationships, and pre-existing beliefs. The marginal contribution of the algorithm to any individual's radicalization trajectory is nearly impossible to isolate. Studies that find effects compare treatment to control conditions that often do not capture what users would actually do in the absence of the algorithm.
  • The internal research asymmetry. Platforms have access to observational data at a scale no outside researcher can match. The academic-platform collaborations that produced the 2023 Nature/Science studies gave researchers access to this data — but on terms set by the platforms, with restrictions on what could be studied and published. Internal research that documented harm has tended to reach the public through leaks (the Haugen documents) or one-off voluntary publications (Twitter's own amplification analysis) rather than through systematic disclosure. The information asymmetry between what platforms know and what researchers can learn means the empirical debate is fundamentally constrained by corporate decisions about what to share.
  • The partisan asymmetry problem. Multiple audits — including an analysis Twitter's own researchers published in 2021, which compared exposure under the algorithmic timeline with a reverse-chronological baseline (a method sketched after this list) — have found that algorithmic amplification on major platforms disproportionately favors right-wing political content. The proposed explanations vary: right-wing content may generate more engagement due to affective intensity; the platforms may have structural features that reward the style of communication common in right-wing media. This asymmetry creates a political valence to the radicalization debate that makes it difficult to reason about on purely epistemic grounds: critics of radicalization claims are sometimes protecting principled skepticism; they are sometimes protecting partisan advantage. These are not always distinguishable from the outside.
  • The platform change problem. YouTube in 2026 is not YouTube in 2018. Facebook today is not Facebook when the Haugen documents were written. TikTok's For You Page operates on different principles than the follow-graph-based feeds that dominated earlier platforms. Much of the research that established the radicalization narrative was conducted on systems that have since been substantially modified — which means that both "algorithms cause radicalization" and "algorithms don't cause radicalization" may be true in different periods and on different platforms. The evidence base is a moving target, and studies that lag platform changes by two to four years may be documenting systems that no longer exist.
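
The audit method mentioned in the partisan asymmetry item can be sketched simply: compare how much exposure a category of content receives under the ranked feed with its exposure under a reverse-chronological baseline, and treat the ratio as that category's amplification. The impression counts below are invented for illustration; published analyses of this kind work from millions of logged impressions across real accounts.

```python
from collections import defaultdict

# Hypothetical impression logs from a paired comparison: some users are held
# on a reverse-chronological feed (baseline) while others see the ranked,
# engagement-optimized feed. All numbers are invented.
impressions = [
    # (feed_type, content_category, impressions)
    ("chronological", "left_politics",  1_000),
    ("chronological", "right_politics", 1_000),
    ("algorithmic",   "left_politics",  1_400),
    ("algorithmic",   "right_politics", 2_100),
]

def amplification_ratios(rows):
    """Reach under the ranked feed divided by reach under the chronological
    baseline, per content category. A ratio above 1.0 means the ranking
    algorithm gives that category more exposure than recency alone would."""
    totals = defaultdict(lambda: {"chronological": 0, "algorithmic": 0})
    for feed, category, count in rows:
        totals[category][feed] += count
    return {
        category: t["algorithmic"] / t["chronological"]
        for category, t in totals.items()
        if t["chronological"] > 0
    }

print(amplification_ratios(impressions))  # e.g. {'left_politics': 1.4, 'right_politics': 2.1}
```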

Further Reading

  • Max Fisher, The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World (Little, Brown, 2022) — the most comprehensive journalistic account of how engagement optimization produced real-world harm at global scale; Fisher draws on years of reporting across multiple countries to document the feedback loop between platform design, content radicalization, and ethnic violence; the most detailed case for the structural argument that the problem is the optimization target, not individual pieces of content.
  • Zeynep Tufekci, "YouTube, the Great Radicalizer," New York Times, March 10, 2018 — the op-ed that gave the radicalization thesis its popular form; Tufekci observes that YouTube's recommendations seem to escalate regardless of the starting point — jogging videos toward ultramarathons, vegetarian videos toward veganism, political content toward its more extreme variants; the piece is historically important as the formulation that structured the subsequent decade of policy debate.
  • Manoel Horta Ribeiro et al., "Auditing Radicalization Pathways on YouTube," Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency (FAT* '20) — the academic foundation of the YouTube radicalization argument; documents recommendation pathways from mainstream conservative media through alt-lite to alt-right channels; also the study most directly challenged by Ledwich and Zaitsev and most important to read alongside its critics to understand where the empirical dispute actually lies.
  • Mark Ledwich and Anna Zaitsev, "Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization," First Monday 25, no. 3 (2020) — the primary counter-study to Ribeiro et al.; Ledwich and Zaitsev find that YouTube's algorithm actually underrepresents alt-right channels relative to their prevalence in the platform ecosystem; essential for anyone who wants to understand why the empirical debate is genuinely open rather than simply a case of denialism versus evidence.
  • Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019) — the theoretical foundation of the attention economy critique; Zuboff argues that the behavioral modification implicit in prediction-and-amplification systems represents a new economic logic that has no democratic legitimacy and no historical precedent; dense but the most thorough philosophical account of what is at stake in the platform design debate.
  • Frances Haugen, testimony before the Senate Commerce Committee, October 5, 2021 (transcript and documents available via the Senate Commerce Committee record) — the primary source for the accountability argument; Haugen's disclosure includes internal Facebook research on how engagement optimization amplified divisive content, internal discussion of the tradeoffs between integrity interventions and engagement metrics, and the structural reasons why decisions that platform leadership knew were harmful were repeatedly deprioritized; essential primary source material for anyone engaging the "what did they know" argument.
  • Brendan Nyhan et al., "Like-Minded Sources on Facebook Are Prevalent but Not Polarizing," Nature 620 (2023): 137–144 — one of the landmark 2023 studies conducted with Facebook data access; finds that reducing like-minded content in users' feeds had minimal effects on political attitudes and polarization measures; the most significant challenge to simple causal stories about feed curation and political beliefs; must be read alongside its companion papers and their critiques to understand what was and wasn't measured.
  • Kevin Roose, Rabbit Hole (New York Times audio series, 2020) — Roose's eight-part reporting project tracing how YouTube's recommendation system changed one person's political views; the most readable narrative account of the radicalization pathway at the individual level; valuable less as evidence of a population-level effect than as a detailed phenomenology of what the experience actually feels like from the inside.

See Also