Sensemaking for a plural world

Perspective Map

Social Media and Democracy: What Each Position Is Protecting

March 2026

A journalist who covered the Arab Spring in 2011 still remembers the specific moment she understood what Facebook and Twitter had made possible. Tunisian activists were coordinating across a country where state media controlled every other channel — sharing footage the state was suppressing, reaching each other, reaching the world. She watched something that had never existed before: a decentralized information network that state power couldn't simply shut down. She thought she was watching the future of freedom.

She's spent the years since watching with a different kind of attention. The same platforms that amplified civil society in Cairo spread the military disinformation campaign that preceded the Rohingya genocide in Myanmar. The same architecture that connected opposition voices in authoritarian regimes connected white nationalists in the United States and conspiracy theorists in Brazil. The same design logic that let dissidents reach global audiences let harassment campaigns reach specific people's home addresses. She hasn't changed her mind about what she saw in 2011. She has become much less sure it was the whole story.

A software engineer who has spent seven years building content moderation tooling at a major platform can tell you with exact numbers how many pieces of content are reviewed per day, how many are removed, and how many appeals are filed and won. She's attended every internal debate about transparency, safety, and political neutrality. What she's observed in practice: every moderation decision that gets described as "neutral" or "technical" is actually a judgment call. When her platform decided not to apply hate speech rules to politicians because politicians' speech is "newsworthy," that was a political decision. When it decided to down-rank "borderline content" that doesn't quite violate the rules, that was an editorial decision. She thinks the fiction of neutrality is the most dangerous thing about how her employer presents itself. She also thinks most of the proposed regulations would make things worse.

These two people are responding to something real. The social media and democracy debate has been flattened into a false binary — "free speech vs. censorship" — that obscures at least four distinct positions, each protecting something genuine, each diagnosing a different problem. Understanding why they keep talking past each other requires seeing what each is actually trying to preserve.

What platform libertarians protect

The people who most resist regulating social media — free speech absolutists, platform libertarians, many technologists, and a significant cross-partisan coalition that includes civil liberties advocates on the left and libertarians on the right — are protecting something concrete and historically grounded.

They're protecting the most democratically accessible public sphere in human history. Before social media, the barriers to reaching a large public audience were high and unequally distributed. You needed a broadcast license, a newspaper, a publishing deal, an institutional platform. Those resources were not randomly distributed. They skewed toward existing centers of power — toward capital, institutional credibility, geography, and connections. Social platforms collapsed those barriers. A factory worker in West Virginia can now reach millions with a phone. A nurse in Nigeria can now document what's happening in her hospital and reach global publics. A teenager in rural Kansas can find, for the first time, other people who share her specific experience of the world. This is genuinely new and genuinely good, and any regulatory intervention that reimposes barriers to entry — even in the name of safety — will not do so equally. The costs will fall disproportionately on the people and communities with the least existing institutional access.

They're protecting a track record about who gets silenced. The case for free speech isn't purely principled — it's empirical. Historically, the power to define acceptable speech has consistently been used against exactly the groups most in need of protection: civil rights organizers, labor activists, gay and lesbian communities, political minorities. The NAACP was called subversive. The early labor movement was called dangerous. Giving any authority — including private corporate authority — the power to determine what speech is amplifiable is a power that cannot be assumed to remain in friendly hands. The platform libertarian position is not naive about online harms; it is weighted by this history.

They're protecting an empirical challenge to the standard narrative. The claim that social media is the primary driver of political polarization is asserted with more confidence than the evidence supports. Research by Levi Boxell, Matthew Gentzkow, and Jesse Shapiro found that the sharpest rise in affective polarization in the United States occurred among the demographic groups with the lowest social media usage — people over 65 — while the rise was smallest among the heaviest users. If the algorithm were the primary cause of polarization, you'd expect the opposite pattern. The drive to regulate social media may be solving for the wrong variable.

What algorithmic accountability advocates protect

The people who demand transparency and accountability from platform recommendation systems — researchers, journalists, some former platform employees, and a growing body of technologists who've worked on these systems — are protecting honesty about a mechanism that operates invisibly.

They're protecting the distinction between what speech is permitted and how it is amplified. This is the move the free speech frame misses. The platforms are not neutral pipes. When Facebook, YouTube, and TikTok describe their function as "connecting people" or "giving everyone a voice," they are describing themselves in a way that obscures the actual mechanism. The platforms are making billions of daily decisions about what content to show, to whom, in what order, with what amplification, and with what effect on subsequent sharing. Those decisions are made by recommendation systems optimized primarily for engagement — for the behaviors that generate advertising revenue. And those behaviors are not evenly distributed across the content landscape. Content that triggers strong emotional responses — outrage, moral indignation, fear, tribal solidarity — generates more engagement than content that is accurate, nuanced, or boring. This is not a conspiracy; it is a documented systematic incentive.
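
To make that mechanism concrete, here is a deliberately minimal sketch of an engagement-optimized ranker. Everything in it is hypothetical: the field names, the weights, the toy predictions. Production systems involve vastly more machinery. But the structural point survives the simplification: the scoring function measures attention, the sort order is the editorial decision, and nothing in either rewards accuracy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # model estimate: probability of a click
    predicted_shares: float  # model estimate: probability of a share
    predicted_dwell: float   # model estimate: normalized expected view time, 0 to 1

def engagement_score(post: Post) -> float:
    # Hypothetical weights. Every term rewards attention;
    # nothing rewards accuracy, nuance, or epistemic value.
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_shares
            + 0.5 * post.predicted_dwell)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The embedded editorial decision: order everything,
    # for everyone, by predicted engagement.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", predicted_clicks=0.02,
         predicted_shares=0.01, predicted_dwell=0.6),
    Post("Outrage-bait headline", predicted_clicks=0.12,
         predicted_shares=0.09, predicted_dwell=0.4),
])
print([p.text for p in feed])  # the outrage post ranks first
```

What the weights reward is a choice made by people, and exposing that choice to scrutiny is precisely what the accountability advocates are asking for.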

They're protecting whistleblower findings that the platforms themselves confirmed. Frances Haugen, who worked on civic integrity at Facebook, released internal research documents in 2021 showing that Facebook's own studies found their 2018 algorithm change — designed to increase "meaningful social interactions" — systematically amplified divisive, emotional, tribal content. Internal researchers flagged this. They were overruled. The platform chose engagement over epistemic health. That was not a neutral technical decision. It was a choice, made by people, with consequences that Facebook's own data predicted.

They're protecting the specific people and communities most harmed by what the algorithm rewards. Research on YouTube's recommendation algorithm by Guillaume Chaslot — who worked on it and then raised concerns — documented systematic radicalization pathways: the algorithm recommended increasingly extreme content because extreme content generates more watch time. The harm from algorithmic amplification is not uniformly distributed across the political spectrum or across communities. Demanding algorithmic accountability is not a demand for censorship. It is a demand that platforms be honest about the editorial choices embedded in systems that shape what billions of people believe.

What epistemic commons advocates protect

There is a third position, often conflated with the algorithmic accountability position but distinct from it: people who are primarily worried not about amplification mechanics but about the erosion of the shared factual environment that democratic self-governance requires.

They're protecting the minimal shared information environment that collective decision-making depends on. Democracy, at its most basic, requires that people be able to have arguments that are ultimately resolvable — because they share enough of a common factual foundation to disagree productively. When the underlying facts become contested — when large portions of the population live in information environments where election fraud was documented, where vaccines cause autism, where climate change is a hoax, where a pandemic was staged — democratic deliberation doesn't just become harder. It becomes structurally impossible. You cannot reason together toward policy when you cannot agree on what counts as evidence.

They're protecting the ability of institutions to function at all. The epistemic commons argument is not paternalistic — it is not saying people are stupid and need to be protected from ideas. It is saying that social institutions are fragile, that they depend on shared ground-level agreement about what kinds of things count as evidence, and that an information ecosystem optimized for engagement-at-any-cost is corroding that shared ground in ways that have real political consequences. When a significant portion of the population cannot agree on whether an election was legitimate, the platforms hosting and amplifying content that claims otherwise are not being neutral. They are participants in a political outcome. The epistemic commons position refuses the "neutral pipe" fiction not because it wants censorship but because it wants honesty about what's actually happening.

They're protecting something that media literacy alone cannot fix. danah boyd's research on how media literacy interventions can backfire points to a counterintuitive problem: teaching people to "question everything" in an environment where authoritative sources are systematically discredited can increase susceptibility to misinformation. If you've been trained to distrust mainstream media, expert consensus, and institutional authority, and you live in an information ecosystem that rewards tribal epistemics, "think critically" functions as an invitation to trust your in-group and distrust everyone else. The epistemic commons advocates are pointing at a structural problem that individual-level solutions cannot address.

What structural critics protect

A fourth position — less prominent in mainstream debate but increasingly developed in academic and policy circles — challenges the underlying frame shared by all three other positions.

They're protecting democratic accountability over concentrated private power in the public sphere. The free speech vs. censorship debate assumes that the choice is between speech being allowed or suppressed. What this frame obscures is the question of who holds the power to make that choice and under what accountability structures they operate. The most consequential decisions about what billions of people see and believe — which ideas get amplified, which communities get protected, which get harassed — are currently made by executives at a handful of private companies, accountable primarily to shareholders. That this power is private doesn't make it neutral. It makes it less accountable.

They're protecting the recognition that the business model is load-bearing. Yochai Benkler, whose early work on the network information economy anticipated a democratization of communication, has since documented how "free speech online" produced instead a new concentration of communicative power in the hands of a few global platforms. Shoshana Zuboff's analysis of "surveillance capitalism" locates the problem in the advertising model itself: platforms make money by maximizing attention, attention is maximized by content that provokes strong emotional responses, and there is no content moderation intervention that changes this incentive structure without changing the business model. The structural critics are arguing that you cannot fix the information environment by tweaking what content is allowed if the underlying incentive is still: stay engaged, at any epistemic cost.

They're protecting the possibility of thinking about the information commons as a public good. Tim Wu's history of media consolidation documents a recurring pattern: new communication technologies emerge, promise decentralization and access, and then concentrate into oligopolies that serve advertiser and shareholder interests over public ones. Radio, television, the internet — each followed this arc. The structural critics are arguing that social media followed it too, and that the debate about what the platforms should be allowed to publish is happening entirely inside a frame the platforms set. What kinds of information infrastructure would be organized around democratic accountability rather than attention extraction? That's a question the free speech vs. censorship debate cannot ask.

Where the real disagreement lives

When you push all four positions, the deepest fault lines appear in three places that rarely get named directly.

What is the primary problem? Each position has a different diagnosis. For platform libertarians, the problem is government overreach and elite control over acceptable speech. For algorithmic accountability advocates, it's invisible editorial power exercised by profit-seeking systems. For epistemic commons advocates, it's the breakdown of shared information environments. For structural critics, it's concentrated private power over public discourse. These diagnoses are not mutually exclusive — but they point to different interventions. Accepting one as primary tends to make the others look like symptoms rather than causes. The debate often proceeds as if everyone agrees on the diagnosis and disagrees only on the cure. They don't.

Is platform neutrality possible, and is it desirable? The free speech position tends to treat neutrality as an achievable and desirable goal — if platforms would just stop intervening, the marketplace of ideas would function. Every other position disputes this. The platforms are already making billions of editorial decisions; the algorithm is not neutral, it simply isn't labeled as editorial. The real choice is not between neutral and non-neutral platforms. It is between platforms whose non-neutrality is invisible and unaccountable, and platforms whose non-neutrality is visible and subject to oversight. That reframing changes the argument significantly: you can be in favor of free speech and in favor of algorithmic accountability at the same time, because they're addressing different parts of the problem.

What is the counterfactual we're comparing to? The platform libertarian position assumes that the alternative to current platforms is a world with more government control and less free expression. The structural critics assume the alternative is a regulated public utility or a commons-based information infrastructure, and worry that reform won't work if the underlying incentive structure remains the same. Neither counterfactual is obviously correct. The policy debate often proceeds as if one of them is — which is why conversations across positions feel like they're about completely different problems.

What sensemaking surfaces

The platform libertarian position is right that the democratization of communicative access is a genuine and non-trivial achievement. Before social media, the barriers to public expression were high and distributed along existing lines of power. The collapse of those barriers has enabled organizing, documentation, solidarity, and voice for communities that previously had none. Any intervention that reimposes barriers — even in the name of safety — will not do so equally. The costs will fall disproportionately on the people with the least existing institutional power, which is exactly where caution is most warranted.

The algorithmic accountability position is right that the free speech frame is the wrong frame for the actual mechanism at issue. The platforms are not neutral pipes. They are active editorial participants in the information environment, making consequential decisions that shape what billions of people believe. The fact that those decisions are made by algorithms rather than editors doesn't make them less political — it makes them less accountable. Demanding transparency about how recommendation systems work is not a demand for censorship. It is a demand that we be honest about what's actually happening in the infrastructure of public discourse.

The epistemic commons position is right that democracy requires a minimal shared factual environment, and that this environment is genuinely under stress in ways that individual choices and media literacy cannot fix. Whether an election was legitimate, whether vaccines are safe, whether human activity is changing the climate — these are not questions on which reasonable people examining the same evidence reach different conclusions. Platforms that treat them as matters of opinion are not being neutral. They are taking a position with political consequences.

The structural critics are right that the business model is load-bearing. You cannot fix the information environment by tweaking content moderation policies if the incentive structure remains: platforms make money by maximizing engagement, and engagement is maximized by content that provokes strong emotional responses. The platforms are not evil; they are rational actors responding to their incentive structures. Changing what they amplify requires changing what they are rewarded for. That is a structural intervention, not a content moderation one.

What none of the positions quite reaches: the technology is not the only variable. The polarization, epistemic fragmentation, and erosion of shared information environments that social media is blamed for are also symptoms of conditions that predate it — growing inequality, the collapse of shared institutions (unions, local newspapers, civic organizations, neighborhood anchors), decades of wage stagnation that have given large portions of the population very good reasons to distrust institutions that have consistently failed them. Social media didn't create those conditions. It found them and optimized for them. A reformed information environment, by itself, won't repair the underlying fractures it exploits. But that doesn't mean the information environment doesn't matter. It means that fixing it is necessary but not sufficient — and that any account of the problem that treats technology as the primary cause misses where most of the weight actually sits.

Patterns at work in this piece

Several of the recurring patterns named in What sensemaking has taught Ripple so far appear here with particular sharpness.

  • The frame is load-bearing. "Free speech vs. censorship" is a frame, not a neutral description of the debate. Accepting it forecloses a set of questions — about algorithmic amplification, about business models, about who holds power over the public sphere — that are at least as important as questions about what content is permitted. The structural critics' key contribution is pointing at the frame itself, rather than arguing within it.
  • Diagnosis shapes prescription. The four positions aren't just disagreeing about solutions; they're disagreeing about what the problem is. Platform libertarians think the problem is censorship risk. Algorithmic accountability advocates think it's invisible editorial power. Epistemic commons advocates think it's epistemic fragmentation. Structural critics think it's concentrated private power. Each diagnosis makes a different set of interventions look sensible and a different set look misguided. This is why the debate tends to produce arguments that sail past each other.
  • The gap between individual and systemic. Many social media interventions are aimed at the individual level — teach media literacy, encourage slower and more deliberate consumption, treat platform accountability as a matter of individual user choice. The structural critique makes a systemic argument: individual-level interventions cannot fix systemic incentive problems. This is the same structure as many other debates in this collection: you can believe both that individuals have responsibility and that systemic conditions shape the space within which individual choices happen, without those two beliefs contradicting each other.
  • Whose track record counts. Both the platform libertarian position and the epistemic commons position invoke historical track records — but different ones. Platform libertarians cite the history of free speech restrictions being used against marginalized groups. Epistemic commons advocates cite the history of what happens to democratic governance when information environments fracture. Both track records are real. Which one you weight shapes whether you see regulation as protection or threat.

See also

  • Who bears the cost? — the framing essay for the burden-sharing conflict underneath platform politics: when engagement-driven systems corrode trust, intensify harassment, and make democratic life harder to inhabit, the costs do not land evenly, and the real dispute is over whether users, vulnerable communities, public institutions, or the platforms themselves should absorb them.
  • Who gets to decide? — the framing essay for the authority conflict underneath platform politics: whether privately governed communication systems should be allowed to shape democratic attention and speech by engagement logic, or whether some more legible public authority has to constrain that power.
  • Digital Privacy and Surveillance: What Each Position Is Protecting — the surveillance architecture that platforms use to generate behavioral data is the same infrastructure that enables the political manipulation this map addresses. The democracy critique and the privacy critique are attacking the same system from different angles — one asking what it does to the information environment, the other asking what it does to individuals.
  • Free Speech on Campus: What Each Side Is Protecting — the closest terrain to this map: what constitutes harmful speech, who decides, and what the costs of silencing are — explored in the specific context of university platforms.
  • Technology and Attention: What Both Sides Are Protecting — adjacent territory: what the alarm about smartphones and the defense of digital connection are each protecting. The attention question and the democracy question are related but distinct; that map addresses the psychological mechanism, this one the political structure.
  • AI Consciousness: What Both Sides Are Protecting — the question of what AI systems are "really doing" runs under this debate too; algorithmic recommendation systems are one domain where the gap between what the system appears to do and what it actually does has political consequences.
  • AI and Labor: What Both Sides Are Protecting — AI systems and social media platforms share a structural similarity: both embed consequential decisions in technical systems that appear neutral, and both raise questions about democratic accountability over automated power.
  • Surveillance Capitalism: What Each Position Is Protecting — the economic model that funds the information environment this map examines; the filter bubble and political manipulation debates are downstream of the behavioral data extraction model that surveillance capitalism critics diagnose; the structural antitrust critique of platform power applies directly to the concentration concerns in the democracy debate.
  • Progress and Declinism: What Both Sides Are Protecting — the social media debate tracks the progress/declinism fault line closely: those who see platforms as net-positive for human freedom and connection, and those who see them as symptoms or accelerants of decline.
  • Electoral Reform and Ranked Choice Voting: What Each Position Is Protecting — the companion lever to this map's concerns: where this map asks what the information environment does to voters before they reach the booth, the electoral reform map asks what happens at the booth itself — whether the voting system design (FPTP, RCV, proportional representation) can or should compensate for the polarization and misinformation dynamics this map traces.
  • Platform Accountability and Content Moderation: What Each Position Is Protecting — the institutional complement to this map: where the social media and democracy map asks what platforms do to the information environment, the platform accountability map asks who should decide what speech is permissible and under what framework; the governance gap this map identifies between platform power and democratic accountability is the governance gap the platform accountability map is directly trying to close.
  • AI and Democracy: What Each Position Is Protecting — the AI-specific dimension of this map's concerns: where this map traces the structural conditions that make democratic discourse vulnerable to manipulation, the AI and democracy map examines what AI adds to those conditions — and introduces the epistemic authority trap, the pattern by which governance proposals for AI disinformation create a new political battleground in exactly the domain they are trying to protect; the structural critique in that map draws directly on the Network Propaganda framework this map also depends on.

Further reading

  • Yochai Benkler, Robert Faris, and Hal Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (Oxford University Press, 2018) — the most comprehensive empirical study of the 2016 U.S. election information environment, combining quantitative media analysis with qualitative case studies. Complicates the "algorithms cause polarization symmetrically" thesis: it finds that misinformation dynamics were driven more by an asymmetric, insular right-wing media ecosystem than by foreign manipulation or platform algorithms alone.
  • Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019) — the foundational text for the structural critique of platform business models. Zuboff argues that the core product of social media companies is not connection but prediction — the extraction and sale of behavioral data to predict and modify human behavior at scale. The implication for democracy: an information environment designed to predict and modify behavior is not the same as a free information environment, regardless of what content it permits.
  • Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest (Yale University Press, 2017) — an essential corrective to both techno-optimism and techno-pessimism. Tufekci argues that social media has changed the logistics of organizing in ways that enable large movements to form quickly but also make them fragile in specific ways — and that understanding what platforms enable and what they don't requires looking carefully at the specific mechanisms, not the technology as a whole.
  • Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (Penguin Press, 2011) — the book that introduced the "filter bubble" concept into mainstream debate. Pariser argues that personalization algorithms, by showing each user what they're most likely to engage with, create information environments that systematically reduce exposure to challenging or disconfirming information. The thesis has been substantially complicated by subsequent research — but the concept it introduced is still doing real work in the debate.
  • Jonathan Haidt and Tobias Rose-Stockwell, "The Dark Psychology of Social Networks," The Atlantic, December 2019 — the most accessible account of how the specific design choices of social media — public reaction counts, sharing mechanics, algorithmic amplification of moral outrage — interact with evolved human psychology in ways that were not anticipated and have been difficult to reverse. Haidt's subsequent book The Anxious Generation (2024) extends the argument to youth mental health with longitudinal data.
  • Levi Boxell, Matthew Gentzkow, and Jesse Shapiro, "Cross-Country Trends in Affective Polarization," NBER Working Paper 26669 (2021) — the empirical study finding that affective polarization has risen fastest in the United States while holding flat or falling in several comparable democracies with similar platform penetration; together with the authors' earlier finding that U.S. polarization grew most among the demographic groups using social media least, it complicates the claim that platforms are the primary driver of political division. Important not as a dismissal of platform effects but as a corrective to monocausal explanations: polarization is happening in places and among people where the algorithmic explanation doesn't fit.
  • Tim Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads (Knopf, 2016) — a history of the attention economy from early newspaper advertising to social media, tracing the recurring pattern by which new communication technologies emerge, promise democratization, and then concentrate into oligopolies optimized for advertiser revenue. Essential for the structural critics' argument that the platform business model — not malicious intent — is the load-bearing cause of information environment pathologies.
  • Siva Vaidhyanathan, Antisocial Media: How Facebook Disconnects Us and Undermines Democracy (Oxford University Press, 2018) — the most comprehensive platform-specific analysis of how Facebook's design choices, revenue model, and governance failures combine to degrade public deliberation. Vaidhyanathan's argument complements Zuboff's theoretical framework with case-by-case analysis: the specific mechanics of News Feed curation, the incentive structure that makes outrage more algorithmically rewarded than nuance, and Facebook's repeated cycles of expansion-then-apology when harms become visible. Particularly strong on the political economy of content moderation — why platforms consistently under-regulate rather than over-regulate, and why the "marketplace of ideas" frame obscures that there is only one marketplace but many different buyers with very different budgets. The book makes the regulatory argument concrete: not "break up Big Tech" as slogan, but a specific account of what Facebook's market position enables and why competition alone cannot correct it.