Sensemaking for a plural world

Perspective Map

Platform Accountability and Content Moderation: What Each Position Is Protecting

March 2026

In 2021 the Facebook Oversight Board ordered the company to review its decision to indefinitely suspend Donald Trump's accounts following the January 6th Capitol attack. Facebook had created the Board — a body of twenty law professors, former prime ministers, and human rights experts — specifically to handle decisions too consequential or contested for the company to make alone. The Board ruled that an indefinite suspension without specified criteria was not a legitimate penalty under Facebook's own rules, and that Facebook needed to either restore the accounts or apply a defined sanction. Facebook imposed a two-year suspension. The Board had no power to enforce anything beyond that.

The episode illustrated a governance question that has become one of the defining institutional problems of the digital age: who decides what speech is permissible on platforms that function as essential public infrastructure, under what legal framework, with what transparency and appeals process, and with what accountability to the people most affected? This question is distinct from the debate about how platforms affect democratic discourse — which concerns the downstream effects of speech — and from the debate about surveillance capitalism — which concerns how behavioral data is extracted. The content moderation governance question is about institutional design: not what should be said, but who should decide, and how.

Four genuinely distinct positions have emerged, each protecting something real.

What platform self-governance advocates are protecting

The strongest argument for allowing platforms to govern their own speech policies draws on principles that pull in the same direction: First Amendment editorial discretion, the legal architecture of Section 230, and the practical claim that platforms are better positioned than governments to make speech decisions at the speed and scale the internet requires.

They are protecting the First Amendment principle that private entities have a right to editorial discretion. When Florida and Texas passed laws in 2021 requiring large platforms to carry speech they would otherwise remove — targeting what legislators called anti-conservative "viewpoint discrimination" — the laws were challenged on First Amendment grounds. The platforms argued that they exercise editorial judgment analogous to newspaper curation: choosing what to host, amplify, and remove is protected expressive activity. The lower courts split, with the Eleventh Circuit largely blocking Florida's law and the Fifth Circuit upholding Texas's, but the Supreme Court's 2024 decisions in Moody v. NetChoice and NetChoice v. Paxton affirmed that curating a feed is protected editorial activity and that must-carry mandates raise serious First Amendment concerns. Self-governance advocates argue that government mandates to carry or remove specific speech categories represent precisely the state control of private expression that the First Amendment was designed to prevent.

They are protecting the innovation-enabling legal architecture that Section 230 created. Section 230 of the Communications Decency Act (1996) — the twenty-six words that immunized platforms from liability for user-generated content — was designed to encourage platforms to moderate speech without the chilling effect of defamation liability for every removed or retained post. The immunity was not an accident but a policy judgment: Congress wanted platforms to have the freedom to experiment with content governance without being treated as publishers liable for everything they host. Self-governance advocates argue that the current wave of reform proposals — narrowing Section 230 immunity, requiring algorithmic transparency, mandating appeals processes — will produce exactly the chilling effect the statute was designed to prevent, either driving platforms toward over-removal to avoid liability or toward under-moderation to avoid being deemed responsible for what they host.

They are protecting the practical argument that companies know their platforms better than regulators do. Content moderation at scale — billions of pieces of content across hundreds of languages, involving context-dependent questions of satire, irony, coded speech, and evolving slang — requires institutional knowledge that no external regulator has acquired. Facebook's Community Standards, Twitter's (now X's) Rules, and YouTube's Community Guidelines are the product of years of iterative decision-making about hard cases: how to handle satire that looks like misinformation, how to treat political speech that uses the language of violence, how to balance local law compliance with global free expression norms. The self-governance position holds that external mandates will substitute cruder rules for more nuanced ones, and that the right mechanism for accountability is transparency about policies and appeals processes rather than regulatory supervision.

What democratic accountability advocates are protecting

The EU's Digital Services Act (DSA), enacted in 2022 and fully applicable since early 2024, represents the most developed institutional expression of a different view: that platforms above a certain scale exercise quasi-governmental power over speech and democratic discourse, and that democratic societies are entitled to impose accountability requirements commensurate with that power.

They are protecting the principle that democratic accountability must extend to whoever exercises power over democratic participation. Marietje Schaake, former European Parliament member and author of The Tech Coup (Princeton, 2024), argues that the governance gap between platform power and democratic institutional capacity is itself a democratic emergency: the decisions that shape what information citizens can access, what speech is amplified or suppressed, and what political actors can communicate to their constituents are being made by private entities answerable only to shareholders. The DSA model responds to this by imposing proportionate obligations — risk assessments for systemic risks, algorithmic transparency reports, researcher data access, robust user appeals — without mandating specific speech outcomes. The accountability is procedural, not substantive: platforms retain editorial discretion over what to remove, but must operate according to transparent, consistently applied rules subject to audit.

They are protecting the distinction between prohibiting speech outcomes and requiring process accountability. Democratic accountability advocates argue that the First Amendment objection to platform regulation proves too much: it would also immunize newspapers from fair housing advertising laws, television networks from equal-time requirements, and telecommunications companies from common carrier obligations. At sufficient scale, some companies acquire responsibilities that cannot be discharged by appeal to private editorial discretion alone. The DSA imposes no speech mandates: it does not require platforms to carry or remove specific content categories. It requires that platforms explain how they decide, maintain appeals processes for removal decisions, and submit to audit of their systemic risk assessments. These are accountability requirements, not editorial mandates, and democratic accountability advocates argue the First Amendment does not foreclose them.

They are protecting the evidentiary record that platforms have refused to provide. The Facebook Papers (2021), the Twitter Files (2022), and years of academic research requesting platform data have demonstrated that internal platform knowledge about the effects of algorithmic design, the reach of misinformation, and the political distribution of enforcement decisions is systematically unavailable to the public and to researchers. The DSA's provisions requiring researcher data access — and its enforcement mechanism through the European Commission — are designed to close this evidentiary gap. Democratic accountability advocates argue that a governance debate conducted without data about what platforms are actually doing is a governance debate conducted in the dark, and that transparency requirements are the prerequisite for any other form of accountability.

What algorithmic governance advocates are protecting

A third position holds that the content moderation debate is asking the wrong question. The harm most associated with platforms comes not from the speech they remove but from the speech they amplify. What platforms do that matters most is not their moderation decisions but their recommendation systems, and algorithmic amplification raises governance questions that the Section 230 / speech removal framework was never designed to address.

They are protecting the distinction between hosting speech and recommending speech. Section 230 immunizes platforms from liability for the user-generated content they host. But when a platform's recommendation algorithm surfaces a piece of content to ten million people who did not search for it, is the platform still merely a neutral distributor? Legal scholars, including Daphne Keller of Stanford's Program on Platform Regulation, have argued that algorithmic recommendation is an editorial act distinct from hosting — that the same platform both hosts content (protected) and curates feeds (an active editorial choice that may not receive the same protection). Gonzalez v. Google (2023) raised this question directly — whether Section 230 immunizes recommendation algorithms — and the Supreme Court declined to resolve it, leaving the legal question open.

They are protecting the empirical finding that amplification, not hosting, is the primary driver of harm. Sinan Aral's The Hype Machine (Currency, 2020) synthesizes the research on how misinformation spreads: it spreads faster, wider, and deeper than true information — not primarily because people seek it out, but because the emotional activation it produces (novelty, outrage, fear) generates engagement signals that recommendation algorithms are calibrated to optimize. The problem is not that platforms host false content; it is that their engagement-maximizing recommendation systems systematically amplify it. Algorithmic governance advocates argue that content removal policies are largely irrelevant to this dynamic: removing a piece of false content after it has been amplified to millions of people is closing the barn door after the horse has left, and the recommendation system that amplified it remains intact.

They are protecting the possibility of governance that addresses the actual mechanism of harm without requiring platforms to become speech police. Reforms targeted at algorithmic amplification — requiring platforms to offer algorithmic feed options without engagement optimization, mandating transparency about what signals recommendation systems optimize for, prohibiting the amplification of content that has been flagged as disputed — would address the harms that content removal misses while avoiding the over-removal dynamic that Section 230 reform risks producing. Algorithmic governance advocates argue that this frame reorients the debate from a fight over what speech is permissible to a question about what business models and recommendation architectures are permissible — a question with better answers.

What global equity advocates are protecting

A fourth position holds that the entire debate — conducted primarily by US tech companies, EU regulators, US courts, and Western academics — systematically ignores the populations most affected by platform governance failures and least represented in the institutions making governance decisions.

They are protecting the record of what platform governance failures look like when they occur outside the Western media ecosystem. In Myanmar between 2017 and 2018, Facebook's platform was used to circulate calls for violence against the Rohingya Muslim minority, contributing to a campaign of ethnic cleansing that the UN characterized as genocide. Facebook had almost no Burmese-language content moderators. The platform's recommendation algorithms amplified anti-Rohingya content because it generated engagement. The platform was effectively the country's primary public sphere, while its governance was calibrated for a different context entirely. Global equity advocates argue that the Myanmar case is not exceptional but paradigmatic: the harms of platform governance failures fall disproportionately on populations in the Global South, who lack the legal resources, political representation, and media visibility to hold platforms accountable.

They are protecting the claim that governance legitimacy requires representing the people most subject to governance. The Facebook Oversight Board's composition — heavy on European and American law professors and former officials — has been criticized on the grounds that the people most affected by platform governance decisions are structurally underrepresented in the institutions making those decisions. The communities bearing the costs of platform failure (content moderators in Kenya, Philippines, and elsewhere who develop PTSD from reviewing violent content at low wages; populations whose languages are systematically under-resourced in moderation systems; journalists and activists in authoritarian states whose accounts are targeted for removal through coordinated reporting campaigns) have no meaningful avenue for participation in governance processes. Global equity advocates argue that legitimacy requires not just transparency and appeals but participation by affected communities in rule-making.

They are protecting the critique of the national security frame that has come to dominate Western platform governance debates. The US-China technology competition framing — which treats platform governance primarily as a national security question, prioritizes competitiveness and technological leadership, and treats DSA-style accountability requirements with suspicion as barriers to innovation — systematically deprioritizes the governance questions that matter most to affected communities globally. Global equity advocates, drawing on the UN AI Advisory Body's framework for AI governance and on scholars like Safiya Umoja Noble and Joy Buolamwini who have documented racial and linguistic bias in algorithmic systems, argue that a governance debate structured primarily by the interests of US firms and US national security is producing institutions that will entrench the existing distribution of platform power rather than democratize it.

Where the real disagreement lives

The content moderation governance debate is structured by several genuine disagreements that are rarely made explicit.

The who-decides question. Each position embeds a theory of institutional legitimacy: platforms should self-govern because they have the knowledge and editorial rights; democratic legislatures should impose accountability because they represent affected citizens; independent oversight bodies should adjudicate because they combine expertise and insulation from competitive pressure; affected communities should participate because legitimacy requires representing those governed. These theories are not always compatible. The DSA imposes regulatory accountability that self-governance advocates find incompatible with First Amendment principles. Oversight boards created by platforms are, on the global equity view, still accountable primarily to the platform. Democratic legislatures in wealthy nations represent wealthy-nation interests. There is no institutional arrangement that satisfies all four theories simultaneously.

The removal vs. amplification distinction. Whether the primary governance object should be content removal decisions or algorithmic amplification design determines what kind of accountability is relevant. The entire Section 230 / DSA / Oversight Board conversation is primarily about removal. The algorithmic governance critique holds that this conversation addresses a secondary problem while ignoring the primary mechanism. Resolving the dispute turns on empirical questions about what actually drives harm, questions that cannot be answered without the data platforms have resisted providing.

The First Amendment as domestic vs. global constraint. The First Amendment applies to US government action, not to private platform decisions, and does not apply extraterritorially. The US-centric nature of platform governance debates means that a constitutional framework designed for a particular political context is being applied as a de facto global standard — not through legal requirement but through the dominance of US platforms in global speech infrastructure. Global equity advocates argue that treating First Amendment principles as the default frame for global governance is itself a form of governance capture by one national tradition.

The scale question. Many of these disagreements reduce to disagreements about what follows from scale. Platform self-governance advocates hold that scale is not a reason to treat private entities differently than smaller editorial actors — a large newspaper is not subject to different First Amendment constraints than a small one. Democratic accountability advocates hold that scale is precisely what changes the calculus: a platform with two billion users is exercising a different kind of power than a newspaper with two million readers, and institutional accountability must be proportionate to power. This is an argument about political philosophy — about whether democratic accountability obligations arise from the nature of the entity or the extent of its influence — that positional debates about content moderation rarely name as the underlying dispute.

See also

  • Who belongs here? — the framing essay for the membership conflict inside platform speech governance: moderation rules are never only about abstract speech rights, but about which people can participate without being driven out, whose vulnerability gets treated as real, and whose norms define the shared space.
  • Who gets to decide? — the framing essay for the broader authority question beneath this map: when private platforms govern public speech environments, what legitimates that power, who can contest it, and what institutional form should accountability take when rulemaking happens outside the state?
  • Platform Moderation and Free Expression: What Each Position Is Protecting — the companion map that asks the prior question this map assumes answered: before deciding who should govern speech at scale, what values are in conflict about what should be governed? The four positions there — free expression absolutism, harm-based governance, communitarian speech norms, and anti-colonial critique — operate at the philosophical level beneath this map's institutional design questions; a self-governance advocate and a democratic accountability advocate can both be harm-based governance advocates, or both be free expression absolutists; separating the values question from the institutional question clarifies what each debate is actually about.
  • Social Media and Democracy: What Both Sides Are Protecting — the upstream question this map assumes: how platforms affect democratic discourse, political polarization, and the quality of public deliberation; the content moderation governance debate is about how to respond institutionally to whatever those effects are; the two maps address complementary questions.
  • Surveillance Capitalism: What Each Position Is Protecting — the economic model that makes the algorithmic amplification problem structural: recommendation systems are optimized for engagement because engagement generates behavioral data that can be monetized; the governance question about amplification cannot be separated from the business model question about what platforms are optimizing for and why.
  • Free Speech on Campus: What Both Sides Are Protecting — a structurally similar debate about speech governance in institutions that function as public spheres; the campus debate raises First Amendment questions, institutional legitimacy questions, and the question of who decides what speech is harmful in a shared space; the institutional design questions map onto the platform governance debate.
  • AI Governance and Regulation: What Each Position Is Protecting — a parallel governance gap: AI systems, like content moderation systems, exercise power at scale under governance frameworks designed for smaller entities; the frame divergence pattern named in that map — positions debating different problems — also appears here, where self-governance advocates and global equity advocates are partly addressing different governance questions.
  • Digital Privacy and Surveillance: What Each Position Is Protecting — overlapping regulatory terrain: the DSA and its associated privacy regulations address both content governance and data collection; the accountability frameworks being built for platform governance are partly continuous with those being built for surveillance; both debates are structured by the question of what obligations follow from the scale of platform power.
  • AI and Democracy: What Each Position Is Protecting — the electoral context where the governance legitimacy question becomes most urgent: AI-generated content in election campaigns raises the same institutional design problem this map centers — who holds epistemic authority over political speech — but in a domain where the stakes of getting it wrong are highest; the "epistemic authority trap" pattern named in that map is a specific instance of the governance legitimacy problem this map identifies.
  • Childhood and Technology: What Each Position Is Protecting — the children's digital rights debate as a consumer-protection application of the governance legitimacy problem: children's online safety legislation (KOSA, the UK Children's Code) attempts to define what platforms are obligated to do differently when the affected user is a minor who cannot meaningfully consent; the accountability arguments in this map — what obligations follow from platform power — take on distinctive urgency when the users who cannot opt out are children.

Further reading

  • Kate Klonick, "The New Governors: The People, Rules, and Processes Governing Online Speech", Harvard Law Review 131, no. 6 (2018) — the foundational academic account of how platform content moderation actually works: Klonick conducted extensive interviews with platform trust and safety teams to document how policy is made, how cases are decided, and what values structure the process; her account complicates both the self-governance defense (moderation decisions are often inconsistent and poorly theorized internally) and the regulatory critique (external rule-making is even less equipped to handle the edge cases platform teams navigate daily); essential for understanding what "self-governance" actually means in practice.
  • Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale University Press, 2018) — a sociological study of how platform moderation policies develop over time: Gillespie argues that platforms are not passive conduits but active custodians of public discourse, that their moderation decisions are never neutral, and that the fiction of neutrality has prevented platforms from being held accountable for the editorial choices they inevitably make; his account of the organizational and political pressures that shape moderation policy is the best available examination of self-governance as it actually operates.
  • Evelyn Douek, "Governing Online Speech: From 'Posts-as-Trumps' to Proportionality and Probability", Columbia Law Review 121, no. 3 (2021) — the most sophisticated normative framework for what good content moderation governance should look like: Douek argues that both the absolute free-speech position (speech is a trump that overrides competing considerations) and the standard content moderation defense (clearly harmful speech should be removed) are inadequate; she proposes a proportionality framework that asks platforms to conduct genuine risk assessments of the speech they govern and to match the severity of intervention to the probability and magnitude of harm; her framework is the basis for the DSA's systemic risk assessment requirements.
  • Sinan Aral, The Hype Machine: How Social Media Disrupts Our Elections, Our Economy, and Our Health — and How We Must Adapt (Currency, 2020) — the empirical foundation for the algorithmic amplification argument: Aral synthesizes the research on how false information spreads online, showing that misinformation consistently spreads faster, wider, and deeper than true information; his analysis of the recommendation system dynamics that produce this pattern — and his policy proposals for algorithmic governance rather than content removal — is the strongest available case for the third governance position; required reading for anyone who wants to understand why the moderation debate is often addressing a secondary problem.
  • Daphne Keller, "Who Do You Sue? State and Platform Hybrid Power Over Online Speech", Hoover Institution Aegis Paper No. 1902 (2019) — the legal architecture argument for why content moderation governance is harder than it looks: Keller examines the complex interaction between state speech law and platform speech law, arguing that platforms simultaneously enforce state demands (court orders, law enforcement requests, FOSTA-SESTA mandates) and their own policies, and that users cannot easily identify which authority is responsible for any given removal decision; her "hosting versus recommending" distinction is the most influential legal argument for treating algorithmic amplification differently from content hosting under Section 230.
  • Jeff Kosseff, The Twenty-Six Words That Created the Internet (Cornell University Press, 2019) — the definitive history of Section 230: Kosseff traces the statute from its origins in a 1996 congressional compromise through the court decisions that shaped its interpretation, examining what the statute was designed to achieve, how it has been applied, and what reform proposals would actually change; essential for understanding why Section 230 was designed the way it was and why defenders of the statute argue that reform risks producing worse outcomes than the status quo.
  • Marietje Schaake, The Tech Coup: How to Save Democracy from Silicon Valley (Princeton University Press, 2024) — the most forceful statement of the democratic accountability position: Schaake, a former member of the European Parliament who helped develop the DSA's conceptual foundation, argues that the governance gap between platform power and democratic institutional capacity is itself a crisis for democracy; her account of how EU digital regulation developed — and why the US-centric debate has systematically resisted comparable accountability requirements — is essential for understanding what the democratic accountability position is actually protecting and what institutional model it proposes.
  • UN Special Rapporteur on Freedom of Expression, reports on content moderation and human rights (2018–2024) — the international human rights law framework for platform governance: the Special Rapporteur's reports apply the International Covenant on Civil and Political Rights (ICCPR) standards — which protect free expression but permit restrictions that are lawful, necessary, and proportionate — to platform moderation decisions; this framework is neither the US First Amendment model (which permits almost no government restriction) nor the European regulatory model (which permits proportionate accountability requirements) but a third framework that applies to global platforms operating across different constitutional traditions; essential for the global equity position, which holds that international human rights law should constrain platform governance where domestic constitutional law does not reach.