Sensemaking for a plural world

Perspective Map

Misinformation and the Epistemic Crisis: What Each Position Is Protecting

March 2026

In the summer of 2024, the Stanford Internet Observatory — one of the leading academic institutions studying online disinformation — announced under intense political pressure that it was shutting down its Election Integrity Partnership. Its director cited coordinated harassment campaigns, including letters from Republican members of Congress demanding explanations for the lab's work flagging potentially false content to social media platforms. The House Judiciary Committee's "Twitter Files" hearings had framed academic-platform partnerships as a "censorship-industrial complex." The lab's closure was treated as a victory by some and a catastrophic loss by others. Both reactions were sincere.

In January 2025, Mark Zuckerberg announced Meta would end its third-party fact-checking program in the United States. He framed it as a return to free speech: "The recent elections also feel like a cultural tipping point towards once again prioritizing speech." Researchers who had studied the program's effectiveness said the decision was driven by political calculation, not evidence — the program had measurably reduced exposure to false content. Zuckerberg argued it had introduced too much bias and suppressed too much legitimate speech. Both claims had supporting evidence.

In December 2024, the Global Engagement Center — the U.S. State Department unit established to counter foreign propaganda — closed when Congress allowed its authorization to lapse. Critics had labeled it a censorship bureau. Supporters noted it had tracked Russian information operations targeting NATO allies. The EU's European Digital Media Observatory continued operating without equivalent political challenge.

Throughout this period, the debate produced its own recursive problem. Every claim about misinformation became subject to challenge as misinformation. Researchers studying algorithmic radicalization found themselves algorithmically targeted. Fact-checkers were recast as the thing that needed fact-checking. The lab leak hypothesis — initially removed from platforms as COVID misinformation — came to be treated as legitimate scientific inquiry. The epistemic crisis is, in part, a crisis about who has the standing to diagnose it.

The question underneath this is not whether false and misleading content causes harm — the evidence on that is substantial and well-documented. The question is whether any institution — platform, government, academia, civil society — has the legitimacy to adjudicate claims about truth at the scale of a billion-person information environment. And if not, what democratic self-governance looks like in its absence.

What the platform accountability position is protecting

The argument that social media platforms built profit-maximizing systems that amplify emotionally provocative content regardless of its accuracy, that this was an engineering choice made for business reasons, and that platforms therefore bear structural responsibility for the epistemic environment they engineered. This position is associated with Kate Starbird (University of Washington Center for an Informed Public), Renée DiResta (Stanford Internet Observatory, author of Invisible Rulers), Joan Donovan (formerly Harvard Kennedy School), Philip N. Howard (Oxford Internet Institute), and the Frances Haugen whistleblowing disclosures.

The foundational study is Soroush Vosoughi, Deb Roy, and Sinan Aral's 2018 analysis in Science: tracking every verified true and false news story on Twitter from 2006 to 2017, they found that false news spread roughly six times faster than true news, reached more people, and was 70% more likely to be retweeted. Crucially, the effect was not driven by bots but by human behavior: false information was more novel and more emotionally arousing than true information, and people shared it accordingly. Platforms then rewarded that tendency with the attention metrics that drive algorithmic amplification.
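
To make the measurement concrete: the study's headline comparison is a cascade-timing metric, the time a story's chain of shares takes to reach a given audience size. The sketch below is a toy illustration on synthetic data with invented parameters; it is not the study's code or methodology.

```python
import random
from datetime import datetime, timedelta

random.seed(42)

def simulate_cascade(mean_gap_minutes: float, n_shares: int, start: datetime) -> list[datetime]:
    """Synthetic cascade: shares arrive with exponential inter-arrival gaps."""
    t, times = start, []
    for _ in range(n_shares):
        t += timedelta(minutes=random.expovariate(1 / mean_gap_minutes))
        times.append(t)
    return times

def minutes_to_reach(share_times: list[datetime], n: int) -> float:
    """Minutes from the first share until the cascade reaches n shares."""
    ordered = sorted(share_times)
    return (ordered[n - 1] - ordered[0]).total_seconds() / 60

start = datetime(2017, 1, 1)
false_story = simulate_cascade(mean_gap_minutes=2, n_shares=200, start=start)
true_story = simulate_cascade(mean_gap_minutes=12, n_shares=200, start=start)  # ~6x slower

print(f"false story, minutes to 100 shares: {minutes_to_reach(false_story, 100):.0f}")
print(f"true story,  minutes to 100 shares: {minutes_to_reach(true_story, 100):.0f}")
```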

The Facebook Papers, leaked by whistleblower Frances Haugen in 2021, made the internal calculus visible. Company researchers had found that algorithm changes to reduce "civic" content — political misinformation, outrage-inducing posts — also reduced user engagement, and the changes were reverted. Internal documents showed the company knew its systems were contributing to ethnic violence in Ethiopia and Myanmar; action was constrained by what executives described as business considerations. The "angry" reaction, weighted five times as heavily as a "like" in the ranking algorithm, disproportionately amplified divisive content. This was not an accident. It was the design outcome of a system optimized for time-on-platform.
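
The mechanism is simple enough to state in code. In the sketch below, only the 5:1 angry-to-like ratio comes from the Facebook Papers reporting; the other weights, the posts, and the scoring function are hypothetical illustrations, not Meta's actual ranking system.

```python
# Hypothetical reaction weights; only the 5:1 angry-to-like ratio is the
# reported Facebook Papers figure.
REACTION_WEIGHTS = {"like": 1.0, "angry": 5.0, "comment": 15.0, "share": 30.0}

def engagement_score(reactions: dict[str, int]) -> float:
    """Weighted engagement: the quantity a time-on-platform optimizer ranks by."""
    return sum(REACTION_WEIGHTS.get(kind, 1.0) * count for kind, count in reactions.items())

posts = [
    {"id": "measured-policy-explainer", "reactions": {"like": 900, "angry": 10}},
    {"id": "outrage-bait", "reactions": {"like": 200, "angry": 300}},
]

# The divisive post ranks first despite drawing far fewer total interactions.
for post in sorted(posts, key=lambda p: engagement_score(p["reactions"]), reverse=True):
    print(post["id"], engagement_score(post["reactions"]))
```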

From this position, the "censorship-industrial complex" critique misframes the problem by focusing on the response to information pollution rather than its production. Platforms did not accidentally become amplification engines for false content. They were architected to maximize engagement, engagement correlates with emotional arousal, emotional arousal correlates with outrage and novelty, and false content reliably delivers both. Asking platforms to simply stop moderating — without changing the underlying incentive structure — is like removing the scrubbers from a factory's smokestack while leaving untouched the production process that generates the waste.

What this position is protecting: the proposition that information pollution is not a natural phenomenon but an engineered one, produced by specific design decisions made for specific profit motives. It protects the claim that concentrated private power over information infrastructure carries public accountability obligations — that the right to reach a billion people with algorithmically amplified content is not the same as the right to print a pamphlet. And it protects the researchers, journalists, and institutions trying to understand and respond to a problem whose scale is genuinely without precedent.

What the free speech and anti-moderation position is protecting

The argument that the real epistemic threat is institutional overreach — that governments, platforms, and academic researchers have constructed a censorship apparatus in the name of "disinformation" that suppresses legitimate dissent, marginalizes heterodox viewpoints, and constitutes a more serious threat to democratic discourse than the content it targets. This position is associated with Matt Taibbi and Michael Shellenberger (Twitter Files), Jay Bhattacharya (Stanford professor and co-author of the Great Barrington Declaration), Eugene Volokh (UCLA law, First Amendment), and a broad coalition of civil libertarians and political conservatives.

The Twitter Files — internal Twitter documents that Elon Musk provided to Taibbi, Shellenberger, and other journalists beginning in late 2022 — showed that Twitter had maintained "do not amplify" lists affecting accounts including sitting politicians, had processed removal requests from the FBI and DHS, and had coordinated content decisions with academic researchers through the Election Integrity Partnership and the Virality Project. Taibbi's framing: a "censorship-industrial complex" linking government agencies, NGOs, and platforms in a systematic effort to suppress disfavored political speech under the guise of "safety." The documents were real. Their interpretation was contested.

The COVID-era examples are the position's most compelling evidence. The lab leak hypothesis — the possibility that SARS-CoV-2 originated in a laboratory rather than through natural zoonotic transmission — was labeled "misinformation" by fact-checkers and removed from platforms in 2020 and 2021, based substantially on a letter in The Lancet organized by Peter Daszak, who had direct financial interests in the Wuhan Institute of Virology's research programs. By 2023, the hypothesis was being taken seriously by the FBI and the Department of Energy, according to the Wall Street Journal's reporting on classified assessments. The Great Barrington Declaration — signed by Jay Bhattacharya, Sunetra Gupta, and Martin Kulldorff, advocating focused protection of vulnerable populations rather than broad lockdowns — was characterized by NIH Director Francis Collins in internal emails as requiring a "quick and devastating" public response. Bhattacharya later discovered he had been placed on Twitter's "do not amplify" list.

The position does not require believing that all moderated content is legitimate, or that misinformation causes no harm. It requires only accepting that the power to label speech "misinformation" is an inherently political power — that no institution, however expert, is politically neutral, and that concentrating this power in platforms, governments, and academic institutions that share ideological commitments will produce viewpoint discrimination that looks like fact-checking. The lab leak hypothesis was not fringe science. It was a reasonable scientific position that was suppressed through institutional channels and later rehabilitated. The precedent this sets for the next contested scientific or political question is the actual concern.

What this position is protecting: the proposition that epistemic pluralism — the coexistence of genuinely different interpretations of evidence — is not a bug in a functioning democracy but a feature. It protects the historical record demonstrating that heterodox positions labeled misinformation have sometimes turned out to be correct. It protects the standing claim that any moderation regime will be applied unevenly, and that the political valence of that unevenness will follow the political valence of those who hold moderation power. Most fundamentally, it protects the proposition that the cure — institutional authority to decide what is true — may be worse than the disease, because the disease at least leaves epistemic error widely distributed, while the cure concentrates power.

What the structural and media ecosystem position is protecting

The argument that "misinformation" as a frame misdirects attention — focusing on individual bad content rather than the structural conditions that produced a broken information ecosystem: the collapse of local journalism, the attention economy's business model, media consolidation, and decades of manufactured distrust in public institutions. This position is associated with Emily Bell (Columbia Tow Center for Digital Journalism), Victor Pickard (University of Pennsylvania, Democracy Without Journalism?), Nikki Usher (University of San Diego), and the sustained work of organizations like Free Press and the Knight Foundation.

The numbers are not disputed. Between 2005 and 2024, approximately 3,200 American newspapers closed. More than 200 counties in the United States now have no local news outlet at all. These "news deserts" — disproportionately concentrated in rural areas and communities of color — are not entertainment gaps. Research by Pengjie Gao, Chang Lee, and Dermot Murphy (2020) found that the closure of local newspapers correlates with increases in municipal borrowing costs, because without local journalism, bond market participants have less information about local government behavior. Local journalism was accountability infrastructure. Its collapse left a vacuum, and what fills the vacuum is not nothing: it is rumor, partisan media, and the emotional dynamics of social platforms algorithmically optimized for engagement.
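
The Gao, Lee, and Murphy finding rests on comparing borrowing costs before and after closures against counties that kept their papers, a difference-in-differences design. The sketch below shows the arithmetic schematically; the basis-point spreads and county samples are invented for illustration, not the study's data.

```python
# Invented basis-point yield spreads for illustration only.
closure_counties = {"before": [62, 58, 65, 60], "after": [74, 71, 77, 70]}
control_counties = {"before": [61, 59, 63, 57], "after": [63, 60, 66, 59]}

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

closure_change = mean(closure_counties["after"]) - mean(closure_counties["before"])
control_change = mean(control_counties["after"]) - mean(control_counties["before"])

# Subtracting the control counties' change strips out market-wide movement,
# isolating the shift associated with losing the local paper.
print(f"closure counties: +{closure_change:.2f} bps")
print(f"control counties: +{control_change:.2f} bps")
print(f"difference-in-differences: +{closure_change - control_change:.2f} bps")
```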

The attention economy diagnosis follows from this: the problem isn't specific pieces of false content. It's that the business model of social media monetizes attention, attention is most cheaply generated by outrage and novelty, and false and misleading content reliably produces both. Fact-checking individual claims within this architecture is treating a symptom. The structural intervention would address the business model — through platform regulation, new funding models for public interest journalism, or both. The EU's Digital Services Act moves in this direction by imposing algorithmic transparency requirements on very large platforms; public media models like the BBC or NPR provide an alternative funding structure that decouples journalism from engagement metrics.

The position has a specific critique of the "censorship-industrial complex" framing: both the platform accountability camp and the anti-moderation camp are, from this vantage point, treating content moderation as the central lever for improving the information environment. They disagree about which direction to push it. But the structural analysis suggests that content moderation at the margin cannot address a problem produced by the architecture itself. A community with a functioning local newspaper, a high school media literacy curriculum, and a municipal government that holds press conferences is structurally more resilient to false information than one without any of those things — not because its residents are smarter, but because the ecosystem around them functions differently.

What this position is protecting: the systemic diagnosis against the individualist framing — the claim that epistemics are social infrastructure, not individual cognitive hygiene. It protects the proposition that the debate about content moderation may be a proxy war that both sides can sustain indefinitely while neither addresses the ecosystem conditions that make false content so adhesive in the first place. And it protects the argument that democratic information environments have historically required public investment — in journalism, in public broadcasting, in civic education — that market systems do not reliably produce on their own.

What the epistemic security and information warfare position is protecting

The argument that the epistemic crisis is not primarily a domestic political failure but a deliberate strategic attack — that adversarial states have identified the information environment as a domain of conflict and are systematically exploiting democratic openness to undermine institutional trust, amplify social division, and weaken collective action capacity. This position is associated with Renée DiResta, Camille François (Graphika), Ben Nimmo (formerly Atlantic Council and Facebook), Thomas Rid (Johns Hopkins SAIS, Active Measures), Nina Jankowicz (Wilson Center, How to Lose the Information War), and Clint Watts (former FBI).

The Internet Research Agency operation is the primary empirical anchor. The Russian troll farm created and operated networks of authentic-seeming American social media accounts between 2014 and 2018. The Senate Intelligence Committee's bipartisan report on Russia's use of social media — based on documents from Facebook, Twitter, Google, and other platforms — documented the operation reaching 126 million Facebook users, 20 million Instagram users, and 1.4 million Twitter users. The targeting was not random. It targeted Black Americans with demobilization messaging ("this election is stolen anyway, don't bother"), targeted White conservatives with amplified anger, and targeted Bernie Sanders supporters with messages designed to suppress general election turnout. This was not organic domestic politics. It was engineered by a foreign intelligence service that understood American social fissures and knew how to apply pressure.

The pattern has extended and diversified since 2016. Chinese-origin influence operations targeting Taiwan's elections have been documented by Graphika and the Stanford Internet Observatory. Iranian networks have targeted Jewish and Arab communities in the United States simultaneously. Domestic operations interweave with and amplify foreign-origin content until attribution becomes practically impossible — which is partly the point. The closure of the Global Engagement Center, from this position's perspective, was not a victory for free speech. It was the United States disarming in an active conflict at the adversary's request.

The position carries a genuine dilemma that it largely acknowledges. Identifying and countering coordinated influence operations requires identifying accounts that look, from the outside, like organic domestic speech. The tools for doing so — behavioral labeling, network attribution, platform-government coordination — are structurally identical to the infrastructure that the anti-moderation position describes as censorship. There may not be a way to maintain effective epistemic defenses against adversarial operations without accepting some risk of misapplication against legitimate speech. The question the epistemic security position asks in return: what is the cost of accepting no risk at all?
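
That structural identity is visible even in a toy detector. The sketch below flags clusters of accounts posting identical text within a short window, a simplified stand-in for behavioral coordination analysis; the accounts, posts, and thresholds are hypothetical. Nothing in its output distinguishes a troll farm from an organic community sharing a rallying cry. That judgment happens outside the code.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # hypothetical threshold
MIN_ACCOUNTS = 3                 # hypothetical threshold

def flag_coordination(posts: list[tuple[str, str, datetime]]) -> set[frozenset[str]]:
    """posts: (account, normalized_text, timestamp) triples.
    Returns clusters of accounts posting identical text inside one window."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((ts, account))
    clusters = set()
    for entries in by_text.values():
        entries.sort()
        for i, (t_start, _) in enumerate(entries):
            accounts = {a for ts, a in entries[i:] if ts <= t_start + WINDOW}
            if len(accounts) >= MIN_ACCOUNTS:
                clusters.add(frozenset(accounts))
    return clusters

t0 = datetime(2024, 10, 1, 9, 0)
burst = [(f"acct_{c}", "the election is already decided", t0 + timedelta(minutes=2 * i))
         for i, c in enumerate("abc")]
print(flag_coordination(burst))  # flags all three accounts — synthetic or organic?
```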

What this position is protecting: the proposition that democratic epistemics are not self-regulating — that an open information environment can be weaponized by well-resourced adversaries who face no reciprocal vulnerability. China's information environment is not open to U.S. influence operations. Russia's is not. The epistemic asymmetry is real, and ignoring it in the name of free speech principles does not make it less real. Most fundamentally, it protects the claim that the existing institutions of democratic self-governance — elections, legislatures, courts — depend on citizens being able to form beliefs through something other than foreign psychological operations, and that this capacity requires active maintenance, not passive defense.

Tensions that don't resolve
  • The content moderation apparatus and the censorship apparatus may be the same apparatus. Whether it functions as protection or suppression depends on who controls it and what values it encodes. This isn't an accusation against either side — it's a structural feature of the technology. The same algorithmic system that reduces reach for Russian disinformation can reduce reach for heterodox scientific arguments. Neither the platform accountability position nor the epistemic security position has a clean answer to this. Acknowledging it isn't the anti-moderation position's exclusive property.
  • The lab leak hypothesis and the IRA operation are simultaneously the strongest evidence for opposite sides. The lab leak case demonstrates that institutional consensus can be wrong, that platforms can suppress legitimate inquiry, and that the label "misinformation" has been applied to things that turned out to be reasonable questions. The IRA case demonstrates that foreign states can engineer synthetic social movements that are indistinguishable from organic domestic opinion and that can measurably shift political behavior at scale. Both are real. They generate incompatible policy responses from identical evidence about the inadequacy of the current information environment.
  • The structural diagnosis and the content moderation debate talk past each other in a revealing way. Platform accountability advocates and anti-moderation advocates both treat the moderation lever as the key intervention. The structural position argues this is the wrong lever entirely — that you can optimize moderation in either direction forever without touching the ecosystem conditions that make the problem persistent. The structural position has a credibility problem, though: "fix journalism funding" and "invest in media literacy" are correct and slow, while social media reaches a billion people today. The urgency asymmetry is real.
  • The epistemic crisis is also a crisis about trust in the institutions that would diagnose and respond to the epistemic crisis. The Centers for Disease Control, the FDA, mainstream journalism, academic research — all have experienced significant credibility losses over the past decade, some deserved and some not. Proposals that depend on these institutions' authority to identify and counter false information face the problem that the authority gap is itself part of what needs explaining. The Stanford Internet Observatory was doing careful, methodologically sound work. It was also, from some perspectives, operating as an institutional actor in contested political terrain. Both things can be true, and the tension between them doesn't have a clean resolution.

See also

  • Who bears the cost? — the framing essay for the burden-sharing conflict underneath the epistemic crisis: when local journalism collapses, engagement systems reward falsehood, and trust in public institutions erodes, the losses do not land evenly, and the real fight is over whether citizens, vulnerable communities, newsrooms, public institutions, or the platforms themselves absorb the democratic and civic damage.
  • Who gets to decide? — the framing essay for the authority dispute underneath the epistemic crisis: when institutions, platforms, or experts step in to police falsehood, the conflict is not only over truth but over who has legitimate standing to make that call for everyone else.
  • Who belongs here? — the framing essay for the civic-belonging wound inside the information crisis: when whole publics experience mainstream institutions as alien or contemptuous, the fight over misinformation becomes a fight over whose knowledge, fear, and political speech are treated as part of the democratic "we."
  • Platform Accountability and Content Moderation — the direct policy question of how content moderation should work, who decides, and under what rules; the procedural debate that sits underneath the epistemic one
  • Platform Moderation and Free Expression — the values argument over whether platforms are publishers or utilities, and what First Amendment principles apply in privately owned public spaces
  • Social Media and Democracy — the broader question of whether social media has been net positive or negative for democratic participation, discourse, and legitimacy
  • Algorithmic Recommendation and Radicalization — the specific question of whether recommendation algorithms push users toward more extreme content; the empirical debate between researchers like Brendan Nyhan and claims from the radicalization literature
  • Journalism and Media Trust — the structural question of what happened to local journalism, why trust in mainstream media collapsed, and what institutional alternatives might restore public interest information infrastructure
  • Social Trust and Institutional Legitimacy — the epistemic crisis sits within a larger crisis of institutional legitimacy; understanding why trust collapsed matters as much as understanding what to do now
  • AI and Democracy — the next iteration of the epistemic crisis question: what deepfakes, synthetic media, and AI-generated disinformation at scale do to information environments that are already struggling

References and further reading

  • Soroush Vosoughi, Deb Roy, and Sinan Aral: "The spread of true and false news online", Science, Vol. 359, Issue 6380 (March 2018) — the foundational quantitative study; false news spread six times faster than true news on Twitter, reaching more people and generating more retweets; novelty and emotional arousal drove sharing more than accuracy; the study controls for bots, showing the amplification was human-driven
  • Renée DiResta: Invisible Rulers: The People Who Turn Lies into Reality, PublicAffairs (2024) — argues that influence and reach in the information environment are now determined by algorithmic amplification rather than editorial judgment; traces how this power is invisible, unaccountable, and exploitable by state and non-state actors alike
  • Thomas Rid: Active Measures: The Secret History of Disinformation and Political Warfare, Farrar, Straus and Giroux (2020) — the historical account of information operations from World War I through the Internet Research Agency; argues disinformation has been a constant of political conflict and that the current moment requires understanding its genealogy to respond effectively
  • Philip N. Howard: Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Clickbait, and Political Propaganda, Yale University Press (2020) — the political economy of computational propaganda; documents state-level disinformation programs in more than 80 countries; focused on the industrial scale of manufactured political content
  • Victor Pickard: Democracy Without Journalism? Confronting the Misinformation Society, Oxford University Press (2020) — argues the collapse of the journalism industry, not individual bad actors, is the structural root cause of the epistemic crisis; proposes a public media model drawing on the BBC, Scandinavian broadcasting, and U.S. public radio traditions
  • Nina Jankowicz: How to Lose the Information War: Russia, Fake News, and the Future of Conflict, I.B. Tauris (2020) — case studies of Ukraine, Poland, Georgia, Czech Republic, and Germany's responses to Russian information operations; argues Western democracies lack the political vocabulary and institutional infrastructure to discuss the problem clearly
  • Senate Intelligence Committee: Report on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Volume II: Russia's Use of Social Media (October 2019) — the definitive bipartisan public document on the Internet Research Agency operation; quantifies reach, targeting strategy, and methods; based on documents from Facebook, Twitter, Google, and YouTube provided under committee subpoena
  • Matt Taibbi et al.: Twitter Files capsule summaries, with links to the individual threads (2022–2023) — the primary source document trail for the anti-censorship argument; internal Twitter records showing government-platform coordination, "do not amplify" lists, and moderation decisions at scale; the documents are real; interpretations are contested; essential reading for understanding what the anti-moderation position actually argues
  • Frances Haugen / Facebook Papers (2021): internal Facebook research documents leaked by the whistleblower; key documents on algorithmic amplification of outrage, platform knowledge of ethnic violence in Ethiopia and Myanmar, and the suppression of civic content changes for engagement reasons; available via the public Facebook Papers document library on DocumentCloud
  • European Digital Services Act (DSA), Regulation (EU) 2022/2065 — the EU's primary regulatory response; creates algorithmic transparency requirements, risk assessment obligations for systemic risks including disinformation, and content moderation accountability requirements for very large online platforms; the most significant regulatory intervention to date and the primary model for those advocating structural platform reform
  • Pengjie Gao, Chang Lee, and Dermot Murphy: "Financing Dies in Darkness? The Impact of Newspaper Closures on Public Finance", Journal of Financial Economics (2020) — the empirical study linking newspaper closures to increased municipal borrowing costs; demonstrates local journalism's function as accountability infrastructure with measurable economic effects when it disappears
  • Carl Bergstrom and Jevin West: Calling Bullshit: The Art of Skepticism in a Data-Driven World, Random House (2020) — the epistemics textbook; tools for evaluating statistical claims, data visualizations, and scientific arguments in an environment of information abundance; a core text for the media literacy approach