Perspective Map
AI and Democracy: What Each Position Is Protecting
In January 2024, a robocall went out to Democratic voters in New Hampshire using a convincing synthetic imitation of President Biden's voice, instructing them not to vote in the primary because their vote was "needed" for the general election in November. The audio was not subtle on close inspection — it had the texture of AI synthesis — but it was persuasive enough to prompt a federal investigation and a wave of state legislation. A few months earlier, an AI-generated audio deepfake had circulated in Slovakia of a leading candidate apparently discussing plans to buy votes; it appeared two days before the election, during a pre-election moratorium in which campaign rules prohibited a response. The candidate's party lost. Whether the deepfake was decisive is unknowable. That it reached hundreds of thousands of voters in the final 48 hours is not.
These cases energized a policy debate that had been building for several years: what, if anything, should governments and platforms do about AI-generated content in electoral contexts? The debate is genuine — it touches foundational questions about democratic legitimacy, the governance of speech, the accountability of platforms, and the structural conditions that make democracies vulnerable to manipulation — but it is also prone to collapsing into familiar camps (tech optimists vs. tech alarmists; speech libertarians vs. content regulators) in ways that miss what each position is actually protecting.
Elena runs the disinformation research program at a university internet observatory. She has spent seven years cataloguing coordinated inauthentic behavior campaigns — fake accounts, bot networks, synthetic personas deployed by state actors and political operatives — and she has watched the cost of that activity collapse as AI tools have become accessible. A campaign that once required a team of human operators and months of account aging can now be bootstrapped with off-the-shelf language models in an afternoon. She is not panicking, because she has watched enough panics to know that most disinformation campaigns fail to achieve their stated goals, and she is not dismissive, because she has also watched enough elections to know that margins can be thin and information environments can shift. What concerns her most is not any single deepfake but the cumulative effect of an information environment where synthetic content is abundant and verification is expensive: a world where everything might be fake is almost as epistemically corrosive as one in which specific things actually are fake.
Marcus is a First Amendment litigator. He has spent the last three years in court and in testimony defending the principle that the cure for bad speech is more speech, and watching that principle tested more severely than at any point in his career. He is not naive about disinformation — he has read the research, and he knows that some of it is produced by authoritarian governments specifically trying to undermine democratic institutions. What he cannot get past is the track record of institutions asked to distinguish true from false political speech: the "Disinformation Governance Board" that the Department of Homeland Security created in April 2022 and paused three weeks later under bipartisan pressure; the COVID lab leak hypothesis, rated misinformation by platforms and later assessed as plausible by the FBI and the Department of Energy; the Hunter Biden laptop story, suppressed by platforms as foreign disinformation and later confirmed as authentic. He is not arguing these examples prove the disinformation threat isn't real. He is arguing they prove that whoever holds the power to label speech as disinformation will use it — not necessarily in bad faith, but in ways that reflect their own assumptions and interests.
Diane served as a county election director for twelve years before moving to a nonprofit that trains election officials. In the two election cycles since AI generation became cheap, her network has tracked something specific and concrete: targeted misinformation about voting logistics — false claims about polling place relocations, incorrect dates, warnings that people with outstanding warrants will be arrested at polling places — delivered to identified voter subgroups through social media and SMS. This is not a dispute about political ideas. It is a direct attack on the franchise, made vastly cheaper and more targetable by AI. She is not particularly interested in the broader debate about political deepfakes and the epistemic commons. She wants the verifiably false, precisely targeted logistical suppression to stop.
Rafael is a media economist who has studied the political economy of information for thirty years. He watched the disinformation panic emerge after 2016 with a mixture of recognition and frustration. Recognition because the structural vulnerabilities it identified — concentrated platform power, advertising models that monetize outrage, fragmented local journalism — were things he had been writing about since the 1990s. Frustration because the AI-and-disinformation frame tends to locate the problem in technology rather than in the political economy that deploys technology, and tends to produce governance proposals that address the technological symptom while leaving the structural cause untouched. The US right-wing information ecosystem was producing systematic disinformation before AI was involved. Addressing AI-generated content without addressing the ecosystem dynamics that amplify and reward it will not fix the underlying problem.
What epistemic defenders are protecting
The epistemic defense position is not primarily about any specific deepfake. It is about the conditions that make democratic self-governance possible: that voters can form genuine judgments about candidates and policies based on a shared information environment that is at least roughly tethered to reality.
They are protecting the authenticity conditions for meaningful consent. A voter who forms a judgment about a candidate based on a synthetic recording of the candidate saying something they never said has not meaningfully consented to be governed by that candidate. The argument is not just about manipulation — though it is partly about that — but about the logical structure of democratic legitimacy. Elections derive their authority from the informed choices of citizens. AI-generated content that makes it impossible to know what candidates actually said or believe degrades the informational basis on which those choices rest. Renée DiResta's Invisible Rulers: The People Who Turn Lies into Reality (PublicAffairs, 2024) traces how this degradation operates not through individual fabrications but through the cumulative effect of volume and velocity: when synthetic content is cheap and abundant, the cognitive cost of verification rises, and the instinct to disengage — to trust nothing — becomes rational at the individual level while being catastrophic at the collective level.
They are protecting the epistemic commons against what is genuinely new about AI. Disinformation is not new. Political propaganda is ancient. What AI changes is scale, personalization, and cost. Kathleen Hall Jamieson's Cyberwar: How Russian Hackers and Trolls Helped Elect a President (Oxford University Press, 2018) documented what a state-level adversary could accomplish with a team of human operators and a multi-year investment. Samuel Woolley's The Reality Game: How the Next Wave of Technology Will Break the Truth (PublicAffairs, 2020) documents the trajectory: the same capabilities are now available to political operatives, individual bad actors, and foreign intelligence services at a fraction of the cost and with a fraction of the operational sophistication previously required. The argument is not that AI creates new kinds of harm but that it dissolves the practical barriers — cost, skill, time — that previously limited who could produce sophisticated influence operations at scale.
They are protecting the capacity to verify: the shared epistemic infrastructure that allows disputed claims to be settled. Philip Howard's Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives (Yale University Press, 2020) identifies the core vulnerability: verification is an institutional capacity that requires time, resources, and trust in the institutions doing the verification. Industrial-scale synthetic content exploits the asymmetry between production (fast, cheap, automated) and verification (slow, expensive, labor-intensive). Epistemic defenders argue that counteracting that asymmetry — through disclosure requirements, watermarking, platform liability for hosting undisclosed AI electoral content — is not about restricting speech but about maintaining the conditions under which speech can be evaluated.
What free expression advocates are protecting
The free expression position does not argue that deepfakes are harmless or that election manipulation is a fictional concern. It argues that every mechanism proposed to address AI-generated disinformation requires assigning authority over political truth to someone — and that this authority, however it is framed, is dangerous.
They are protecting the principle that the cure has a worse track record than the disease. The recent history of "disinformation governance" in democratic societies is a catalogue of overreach: the Disinformation Governance Board, proposed as a coordinating mechanism within the Department of Homeland Security, was paused within three weeks of its announcement in April 2022, and disbanded that August, after bipartisan criticism that it gave a government agency unprecedented authority over political speech. The COVID-19 lab leak hypothesis — labeled as misinformation by platforms and fact-checkers for over a year — was assessed as plausible, possibly the more likely origin, by both the FBI and the Department of Energy by early 2023. The Hunter Biden laptop story, suppressed by Twitter and Facebook in October 2020 as potential foreign disinformation, was authenticated by major news organizations a year and a half later. Free expression advocates are not arguing these cases prove that platforms always get it wrong. They are arguing that the track record demonstrates that whoever holds epistemic authority over political speech will apply it in ways that reflect their own priors — with complete sincerity — and that this authority is structurally prone to suppressing true information that challenges incumbent power.
They are protecting against asymmetric enforcement. Jacob Mchangama's Free Speech: A History from Socrates to Social Media (Basic Books, 2022) documents a consistent historical pattern: content moderation regimes designed to protect public discourse from abuse are applied more aggressively against marginalized, non-Western, and minority communities than against dominant groups. The free expression concern about AI disinformation governance is not primarily about powerful actors protecting their speech — it is about the structural tendency of institutional content moderation to protect established speech while suppressing challenges to it, whatever a governance framework's stated principles.
They are protecting the distinction between restricting AI and restricting speech. Most current proposals for AI disinformation governance would, in practice, require platforms and governments to make judgments about the political content of AI-generated speech, not just its mode of production. A synthetic video of a politician that accurately depicts a position they actually hold is governed differently from one that depicts a position they don't hold — but determining which is which requires political judgment. Free expression advocates argue that the "AI content" frame is doing political work: it creates the appearance of content-neutral technological governance while embedding political judgments about truth inside that governance.
What electoral process defenders are protecting
A third position — narrower than the epistemic defense and less sweeping than the free expression position — argues that the AI and democracy debate is most tractable and most urgent at a specific point: the domain of verifiably false electoral logistics.
They are protecting the franchise itself, not just the quality of political discourse. Misinformation about where to vote, when to vote, whether a polling place has moved, whether your registration is valid, whether attending a polling place risks arrest — this is not a dispute about political ideas. It is targeted interference with the act of voting, made vastly cheaper and more precise by AI tools that allow operatives to identify specific voter subgroups and deliver tailored suppression messages at scale. The Cybersecurity and Infrastructure Security Agency's election security work has documented this specifically: AI-assisted targeting of voter suppression messaging toward minority voters, new voters, and low-propensity voters in competitive districts represents a qualitatively new logistical capacity for disenfranchisement. The electoral process position argues that this narrow domain — false claims about the mechanics of voting — is both the most clearly harmful and the most clearly regulable, because the falsity is verifiable (polling places either moved or they didn't) without requiring judgments about political interpretation.
They are protecting the distinction between foreign and domestic disinformation. There is considerably more legal and political consensus around prohibiting AI-enabled foreign influence operations than around domestic content regulation. State actors — Russia's Internet Research Agency, Iranian influence operations, Chinese coordinated inauthentic behavior campaigns — use AI to amplify divisive domestic content and fabricate authentic-seeming content without the First Amendment protections that apply to domestic political speech. Electoral process defenders argue that starting here — with clear foreign adversary applications — creates space for consensus while avoiding the constitutional and political minefield of domestic speech regulation. The distinction is increasingly blurry (domestic operatives use the same techniques; foreign actors amplify authentic domestic content), but it provides a more tractable entry point for governance.
They are protecting the narrow capacity of election officials to communicate reliably with voters. The practical request from election administrators is modest: that AI-generated content impersonating election officials or government agencies be prohibited; that platforms expedite removal of verifiably false logistical information during election periods; and that disclosure of AI-generated electoral advertising be required, as it is already required for human-produced political advertising. This is not a comprehensive theory of epistemic governance — it is a targeted defense of the administrative integrity of the election process itself.
What structural critics are protecting
A fourth position argues that the AI disinformation frame, however well-intentioned, systematically misidentifies both the cause of democratic information pathology and the category of solution it requires.
They are protecting the recognition that AI amplifies structural vulnerabilities it did not create. Yochai Benkler, Robert Faris, and Hal Roberts's Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (Oxford University Press, 2018) — the most thorough empirical study of the US information ecosystem — reached a conclusion that the AI disinformation frame tends to obscure: the primary driver of systematic disinformation in US politics is not foreign interference, not platform algorithms, and not technology per se, but the asymmetric structure of the right-wing media ecosystem, which has been organized around political identity and adversarial epistemics for decades. AI makes this worse in degree. It does not change the structure. Governance proposals targeting AI-generated content will not address the underlying ecosystem dynamics that produce, amplify, and reward disinformation as a political strategy.
They are protecting the political economy of journalism as the real epistemic infrastructure question. Victor Pickard's Democracy Without Journalism? Confronting the Misinformation Society (Oxford University Press, 2020) documents the relationship between the collapse of local journalism — roughly 1,800 local newspapers closed between 2004 and 2020, reducing accountability coverage in thousands of communities — and the vulnerability of those communities to disinformation. AI content governance does not rebuild local journalism. A media system in which advertising revenue migrated to platforms that don't produce journalism, leaving communities with no independent institutional capacity to verify political claims, is epistemically fragile in ways that AI-labeling requirements cannot address.
They are protecting the conditions of receptivity: the political economy that makes people susceptible to manipulation. Zeynep Tufekci has argued across multiple venues that the deepfake panic misunderstands the mechanism of political manipulation: the problem is not that people are deceived by individual pieces of synthetic content but that they exist in information environments where institutional trust has been systematically eroded, where economic insecurity creates receptivity to authoritarian populism, and where political identity has become so totalized that information incompatible with group identity is rejected regardless of its authenticity. In this environment, AI-generated disinformation is effective not because it is convincing but because it gives people who want to believe something a reason to. This is a political and economic problem. It is not a content problem.
Where the real disagreement lives
The AI and democracy debate is structured by several genuine disagreements that positional statements about regulation or free speech rarely surface.
The cause vs. amplifier problem. Is AI making the democratic information landscape worse in kind — creating genuinely new categories of harm that couldn't exist without it — or worse in degree, amplifying pre-existing structural vulnerabilities? The deepfake is the paradigm case for the "in kind" argument: synthetic audio of a candidate saying something they never said is a new kind of falsification. But the persuasive power of that deepfake depends on pre-existing distrust, pre-existing media ecosystem dysfunction, and pre-existing political identity polarization. The empirical question — how much harm is attributable to AI specifically vs. to the structures AI operates within — is genuinely contested and has direct policy implications. The "in kind" answer tends toward targeted AI content regulation. The "in degree" answer tends toward structural media reform.
The epistemic authority trap. Every mechanism designed to protect the information commons from synthetic manipulation requires assigning epistemic authority to someone: a government agency, a platform, a fact-checking organization, an international body. In a polarized environment, any such authority will be politically contested — not because partisans are irrational but because the designation "disinformation" is not politically neutral, and the actors best positioned to hold epistemic authority are the ones with the most at stake in the outcome. This is not an argument against all governance. It is a structural feature of the problem that any governance proposal must grapple with: the mechanism designed to protect political truth will become a political battleground. The proposals that have had the most traction — disclosure requirements rather than removal, narrow scope (electoral logistics, candidate impersonation) rather than broad epistemic policing — minimize but do not eliminate this problem.
The scale and personalization problem. The deepfake gets the most attention, but the more consequential AI application may be something less visible: industrial-scale personalized political messaging optimized for individual psychological profiles. A political campaign that can generate and test thousands of message variants, identify which one is most likely to shift the behavior of each specific voter, and deploy it automatically — with no human involved in the final step — is doing something qualitatively different from traditional political advertising, even when every claim in the message is technically true. The manipulation concern here is not about false information but about optimized psychological targeting at a scale and precision previously impossible. This application is harder to regulate than deepfakes (no single false claim to identify), more pervasive (already deployed by major campaigns), and more fundamentally threatening to the model of democratic consent (which assumes that voter choices are genuinely the voter's own, not the product of precision psychological optimization).
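To make the shape of that loop concrete, here is a deliberately toy sketch of the generic mechanism the paragraph describes: epsilon-greedy selection among message variants per audience segment, driven by simulated responses. Every segment name, variant, and response rate below is hypothetical and the data is synthetic; this illustrates the structure of such an optimization loop, not any campaign's actual system.

```python
import random

# Toy sketch of the loop described above: epsilon-greedy selection of a
# message variant per audience segment, driven by simulated responses.
# All segments, variants, and rates are synthetic and hypothetical; the
# point is the shape of the loop, not any real campaign system.

random.seed(0)

SEGMENTS = ["segment_a", "segment_b"]
VARIANTS = ["variant_1", "variant_2", "variant_3"]
EPSILON = 0.1  # fraction of deliveries spent exploring non-best variants

# Hidden simulated response rates the loop will converge toward.
TRUE_RATE = {
    ("segment_a", "variant_1"): 0.02, ("segment_a", "variant_2"): 0.05,
    ("segment_a", "variant_3"): 0.03, ("segment_b", "variant_1"): 0.06,
    ("segment_b", "variant_2"): 0.01, ("segment_b", "variant_3"): 0.04,
}

sends = {k: 0 for k in TRUE_RATE}  # deliveries per (segment, variant)
hits = {k: 0 for k in TRUE_RATE}   # observed responses per (segment, variant)

def observed_rate(segment, variant):
    return hits[(segment, variant)] / max(1, sends[(segment, variant)])

def choose(segment):
    """Epsilon-greedy: usually exploit the best-observed variant so far."""
    if random.random() < EPSILON:
        return random.choice(VARIANTS)
    return max(VARIANTS, key=lambda v: observed_rate(segment, v))

for _ in range(20_000):  # each iteration is one automated delivery
    seg = random.choice(SEGMENTS)
    var = choose(seg)
    sends[(seg, var)] += 1
    if random.random() < TRUE_RATE[(seg, var)]:  # simulated voter response
        hits[(seg, var)] += 1

for seg in SEGMENTS:
    best = max(VARIANTS, key=lambda v: observed_rate(seg, v))
    print(seg, "->", best)  # typically the highest-true-rate variant per segment
```

The structural point of the sketch is that the selection step is fully automated and runs per segment: nothing in the loop requires a human to approve, or even see, which variant each group of voters receives.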
The verification asymmetry problem. Producing synthetic electoral content is fast and cheap. Verifying, debunking, and correcting it is slow and expensive — and the correction rarely reaches the same audience as the original. This asymmetry existed before AI (the "firehose of falsehood" strategy associated with Russian information operations deliberately exploited it), but AI dissolves the practical constraints that limited how aggressively it could be exploited. Governance approaches that rely primarily on post-hoc correction — platform labeling, fact-checking, media literacy — are working against a structural asymmetry that worsens with every advance in generation capability.
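One way to see why post-hoc correction keeps losing ground is a back-of-envelope cost model. The symbols below are introduced purely for illustration; they do not come from any of the sources cited in this map.

```latex
% Hypothetical cost-asymmetry sketch (symbols are illustrative, not sourced).
% c_g = marginal cost of generating one synthetic item
% c_v = marginal cost of verifying or debunking one item, with c_v \gg c_g
% B   = the producing operation's budget
n = \frac{B}{c_g} \quad \text{items produced, so verifying all of them costs} \quad
n \, c_v = B \cdot \frac{c_v}{c_g}.
```

The ratio c_v / c_g is the defender's cost multiplier. Each advance in generation lowers c_g, so the multiplier grows even if verification costs merely hold steady, which is the precise sense in which the asymmetry worsens with every advance in generation capability.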
See also
- Who gets to decide? — the framing essay for the legitimacy conflict underneath AI election governance: whether synthetic political speech should be constrained by public rules, platform policy, or not at all, and which institution gets to decide before the damage is obvious.
- Who belongs here? — the framing essay for the membership conflict inside this debate: voter-suppression targeting, minority disenfranchisement, and the manipulation of the shared public sphere are not only speech-governance questions but questions about whose participation counts as worth protecting and whose place in the demos is easiest to treat as disposable.
- Social Media and Democracy: What Each Position Is Protecting — the structural antecedent to this map: the algorithmic amplification and platform business model concerns in that map are the information environment into which AI-generated content lands; what platform defenders, algorithmic accountability advocates, epistemic commons defenders, and structural critics are protecting in the platform debate are the upstream conditions that determine whether AI-generated content is dangerous, and why.
- AI Governance: What Each Position Is Protecting — the institutional parallel: the four governance frames in that map (innovation-first, safety-first, accountability, global governance) apply with particular sharpness in the electoral context; the accountability and global governance positions there are the closest institutional expression of the structural critique in this map; and the dual-use problem named in that map — AI capabilities that enable beneficial applications and harmful ones cannot be cleanly separated — is structurally identical to the problem of regulating AI political content without suppressing legitimate political speech.
- Platform Accountability and Content Moderation: What Each Position Is Protecting — the governance design question that underlies this map: who decides what electoral speech is permissible, under what framework, with what transparency and appeals process; the "epistemic authority trap" named in this map is a specific instance of the governance legitimacy problem that the platform accountability map identifies as the central unresolved tension in all platform content governance.
- Surveillance Capitalism: What Each Position Is Protecting — the commercial data infrastructure that enables AI-driven personalized political targeting; the behavioral data profiles used to optimize political advertising are built by the same surveillance architecture that Zuboff and others critique; the most under-regulated AI application in elections (personalized persuasion optimization) is only possible because of the commercial data collection that surveillance capitalism critics have documented.
- Digital Privacy and Surveillance: What Each Position Is Protecting — the government surveillance dimension: AI-enhanced state surveillance capability intersects with AI-generated disinformation in ways that the electoral map only touches; authoritarian governments use AI both to surveil dissidents and to generate synthetic content about them; the same AI capabilities serve both functions.
Further reading
- Renée DiResta, Invisible Rulers: The People Who Turn Lies into Reality (PublicAffairs, 2024) — the clearest account of how influence operations actually work at scale: DiResta traces how synthetic amplification and coordinated inauthentic behavior manufacture the appearance of organic social consensus, creating conditions where manipulated narratives acquire real political force not because they persuade but because they change what people believe others believe; essential for understanding why the "people won't be fooled by obvious deepfakes" argument misunderstands the mechanism of mass disinformation, which operates through social proof and perceived consensus rather than individual persuasion.
- Yochai Benkler, Robert Faris, and Hal Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (Oxford University Press, 2018) — the indispensable structural antecedent: Benkler and colleagues analyzed millions of news articles and social media posts from the 2016 cycle and found that the primary driver of disinformation was not Russian interference or Facebook algorithms but the asymmetric structure of the right-wing media ecosystem, which operates with stronger tribal identity enforcement, weaker fact-checking norms, and greater susceptibility to propaganda than the mainstream and center-left media ecosystems; the book is not a partisan argument but an empirical one, and its conclusion — that AI amplifies a pre-existing structural problem rather than creating a new one — is the foundation for the structural critique of AI disinformation governance.
- Kathleen Hall Jamieson, Cyberwar: How Russian Hackers and Trolls Helped Elect a President (Oxford University Press, 2018) — the most rigorous attempt to assess whether the 2016 Russian influence operations actually shifted vote totals; Jamieson's answer is carefully hedged: the operations were real, large in scale, and targeted with some precision, and circumstantial evidence suggests they may have been sufficient to tip a close election; her work is important for both sides of the debate — for the epistemic defense position as evidence that sophisticated influence operations matter, and for the structural position as evidence that impact depended on pre-existing ecosystem vulnerabilities rather than the sophistication of any individual piece of disinformation content.
- Samuel Woolley, The Reality Game: How the Next Wave of Technology Will Break the Truth (PublicAffairs, 2020) — a comprehensive account of computational propaganda across national contexts (US, UK, Brazil, Philippines, China, Russia) from the director of the Propaganda Research Lab at UT Austin; Woolley's contribution is empirical documentation of how quickly capabilities spread from state intelligence agencies to political consultants to ordinary political operatives, and his trajectory analysis is the clearest statement of why the declining cost of sophisticated influence operations is a structural change rather than a temporary edge case.
- Victor Pickard, Democracy Without Journalism? Confronting the Misinformation Society (Oxford University Press, 2020) — the strongest argument that the epistemic commons requires institutional journalism as its foundation and that AI disinformation governance cannot substitute for the media funding and structural reform that collapsed local journalism requires; Pickard traces the connection between news deserts and political disinformation vulnerability and argues that the solution is not content moderation but the public media funding models that sustain robust journalism in most other wealthy democracies.
- Philip Howard, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives (Yale University Press, 2020) — a practical account of the computational propaganda supply chain (content farms, bot amplification networks, microtargeting operations) and a governance proposal centered on platform transparency requirements, advertising regulation, and public investment in media literacy; Howard occupies a middle position between the epistemic defense and structural critique: he argues that the problem is real and urgent but that technical regulation is only one element of a solution that must also address the political economy of information.
- Jacob Mchangama, Free Speech: A History from Socrates to Social Media (Basic Books, 2022) — the most comprehensive history of free expression as a political and legal principle, with particular attention to the consistent historical pattern that content moderation regimes designed to protect speech from abuse are applied more aggressively against challengers to established power than against incumbents; the book does not argue against all content governance but provides the strongest empirical case for why the institutional design of epistemic authority matters as much as its stated principles.
- Zeynep Tufekci, "It's the (Democracy-Poisoning) Golden Age of Free Speech," Wired (January 2018) and subsequent essays — Tufekci's argument that the disinformation problem is about the architecture of the attention economy rather than any individual piece of content; her observation that the deepfake fear misunderstands the mechanism of political manipulation (which exploits motivated reasoning and identity-protective cognition rather than deceiving epistemically open minds) is the clearest statement of why governance focused on synthetic content addresses the symptom rather than the mechanism; her work across venues is the best entry point for the structural critique from someone who takes the disinformation concern seriously.