Sensemaking for a plural world

Perspective Map

AI and National Security: What Each Position Is Protecting

April 2026

On October 7, 2022, the U.S. Bureau of Industry and Security issued a set of export control regulations that, in the words of semiconductor historian Chris Miller, author of Chip War, represented "the most sweeping export controls ever imposed by any country." The rules restricted sales of advanced AI chips — specifically Nvidia's A100 and H100 — to China. They blocked U.S. citizens from working at Chinese advanced semiconductor fabs. And they expanded the foreign direct product rule to cover chips made anywhere in the world using American software or equipment.

The strategic logic behind the rules had been stated explicitly one month earlier by National Security Advisor Jake Sullivan. In a September 2022 speech, Sullivan announced that the United States was abandoning its prior doctrine of maintaining a "relative advantage" over adversaries in key technologies. In force-multiplier technologies like AI, semiconductors, and quantum computing, Sullivan said, "we want to maintain as large a lead as possible." The phrase captured a real shift: not just slowing China down, but actively widening the gap.

Then, on January 20, 2025 — the day of Donald Trump's second inauguration — a Chinese AI startup called DeepSeek released a model called R1. By January 27, it had become the most downloaded free app in the United States, overtaking ChatGPT. That same day, Nvidia lost more than $600 billion in market capitalization in a single session — the largest single-day market cap loss in U.S. stock market history. DeepSeek had achieved performance comparable to OpenAI's flagship models, at a fraction of the cost, using the chips Nvidia had designed specifically to comply with the export controls: the downgraded H800, itself added to the controlled list in October 2023 but stockpiled in the intervening year.

The same week, Pentagon logs showed Project Maven — the AI-powered targeting system now operated through a $1.3 billion Palantir contract — had helped compress kill chain decisions from hours to minutes in strikes across the Middle East. By March 2026, the Trump administration had designated Anthropic, whose CEO had refused to remove human-authorization requirements from AI targeting systems, a "supply chain risk to national security."

These events — the chip controls, DeepSeek, Maven, the Anthropic standoff — are not separate debates. They are the same debate, asked four different ways: What does it mean to maintain technological supremacy? Who bears the cost of pursuing it? What constraints, if any, should govern its military application? And who gets to decide any of this?

What security-first export control hawks are protecting

The recognition that in transformative general-purpose technologies, a small lead can become a decisive structural advantage — and that maintaining it requires active, aggressive policy, not just market competition. Jake Sullivan's "as large a lead as possible" formulation was a repudiation of a prior assumption: that the United States could maintain its technological edge through innovation alone, and that export controls only needed to prevent adversaries from leaping ahead, not from keeping pace. The 2022 controls operationalized a different theory: that AI chips are to the 2020s what uranium enrichment was to the 1940s — a chokepoint technology whose access determines not just who wins a market but who shapes the next century's military and political order.

The hawk case for export controls rests on a specific empirical claim: that without the controls, China's AI capabilities would be substantially more advanced. The TSMC "die bank" episode is the clearest evidence. Before the controls took effect, Huawei illegally stockpiled approximately 2.9 million advanced chip dies produced by TSMC. That stockpile sustained Huawei's Ascend AI chip production through 2024 and 2025. DeepSeek used H800 chips stockpiled before October 2023. In the hawk view, both facts prove the opposite of what critics claim: the controls are working well enough that China had to spend years stockpiling to circumvent them, and a China without those pre-ban stockpiles would be meaningfully further behind.

The technology ceiling argument is the deeper version of this case. SMIC, China's most advanced domestic chip foundry, is currently producing chips at approximately 7nm-equivalent density. The global frontier, currently held by TSMC, is 2nm. SMIC cannot move meaningfully below 7nm without ASML's extreme ultraviolet lithography machines, which are export-controlled (the Netherlands, under U.S. pressure, restricted ASML exports to China in 2023). The distance from 7nm to 2nm spans roughly two to three generations of performance. The ceiling is real. The question is whether it matters enough to justify the cost of maintaining it.

Dario Amodei of Anthropic made the hawk case explicitly in a Wall Street Journal op-ed in January 2025: the controls should be tightened, not loosened, and the argument that they are backfiring by spurring Chinese innovation "gets the causation backwards." Chinese stockpiling before controls took effect was predictable and should have been anticipated; the solution is faster and more comprehensive enforcement, not retreat. The Semiconductor Industry Association's warning that controls could harm U.S. firms should be taken seriously, but not treated as dispositive — in 1945, the argument that nuclear controls would harm American physics research would have been true and irrelevant.

What economic and industry critics of controls are protecting

The market logic that made American semiconductor leadership possible in the first place: global revenue funds the R&D that generates the lead, and removing the revenue base doesn't slow China down — it accelerates China's domestic build-out while weakening the American firms that the controls are supposed to protect. Jensen Huang, Nvidia's CEO, described China as a "$50 billion annual opportunity" for Nvidia in calendar year 2025; in the industry view, that forgone revenue is forgone research capacity. A New York Federal Reserve study found that the October 2022 announcement alone caused a persistent 2.5 percent stock valuation decline for affected U.S. firms, representing approximately $130 billion in aggregate market capitalization. The Information Technology and Innovation Foundation estimated that full decoupling would cut U.S. semiconductor research and development spending by roughly 24 percent — about $14 billion per year — and eliminate 80,000 or more direct jobs, with five to six downstream jobs lost for each semiconductor position.

The industry critique has a harder version: the controls may be producing exactly the outcome they were designed to prevent. Before October 2022, Huawei was a chip customer — dependent on TSMC and other non-Chinese suppliers. After the controls, Huawei was forced to become a chip manufacturer. By 2024, it had become the fourth largest wafer fabrication equipment buyer in the world, with $7.3 billion in capital expenditure, up 27 percent year over year. A company that was purchasing chips from the West is now building the infrastructure to produce them domestically. The Semiconductor Industry Association's position — that controls should be "narrowly targeted" and developed "with sufficient industry expertise" — is not a lobbying position so much as an operational observation: controls designed without understanding the actual supply chain create loopholes, generate stockpiling incentives, and drive the adversary toward self-sufficiency faster than arms-length competition would.

DeepSeek crystallized the efficiency argument. MIT Technology Review's investigation found that export controls forced DeepSeek's engineers to optimize radically for compute efficiency — not because they wanted to, but because they had to. The H800 chip they used has roughly one-third the bandwidth of the H100 for the operations that matter most in AI training. The team developed novel memory management techniques, a mixture-of-experts architecture that activates only a fraction of its parameters at once, and training protocols that squeezed far more performance from less hardware than U.S. labs considered worth pursuing. In the Brookings Institution's analysis, "DeepSeek shows the limits of U.S. export controls on AI chips" — not because the controls failed to restrict hardware, but because hardware restriction drove algorithmic innovation that hardware access had made unnecessary. The controls made China better at AI under resource constraints, and resource constraints may turn out to be the permanent condition of AI deployment, not its frontier.
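The mixture-of-experts idea mentioned above — activating only a fraction of a model's parameters per input — can be illustrated with a toy sketch. This is not DeepSeek's actual architecture; the expert count, dimensions, and top-k value below are arbitrary illustrative choices, and real systems add load balancing, batching, and learned training dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: 8 experts, but only the top 2
# (by gate score) run for any given token. All sizes are
# hypothetical, chosen for illustration only.
n_experts, d_model, top_k = 8, 16, 2
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    scores = x @ gate_w                   # one gating score per expert
    chosen = np.argsort(scores)[-top_k:]  # route to the top-k experts only
    weights = np.exp(scores[chosen])
    weights /= weights.sum()              # softmax over the chosen experts
    # Only top_k of n_experts weight matrices do any work for this
    # token: 2 of 8, i.e. 25% of the layer's parameters are active.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)
```

The efficiency point is visible in the routing step: total parameter count (and thus model capacity) grows with `n_experts`, while per-token compute grows only with `top_k` — which is why the technique is attractive when compute, not memory, is the binding constraint.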

What international and multilateral governance advocates are protecting

The possibility that the risks posed by AI — autonomous weapons capable of acting without human authorization, surveillance systems that can identify and track dissidents at scale, disinformation systems that can destabilize democratic elections — require international frameworks that no unilateral technology lead, however large, can address on its own. The multilateralist position is not primarily that export controls are wrong. It is that export controls are the wrong tool for the most important problems, and that pursuing technological supremacy as the organizing frame of AI strategy crowds out the governance work that actually needs to happen.

The evidence for this concern is in what the international forums have and haven't produced. The November 2023 Bletchley Park AI Safety Summit — 28 countries plus the European Union — issued the Bletchley Declaration and set in motion what became the International Network of AI Safety Institutes (eventually including the U.S., UK, EU, Japan, Singapore, South Korea, Canada, France, Kenya, and Australia). This was genuine diplomatic infrastructure for AI governance. The February 2025 AI Action Summit in Paris, which France and India co-chaired with approximately 100 countries and 1,000 stakeholders in attendance, was notably renamed from "AI Safety" to "AI Action" — and its final declaration included no substantial commitments to AI safety. The Paris summit marked the moment when the international frame shifted from managing AI's risks to competing for AI's benefits.

The multilateralist critique identifies a specific mechanism by which technology competition undermines governance. The U.S. AI Action Plan (July 2025) explicitly pursues "plurilateral" controls — coordinating with Japan, the Netherlands, and South Korea outside multilateral treaty bodies, rather than working through the Wassenaar Arrangement or seeking UN-level agreements. This approach is deliberately chosen: multilateral bodies include China, which can block or water down agreements. But the cost is that China and Russia, excluded from the design of these rules, have no incentive to comply and every incentive to position themselves as the alternative for the Global South. In November 2025, 156 nations supported a UN resolution calling for a legally binding treaty on autonomous weapons systems. The United States was not among them. Russia voted no, alongside North Korea and Belarus — and is reportedly conducting approximately 300 AI-assisted strikes per day in Ukraine. The multilateral infrastructure that might eventually constrain such use is being built without U.S. leadership, and may be weaker for it.

The Center for Strategic and International Studies has documented that U.S. allies face a genuine legal problem: most of them lack the domestic statutory authority to implement AI and semiconductor export controls comparable to U.S. BIS rules. The plurilateral approach requires each ally to build its own legal infrastructure — a slow, politically contentious process that creates enforcement gaps and inconsistency. RAND's analysis of the AI Diffusion Rule framework (now rescinded) found that the rule's most ambitious goal — using chip access as leverage to bind Tier 2 countries into U.S. data center governance standards — was also its most fragile, because it depended on countries voluntarily accepting U.S. oversight conditions that reduce their digital sovereignty. China's counter-narrative — that it offers access without conditions — is structurally appealing to governments skeptical of American tech hegemony.

What democratic accountability and anti-militarism critics are protecting

The principle that when governments use AI to make targeting decisions that result in the deaths of specific human beings, democratic accountability — not just technical oversight — is what "meaningful human control" actually requires: known standards, public debate, congressional authorization, and the ability to hold specific people responsible when something goes wrong. The accountability critique is not primarily about whether AI targeting systems work. It is about who knows they exist, who authorized them, what standards they operate under, and who is liable when they fail.

Project Maven — launched in April 2017, now operated under a Palantir contract that grew from $480 million in May 2024 to $1.3 billion by May 2025, with a separate $10 billion Army framework announced in July 2025 — is described by the Pentagon as "human on the loop": humans authorize each engagement, but AI recommends targets, ranks them, and fuses intelligence from more than 150 data sources simultaneously. The Brennan Center for Justice, in its 2025 report on the business of military AI, found that "even the most basic information about the types of systems the Pentagon is adopting is often hidden from Congress and the public." The FY2025 National Defense Authorization Act, signed December 2024, requires an annual congressional report listing all autonomous weapons systems approved and deployed under DoD Directive 3000.09 — the first time Congress has mandated such disclosure, after seven years of Maven's operation without it.

The Anthropic standoff of early 2026 crystallized the accountability argument in a way that abstract policy debates cannot. Anthropic signed a $200 million DoD contract in July 2025 with two explicit contractual conditions: Claude models would not support fully autonomous lethal targeting without human authorization, and they would not support domestic surveillance of U.S. citizens. These were not policy preferences but contractual red lines. When the Pentagon demanded their removal and Anthropic refused, the Trump administration designated Anthropic a "supply chain risk to national security" in March 2026, ordering federal agencies to phase out Claude within six months. The implications are significant: a company drawing a line against autonomous weapons targeting was treated not as acting on legitimate ethical grounds but as a threat to the national security apparatus itself.

DoD Directive 3000.09, updated in January 2023, is widely mischaracterized as a ban on lethal autonomous weapons systems. It is not. It requires "appropriate levels of human judgment over the use of force" and senior-level review before deploying autonomous weapons — but explicitly does not define what "appropriate" means, and Pentagon officials have stated that the U.S. "may be compelled to develop [LAWS] if U.S. competitors choose to do so." This is not a guarantee against autonomy; it is a procedural framework that defers the core ethical question to a race-to-the-bottom dynamic. If Russia or China deploy fully autonomous weapons, the directive provides the rationale for the U.S. to follow. The accountability critics argue that this is precisely the wrong governance structure for a technology whose errors, in combat, cannot be undone.

What cuts across all four positions
  • The market structure of semiconductor manufacturing has no real precedent — and every position in this debate is partly wrong because of it. TSMC holds 78 percent of the global foundry market and 92 percent of the manufacturing capacity for the most advanced chips. This is not a normal market. The "silicon shield" theory — that Taiwan's chip dominance simultaneously deters Chinese invasion and makes Taiwan a prime military target — cuts against both the hawk position (which assumes controls will hold indefinitely) and the multilateralist position (which assumes governance frameworks can be built on a stable technological foundation). As TSMC expands to Arizona, Kumamoto, and Dresden, Taiwan's unique indispensability erodes — potentially weakening the deterrence that has kept the Taiwan Strait from becoming an active conflict zone. This is a structural dynamic that no policy framework currently in place is designed to address.
  • DeepSeek proved that the hardware and the intelligence are more separable than the export control framework assumed — and nobody on any side of this debate fully reckoned with that before January 2025. The "small yard, high fence" doctrine assumed that restricting access to advanced training chips would constrain the frontier of AI capability. DeepSeek's engineers, working under real hardware constraints, developed training techniques and architectural innovations — mixture-of-experts, novel memory management, aggressive quantization — that reduced the compute required to achieve frontier-level performance by an order of magnitude. MIT Technology Review's analysis confirmed that U.S. university researchers independently replicated the results; the efficiency gains were real. This means the "fence" has a gate that the controls did not anticipate: algorithmic innovation is not a controlled technology, and scarcity drives it faster than abundance. Both the hawks (who argued controls would hold the line) and the industry critics (who argued controls would be easily worked around) were partially right, but neither predicted the specific mechanism — that being cut off from compute would accelerate, not forestall, Chinese AI development.
  • The inconsistency of U.S. policy between administrations is itself a geopolitical liability — and all four positions understate this. The Biden AI Diffusion Rule, a three-tier country framework years in the making, was issued January 15, 2025, set to take effect May 15, 2025, and rescinded May 13, 2025 — two days before implementation. The H20 chip ban was imposed April 9, 2025, reversed in July 2025 as part of a trade war truce, with 15 percent of China sales revenue paid to the Commerce Department. U.S. allies building their own export control legal frameworks — at U.S. request — face a principal who may reverse course between administrations. Countries in the middle tier that accepted governance conditions in exchange for chip access found those conditions evaporating when the Diffusion Rule was rescinded. The instability of the U.S. position is a structural problem that neither tighter controls nor looser ones resolve.
  • The question of what "human control" means in AI-assisted targeting is not resolved by existing law, and the Geneva Conventions provide no clear answer. International humanitarian law requires distinguishing between combatants and civilians, taking precautions to minimize civilian harm, and holding specific individuals accountable for violations. All three requirements become ambiguous when the entity making the targeting recommendation is a system trained on historical strike data that may itself encode prior errors. A UN analysis in 2024 found that AI in warfare "muddles legal accusations, producing no designated individual liable for transgressions." Pentagon Directive 3000.09's "human on the loop" framework may satisfy the letter of international humanitarian law — a human reviews each engagement — while violating its spirit if the review happens in under 90 seconds on the basis of a recommendation the human has no realistic basis to second-guess. The accountability critics are right that this is unresolved. The hawks are right that Russia is not waiting for it to be resolved. Neither position has an adequate answer to what happens when both things are true at once.

See also

  • Who bears the cost? — the framing essay for the burden-shifting dispute inside AI security policy: export controls, alliance strategy, and autonomous weapons programs all ask which workers, firms, civilians, and smaller states are expected to absorb the risks created by great-power competition.
  • Who gets to decide? — the framing essay for the authority conflict underneath this map: whether military planners, intelligence agencies, executive officials, allied states, or democratic publics should have standing to set the terms for high-stakes AI deployment.
  • AI Governance — the broader debate over how AI should be regulated domestically and internationally; this map focuses specifically on the national security dimension: export controls, military AI, and the U.S.-China technology race.
  • AI Safety and Existential Risk — the debate over whether advanced AI poses civilizational risks independent of geopolitics; the military AI debate and the AI safety debate have largely developed in parallel, but the Anthropic standoff connects them directly.
  • NATO and Collective Security — the broader debate over how Western alliances should respond to authoritarian state power; the semiconductor export control regime is in practice a new form of collective security arrangement, with the same coordination problems.
  • Nuclear Security and Nonproliferation — the closest historical precedent for the current AI export control debate; the NPT regime provides both a model (controls can be sustained for decades with sustained political will) and a warning (proliferation happened anyway, and controls changed its pace but not its direction).
  • Open-Source AI and Model Weights — the debate over whether AI model weights should be freely released; DeepSeek released its model weights openly, which the export control framework has no mechanism to address — a chip is a physical object that can be blocked at a border, but a model weight is a number that can be transmitted anywhere.
  • Algorithmic Governance and Automated Decisions — the domestic version of the accountability question: when algorithmic systems make consequential decisions about people, who is responsible and what standards apply? The military targeting debate is this question at its highest stakes.
  • Global Trade and Industrial Policy — the CHIPS and Science Act is simultaneously a national security measure, an industrial policy, and a trade intervention; the WTO complaints China has filed against it, and the debate over whether domestic semiconductor subsidies undermine the trade regime, are the industrial policy dimension of the same debate.

References and further reading

  • Chris Miller: Chip War: The Fight for the World's Most Critical Technology, Scribner (2022) — the essential historical context for the current semiconductor competition; traces how the United States achieved and maintained semiconductor dominance from the 1960s through the TSMC era, and why chips became the central terrain of U.S.-China strategic competition
  • U.S. Bureau of Industry and Security: "Commerce Implements New Export Controls on Advanced Computing and Semiconductor Manufacturing Items", October 7, 2022 — the foundational rule that began the current era of AI chip controls; introduced the A100/H100 restrictions, the U.S. persons restrictions prohibiting Americans from working at Chinese advanced fabs, and the expanded foreign direct product rule
  • Jake Sullivan: Remarks at the Special Competitive Studies Project Global Emerging Technologies Summit, September 16, 2022 — the speech that announced the "as large a lead as possible" doctrine, explicitly repudiating the prior framework of seeking only "relative advantage" in dual-use technologies; the strategic theory of the October 2022 controls
  • U.S. Bureau of Industry and Security: "Commerce Strengthens Restrictions on Advanced Computing Semiconductors, Semiconductor Manufacturing Equipment, and Supercomputing Items to Countries of Concern", October 17, 2023 — the update that added Nvidia H800 and A800 to the controlled list (closing the loophole Nvidia had designed around the 2022 rules) and expanded geographic scope to 43 additional countries to prevent diversion
  • Dario Amodei: "America Can't Afford to Lose the AI Race", Wall Street Journal, January 2025 — Anthropic CEO's explicit hawk position on export controls; argues that controls should be strengthened, not loosened, and that the efficiency-innovation argument "gets the causation backwards"
  • Anthropic: "Securing America's Compute Advantage: Anthropic's Position on the AI Diffusion Rule", January 2025 — formal policy submission to BIS supporting the three-tier framework while recommending tighter Tier 2 thresholds, government-to-government agreements for large deployments, and increased enforcement funding
  • Center for Strategic and International Studies: "DeepSeek: A Deep Dive", 2025 — a compact technical and policy read on what DeepSeek's H800-constrained training run actually showed: how export controls shaped the hardware environment, what efficiency gains came from architecture and systems design, and why the episode complicated simple claims that chip scarcity alone would cap Chinese AI progress
  • Brookings Institution: "DeepSeek shows the limits of US export controls on AI chips", February 2025 — argues that hardware restrictions drove algorithmic efficiency innovation, and that the scarcity created by controls may accelerate, rather than constrain, Chinese AI capability development
  • Center for Strategic and International Studies: "DeepSeek, Huawei, Export Controls, and the Future of the US-China AI Race", 2025 — CSIS Wadhwani AI Center analysis; takes a more cautious view than Brookings, arguing that without controls China's capabilities would be substantially more advanced, and that the controls are working even if imperfectly
  • Federal Reserve Bank of New York: Securing Technological Leadership? The Cost of Export Controls on Firms, Staff Report No. 1096 (2024; rev. 2025) — found that the October 2022 announcement caused a persistent 2.5 percent stock valuation decline for affected U.S. firms, an aggregate estimated $130 billion in market capitalization loss; the empirical baseline for the economic cost of the controls
  • Information Technology and Innovation Foundation: "Decoupling Risks: How Semiconductor Export Controls Could Harm US Chipmakers and Innovation", November 2025 — estimates full decoupling would cut U.S. semiconductor R&D spending by 24 percent (~$14 billion annually) and eliminate 80,000+ direct jobs; the most detailed modeling of the economic cost of the export control regime
  • Semiconductor Industry Association: Export Control and National Security Policy — the industry's formal position; calls for controls to be "narrowly targeted," coordinated with allies, and developed "with sufficient industry expertise," warning that "poorly calibrated" rules harm U.S. firms without achieving security goals
  • SemiAnalysis: "Huawei Ascend Production Ramp: Die Banks, TSMC Continued Production, HBM is The Bottleneck", 2025 — detailed technical analysis of Huawei's Ascend 910C production at SMIC; yield rates (20% in September 2024, approximately 40% by February 2025), production volumes (805,000 units in 2025), and the HBM bottleneck that remains a genuine constraint on Chinese AI chip output
  • RAND Corporation: "Can Export Controls Create a US-Led Global AI Ecosystem?", RAND Perspectives PEA3776-1 (2025) — analysis of the AI Diffusion Rule framework; examines whether chip access conditionality can bind Tier 2 countries into U.S. data center governance standards, and the risks of fragmentation if it cannot
  • Center for Strategic and International Studies: "Understanding U.S. Allies' Current Legal Authority to Implement AI and Semiconductor Export Controls", 2025 — documents the legal gaps in allied countries' domestic export control authority; explains why the plurilateral coordination approach requires each ally to build new statutory infrastructure from scratch
  • Future of Life Institute: "AI Safety Summits", 2023–2025 — tracks the Bletchley (November 2023), Seoul (May 2024), and Paris (February 2025) summits; notes the shift from "AI Safety" to "AI Action" framing at Paris and the absence of substantive safety commitments in the final Paris declaration
  • Brennan Center for Justice: "The Business of Military AI", 2026 — documents the lack of public transparency around Pentagon AI acquisitions; finds that "even the most basic information about the types of systems the Pentagon is adopting is often hidden from Congress and the public"; the primary source for the accountability critique of Project Maven
  • Bloomberg: "AI Warfare Becomes Real for US Military With Project Maven", 2024 — the most detailed journalistic account of how Project Maven operates in practice; documents the compression of kill chain decisions from hours to minutes, the fusion of 150+ data sources, and the strikes in Iraq, Syria, and Yemen attributed to Maven intelligence support
  • U.S. Department of Defense: Maven Smart System contract notice, 2024 — the initial $480 million award for the Maven Smart System prototype; a useful primary anchor for the later contract-expansion reporting discussed in the essay
  • U.S. Department of Defense: Directive 3000.09, "Autonomy in Weapon Systems", updated January 2023 — the governing policy for U.S. autonomous weapons development; requires "appropriate levels of human judgment" and senior review before deployment; explicitly does not ban lethal autonomous weapons systems; defines "human on the loop" as the current standard for systems like Maven
  • Lawfare: "Decoding the Defense Department's Updated Directive on Autonomous Weapons", 2023 — authoritative analysis of what DoD 3000.09 does and does not prohibit; corrects the widespread mischaracterization of the directive as a moratorium on LAWS
  • Atlantic Council: "The Anthropic Standoff Reveals a Larger Crisis of Trust Over AI", March 2026 — analysis of the March 2026 Trump administration designation of Anthropic as a "supply chain risk to national security"; frames it as the clearest evidence of the accountability gap: when a company draws explicit red lines against autonomous lethal targeting, the Pentagon's response is to classify the company as a national security threat rather than to negotiate the ethical question on its merits
  • United Nations General Assembly: Resolution A/RES/80/57 on lethal autonomous weapons systems, December 2025 — the core multilateral UN text on autonomous weapons from the 2025 General Assembly session; a direct primary source for the treaty-governance debate invoked in the essay
  • MIT Technology Review: "Taiwan's Silicon Shield Could Be Weakening", August 2025 — analysis of how TSMC's expansion to Arizona, Kumamoto, and Dresden is eroding Taiwan's unique indispensability; examines the Taiwan government's semiconductor export policy (prohibiting offshore production of most advanced chips) and U.S. pressure to accelerate overseas fab construction
  • Stimson Center: "Why Taiwan Fears America First Risks Eroding Its Silicon Shield", 2025 — argues that U.S. pressure to move TSMC production offshore could reduce the deterrence value of Taiwan's semiconductor position, serving Chinese strategic interests without a single military action