Sensemaking for a plural world

Perspective Map

AI Governance: What Each Position Is Protecting

March 2026

Zara is a machine learning safety researcher at a frontier AI laboratory. Her job is to evaluate what large language models will do in situations their developers did not anticipate — edge cases, adversarial prompts, scenarios where the model's stated behavior and its actual behavior come apart. She has spent three years watching capability gains accumulate faster than understanding of why they work, and she finds this unsettling in a way that is difficult to explain to people outside the field. The gap between what these systems can do and what anyone can verify about how or why they do it is not closing. She believes that the labs, including her own, are under structural pressure to move faster than safety understanding warrants — not because anyone wants a catastrophe but because the competitive dynamics of the field punish those who slow down unilaterally. She believes the only way to change those dynamics is external: mandatory safety evaluations, licensing requirements, international coordination. She does not know whether any of this will happen before capabilities reach a level that makes it too late to matter.

James runs a startup that builds AI tools for supply chain logistics. He has been in the AI industry for eleven years, long enough to remember previous waves of AI concern that turned out to be overblown. He is genuinely uncertain about the long-term trajectory of the technology, but he is confident about what excessive regulation would do to companies like his: it would give the advantage to the largest, best-resourced incumbents who can absorb compliance costs, and it would hand a strategic gift to Chinese AI developers who operate under no such constraints. He is not reckless. He believes safety research is valuable. He believes the people driving the regulation agenda are mostly thinking about science fiction scenarios while his actual products — tools that reduce fuel consumption and food waste — create real and immediate benefits. He worries that conflating speculative existential risk with the present-tense governance of deployable systems will produce policy badly calibrated to either problem.

Naledi works on technology policy for a regional African institution. She has spent the last five years watching the debate over AI governance unfold in Geneva, Brussels, and Washington with a growing sense that something is missing from the frame. The debates she follows are almost entirely preoccupied with two concerns: whether AI will develop autonomous goals that threaten human survival, and whether AI will give China a strategic advantage over the United States. Neither of these questions describes the AI governance problems that most concern her: the use of AI systems trained on predominantly Western data to make decisions about people in contexts where that training is poorly matched to local conditions; the extraction of data from African users to train systems whose value accrues entirely to foreign shareholders; the use of AI-powered surveillance systems by governments with poor human rights records, sold to them by companies in countries that frame AI governance primarily as a competition with other powerful states. She believes that AI governance frameworks that center the concerns of a small number of wealthy nations and large technology companies will produce rules that protect the interests of those actors — and that this outcome will be dressed up, with complete sincerity, as governance for humanity's benefit.

What innovation-first advocates are protecting

The innovation-first position does not argue that AI governance is unnecessary. It argues that the timing, form, and motivation of regulatory intervention matter enormously — and that the current push for AI regulation is premature, poorly calibrated, and at risk of producing outcomes worse than the problems it is designed to prevent.

They are protecting the distinction between governing deployable systems and restricting research. The harms that innovation advocates are most willing to address — discriminatory hiring algorithms, unreliable medical diagnostic systems, manipulative recommender engines — are deployment problems, amenable to deployment-layer governance. The harms that preoccupy safety advocates — misaligned superintelligent systems, AI-enabled authoritarianism, catastrophic concentration of power — are speculative future problems that may not materialize. Innovation advocates argue that frameworks built around speculative risk will inevitably reach back to restrict the underlying research — which means restricting the same research that would eventually produce better safety understanding. The precautionary instinct is self-defeating on this account: you cannot make AI safer by slowing the people working on AI safety.

They are protecting competitive dynamics that function as the primary mechanism through which AI improves and through which new entrants can challenge incumbents. The argument is not that competition is the only way to develop AI but that regulatory frameworks creating significant compliance costs selectively disadvantage smaller entrants, startups, and academic researchers in favor of large incumbents who can absorb those costs. The European Union's AI Act is the clearest current example: a framework whose compliance requirements for high-risk systems are most easily met by organizations with large legal and compliance departments — precisely the incumbents whose market position the EU ostensibly wants to challenge. Innovation advocates worry that regulatory design, however well-intentioned, tends to entrench the powerful actors it is nominally constraining.

They are protecting scarce policy attention from misallocation. The time spent by policymakers, researchers, and the public on speculative AI risk is time not spent on the concrete present-tense policy challenges AI poses: how to handle labor displacement, how to govern AI in high-stakes government decisions, how to ensure productivity gains are broadly distributed rather than captured by capital owners. Marc Andreessen's "Techno-Optimist Manifesto" (2023) articulates an extreme version: AI advancement is not only compatible with human flourishing but is its precondition; the greatest risk is developing AI too slowly, allowing the problems AI could solve — disease, poverty, material scarcity — to persist. Most innovation advocates are less sweeping, but share the underlying intuition: the benefits of advanced AI are large, real, and immediate, and governance frameworks calibrated to speculative harms risk sacrificing those benefits for uncertain protection.

What safety-first advocates are protecting

Safety-first advocates — organized around AI labs' safety teams, academic centers for AI safety, and an influential body of technical and philosophical writing — argue that the development of increasingly capable AI systems poses risks not adequately addressed by market incentives or existing regulatory frameworks. Their core concern is not any specific deployment of AI but the trajectory of the technology itself.

They are protecting the alignment problem as a problem worth taking seriously before it becomes urgent. Stuart Russell's Human Compatible: Artificial Intelligence and the Problem of Control (Viking, 2019) provides the clearest technical statement: the standard model of AI development builds systems that optimize for specified objectives. The problem is that almost no objective, when pursued with sufficient capability, produces the outcomes humans actually want. A system optimizing for a proxy measure will find the path that maximizes the measure rather than the path that achieves the underlying goal the measure was meant to track. Russell argues this is a structural feature of how current AI systems are designed — and that designing systems that remain aligned with human values as they become more capable requires fundamentally different approaches than the ones currently dominant.
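
A toy numerical sketch can make Russell's point concrete. Everything below is an invented illustration, not Russell's formalism: the proxy (think engagement clicks) rewards a behavior linearly, while the true goal (think user satisfaction) peaks and then declines as the same behavior shades into manipulation. An unconstrained optimizer of the proxy lands on a behavior that is actively harmful by the true measure.

```python
# Minimal sketch of proxy-objective divergence (Goodhart's law).
# The functions and constants are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
behaviors = rng.uniform(0, 10, size=1000)  # candidate behaviors, 1-D for clarity

def proxy(x):
    # What the system is told to maximize (e.g., engagement): more is always better.
    return x

def true_goal(x):
    # What the designers actually wanted (e.g., satisfaction):
    # benefit peaks near x = 3.3, then the costs of manipulation dominate.
    return x - 0.15 * x**2

best_for_proxy = behaviors[np.argmax(proxy(behaviors))]
best_for_goal = behaviors[np.argmax(true_goal(behaviors))]

print(f"proxy-optimal behavior {best_for_proxy:.2f} "
      f"scores {true_goal(best_for_proxy):.2f} on the true goal")
print(f"goal-optimal behavior  {best_for_goal:.2f} "
      f"scores {true_goal(best_for_goal):.2f} on the true goal")
```

In this sketch the proxy-optimal behavior scores well below zero on the true goal while the goal-optimal behavior scores modestly positive: the better the optimizer, the worse the outcome, which is the structural point Russell is making.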

They are protecting the capacity of democratic societies to govern a technology that advances faster than their institutions can respond. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014) introduced the orthogonality thesis — that intelligence and goals are independent; a highly capable system could optimize for almost any objective — and the instrumental convergence thesis — that certain sub-goals, including self-preservation, resource acquisition, and goal preservation, are useful for almost any terminal objective. Together these suggest that a sufficiently capable AI system, pursuing almost any objective, would behave in ways humans find catastrophically harmful — not from malice but from the ruthless efficiency of optimization. The safety advocate's position is that this scenario, however uncertain, has consequences severe enough to warrant serious precautionary investment now, rather than waiting until capable systems exist and the problem becomes urgent.
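
The instrumental convergence thesis can also be miniaturized, with the same caveat that the actions, goals, and payoffs below are invented for illustration rather than drawn from Bostrom: a two-step planner given three unrelated terminal goals chooses the generic resource-acquisition action first in every case, because resources raise the achievable value of any increasing objective.

```python
# Toy sketch of instrumental convergence; the actions, goals, and payoffs
# are invented for illustration, not Bostrom's formal argument.
from itertools import product

ACTIONS = ["acquire_resources", "pursue_goal"]

def step(state, action, goal_fn):
    resources, progress = state
    if action == "acquire_resources":
        return (resources + 2, progress)                    # generic sub-goal: stockpile
    return (resources, progress + goal_fn(resources))      # spend resources on the goal

# Three unrelated terminal goals, each an increasing function of resources.
GOALS = {
    "make_paperclips": lambda r: r,
    "prove_theorems": lambda r: r ** 0.5,
    "cure_disease": lambda r: 3 * r,
}

for name, goal_fn in GOALS.items():
    def value(plan, goal_fn=goal_fn):
        state = (0, 0.0)                                    # start with nothing
        for action in plan:
            state = step(state, action, goal_fn)
        return state[1]                                     # final goal progress
    best = max(product(ACTIONS, repeat=2), key=value)
    print(f"{name}: best first action = {best[0]}")
```

Every goal's best plan opens with the same instrumentally useful move; nothing about the terminal objective had to be hostile for resource acquisition to dominate.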

They are protecting against a collective action problem that no single actor can solve unilaterally. Yoshua Bengio, Geoffrey Hinton, and other prominent AI researchers have signed public statements arguing that competitive dynamics create systematic pressure to sacrifice safety for speed: a lab that slows for safety reasons will be overtaken by a lab that does not; a nation imposing strict safety requirements will cede leadership to a nation that does not. This is precisely what makes external governance necessary — not to punish AI developers but to change the conditions under which they operate. Mandatory safety evaluations, compute thresholds for frontier model training, and international coordination are not primarily restrictions on AI developers; they are the mechanism through which AI developers can credibly commit to safety without sacrificing competitive position to rivals who refuse the same commitment.

What algorithmic accountability advocates are protecting

A third position — sometimes aligned with safety advocates, sometimes in explicit tension with them — argues that the governance debate has been captured by concerns about speculative future scenarios while present-tense harms accumulate without adequate response. Algorithmic accountability advocates are not primarily concerned with superintelligence or misaligned optimization; they are concerned with the AI systems already deployed, making consequential decisions about people's lives, with inadequate transparency, accountability, or redress.

They are protecting the people already subject to AI decisions, not hypothetical future populations subject to hypothetical future AI. Kate Crawford's Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021) maps the present-tense costs that speculative risk discussions systematically overlook: the workers labeling training data under precarious conditions, the communities whose faces are used without consent to train recognition systems, the neighborhoods subject to predictive policing tools, the people whose credit, employment, and housing applications are screened by systems whose failure modes are opaque and whose error rates are unevenly distributed across demographic groups. Crawford's argument is that the focus on speculative future AI risk is not innocent — it systematically directs attention and resources away from the populations already bearing the costs of AI deployment, who are disproportionately poor, non-white, and outside the wealthy nations that dominate AI policy discussions.

They are protecting the principle that technology's distributional effects are political choices, not technical inevitabilities. Daron Acemoglu and Simon Johnson's Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023) provides historical grounding: throughout the history of technological development, the distribution of productivity gains has depended not on the technology itself but on the political and institutional arrangements governing its deployment. The agricultural revolution did not automatically improve peasants' lives; the industrial revolution did not automatically raise workers' wages. In each case, the distribution of gains was a contested political outcome. Acemoglu and Johnson apply this framework to AI: the claim that AI will broadly benefit humanity assumes institutional arrangements that will distribute its benefits broadly — an assumption that current AI investment, ownership, and control patterns do not support.

They are protecting the conditions of accountability: transparency, auditability, and the right to contest automated decisions. Emily Bender, Timnit Gebru, and colleagues argued in "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" (Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency) that the race to build larger language models prioritizes capabilities easy to benchmark — fluency, task performance, apparent coherence — while systematically understating costs harder to measure: the carbon cost of training at scale, the concentration of development resources among a handful of well-resourced actors, the biases encoded in training data that reify existing social inequalities, and the failure modes of systems producing confident-sounding outputs without reliable knowledge. The paper — and the events surrounding its suppression at Google — became a symbol of the accountability gap: who has standing to raise concerns about AI development from inside organizations whose business model depends on the development continuing.

What global governance advocates are protecting

A fourth position — overlapping with but distinct from the accountability position — argues that the dominant frames for AI governance (innovation vs. safety; US competitiveness vs. China) both accept a premise that should be questioned: that AI will be governed primarily by powerful states and large private corporations, with the rest of the world as recipients rather than participants. Global governance advocates argue that who governs AI matters as much as how it is governed.

They are protecting democratic participation in decisions that will affect everyone. Marietje Schaake's The Tech Coup: How to Save Democracy from Silicon Valley (Princeton University Press, 2024) argues that the governance gap between the scale of technology companies' power and the capacity of democratic institutions to hold them accountable is itself a democratic emergency — not primarily an AI-specific problem but a structural feature of the relationship between technology and government. The AI governance debate, on Schaake's account, is being conducted primarily between technology companies and the governments most aligned with them, with the entities most affected by the decisions — workers, communities subject to algorithmic systems, people in the Global South — having the least influence over the outcome. Democratic governance of AI requires more than national regulation; it requires institutional arrangements that give the people most affected by AI deployments real influence over the governance process.

They are protecting the integrity of international governance processes against their capture by national security competition. The national security frame for AI governance — the US must lead in AI to prevent China from dominating a transformative technology — has significant political traction in Western democracies, and for understandable reasons: AI is strategically important, and China's AI development is state-directed in ways that raise legitimate concerns. But global governance advocates argue that the national security frame systematically produces the wrong governance institutions: it creates pressure for export controls, classification of safety research, and the subordination of accountability concerns to competitiveness concerns; it treats international governance forums as arenas for strategic competition rather than as mechanisms for building genuine shared governance; and it reproduces a world in which the most consequential decisions about AI are made by a handful of governments and their aligned private sector actors, with the rest of the world subject to whatever they decide.

They are protecting the principle that AI governance should be oriented toward the populations most affected rather than most powerful. The United Nations AI Advisory Body's Governing AI for Humanity: Final Report (2024) recommended a multilateral AI governance framework that explicitly includes representation from the Global South, civil society, and affected communities — rather than the model in which governance frameworks are developed by wealthy nations and exported as universal standards. Global governance advocates argue that this is not naive internationalism but a recognition that governance frameworks developed by a small number of powerful actors will, with the best of intentions, solve those actors' problems. The populations most subject to poorly governed AI — workers in low-income countries whose data trains the systems, communities in authoritarian states subject to AI-enhanced surveillance, smallholder farmers whose livelihoods may be disrupted by AI-driven commodities trading — are not adequately represented in the governance forums making consequential decisions about their lives.

Where the real disagreement lives

The AI governance debate is structured by several genuine disagreements that positional statements about regulation rarely make explicit.

The timing problem. Innovation advocates argue that governance frameworks developed now, when capabilities are poorly understood and use cases are still emerging, will inevitably be poorly calibrated — too restrictive in some dimensions, insufficiently attentive to others, unable to anticipate the forms AI actually takes. Safety advocates argue that the history of technology governance shows that societies consistently wait too long to regulate consequential technologies — that the moment when regulation is easiest is before the industry has the political influence to resist it. Accountability advocates add that the "wait and see" frame is not neutral: the populations currently bearing the costs of unregulated AI deployment are the ones paying for that caution. These are not disagreements about regulatory form; they are disagreements about whose experience of time and risk counts in the governance calculus.

The frame problem. Whether AI governance is primarily a safety problem (preventing catastrophic harm from advanced AI systems), an accountability problem (ensuring present AI deployments are transparent, auditable, and contestable), a competition problem (ensuring democratic nations are not strategically disadvantaged by adversaries investing in AI without accountability constraints), or a justice problem (determining who benefits from AI development and who bears its costs) determines which governance institutions are relevant, which populations are centered, and which harms count as primary. These frames are not mutually exclusive, but resources, political attention, and institutional capacity are finite — and which frame dominates shapes what governance looks like in practice.

The dual-use problem. AI capabilities are not cleanly separable into beneficial and harmful applications. The same large language model capabilities that raise safety concerns about autonomous AI agents also power tools for scientific research, medical diagnosis, and language translation. The same facial recognition technology that enables authoritarian surveillance enables identification of missing persons. Governance frameworks restricting capabilities without targeting specific deployments will restrict beneficial applications alongside harmful ones; frameworks targeting specific harmful deployments may be inadequate to address the cumulative effects of capabilities that produce harm emergently rather than by design. No current governance proposal has an adequate answer to the dual-use problem. The positions differ primarily in how they weight the cost of over-restriction against the cost of under-restriction.

The actor problem. AI governance discussions often proceed as though the primary actors are national governments and large AI companies. But AI capabilities are now widely accessible through open-source models, affordable cloud compute, and fine-tuning services. A governance framework that succeeds in regulating frontier labs and national programs may have little effect on the proliferation of capable AI systems through channels that are harder to monitor. Conversely, a framework focused on widely accessible models may impose costs on beneficial uses — open-source research, small commercial applications, non-Western development — without meaningfully constraining the frontier capabilities that safety advocates are most concerned about. The positions implicitly disagree about which actors are the primary target of governance, and consequently about which governance mechanisms are adequate to the task.

See also

  • Who bears the cost? — the framing essay for the distributional question underneath AI governance: who absorbs the environmental, labor, safety, and accountability costs of systems whose benefits and decision rights are concentrated elsewhere.
  • Who gets to decide? — the framing essay for the authority conflict in this map: whether frontier labs, national security agencies, democratic publics, regulators, or affected communities should set the terms of AI development and deployment.
  • Who belongs here? — the framing essay for the membership question AI governance keeps reopening: whose knowledge, language, labor, and risk count when a small set of institutions builds systems meant to operate across many publics.
  • What is a life worth? — the framing essay for the human-value conflict underneath the governance debate: whether people are treated as sources of training data, targets of prediction, and variables in optimization systems, or as beings whose agency and flourishing set limits on deployment.
  • AI Consciousness: What Both Sides Are Protecting — the prior question: if AI systems have morally relevant inner experience, that transforms the governance debate; a system that might suffer is not just a tool to be regulated but a potential stakeholder in the governance process; the moral status question and the governance question are connected through AI development choices that make consciousness more or less likely to arise and through the political economy of who benefits from those choices.
  • AI and Labor: What Both Sides Are Protecting — the distributional question at the heart of both maps: who captures the value produced by AI systems, and what governance arrangements determine whether productivity gains are shared broadly or concentrated among AI owners and developers; the accountability and global governance positions in this map are the institutional expression of the labor concerns in that one.
  • Surveillance Capitalism: What Each Position Is Protecting — the commercial data infrastructure that AI systems depend on for training; the governance questions about who owns behavioral data, what uses it may be put to, and what accountability exists for systems trained on it are upstream of both the AI governance and surveillance capitalism debates; the structural reformers in that map are arguing for the same kind of democratic accountability that global governance advocates argue for here.
  • Predictive Policing and Surveillance Technology: What Each Position Is Protecting — the deployment-layer instance of AI governance: how AI systems are used in law enforcement illustrates both the accountability problems (algorithmic bias, due process, dirty data feedback loops) and the governance gaps (who oversees law enforcement AI, what transparency is required, what redress is available) that AI governance frameworks must address; the governance gap documented there is the concrete form of the abstract governance failure documented here.
  • Digital Privacy and Surveillance: What Each Position Is Protecting — the government surveillance dimension of AI governance: AI-enhanced state surveillance capabilities raise questions about legal protections, national security authorities, and agency accountability that are formally separate from but practically entangled with the commercial AI governance debate; the national security frame that shapes AI governance also shapes the limits of domestic surveillance oversight.
  • Platform Accountability and Content Moderation: What Each Position Is Protecting — the speech governance parallel: both maps are structured by a frame divergence problem, where the four positions are partly debating different governance questions rather than the same question from different angles; and both maps reveal a governance legitimacy question — which theory of institutional accountability applies when private entities exercise quasi-governmental power — that is prior to the specific governance proposals being debated.
  • AI and Democracy: What Each Position Is Protecting — the application of the governance debate to the context where accountability matters most: AI-generated content in electoral contexts raises the frame divergence problem from this map in acute form; the innovation-first, accountability, and global governance positions in this map each imply different stances on AI disinformation governance, and the "epistemic authority trap" named in that map is a specific instantiation of the governance legitimacy problem this map identifies in the aggregate.
  • Algorithmic Hiring and Fairness: What Each Position Is Protecting — the employment-sector application of the frame divergence problem this map identifies: whether algorithmic screening is primarily a bias problem, an accountability problem, a labor market problem, or a worker power problem produces different governance institutions, different affected populations, and different definitions of what "fixing" means. The governance gap between those who bear the costs of automated decisions and those who design and deploy the tools is the same gap this map documents at the level of AI development writ large.
  • Digital Identity and Biometrics: What Each Position Is Protecting — facial recognition and biometric scoring are among the highest-stakes AI applications already deployed at scale; the governance gap documented in this map — between those who bear the costs and those who set the terms — is acutely visible in biometrics, where documented disparate accuracy across racial groups has been known for years without producing binding accountability standards; the biometrics debate is the concrete, deployed form of the governance legitimacy problem this map identifies in the aggregate.
  • AI and Creative Work: What Each Position Is Protecting — addresses one domain where the governance gap — between the people bearing the costs of AI deployment and the institutions setting its terms — is particularly visible; the copyright and consent questions in the creative work debate are a concrete instance of the accountability frameworks this map examines in the aggregate.
  • Algorithmic Governance and Automated Decisions: What Each Position Is Protecting — the concrete, already-deployed instance of the governance legitimacy problem this map identifies in the abstract: algorithmic systems making consequential decisions about bail, benefits eligibility, child welfare, and credit are not hypothetical future AI but current systems where the accountability gap has produced documented harm. The EU AI Act represents the most developed attempt to legislate at the intersection of both debates. The structural tensions in both maps are the same tensions at different scales: opacity-accuracy tradeoffs, discrimination-measurement paradoxes, and contestability asymmetries that operate whether the system is a bail algorithm running in a county courthouse or a large language model deployed by a multinational.

Further reading

  • Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (Viking, 2019) — the clearest technical statement of the alignment problem; Russell argues that the standard model of AI development (build systems that optimize for specified objectives) is structurally prone to producing systems that maximize proxy measures rather than underlying human goals; his proposal — AI systems designed to be uncertain about human preferences and to defer rather than optimize — is the most influential technical argument for why safety requires fundamentally different approaches to AI design, not just better testing of conventionally designed systems; essential for understanding why safety advocates think the problem is deep rather than a matter of more careful deployment.
  • Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014) — the book that established existential AI risk as a serious academic topic; Bostrom's orthogonality thesis (intelligence and goals are independent; a superintelligent system could have almost any objective) and instrumental convergence thesis (certain sub-goals — self-preservation, resource acquisition, goal preservation — are instrumentally useful for almost any terminal objective) form the technical basis for safety advocates' concern that capable AI systems will behave in ways humans find catastrophically harmful not from malice but from the ruthless efficiency of optimization; even critics who disagree with Bostrom's conclusions use his framework as the starting point for serious technical work on AI safety.
  • Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021) — the accountability position at its most thorough: Crawford documents the labor, environmental, military, and political infrastructure of AI that speculative risk discussions systematically ignore; her argument is that the focus on future AI risk is not innocent — it directs attention and resources away from the populations already bearing the costs of AI deployment, disproportionately poor, non-white, and non-Western; essential for understanding why accountability advocates find the safety-first frame politically suspect even when they share some of its concerns about power concentration.
  • Daron Acemoglu and Simon Johnson, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023) — a historical argument that technology's distributional effects are political choices, not technical inevitabilities; Acemoglu and Johnson trace how the agricultural and industrial revolutions failed to automatically improve the lives of those who did the work, and apply the same framework to AI: the claim that AI will broadly benefit humanity assumes institutional arrangements that will distribute its gains broadly — an assumption that current AI ownership and investment patterns do not support; the most significant recent contribution to understanding why AI governance is inseparable from the question of AI's distributional justice.
  • Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" (Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency) — the foundational accountability critique of large language model development: Bender and Gebru argue that the race to build larger models prioritizes easily benchmarked capabilities while systematically understating costs harder to measure — environmental, social, epistemic; the paper's suppression at Google became a defining event in debates about who has standing to raise concerns about AI development from inside organizations whose business model depends on that development continuing; both the paper and its fate are essential evidence about the accountability gap.
  • National Security Commission on Artificial Intelligence, Final Report (2021) — the most authoritative statement of the national competitiveness frame: the NSCAI, chaired by former Google CEO Eric Schmidt, recommends that the US prioritize AI investment and international leadership as a matter of national security; its conclusion that the US must lead in AI or cede that leadership to adversaries shaped the subsequent US approach to AI regulation as primarily a question of maintaining strategic advantage; reading it alongside Crawford and Gebru illuminates how completely the national security frame and the accountability frame talk past each other.
  • Marietje Schaake, The Tech Coup: How to Save Democracy from Silicon Valley (Princeton University Press, 2024) — the democratic accountability argument from a former European Parliament member: Schaake argues that the governance gap between the power of technology companies and the capacity of democratic institutions to hold them accountable is itself a democratic emergency; the AI governance debate is being conducted primarily between technology companies and aligned governments, with the populations most affected having the least influence; governance requires not just regulation of AI systems but structural reform of the political relationship between democratic institutions and the companies shaping the conditions of public life.
  • United Nations AI Advisory Body, Governing AI for Humanity: Final Report (2024) — the multilateral governance framework at its most developed: the UN advisory body recommends an international AI governance architecture that explicitly includes representation from the Global South, civil society, and affected communities, rather than exporting frameworks developed by wealthy nations as universal standards; its institutional proposals — a multi-stakeholder international AI panel, a global AI data framework, capacity-building for less-resourced nations — represent the most fully articulated alternative to the dominant governance frame of national competitiveness; the most useful document for understanding what global governance advocates mean in practice, as opposed to in principle.
  • European Parliament and Council, Regulation (EU) 2024/1689 on Artificial Intelligence (the EU AI Act, 2024) — the first comprehensive legal framework governing AI across a major jurisdiction; its risk-tiered approach — prohibited uses (social scoring, mass biometric surveillance), high-risk systems (employment, education, credit, criminal justice), and general-purpose AI with systemic risk — operationalizes the governance debate in concrete legal terms; the gap between its regulatory ambitions and enforcement capacity reveals the structural limits of the accountability approach in practice, while the deliberate exclusion of national security applications illustrates where democratic governance frameworks stop; essential for understanding what governance advocates mean in practice as opposed to in principle, and for comparing the EU's precautionary framework against the US competitiveness-first approach the NSCAI represents.
  • Abeba Birhane, Pratyusha Kalluri, Dallas Card et al., "The Values Encoded in Machine Learning Research" (Proceedings of ACM FAccT, 2022) — a systematic analysis of 100 influential ML papers documenting that the field consistently encodes a narrow set of values (performance, generalization, efficiency) while marginalizing concerns about fairness, accountability, privacy, and societal impact; essential for understanding why governance gaps persist: the researchers building AI systems have been rewarded — through publications, hiring, and funding — for optimizing metrics that exclude the considerations governance frameworks are designed to introduce; the governance problem is not only about regulating AI after it is built but about the institutional structures that shape what gets built in the first place.