Sensemaking for a plural world

Perspective Map

Automation Policy and Labor Displacement: What Each Position Is Protecting

April 2026

In the first seven months of 2025, American employers cited artificial intelligence as the reason for more than 10,000 layoffs — not speculation about future cuts, but actual documented job eliminations, tracked by the outplacement firm Challenger, Gray & Christmas. Goldman Sachs estimated in 2023 that generative AI could automate the equivalent of 300 million full-time jobs globally. A widely cited OpenAI and University of Pennsylvania study found that around 80 percent of the U.S. workforce could have at least 10 percent of their tasks affected by large language models, with roughly 19 percent facing 50 percent or more.

These are projections, not prophecies. Technology forecasts have repeatedly overstated the pace of labor displacement and understated the compensating creation of new work. The industrial revolution eliminated most agricultural and craft employment yet ultimately produced far more total employment than it destroyed, though over a period measured in generations, and at enormous human cost to those caught in the transition. The spreadsheet did not eliminate accountants. ATMs lowered the cost of running a branch; banks responded by opening more branches, and bank teller employment actually rose in the decades after the machines arrived.

But "this has happened before and it worked out" is a claim about aggregate long-run outcomes, not about what happens to displaced individuals in the medium run, and not about whether the current moment differs structurally from past technological transitions. Three things are different about AI: it substitutes for cognitive labor, not just physical labor; its adoption curve is faster than prior general-purpose technologies; and — critically — it is not arriving in a labor market with high union density, robust retraining infrastructure, or generous unemployment support. The question of what policy should do about automation is not settled by pointing to the historical record. It is reopened by it.

Klarna's recent disclosures make the argument unusually concrete. In a May 30, 2025 comment letter on the company's registration statement, SEC staff told Klarna to explain how its AI strategy had changed after the chief executive publicly said some AI customer service outcomes were lower quality than human support and stressed that customers still needed a path to a person. Yet Klarna's 2025 annual report, filed for the year ended December 31, 2025, still celebrated the same system for handling 80 percent of customer service chats, doing work equivalent to more than 700 full-time agents, and supporting a broader reduction in workforce costs through AI-driven efficiency and attrition. That is what makes the current dispute different from a generic "technology changes jobs" story: firms are making active decisions about where to remove labor, where to preserve a human backstop, and what level of service degradation they are willing to accept in exchange for cost savings.

The debate has four distinct positions, and they disagree not only about policy but about what the problem actually is.

What direction-of-technology advocates are protecting

The proposition that automation is not a natural force but a set of choices shaped by incentives — and that the current incentive structure actively subsidizes displacement over augmentation. This is the central argument in Daron Acemoglu and Simon Johnson's 2023 book Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity, and it runs counter to how both sides of the conventional debate tend to frame the problem.

The conventional framing treats automation as exogenous — something that happens to workers and to which policy must respond. Acemoglu and Johnson argue it is endogenous: which tasks get automated, which get augmented, and which new tasks get created are all shaped by price signals, tax codes, and political power. The current tax code taxes labor (through payroll taxes) more heavily than it taxes capital investment in automation. Research and development subsidies flow toward automation without conditions requiring that the resulting technology create new tasks for workers. The gains from AI accrue primarily to shareholders and the engineers who develop it. None of this is technologically inevitable. It reflects choices embedded in law and policy that can be changed.

Their preferred policy response is not to slow automation or compensate its victims after the fact, but to change what gets automated and what gets augmented. The distinction matters: an automation technology that replaces a worker at a task raises capital's productivity but reduces labor's marginal productivity — wages fall, or the worker loses the job entirely. A labor-augmenting technology that creates new tasks workers can perform at higher productivity raises both capital returns and labor's marginal productivity — wages can rise. Acemoglu and Johnson call the former path "so-so automation": it generates modest productivity gains while producing large displacement effects, concentrating benefits at the top of the income distribution while spreading costs across the labor force.
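The displacement-versus-augmentation distinction can be made concrete with a toy task model. The sketch below is illustrative only: the productivity numbers (1.0 for workers, 1.1 for "so-so" machines, 1.5 for newly created tasks) are invented for the example and are not drawn from Acemoglu and Johnson's formal model.

```python
def economy(labor_tasks, machine_tasks, new_tasks):
    """Toy task economy: output and labor demand from a task allocation.

    labor_tasks:   tasks performed by workers at productivity 1.0
    machine_tasks: tasks performed by machines at productivity 1.1
                   ("so-so": only marginally better than a worker)
    new_tasks:     newly created tasks only workers can do, at 1.5
    All productivity numbers are invented for illustration.
    """
    output = labor_tasks * 1.0 + machine_tasks * 1.1 + new_tasks * 1.5
    labor_demand = labor_tasks + new_tasks
    return output, labor_demand

baseline = economy(100, 0, 0)   # output 100, labor demand 100
so_so    = economy(50, 50, 0)   # automate half the tasks: output ~105, labor demand 50
augment  = economy(100, 0, 20)  # create 20 new labor tasks: output 130, labor demand 120
```

In this toy economy, automating half the tasks raises output by about 5 percent while halving labor demand; creating twenty new labor-complementary tasks raises output by 30 percent while expanding labor demand by 20 percent. That asymmetry, small productivity gains bought with large displacement, is the core of the "so-so automation" critique.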

The Klarna case helps show why this critique lands. The managerial question was not whether customer service could be made more efficient in the abstract. It was whether the company would use generative AI mainly to remove labor from service operations, or to give human agents better tools while preserving escalation paths, judgment, and relationship quality. Once that choice is visible, "automation" stops looking like weather and starts looking like governance inside the firm. The policy question is then not only how to insure workers after displacement, but how tax incentives, disclosure rules, labor law, and procurement standards shape the business case for replacement versus augmentation in the first place.

The policy toolkit they advocate: reduce payroll taxes below taxes on capital to stop rewarding automation purely on tax grounds; condition AI research subsidies on labor-complementary outcomes; invest in worker training not as a compensatory afterthought but as an anticipatory investment in new task creation; strengthen privacy regulations and data ownership frameworks to reduce surveillance capitalism's incentive to harvest data from workers rather than augment their capabilities. What this position protects is the recognition that the future of work is not being written by technological forces alone — it is being written by people making choices about which technologies to develop, deploy, and reward. The appropriate response is to intervene upstream, not downstream.

What universal basic income advocates are protecting

The proposition that when work becomes structurally scarce, human dignity requires decoupling income from employment — and that an unconditional cash floor is more effective, more respectful, and more freedom-preserving than any work-based alternative. The philosopher Philippe Van Parijs laid out the foundational argument in Real Freedom for All (1995): a basic income is justified not primarily as insurance against job loss but as a share of the collectively inherited endowment of technology, institutions, and natural resources that no individual produced. Rutger Bregman's Utopia for Realists (2017) popularized the argument for a contemporary audience. Andrew Yang made it a centerpiece of his 2020 presidential campaign with a proposed $1,000 monthly "Freedom Dividend."

The empirical case for UBI has been strengthened, though not settled, by a growing body of pilot evidence. GiveDirectly's ongoing cash transfer programs in Kenya have documented sustained gains in consumption, assets, and psychological wellbeing over multi-year periods. The Stockton SEED pilot (2019–2021) found that recipients in the guaranteed income cohort were more likely to find full-time employment than the control group — challenging the "laziness" objection directly. The Stanford Basic Income Lab has catalogued more than 160 pilot programs globally over four decades, with generally positive effects on health, education, and poverty reduction, though employment effects are mixed and context-dependent.

UBI advocates make a deeper argument that goes beyond labor market insurance: the employment contract is not merely an income-delivery mechanism, it is a power relationship. Workers who cannot afford to refuse bad jobs accept bad jobs. A basic income floor changes the bargaining position of every worker in every negotiation without requiring unionization, without requiring employers to admit fault, and without bureaucratic gatekeeping about who is "genuinely" seeking work. Sam Altman, OpenAI's chief executive, has argued that AI will generate enough economic surplus to fund a basic income generously — and that doing so is the obvious response to a technology that concentrates gains at the frontier of its development. Worldcoin, a digital-identity venture Altman co-founded separately from OpenAI, and his personal advocacy for basic income reflect a Silicon Valley strand of UBI support that is optimistic about automation's potential and uses UBI precisely as a mechanism for sharing the gains.

What UBI advocates are protecting is, ultimately, the possibility of a life with dignity independent of the labor market — the recognition that unpaid care work, civic participation, creative practice, and community building have genuine value that market employment measures poorly and often crowds out. An unconditional income does not just cushion displacement. It changes what "viable" looks like for people deciding how to spend their working years.

What job guarantee and active labor market policy advocates are protecting

The proposition that income security is not sufficient — that work itself has social and psychological value that cannot be replaced by cash — and that the state should guarantee the availability of work rather than compensate for its absence. Pavlina Tcherneva's The Case for a Job Guarantee (2020) is the most developed recent statement of this position. Tcherneva, an economist at Bard College working within the Modern Monetary Theory framework, argues that unemployment is not a personal failing and not a market-clearing equilibrium — it is a policy choice. Governments that maintain inflation targets by tolerating a "buffer stock" of unemployed workers are making a decision about who bears the cost of price stability. A job guarantee inverts this: instead of using unemployment as the policy lever, the state offers a publicly funded job at a living wage to anyone willing and able to work, creating a "buffer stock of employed" workers whose guaranteed wage anchors the labor market floor.

The Danish flexicurity model offers a partial real-world example. Denmark combines high employer flexibility (low employment protection legislation, easy hiring and firing) with generous unemployment replacement rates and active labor market policies that require participation — job search, retraining, or subsidized employment — as a condition of continued benefit receipt. Denmark spends more per unemployed worker on active labor market programs than any other OECD country, funding retraining at 110 percent of unemployment benefit levels in shortage sectors. The result is a labor market that is simultaneously flexible for employers, secure for workers, and able to absorb structural change — including technological change — without mass long-term unemployment. The model has survived repeated automation cycles because the transition infrastructure is continuous, not episodic.

Germany's Kurzarbeit scheme demonstrates a complementary mechanism: rather than letting workers be laid off when demand falls, the government subsidizes reduced working hours, allowing firms to retain workers and workers to keep their jobs and skills while avoiding redundancy. Kurzarbeit kept German unemployment dramatically lower than American unemployment during the 2008–09 recession and again during the 2020 pandemic. Applied to automation, a working-time reduction or work-sharing approach could distribute the productivity gains of automation as leisure rather than as inequality — shorter workweeks rather than mass unemployment.
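The work-sharing arithmetic behind the "productivity as leisure" claim is simple enough to state directly. A minimal sketch, with invented numbers:

```python
def hours_after_gain(baseline_hours, productivity_gain):
    """Hours per worker needed to hold each worker's output constant
    after a productivity gain. Numbers here are purely illustrative."""
    return baseline_hours / (1 + productivity_gain)

# A 25% productivity gain lets a 40-hour week fall to 32 hours with no
# loss of output — the same gain a firm could instead take by shedding
# roughly 1 in 5 workers.
shorter_week = hours_after_gain(40, 0.25)  # 32.0
```

The policy question is which margin absorbs the gain: hours per worker (work-sharing, Kurzarbeit-style) or number of workers (layoffs). The arithmetic permits either; the institutions decide.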

What this position is protecting is the evidence that work is not just income-delivery — it is social integration, purpose, skill development, and collective participation. The research on unemployment consistently documents psychological harm beyond income loss: loss of routine, loss of social contact, loss of identity. A UBI addresses the income dimension but not the participation dimension. Job guarantee advocates argue that a policy which leaves large numbers of people with cash but nothing to contribute to is not a solution to displacement — it is a more comfortable version of it.

What automation tax and structural critics are protecting

The proposition that the gains from automation are being captured privately while the costs are socialized — and that the appropriate response is to tax those gains at their source, both to fund support for displaced workers and to correct a distortion in the tax code that actively incentivizes displacement over retention. In October 2025, Democratic staff on the Senate Health, Education, Labor, and Pensions Committee published a report titled "The Big Tech Oligarchs' War Against Workers," projecting that AI and automation could displace close to 100 million U.S. jobs over a decade. Senator Bernie Sanders confirmed plans to introduce legislation imposing a "robot tax" on employers who automate jobs, with revenue directed to displaced workers.

The structural argument underneath this policy is not merely about revenue. It is about tax code distortion. In the United States, employers pay payroll taxes on wages — effectively a tax on employing humans. Capital investment in automation is deductible, often immediately under accelerated depreciation rules. The current system therefore taxes human labor at a higher effective rate than it taxes the machines and software that replace human labor. This is not a neutral market outcome. It is a subsidy to automation embedded in the tax code, a policy choice made long before anyone anticipated its current implications.
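A back-of-envelope calculation illustrates the wedge. The sketch below assumes a 7.65 percent employer-side payroll tax, a 21 percent corporate rate, and automation spending that is fully expensed in year one; real tax treatment varies considerably, so treat the numbers as illustrative rather than as a tax analysis.

```python
# Illustrative assumptions, not tax advice: employer-side FICA rate and
# the flat federal corporate rate, with capex fully expensed in year one.
CORPORATE_RATE = 0.21
EMPLOYER_PAYROLL_TAX = 0.0765

def after_tax_cost_of_wages(wages):
    # Wages plus employer payroll tax, both deductible business expenses.
    pretax = wages * (1 + EMPLOYER_PAYROLL_TAX)
    return pretax * (1 - CORPORATE_RATE)

def after_tax_cost_of_automation(capex):
    # Fully expensed capital/software purchase: deductible, no payroll tax.
    return capex * (1 - CORPORATE_RATE)

# The same $100,000 of spending:
worker  = after_tax_cost_of_wages(100_000)       # ≈ $85,044
machine = after_tax_cost_of_automation(100_000)  # $79,000
```

Because both wages and expensed capital are deductible, the corporate rate cancels out of the comparison; the employer payroll tax is the residual wedge, making the same $100,000 roughly 7.65 percent more expensive when spent on a worker than on a machine, before any productivity difference enters the picture.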

Kate Crawford's Atlas of AI (2021) extends this critique to the full extraction chain: AI systems are presented as autonomous intelligence but are actually built on mineral extraction (rare earth mining for hardware), labor extraction (low-wage data labeling in the Global South, including in traumatizing content moderation roles), and data extraction (the harvesting of users' behavioral data without compensation). The automation dividend is not simply the result of technological ingenuity — it is the result of transferring costs onto workers, communities, and environments that are not represented in the price of the system. A tax on automation gains is, in this framing, not a penalty on innovation but a partial internalization of costs that have been externalized onto people who bear them without compensation.

Critics of robot taxes raise genuine objections: definitional ambiguity (does a word processor automate a secretary's job?), competitive distortions (firms facing an automation tax could lose ground to international competitors who don't), and the risk of protecting incumbents rather than workers. But the strongest version of the automation tax argument doesn't depend on taxing "robots" as such — it depends on taxing the gains from productivity wherever they concentrate, through a more progressive corporate income tax, a financial transactions tax, or a wealth tax that captures the capital gains that automation produces. What this position is protecting is the principle that productivity gains are collectively produced — through public investment in science, through the accumulated knowledge of prior generations, through the infrastructure that makes markets possible — and that when those gains accrue almost entirely to shareholders and executives, something has gone wrong with the distribution mechanism, not just the labor market.

Where the real disagreement lives

The four positions disagree about what kind of problem automation displacement is. That disagreement is not resolvable by labor market data alone.

Direction-of-technology advocates and automation tax advocates share a structural diagnosis — current outcomes are policy-shaped, not technologically determined — but differ on the locus of intervention: redirect the technology upstream, or tax the gains downstream. UBI advocates and job guarantee advocates share a concern with what happens to displaced workers, but disagree about whether the goal is income security or employment itself. The tension between UBI and job guarantee is ultimately a tension between two accounts of what makes work valuable: UBI implies that income, autonomy, and freedom to choose how to spend one's time are what matter; job guarantee implies that participation in collective production, structured routine, and social integration through work are irreducible goods that cash cannot substitute.

Both accounts are correct about something. Income insecurity and loss of participation are both real harms, and the question of which harm is more tractable — which is more responsive to policy — is an empirical and institutional question rather than a philosophical one. The evidence from Denmark suggests that well-designed active labor market policy can address both simultaneously. The evidence from guaranteed income pilots suggests that cash transfer recipients often use the income to invest in their own capacity to work, not to exit the labor market. These findings are not incompatible. They suggest that the dichotomy between "give people money" and "give people jobs" is less sharp in practice than in theory.

The more fundamental disagreement is between those who treat automation as a problem to be adapted to and those who treat it as a problem to be shaped. The former accept that the direction of AI development is largely determined by technological and market forces and ask what safety net is appropriate given that. The latter argue that the direction of AI development is itself a political question — that Acemoglu and Johnson's "so-so automation" is a predictable outcome of a set of specific policy choices about taxation, R&D subsidies, and labor law, and that the appropriate response is to change those choices before the displacement happens rather than after.

Recent corporate AI rollouts make this distinction harder to evade. A company can say, in the same year, that AI preserved customer satisfaction and removed the equivalent of hundreds of support jobs, while regulators press it to explain whether quality fell and whether human support still remains meaningfully available. That is not a dispute about whether technology advances. It is a dispute about what organizations optimize for when they adopt it, and who gets to bear the downside when "efficiency" is defined too narrowly.

History favors the shapers over the adapters, at least prospectively: the labor protections, minimum wages, and workplace safety standards that made the last century's technological change broadly beneficial were not inevitable adaptations — they were fought for, against resistance, by people who refused to accept the existing distribution of power as natural. The question is whether the political capacity exists now to do the same for the AI transition — and whether it can happen fast enough.

What sensemaking surfaces

Each position has a legitimate core concern that the others tend to talk past.

Direction-of-technology advocates are right that treating automation as a natural force rather than a policy-shaped choice forecloses the most important intervention point. If the direction of AI development can be changed — toward labor-augmenting applications, toward new task creation, toward technology that raises workers' marginal productivity rather than replacing them — then the downstream debate about UBI versus job guarantee becomes less urgent. The problem with this position is that it requires the political capacity to redirect the technology agenda of companies with enormous lobbying power and genuine first-mover advantages in global competition.

UBI advocates are right that income security is foundational — that the psychological and social harms of economic precarity are severe and that the American safety net's conditionality and patchwork design leave displaced workers exposed to cascading losses that income stabilization would prevent. The problem with UBI as the primary response to automation is Acemoglu and Johnson's objection: it accepts a future of structural labor scarcity rather than contesting it, and it treats labor market exclusion as inevitable rather than as the product of changeable choices.

Job guarantee advocates are right that the social value of work is not exhausted by its income-delivery function, and that a policy which replaces wages without replacing participation will not address the full scope of displacement harm. The problem is institutional: well-designed job guarantee programs require administrative capacity, political will to maintain over the business cycle, and mechanisms to create work that is genuinely valuable rather than busywork. Denmark's flexicurity model works because Denmark has been building its institutions for thirty years. Countries without those institutions cannot borrow the outcome without building the capacity.

Automation tax advocates are right that the gains from AI are not being distributed through anything that resembles the classical story of productivity gains flowing to workers through competitive labor markets. The concentration of AI-derived wealth at the top of the income and wealth distribution is not a neutral market outcome — it is the predictable result of a political economy that protects intellectual property, subsidizes R&D without distribution conditions, and taxes labor more heavily than capital. The problem is that taxing automation gains after the fact is easier to advocate for than to design without unintended effects, and that the firms with the most to protect from such taxes have the most resources to shape the legislation that would impose them.

The strongest version of each position would acknowledge these weaknesses rather than pretending they don't exist. And all four positions share a premise that tends to get lost in the debate between them: the current path — rapid AI adoption, stagnant retraining infrastructure, declining union density, a safety net designed for temporary unemployment rather than structural displacement — is not an option anyone has explicitly chosen. It is the default. Defaults are choices too, made by inaction in the face of compounding change. The disagreement about which policy to adopt is less dangerous than the absence of any policy at all.

Further reading

  • Daron Acemoglu and Simon Johnson, Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity (PublicAffairs, 2023) — the most thorough recent argument for redirecting AI toward labor-augmenting applications; central to the direction-of-technology position.
  • Daron Acemoglu and Pascual Restrepo, "Tasks, Automation, and the Rise in U.S. Wage Inequality" (Econometrica, 2022) — the technical grounding for the task-based model of automation and its distributional effects.
  • Daron Acemoglu and Simon Johnson, "Rebalancing AI" (IMF Finance and Development, December 2023) — accessible summary of their policy recommendations.
  • U.S. Securities and Exchange Commission, comment letter to Klarna Group plc (May 30, 2025) — unusually clear regulatory pressure on a real AI deployment case, asking Klarna to explain how its customer-service strategy changed after public concerns about lower-quality AI interactions and the need for human support.
  • Klarna Group plc, Annual Report on Form 20-F for fiscal year 2025 (filed 2026) — primary-source disclosure on AI-assisted customer service, claimed cost savings, support-chat automation rates, and workforce-efficiency framing.
  • Pavlina Tcherneva, The Case for a Job Guarantee (Polity, 2020) — the systematic argument for a federal job guarantee as both macroeconomic stabilizer and labor market floor.
  • Philippe Van Parijs and Yannick Vanderborght, Basic Income: A Radical Proposal for a Free Society and a Sane Economy (Harvard University Press, 2017) — the philosophical and economic foundations of unconditional basic income.
  • Rutger Bregman, Utopia for Realists: How We Can Build the Ideal World (Little, Brown, 2017) — accessible case for basic income and shorter working hours, with historical and empirical grounding.
  • Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021) — maps the full extraction chain behind AI systems and the distribution of costs and benefits.
  • Senate HELP Committee Democratic Staff, "The Big Tech Oligarchs' War Against Workers" (October 2025) — the legislative background to the Sanders robot tax proposal.
  • Brookings Institution, "Navigating the Future of Work: A Case for a Robot Tax in the Age of AI" — analysis of automation tax design options and trade-offs.
  • NPR/WVIA, "Denmark's Flexicurity Policies Help Get People Back on Their Feet" (November 2025) — current reporting on Danish active labor market policy.
  • Stanford Basic Income Lab — ongoing compilation of UBI pilot evidence and policy analysis at basicincome.stanford.edu.

See also

  • Who bears the cost? — the framing essay for the distributional conflict underneath automation policy: when technological productivity gains arrive by displacing workers, the core question is whether firms, consumers, the state, and workers absorb those gains and losses on terms anyone could plausibly call fair.
  • Who gets to decide? — the framing essay for the governance conflict underneath automation policy: whether deployment choices stay inside managerial strategy and investor pressure, or whether workers and democratic institutions get meaningful authority over the pace, purpose, and conditions of labor-replacing technology.
  • What is a life worth? — the framing essay for the dignity dispute underneath labor displacement: if income, status, and social membership are still organized around paid work, then automation policy is not only about efficiency but about what kinds of contribution society is willing to honor and protect.
  • AI and Labor — maps the philosophical dispute about what human work is for and whether its displacement constitutes harm; this map addresses the policy question of what to do about displacement, which the AI and Labor map treats as background.
  • Universal Basic Income — the full map on UBI as a general policy proposal, covering debates beyond automation; the automation displacement context sharpens the UBI arguments and steelmans them against the job guarantee critique.
  • Workers' Rights and Labor Law Reform — addresses the NLRA framework and union organizing in the context of current labor law; the automation policy debate connects to this through the question of whether stronger collective bargaining could constrain displacement decisions at the firm level.
  • The Welfare State and Austerity — the fiscal constraints on automation policy (UBI, job guarantee) depend partly on the same arguments about public spending multipliers and fiscal sustainability that animate the welfare state debate.
  • AI Governance — the institutional question that sits above this one: who decides how AI is developed and at what pace, and whether governance frameworks can actually shape the direction-of-technology that Acemoglu and Johnson argue is decisive.
  • Wealth Inequality — the distribution of automation gains connects directly to the structural concentration of wealth; automation tax proposals are partly continuous with broader wealth taxation arguments.
  • The judgment call nobody made — synthesis essay on the AI cluster; the automation policy map is a specific instance of the deployment-governance gap identified there.