Sensemaking for a plural world

Perspective Map

AI and Labor: What Both Sides Are Protecting

March 2026

A radiologist in her mid-forties spent twelve years training to read images that most people cannot interpret. She knows what a Grade IV glioblastoma looks like at 3 a.m. after the third scan in a row. She has told people things she will never forget telling them. Now a model reads scans faster than she does, catches things she misses, and costs a fraction of what her hospital pays her. Her hospital administrator describes this as progress. She is not sure what word to use.

A software engineer in his thirties spent the last two years building AI tools for a company that helps small businesses automate their back-office work. The businesses using his tools are three-person operations that could not have survived without the capability they now have. He watches a florist in Des Moines manage inventory, ordering, and customer email with software that would have cost a Fortune 500 company half a million dollars a decade ago. He thinks of this as one of the most important things he has ever done.

Both of them are looking at the same technology. Neither is arguing in bad faith. The AI and labor debate has become a contest between what each of them is in the habit of seeing first. The sensemaking question is not whether AI produces winners and losers — it clearly does both. It is: what is each side actually trying to protect?

What AI skeptics are protecting

People who want to slow, regulate, or constrain AI deployment in labor markets are not simply afraid of change, nostalgic for the past, or ignorant of technology's benefits. At the core of the AI-skeptic position are several claims that deserve honest engagement.

They are protecting the dignity and meaning that work provides. Work is not merely an economic transaction. For most people, it is also where they develop competence, earn respect, build relationships, and participate in something larger than themselves. Matthew Crawford's Shop Class as Soulcraft (2009) is the most articulate account of this: the argument is not that difficult work should be preserved for its own sake, but that the cognitive and moral development that comes from doing difficult work well is not replaceable by leisure time or income transfers. When a profession is automated, it is not only a job that disappears — it is a pathway to expertise, identity, and meaning. The radiologist was not just earning a living; she was becoming someone.

They are protecting the communities that depend on the jobs being replaced. Labor markets are not frictionless. A truck driver in Youngstown, Ohio, whose job is automated by an autonomous vehicle does not automatically retrain as a data scientist. This is not a failure of imagination; it is how labor transitions have actually worked, empirically. Daron Acemoglu and Simon Johnson's Power and Progress (2023) marshals the historical evidence: previous waves of automation did not automatically distribute gains broadly; they concentrated them unless actively redirected by policy and institutional counterpressure. The assumption that workers will flow smoothly into new roles is a model, not a fact, and the model has a troubled empirical record.

They are protecting a fair distribution of the gains from productivity. If AI makes the economy ten percent more productive, who captures that ten percent? In recent decades, the gains from technology and globalization have gone disproportionately to capital over labor, and to the highest earners over the median. AI skeptics are not anti-productivity. They are skeptical that the gains will be distributed differently this time without deliberate intervention — and they point to the early evidence, which does not show broad wage gains for non-technical workers in AI-affected sectors.

They are protecting the human relationships and accountability structures embedded in labor. A teacher is not just an information-delivery device. A nurse is not just a dispenser of medical procedures. A therapist is not just a set of cognitive-behavioral prompts. These relationships involve something — presence, accountability, shared vulnerability, moral seriousness — that AI systems do not currently replicate and that is lost when the human is removed from the loop, even if the measurable output is as good or better. AI skeptics argue that this loss is real even if it doesn't show up in the metrics being optimized.

What AI optimists are protecting

People who advocate for AI adoption in labor markets are not uniformly indifferent to displacement, inequality, or the texture of human work. They are making a specific set of claims about what matters most — claims that also deserve serious engagement.

They are protecting access to expertise that most people can't afford. The radiologist who loses her premium salary is one story. The person in rural Guatemala who now has access to diagnostic imaging for the first time is another. The child in a failing school district who has a personalized tutor for the first time is another. The small business owner who can now afford legal review of her contracts is another. AI optimists argue that the debate about labor displacement is overwhelmingly conducted by and for people who already have access to professional expertise, and that the billions of people who don't have that access stand to benefit enormously from its democratization. Erik Brynjolfsson and Andrew McAfee's The Second Machine Age (2014) makes this case directly: the question is not whether AI creates value but whether we organize its deployment to spread that value.

They are protecting the people who are suffering under the current system. The current economy is also producing losers — people who die from diagnostic errors, people who can't afford lawyers when they need them, people who work jobs that are dangerous, repetitive, and unfulfilling because no one has bothered to automate them yet. AI optimists argue that the "compared to what" baseline in the skeptic argument is too often an idealized version of the status quo, not the actual status quo, which involves enormous amounts of preventable suffering caused by the scarcity of expertise and the high cost of human labor. The question is not "should we disrupt stable, good jobs?" but "should we improve on a system that is already doing enormous harm?"

They are protecting the historical track record of technological adaptation. Every major wave of automation — agricultural, industrial, digital — has generated predictions of permanent mass unemployment that did not materialize. New technologies destroy categories of work and create new ones. The optimist argument is that the current moment, while genuinely disruptive, fits this pattern: AI will create new categories of work that we cannot currently see, as the internet created jobs that did not exist in 1990. This is not complacency; it is a claim about the historical evidence. The Luddites were not wrong that power looms would destroy their specific trade. They were wrong that mechanization would destroy work generally.

They are also, at their strongest, protecting a distinction between augmentation and substitution that gets flattened in the public argument. Erik Brynjolfsson, Danielle Li, and Lindsey Raymond's field study of generative AI in customer support found a version of AI deployment that looked more like augmentation: the system raised productivity most for less-experienced workers by giving them access to the organization's accumulated expertise. But Klarna's 2025 filings describe a different managerial choice: AI handling 80% of customer-service chats, doing work the company said was equivalent to more than 850 full-time agents, and delivering cost savings large enough to feature in the annual report, even as the company emphasized that customers still needed a path to human support. That is the real fork in the road. The same underlying models can be used to widen workers' capability or to narrow the payroll. Much of the optimist case is really a defense of the first path.

They are protecting the potential for radical reduction in human suffering. The most serious version of the AI optimist argument is not economic but moral: if AI can accelerate scientific discovery, compress the timeline to treatments for diseases that kill millions per year, and extend human health and lifespan at scale, then the costs of slowing or restricting its development are not just economic — they are measured in lives. Demis Hassabis, describing DeepMind's work on protein folding and drug discovery, is not making a case for shareholder value. He is making a case about what the technology could do for human health if not constrained by incumbent interests.

Where the real disagreement lives

Both sides want a world where people are able to live with dignity and where human potential is expanded rather than diminished. The dispute lives several structural layers beneath the surface arguments about jobs and growth.

Conditional vs. unconditional worth. This is the pattern that makes the AI and labor debate different from most others on this site. Embedded in both arguments is a question about human worth that neither side fully surfaces. The AI-skeptic position often implies that human worth is at least partly tied to productive contribution — that the dignity of work is in part the dignity of being needed, of having something irreplaceable to offer. If this is true, then automation that makes human labor redundant is not merely an economic disruption; it is an existential one. The AI-optimist position often implies the opposite: that human worth is unconditional, and that a world where machines do more of the drudgery is a world where humans are freer to be fully human. The problem is that neither position has fully worked out what happens when conditional worth is the psychological reality even if unconditional worth is the ethical ideal — when people who have tied their identity to their work lose it, and no philosophical reassurance about their inherent dignity makes them feel less lost.

Whose costs are centered. The AI optimist argument is typically made by people who are insulated from displacement or positioned to benefit from it — the engineers, investors, and executives who build and deploy these systems, and the knowledge workers who use them as tools. The AI skeptic argument is typically made by people in occupational categories currently threatened, or by researchers and advocates who center their perspective. This asymmetry does not settle the argument — the costs the optimist centers (the Guatemalan patient, the rural student, the preventable death) are real — but it should make both sides suspicious of their own motivated reasoning. Acemoglu and Johnson's point is not that AI is bad; it is that the people deciding how it is deployed have systematically failed to account for the costs that fall on others.

Compared to what. AI skeptics compare AI-enabled displacement to a world in which those jobs would still exist and those communities would still be stable. AI optimists compare AI adoption to a world in which people continue to die from diagnostic errors, cannot afford lawyers, and work dangerous jobs that no one has bothered to automate. These counterfactuals are not equally realistic: the jobs in question were already under pressure before AI, and the suffering the optimist names was already present. But the optimist's argument also papers over the transition costs, which are real and concentrated, by pointing at long-run gains that are diffuse and uncertain.

What is being optimized: capability or headcount? This is the practical question that turns abstractions about AI into labor politics. The Brynjolfsson field study is the cleanest current example of capability-enhancing deployment: AI acting as a tutor and knowledge layer inside an existing workforce, with the largest gains flowing to less-skilled workers. Klarna's 2025 disclosures are a clean example of replacement-oriented deployment: AI framed simultaneously as better service, lower cost, and the equivalent of hundreds of agents, with human support retained as an escalation path rather than the primary design center. The disagreement is not only about whether AI works. It is about whether firms use the productivity gains to deepen human work, cheapen it, or eliminate it.

Who bears the burden of proof. This debate has an asymmetric structure: in market economies, the default is that deployment proceeds unless harm is demonstrated. This places the burden on AI skeptics to prove that the harms are real and specific, while optimists need only point to potential benefits. AI skeptics argue that this default is wrong — that the scale and speed of AI deployment create a plausible case for a precautionary burden shift. Optimists argue that precautionary approaches to transformative technology have historically delayed enormous benefits far more than they have prevented harms. Both are citing the historical record; they are reading different parts of it.

What sensemaking surfaces

"AI and labor" is not one debate but several, conducted simultaneously at different levels of abstraction. The question of how to support workers who are displaced is different from the question of whether to slow deployment. The question of how to govern AI in high-stakes domains like medicine is different from the question of whether to automate email or data entry. The question of how to distribute AI's productivity gains is different from the question of what AI means for the meaning of human work. Treating these as one debate — are you "for" or "against" AI? — prevents progress on any of them.

The most important asymmetry in this debate is not economic but temporal. The costs of AI displacement fall on specific people now, in concentrated ways, in identifiable communities. The benefits of AI development accrue to diffuse populations over long time horizons in ways that are harder to see and name. This asymmetry is not an argument against the benefits — they may be enormous and real — but it is an argument for taking the visible costs as seriously as the invisible gains, and for designing transition support, profit-sharing, and governance not as afterthoughts but as integral to the deployment decision itself.

The debate also has a class dimension that is underexplored. AI's displacement pressure is moving up the credential ladder faster than previous automation waves. It is now threatening legal work, diagnostic work, financial analysis, and creative production — work that was supposed to be safe from automation because it required judgment and expertise. This creates a novel political coalition: white-collar workers who previously supported labor-displacing technologies when only blue-collar workers were affected are now discovering that the logic they endorsed applies to them too. How this coalition forms — whether it produces thoughtful governance or simply more effective protection of incumbent interests — may matter as much as the technology itself.

The strongest version of each position would sit with the costs its preferred approach produces. An AI optimist who takes seriously the potential to reduce human suffering globally should also take seriously that the transition costs of rapid deployment fall hardest on the people who are already most economically vulnerable, and that those costs do not resolve themselves automatically. An AI skeptic who takes seriously the dignity of the radiologist should also take seriously the patient in the underserved community who never saw a radiologist at all, and ask whether slowing the technology is actually protecting that patient's dignity or only the radiologist's. What is underneath both positions is a question about whose flourishing counts as the template for the good society — and whether the people making that decision are the people who have to live with the consequences.

Patterns at work in this piece

All five recurring patterns are present here, with "conditional vs. unconditional worth" at particular depth. See What sensemaking has taught Ripple so far for the four-pattern framework, and The burden of proof for the fifth.

  • Whose costs are centered. AI optimists center the costs the current system imposes on those without access to expertise — the undiagnosed patient, the unrepresented tenant, the undereducated child. AI skeptics center the costs of displacement on workers whose skills and identities are tied to their work. Both sets of costs are real; they are not visible from the same vantage point.
  • Compared to what. The skeptic compares AI-enabled displacement to a stable present; the optimist compares AI adoption to a present already full of preventable suffering. Neither baseline is fully accurate, but each captures something real that the other ignores.
  • Whose flourishing is the template. The template for the optimist tends to be the knowledge worker who gains a powerful tool; the template for the skeptic tends to be the professional whose expertise is devalued. Both are real people, but they are not evenly distributed across the income spectrum, and the policy debate tends to reflect whose voice carries further.
  • Conditional vs. unconditional worth. This pattern reaches its deepest form in the AI debate. If human worth requires productive contribution, then displacement is existential. If human worth is unconditional, then freedom from drudgery is liberation. Neither side has fully reckoned with the gap between the ethical ideal and the psychological reality.
  • Burden of proof. The market default places the burden on skeptics to prove harm. Skeptics argue this default is wrong given the scale and speed of deployment. This structural disagreement about burden shapes which evidence counts as sufficient to act.

Further reading

  • Daron Acemoglu and Simon Johnson, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023) — the most important corrective to the techno-optimist historical narrative; documents that previous waves of automation did not automatically distribute gains broadly, and argues that the structure of AI development today is concentrating power in ways that previous transitions also did when left ungoverned. Essential for anyone whose prior is that "it worked out last time."
  • Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W.W. Norton, 2014) — the most rigorous optimist case; takes displacement seriously while arguing that the right policy response is to invest in education, infrastructure, and redistribution rather than to slow deployment. Honest about the distributional challenge in a way that more credulous boosters are not.
  • Matthew Crawford, Shop Class as Soulcraft: An Inquiry into the Value of Work (Penguin Press, 2009) — not about AI specifically, but the deepest philosophical account of what is lost when expert manual and cognitive work is eliminated from human life; argues that the development of genuine competence is not replaceable by leisure time or income transfers. The best resource for readers who feel the loss but can't articulate why.
  • Daron Acemoglu, "The Simple Macroeconomics of AI," NBER Working Paper (2024) — a careful empirical analysis arguing that current AI deployment is too focused on labor-replacing automation and not enough on labor-complementing augmentation; quantifies the productivity gains and finds them modest relative to the displacement costs at current deployment patterns. The most important recent empirical contribution to this debate.
  • David Autor, "Work of the Past, Work of the Future," AEA Papers and Proceedings (2019) — a labor economist's account of how previous waves of automation interacted with labor markets; most useful for the "compared to what" question, documenting what actually happened to workers in prior transitions and which policy interventions helped versus which arrived too late or not at all.
  • Carl Benedikt Frey, The Technology Trap: Capital, Labor, and Power in the Age of Automation (Princeton University Press, 2019) — the most rigorous historical analysis of how new technologies have interacted with labor across centuries; argues that the same technology can be labor-replacing or labor-augmenting depending on who controls its deployment, and that political power determines which outcome prevails. Essential corrective to the view that technological transitions inevitably work out for everyone.
  • Aaron Benanav, Automation and the Future of Work (Verso, 2020) — the most important challenge to the automation narrative itself; argues that labor market stagnation and precarity are primarily caused by global overcapacity and slow growth, not by robots displacing workers. Even if automation is not the root cause, the policy response — demand-side investment, shorter working hours, decommodified care work — is distinct from the retraining-and-redistribution consensus. Essential for readers who find both the optimist and pessimist automation accounts unsatisfying.
  • Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin's Press, 2018) — documents how automated decision-making systems in welfare, child protective services, and homeless services systematically harm low-income people; the distributional argument is not only about job loss but about which communities bear the costs of algorithmic error and which receive the benefits of algorithmic efficiency. The "compared to what" question looks very different from these vantage points.
  • Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond, "Generative AI at Work," NBER Working Paper 31161 (2023) — the most important field study of generative AI's actual effect on workers; followed customer service agents at a large firm before and after GPT-4 deployment, finding that productivity rose 14% on average but gains concentrated heavily among lower-skill workers, who improved most when the AI gave them access to the firm's accumulated expert knowledge. The nuanced finding — AI as leveler within a workforce rather than pure displacer — challenges both the straightforward optimist story (AI raises all boats) and the pessimist story (AI replaces all workers); it also documents a flattening of skills development over time as workers rely on AI rather than building their own expertise. The only major field study using actual productivity data on generative AI, as opposed to lab or survey evidence.
  • U.S. Securities and Exchange Commission, comment letter to Klarna Group plc (May 30, 2025) — unusually clear primary-source evidence of a regulator pressing a firm to explain an AI deployment that was publicly framed as both service improvement and labor saving, including whether customers still had meaningful access to human support.
  • Klarna Group plc, Annual Report on Form 20-F for fiscal year 2025 (filed 2026) — primary-source corporate disclosure on AI-assisted customer service: 80% of chats handled by the AI assistant in 2025, work the company said was equivalent to more than 850 full-time agents, claimed cost savings, and an explicit description of a "dual-track" model combining scaled AI support with human representatives.
  • Nick Srnicek and Alex Williams, Inventing the Future: Postcapitalism and a World Without Work (Verso, 2015) — the most important argument that automation should be a political demand, not a threat to be resisted; argues that the left has trapped itself in a "folk politics" of localism, horizontalism, and defensive labor protection that cannot address the structural scale of automation, and that a post-work future — full automation of drudge work, universal basic income, reduction of the working week — is achievable but requires building hegemonic political power rather than protecting existing jobs. The affirmative left case for automation as liberation from compelled labor, in direct tension with the defensive labor-protection positions that dominate this debate; its implicit interlocutor is both Brynjolfsson's techno-optimism (which it accepts as technically accurate) and the Crawford/Benanav defenses of work meaning and labor solidarity (which it diagnoses as strategic dead ends).

See also

  • Who bears the cost? — the framing essay for the distributional conflict underneath AI deployment: when productivity gains arrive through automation, the central question is whether workers, firms, consumers, and the public absorb the losses and gains on anything like fair terms.
  • Who gets to decide? — the framing essay for the governance dispute underneath AI deployment: whether firms can decide on their own that new systems should replace labor, or whether workers, publics, and democratic institutions get a real say over how fast automation moves, where it is used, and what obligations come with it.
  • What is a life worth? — the framing essay for the dignity dispute running underneath the labor question: if institutions still tie security, status, and social recognition to paid work, then replacing labor is never only an efficiency story but a judgment about what kinds of human contribution count.
  • automation policy and labor displacement map — the companion map on the policy question this page treats philosophically: once AI can be deployed either to augment workers or replace them, what institutions, taxes, bargaining rights, and income supports can actually shape that choice?
  • AI and consciousness map — addresses the deeper question that underlies much of this debate: whether AI systems can have moral status, and what that would mean for how we develop and deploy them — the labor question and the moral status question are not separable for long.
  • wealth inequality map — addresses the broader question of what concentrated capital requires from a just society — the AI/labor debate is one of the mechanisms by which capital's share of income is currently being restructured, and the debate about what that requires (redistribution, retraining, UBI, worker ownership) is the downstream policy version of the upstream philosophical dispute about whether market-generated inequality is self-justifying.
  • surveillance capitalism map — addresses the data infrastructure that underlies AI development — the behavioral data generated by surveillance capitalism is the training substrate for AI systems, and the question of who owns that data is closely related to the question of who owns the AI trained on it; the structural antitrust critique of platform concentration connects the labor displacement and power-concentration concerns in both debates.
  • AI governance map — addresses the institutional question that sits above this one: who decides how AI is developed and deployed, under what accountability constraints, and whether the governance frameworks emerging from wealthy nations are adequate to the distributional justice concerns that the labor map names. The accountability and global governance positions in that map are the institutional expression of what this map diagnoses as a distributional problem.
  • AI and creative work map — maps the specific terrain of creative labor disruption — authorship, copyright, training data consent, and the economics of the working creative class — where the philosophical and economic questions are particularly entangled; it is the companion piece to this map for the domain where AI's displacement of human expertise has generated the most concentrated cultural conflict.
  • The share that stopped flowing — synthesis essay on the full labor cluster; situates the AI/labor question within the broader productivity-wage decoupling and maps the competing strategies for addressing it.