Perspective Map
AI and Creative Work: What Each Position Is Protecting
She spent eight years developing a visual style — the way she renders light on fabric, the particular flattening she does with shadow, the palette she arrived at through hundreds of failed experiments. Clients found her through her portfolio; they were paying for the thing she had made of herself through years of work. Now she opens her email to find a brief asking her to match a sample generated by an AI trained on images scraped from accounts like hers. The sample looks like something she might have made. The rate offered is a quarter of what she charged two years ago.
He is a novelist who uses an AI writing assistant to get unstuck. When a scene won't move, he asks it for five different ways the character might respond; he reads them, discards four, takes a phrase from the fifth and develops it into something the model never approached. He produces better work faster. The output is unambiguously his in every way that matters to him — the intention, the selection, the voice, the meaning. He can't understand why colleagues treat the tool as a threat to authorship rather than an extension of it.
A small animation studio in Montreal can now afford to render photorealistic environments that would have required a production budget twenty times larger five years ago. Their human animators spend their time on performance, timing, and narrative — the things AI cannot do. They're telling stories that would otherwise never have been made.
A voice actor who spent fifteen years building a career doing audiobooks and video game characters signed a contract granting a studio the right to use her voice for a specific project. The contract didn't specify what "use" meant. The studio synthesized a voice model from her recordings. She is now competing, without compensation, against a version of herself.
None of these people is arguing in bad faith. The debate about AI and creative work has become a conflict between several genuinely different things people are trying to protect — and the arguments run past each other because they are, at their core, about different problems. This map attempts to hold them simultaneously.
What the creative tool tradition protects
People who describe AI as a creative tool in the tradition of other transformative technologies are protecting something real: the history of art as a history of tools changing what is possible, without ending what it means to be an artist.
They are protecting the long record of creative survival. Photography did not kill painting; it freed it from the obligation of representation and produced expressionism, abstraction, conceptual art. The synthesizer did not kill music; it created entirely new genres while making more kinds of music accessible to more people. Desktop publishing did not eliminate graphic design; it changed what designers spend their time doing. The argument from history is not naive optimism — it acknowledges that specific skills and markets were disrupted — but it notes that the feared extinctions did not materialize, and that the replacement for "what humans do in this domain" has always turned out to be more interesting than the fear imagined.
They are protecting the expansion of creative access. Most humans who want to make things — to tell visual stories, to compose music, to design spaces — lack the technical training that current creative production requires. AI tools lower the floor of entry into creative expression. The novelist using a writing assistant, the first-generation college student who can now design her own graphics, the person with a disability who can now translate ideas into images without the fine motor control that drawing requires: these are not edge cases. The democratization of creative production is a real good, and the people defending the tool tradition are protecting it.
They are protecting authorship located in intention and selection. The author, on this account, is the one with something to say — the one who frames the question, selects from what the tool generates, rejects what doesn't serve the work, and takes responsibility for the result. This is not a radical departure from existing creative practice. Every photographer selects a frame from all possible frames; the selection is the authorship. Every sculptor who works in marble works within the grain of the stone; the collaboration with material is not a challenge to authorship. The novelist who uses a synonym dictionary to find the right word is still the author of the sentence. On this account, what matters is not whether the tool generates the raw material but who is accountable for the choices that shape it into meaning.
What copyright holders and authorship rights protect
People who argue that AI training violates artists' rights — the illustrators, musicians, writers, and their advocates who have filed class action suits and lobbied for legislative protection — are protecting something that the tool tradition too often dismisses as self-interested resistance to change.
They are protecting the principle of consent in derivative creation. The legal history of copyright is the history of deciding that creators have the right to control how their work is used to produce new work. A cover song requires a license. A film adaptation requires a license. A sample in a hip-hop track requires a license — a principle established through litigation that initially felt as disruptive to musicians as AI training feels to illustrators now. The argument is not that AI outputs directly reproduce any artist's work (though some do). It is that the entire training pipeline was built by systematically ingesting copyrighted work without consent and without compensation, and that this is legally and ethically distinct from a human artist looking at other artists' work and developing their own style. Ed Newton-Rex, who resigned as Vice President of Audio at Stability AI over this question, articulates the distinction clearly: human artists learn from art in ways that are fundamentally non-copying; large language and image models ingest, encode, and statistically reproduce at a scale and through a mechanism that human learning does not resemble.
They are protecting authorship as a legal and moral category that carries accountability. A federal court's 2023 ruling in Thaler v. Perlmutter held that AI-generated work cannot be copyrighted because copyright requires human authorship. This cuts both ways: it means that works produced without meaningful human creative input cannot be protected — but it also illuminates why authorship matters. Copyright is not merely about economic reward; it is about a chain of accountability from creative act to creator. When an AI-generated image defames someone, who is responsible? When an AI-synthesized voice is used to deceive, who is liable? When AI-generated journalism contains errors that damage reputations, who answers? The authorship question is inseparable from the accountability question, and those who defend it are protecting a system in which creative output is traceable to a person who can be held responsible.
They are protecting the economic logic of creative markets. Copyright is not just a property right — it is the mechanism by which creative investment is recovered. When a composer spends three years writing a score, she is making an investment that the copyright system is designed to protect by giving her control over uses of the work during the period when the investment might be recouped. AI training on that score without compensation extracts the value of that investment and redirects it to AI developers. The argument from the tool tradition — "you still made your work; this is just a new tool in the market" — does not address the extraction that happened before the tool appeared. The model was built on unpaid labor.
What the working creative class protects
There is a third position in this debate that is distinct from both the tool tradition and the copyright rights argument — and it is the one most likely to be rendered invisible by the noise of the other two.
The working creative class — illustrators, voice actors, stock photographers, technical writers, concept artists, mid-tier musicians — are not primarily arguing about the philosophy of authorship or the doctrinal scope of copyright. They are arguing about whether they can survive economically in a market that has just been flooded with competent substitutes for their work at near-zero marginal cost.
They are protecting the commercial creative middle class. The creative economy is not composed primarily of canonical artists whose reputations are secure and whose work has moved beyond market vulnerability. It is composed of millions of people who trained for years, developed real skills, built client relationships, and made a living from work that required exactly the expertise AI tools are now generating at scale. Karla Ortiz, the illustrator who is a named plaintiff in Andersen v. Stability AI, is not arguing that her work is philosophically irreplaceable. She is arguing that her livelihood has been destroyed by a system trained on her work without her consent, and that the legal and economic frameworks that were supposed to protect creative labor have not kept pace with the speed of the disruption. This is not an argument about what art is. It is an argument about whether people who do skilled creative work for a living have any protection when that skill is systematically extracted and industrialized.
They are protecting the distinction between the tool transition and the extraction transition. Previous creative tool disruptions — the synthesizer, the digital camera, desktop publishing — changed what skills the market rewarded, but they did not begin by extracting value from the existing holders of those skills. The synthesizer was not trained on fifty years of recorded performances without compensating the musicians whose recordings made it possible. The digital camera was not built by scanning and encoding every photograph ever taken without asking photographers. AI creative tools were built differently: the training phase was an extraction, not a neutral technical process. That is the argument the working creative class is making, and it is not the same as arguing against new tools in principle.
They are protecting the capacity to negotiate. SAG-AFTRA's 2023 contract negotiations over AI voice and image synthesis, and the Writers Guild's fight over AI use in screenwriting, were not arguments that AI tools should not exist. They were arguments that the people whose labor built the training data and whose professional markets are being disrupted by AI outputs should have a seat at the table where the terms of AI's integration into creative industries are set. What the working creative class is protecting, at its most basic, is the right to negotiate rather than simply absorb.
What open culture advocates protect
There is a fourth position that is easy to dismiss as cover for commercial interests but is actually making a distinct and important argument.
Open culture advocates — the tradition running from Lawrence Lessig's Free Culture through the Creative Commons movement to contemporary critics of copyright maximalism — are protecting something that the copyright rights position does not take seriously enough: the way the existing copyright system fails artists even as it claims to protect them.
They are protecting the creative commons that all creativity draws from. Every artist learned by studying other artists. The folk tradition has always been built by transformation, appropriation, and recombination without formal permission. Shakespeare took plots from Holinshed and Plutarch without licensing them. Jazz developed through musicians playing each other's compositions, adapting and mutating the material without formal permission structures. The blues was built from field hollers and spirituals that nobody owned. The argument that AI training is theft ignores that every artistic tradition has a non-proprietary commons at its foundation — and that the legal regime now being defended as protection for artists did not exist for most of the history in which those traditions were built.
They are protecting artists from a copyright system that primarily serves intermediaries. The Copyright Term Extension Act of 1998 — which extended copyright to life plus 70 years, largely to keep Mickey Mouse from entering the public domain — was passed at the behest of the Walt Disney Company and the major entertainment corporations, not at the behest of working artists. The music licensing system that nominally protects songwriters primarily channels money to publishers and labels. The book contract regime that gives publishers copyright over works-for-hire primarily benefits publishing companies, not the writers. Lessig's argument is not that artists deserve nothing; it is that the copyright regime as currently constituted is a monopoly system that corporate copyright holders have shaped to serve their interests, and that working artists who appeal to it for protection are largely appealing to a system that was not designed to protect them.
They are protecting the right of future artists to inherit a workable commons. If the principle that AI training on creative work requires compensation and consent is established in law, one implication is that future AI systems can be trained only on licensed data. Licensable data is data owned by organizations with the legal infrastructure to license it: corporations, estates, and institutions. Individual artists who lack the administrative capacity to structure licensing agreements will in practice be excluded from training data, while the large corporate rights holders will control the next generation of tools. The open culture argument is that the copyright solution to the AI training problem may produce outcomes even worse for working artists than the current situation.
Where the real disagreement lives
These four positions are frequently argued as if they were about the same thing. They are not, and naming the points of confusion is productive.
Is training legally equivalent to copying? This is a genuine legal question that no court has fully settled. The AI training cases pending in federal court will eventually produce a ruling. Pamela Samuelson's rigorous legal analysis finds that the transformative use doctrine may support training even on copyrighted data, while acknowledging that commerciality and market substitution concerns — the tests the Supreme Court applied in Andy Warhol Foundation v. Goldsmith (2023) — complicate the analysis for AI systems whose outputs compete directly with the creators whose work trained them. The legal question is not settled by analogy ("it's like a human learning") or by assertion ("it's obviously theft"). It requires engaging with the actual doctrinal framework, which is genuinely uncertain.
Which creative class is the template for this debate? The tool tradition draws its examples from the novelist who uses AI assistance, the animator whose storytelling expands, the first-generation student who gains access to design. The working creative class draws its examples from the illustrator whose commercial market has collapsed, the voice actor whose likeness was used without compensation. Both sets of examples are real. The debate is often conducted as if they are the same people in the same situation. They are not, and which example is treated as primary determines which policy response seems obvious.
What does authorship actually require — and for whom? The philosophical question about whether AI-generated work constitutes "real" authorship is orthogonal to the economic question about whether working creatives can survive. You can believe that AI-generated images are not genuine art and still believe no compensation for training data is owed; you can believe that AI-assisted work is fully legitimate authorship and still believe the training data extraction was wrong and requires remedy. The conflation of the philosophical and economic questions is responsible for most of the debate's heat. Ted Chiang's careful distinctions are useful here: the question of what ChatGPT is doing when it writes is different from the question of what it is doing to the people who write for a living.
Does the extraction argument depend on the tool argument? The open culture tradition is right that the copyright system has been captured by corporate interests and does not primarily serve working artists. But this does not settle the extraction question. The fact that the pre-existing system was imperfect does not license the claim that the new system, built on that imperfection, incurs no obligations to the people whose work it encoded. The case for compensation for training data extraction can be made on terms the open culture tradition should accept: not as a defense of copyright maximalism, but as a claim that labor has been used and should be acknowledged and compensated — separate from what happens to copyright doctrine afterward.
What sensemaking surfaces
This debate has a structure that almost guarantees confusion: it is actually three separate arguments being conducted simultaneously, and they require different kinds of resolution.
The philosophical argument about what authorship requires, what creativity is, and whether AI-generated work constitutes genuine art is a genuine and interesting question. But it is not directly connected to the other two. Resolving it in any direction does not tell you who should be compensated for training data, and it does not tell you what governance frameworks should apply to AI deployment in creative industries. It is a question for aesthetics and philosophy, not for courts or collective bargaining.
The legal argument about whether training on copyrighted data requires consent and compensation is a genuine and unsettled question. The pending cases will produce rulings. The rulings will be appealed. Eventually, legislatures will act. This process should be engaged on its own terms — which requires actually reading the doctrinal arguments rather than replacing them with philosophical intuitions about whether AI "really" learns.
The economic argument about what the disruption of creative labor markets requires from the companies that built the tools is perhaps the most urgent and the most neglected. Even if training on copyrighted data is ultimately held to be lawful, the working creative class has been disrupted by a technology built on their uncredited labor, and the question of what recognition and transition support they are owed is not the same as the copyright question. The WGA and SAG-AFTRA negotiations were attempts to address this layer; they produced partial agreements. The economic layer will require ongoing renegotiation as the technology continues to develop.
What tends to get lost in the noise: the open culture tradition is correct that working artists are poorly served by copyright maximalism, and the copyright rights tradition is correct that training data extraction was systematically unconsented. These are not contradictory claims. It is possible to believe both — to argue for a training data compensation and consent framework that does not reproduce the rent-extraction structure of existing copyright while still acknowledging that the people whose work built the tools deserve recognition and a share of the value generated by them.
The strongest version of the tool tradition would sit with the working creative class's economic disruption rather than pointing to the novelist and the student as the representative cases. The strongest version of the copyright rights argument would grapple honestly with what the current copyright regime actually does for working artists, rather than assuming that defending copyright doctrine defends them. The strongest version of the working creative class argument would engage with the open culture critique — asking not just for compensation under the existing system, but for different terms entirely. And the strongest version of the open culture argument would acknowledge that "the system is imperfect" does not license the extraction that happened before any reform was in place.
Patterns at work in this piece
This map introduces a variant that appears elsewhere but here becomes explicit: the craft/commerce split, in which a philosophical question and an economic question are being argued simultaneously as if they were the same question. The debate about what authorship requires and the debate about whether working illustrators can survive have different logical structures, different relevant evidence, and different mechanisms of resolution. Conflating them — arguing that AI is "not real creativity" as if this settles whether training compensation is owed, or arguing that the tool is legitimate as if this settles whether the extraction was wrongful — is the primary reason the debate generates more heat than light.
- Whose costs are centered. The tool tradition centers the novelist with new creative capacity and the first-generation student who gains access. The copyright rights tradition centers the illustrator whose style has been appropriated. The working creative class position centers the voice actor whose likeness has been used without consent. Each is a real person in a real situation; the choice of exemplar determines which policy response feels obvious.
- Compared to what. The tool tradition compares AI to a pre-AI creative market and finds it expansive. The copyright rights tradition compares AI training to licensed derivative use and finds it extractive. The open culture tradition compares the current copyright system to a functioning commons and finds it already broken. All three comparisons capture something real; none of them alone provides the relevant baseline.
- Vocabulary collision. "Theft," "creativity," "originality," "style," and "authorship" mean genuinely different things to the four positions — not as rhetorical maneuvers but as reflections of different models of what intellectual property is for and how creative work produces value. "Style cannot be copyrighted" is a legal statement; "training on my style without consent is theft" is a moral claim about uncredited labor. These are not the same claim, and the argument proceeds as if they contradict each other when they don't.
- Structural absence. The people least present in the high-profile AI and creativity debate are the mid-tier commercial creatives — the illustrators, stock photographers, technical writers — whose economic disruption is the most immediate and severe. The debate is largely conducted by philosophy professors, AI executives, intellectual property lawyers, and canonical artists whose market position is not primarily threatened. The working creative class has spokespersons (Karla Ortiz, the SAG-AFTRA negotiating team) but not proportional presence in the forums where the framing is set.
Further reading
- Ted Chiang, "Will A.I. Become the New McKinsey?" (The New Yorker, 2023) — the most careful literary perspective on what gets lost when creative work is automated; Chiang argues that ChatGPT resembles the consulting firm McKinsey not because it produces bad work but because it produces output optimized for acceptability rather than truth — and that both McKinsey and ChatGPT allow organizations to obscure responsibility for decisions by distributing it into a process. Distinct from most AI art commentary in that it is not primarily about style or authorship but about the social function of creative and intellectual work, and what is at stake when that function is made frictionless. Read at The New Yorker
- Ed Newton-Rex, "Why I resigned from Stability AI" and writings on Fairly Trained certification (2023–2024) — the insider account of why consent in AI training is both ethically necessary and technically achievable; Newton-Rex argues that the "AI learns like humans do" analogy is false — humans don't ingest entire creative outputs in ways that enable reproduction; the most credible articulation of the consent-based training position from someone who built the systems. His Substack
- Lawrence Lessig, Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity (Penguin Press, 2004) — the foundational account of how copyright expansion became corporate capture; argues that the Sonny Bono Copyright Term Extension Act protected Disney's assets, not working artists, and that a copyright system designed around corporate ownership of cultural artifacts is incompatible with the conditions under which creative traditions have always flourished. Essential for anyone who wants to argue for artists' rights without inadvertently arguing for the regime that has primarily served corporate intermediaries. Free online edition
- Pamela Samuelson, "Generative AI Meets Copyright," Science Vol. 381 (2023) — the most rigorous legal scholarship on whether AI training falls within fair use; Samuelson finds that the transformative use doctrine may support training on copyrighted data but that the commercial outputs competing with the training data's source market complicate the analysis; essential for anyone who wants to engage the copyright question legally rather than rhetorically. DOI link
- Matthew Butterick, GitHub Copilot litigation and Stable Diffusion litigation documentation (2022–2024) — the lawyer-developer who filed the Stable Diffusion class action provides the most detailed articulation of the legal theory underlying the copyright rights position; Butterick argues that the "training is not copying" claim collapses under scrutiny because models encode and reproduce statistical patterns derived from specific creative works in ways that enable output to substitute for those works in their commercial markets. His analysis
- Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021) — on the labor and extraction costs of AI development rendered invisible in the "AI as creative tool" framing; training data is not a free resource but human creative labor encoded and extracted; the working creative class argument has a structural back-end: not only the market disruption forward but the uncredited labor extraction that built the tools. Crawford's analysis of how AI production requires massive quantities of human annotation, classification, and feedback labor — mostly low-wage and offshore — is the complement to the training data extraction argument.
- Andy Warhol Foundation for the Visual Arts v. Goldsmith, 598 U.S. 508 (2023) — the Supreme Court ruling that the Warhol Foundation's commercial licensing of a Prince Series image was not protected as fair use of Lynn Goldsmith's photograph; the Court held that commerciality and shared purpose (both works occupied the same market for magazine licensing of Prince images) weighed decisively against transformativeness; the decision narrowed the "transformative use" doctrine that AI developers rely on, and has influenced subsequent lower court analysis of AI training. The most important recent copyright precedent before the AI training cases are fully litigated — and a useful study in how commercial stakes can determine the application of doctrine that appears neutral.
- WGA and SAG-AFTRA AI provisions (2023 contract negotiations) — the agreements reached following the 2023 strikes established early precedents for AI consent, compensation, and credit in screenwriting and performance; while narrow in scope and subject to renegotiation, they represent the most concrete attempt to date to address the economic layer of the AI and creative work debate through collective bargaining rather than litigation or legislation; the most useful documents for understanding what practical working creative class protections look like when they are actually negotiated. WGA 2023 MBA SAG-AFTRA AI resources
See also
- Who bears the cost? — the framing essay for the distributional conflict underneath creative automation: when models are trained on collective cultural labor and then deployed into the same markets, the question is who absorbs the losses, who captures the gains, and what kind of compensation or bargaining power creators should have.
- Who gets to decide? — the framing essay for the governance dispute underneath the AI art fight: whether model developers, platforms, and studios can decide on their own that creative labor is raw material for automation, or whether creators and democratic institutions get a real say over consent, licensing, attribution, and deployment rules.
- What is a life worth? — the framing essay for the deeper dispute underneath creative automation: whether making art is valuable mainly because it produces marketable outputs, or because human practice, recognition, and cultivated skill are themselves part of what a flourishing life requires.
- AI and labor map — addresses the broader economic disruption from AI — displacement across all sectors, questions of benefit distribution, historical comparisons — of which creative labor disruption is one instance; the two maps are companion pieces, with the labor map providing the macroeconomic frame and this map providing the specific terrain of creative work, where the philosophical and the economic questions are particularly entangled.
- AI governance map — addresses the institutional questions above this one: who decides the rules for AI development and deployment, under what accountability frameworks, and whether existing governance institutions are adequate; the copyright and consent questions in the creative work debate are one domain where the governance gap — between the people bearing the costs of AI deployment and the institutions setting its terms — is particularly visible.
- work and worth map — addresses the deeper question underneath the creative labor debate: whether skilled work provides goods — mastery, contribution, belonging, the satisfaction of making something — that are not fully captured by its market value, and what it means when the market for that work collapses. The illustrator losing clients to AI is not only experiencing income loss; she is losing a form of practice that organized her relationship to time, skill, and meaning in ways the work-worth map traces carefully.