Perspective Map
Surveillance Capitalism: What Each Position Is Protecting
A thirty-one-year-old graduate student in Chicago has never paid for a search engine, an email account, a social network, a navigation app, or a video platform. In exchange for this, her search history, her correspondence, her location at every hour of the past nine years, her purchasing patterns, her political interests, her health queries, her relationship status, her sleep schedule inferred from device activity, and her emotional valence inferred from the words she chooses have been collected, stored, and sold to parties she has never heard of. She agreed to this. She clicked the box. She knows, in a general way, what she agreed to, because the basic mechanics are no longer a secret. She keeps using the services because they work and because opting out of them would mean opting out of the infrastructure of modern life. She is not sure whether to call herself a consumer, a product, a subject, or just someone trying to get directions.
A product manager at a mid-sized advertising technology firm believes the arrangement is genuinely fair. The services his company's clients use to reach her are funded by her data, and the services she receives — navigation, communication, information retrieval, entertainment — are real and valuable. He knows they cost money to provide. He believes she knows she is paying with her attention and her data, and that she continues to choose the deal because the deal is good. He is not indifferent to privacy concerns. He would support reasonable consent requirements. He resists the framing that she is being exploited, because exploitation requires a victim who is harmed, and he does not see the harm. What he sees is a functional exchange that funds a digital ecosystem billions of people chose to participate in.
Shoshana Zuboff spent a decade studying the architecture that connects these two people. Her conclusion was that neither of their accounts is adequate to describe what is actually happening. The graduate student is not simply a consumer or a product. The product manager's company is not primarily in the advertising business. What is actually happening, Zuboff argues, is that human experience has become the raw material of a new economic logic — one that converts behavioral data into predictions about future behavior and sells those predictions to any buyer willing to pay. The graduate student does not know which buyers. She does not know what they are predicting. She cannot see the feedback loops between the predictions and the content she is shown next. The product manager's account of a fair exchange misses the part of the transaction she cannot observe.
These three people are not arguing about privacy in the ordinary sense. They are arguing about what kind of thing the data economy is, who owns the power it generates, and whether the existing legal and market frameworks are even asking the right questions.
What data rights advocates are protecting
The data rights position holds that individuals should have meaningful control over personal data — what is collected, how it is used, who it is shared with, and for how long it is retained. The European Union's General Data Protection Regulation, adopted in 2016 and enforceable since 2018, is the most consequential implementation of this framework: it establishes rights to access, correction, deletion, and portability; requires explicit and informed consent for data collection; restricts the purposes for which collected data can be used; and imposes significant penalties for violations. Data rights advocates are protecting several interconnected goods.
They are protecting meaningful consent as the foundation of legitimate exchange. Woodrow Hartzog's Privacy's Blueprint: The Battle to Control the Design of New Technologies (Harvard University Press, 2018) argues that the consent mechanisms through which people nominally agree to data collection are systematically designed to be ineffective — dark patterns, buried settings, cognitive overload from terms of service documents that would take hours to read, interface designs that make consent the easiest click and refusal the most cumbersome. Hartzog's argument is not that people should not be able to consent to data collection; it is that the current system produces a fiction of consent that insulates companies from accountability while providing users no meaningful protection. Data rights advocates are protecting the principle that consent should be substantive rather than performative — that clicking a box under conditions of manufactured confusion is not the same as genuine agreement.
They are protecting contextual integrity — the idea that information should flow in ways that match the norms of the context in which it was shared. Helen Nissenbaum's framework, developed in Privacy in Context (Stanford University Press, 2010), gives this intuition philosophical precision: the problem with surveillance capitalism is not that data flows at all but that it flows in ways that violate the expectations under which it was generated. Health data shared with a physician appropriately flows to other treating physicians. It does not appropriately flow to an insurer, an employer, or a political data broker. Location data shared with a navigation app to get driving directions does not appropriately flow to a law enforcement agency tracking a political protest. The harm is not disclosure per se but contextual betrayal — information weaponized against the person who generated it, in contexts she could not anticipate when she shared it.
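Nissenbaum's framework lends itself to a toy formalization. The sketch below is illustrative only, not drawn from the book: it treats each information flow as a tuple of sender role, recipient role, information type, and originating context, and judges the flow appropriate only if it matches a norm of that context. The roles and norms are invented for the example.

```python
# An illustrative (not Nissenbaum's own) formalization of contextual
# integrity: a flow is a tuple of actor roles, information type, and
# context, and is appropriate only if it matches a norm of that context.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str      # role of the party transmitting the information
    recipient: str   # role of the party receiving it
    info_type: str   # e.g., "health", "location"
    context: str     # the social context in which the data was shared

# Norms: the flows that the originating context's expectations permit.
# These two are stand-ins for the essay's examples.
NORMS = {
    ("physician", "treating_physician", "health", "medical_care"),
    ("user", "navigation_service", "location", "navigation"),
}

def appropriate(flow: Flow) -> bool:
    """A flow preserves contextual integrity iff it matches a norm
    of the context in which the information was generated."""
    return (flow.sender, flow.recipient, flow.info_type, flow.context) in NORMS

# Health data moving between treating physicians: appropriate.
print(appropriate(Flow("physician", "treating_physician", "health", "medical_care")))  # True
# The same health data flowing to an insurer: contextual violation.
print(appropriate(Flow("physician", "insurer", "health", "medical_care")))             # False
# Location data shared for directions, resold to a data broker: violation.
print(appropriate(Flow("user", "data_broker", "location", "navigation")))              # False
```

The point of the toy model is that appropriateness is a property of the whole tuple, not of the data: the same health record is appropriate in one flow and a violation in another.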
They are protecting individuals against the practical consequences of de-identification failure. Paul Ohm's "Broken Promises of Privacy" (UCLA Law Review, 2010) documented the systematic failure of anonymization as a privacy technique: researchers have repeatedly demonstrated that supposedly anonymized datasets can be re-identified using external information, often with remarkably little of it. The Netflix Prize dataset, "anonymized" for a public research competition, was re-identified by Narayanan and Shmatikov using only a handful of movie ratings and their approximate dates. Location data with identifying information stripped has been shown to be re-identifiable from as few as four spatio-temporal data points. Ohm's argument is that legal frameworks built on the promise that data can be safely anonymized are systematically undermined by the actual capabilities of re-identification — and that data rights protections (minimization, deletion) are the only reliable backstop against a harm that technical pseudonymization cannot prevent.
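The mechanics behind these re-identifications can be sketched in a few lines. The following is a minimal, hypothetical illustration of a linkage attack, the technique underlying the results Ohm surveys: an "anonymized" release is joined with public auxiliary data (a voter roll, say) on quasi-identifiers such as ZIP code, birth year, and sex. Every record and name below is invented.

```python
# A minimal sketch of a linkage attack: an "anonymized" release is
# joined with public auxiliary data on shared quasi-identifiers.
# All records are hypothetical; the attack logic is the point.

# "Anonymized" release: direct identifiers stripped, quasi-identifiers kept.
released = [
    {"zip": "60637", "birth_year": 1993, "sex": "F", "diagnosis": "anxiety"},
    {"zip": "60615", "birth_year": 1964, "sex": "M", "diagnosis": "diabetes"},
    {"zip": "60637", "birth_year": 1978, "sex": "F", "diagnosis": "asthma"},
]

# Public auxiliary data: names plus the same quasi-identifiers.
auxiliary = [
    {"name": "J. Rivera", "zip": "60637", "birth_year": 1993, "sex": "F"},
    {"name": "T. Okafor", "zip": "60615", "birth_year": 1964, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def link(released, auxiliary):
    """Re-identify released records whose quasi-identifiers match
    exactly one auxiliary record -- the core of a linkage attack."""
    matches = []
    for record in released:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        candidates = [
            aux for aux in auxiliary
            if tuple(aux[q] for q in QUASI_IDENTIFIERS) == key
        ]
        if len(candidates) == 1:  # a unique match is a re-identification
            matches.append((candidates[0]["name"], record["diagnosis"]))
    return matches

print(link(released, auxiliary))
# [('J. Rivera', 'anxiety'), ('T. Okafor', 'diabetes')]
```

Nothing in the released table is an identifier on its own; the combination is. That is why stripping names provides so little protection once any auxiliary dataset shares the same quasi-identifiers.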
What market defenders are protecting
The market position holds that the data economy, while imperfect, represents a voluntary and broadly beneficial exchange: users receive services of genuine value in exchange for data that funds those services. Heavy-handed regulation, on this view, risks making the exchange legally or economically unworkable — which would either eliminate the services entirely or force users to pay for them directly, producing access barriers that would harm the least wealthy users most. The market position is not indifferent to privacy; it is skeptical that regulatory intervention will improve outcomes rather than creating new harms.
They are protecting the value of services that data-funded models make universally accessible. The search engine, the navigation tool, the communication platform, the email account, the video service — these have become fundamental infrastructure for employment, education, civic participation, and social life. Before digital advertising funded this infrastructure, equivalent services were either unavailable or available only to those who could pay for premium versions. Market defenders argue that the data-for-services exchange has democratized access to capabilities that would otherwise be stratified by income — and that regulation calibrated to protect users from hypothetical data harms risks producing a two-tiered system where premium users pay for privacy while everyone else gets worse services or no services at all.
They are protecting the self-correcting capacity of competitive markets. The argument is not that every data practice is benign but that market competition gives companies ongoing incentives to respond to user preferences about privacy. Users who feel mistreated will switch to competitors; companies that develop reputations for data abuse will lose market share; new entrants can compete on privacy as a feature. This mechanism is imperfect — switching costs are real, network effects create lock-in, many privacy harms are invisible — but market defenders argue that regulatory intervention, particularly in the hands of agencies that struggle to keep pace with technological change, tends to generate compliance theater rather than genuine protection. Regulation is also most easily absorbed by large incumbents with legal and compliance departments, meaning that stringent data regulation can paradoxically entrench the platforms it is ostensibly designed to constrain.
They are protecting personalization as a genuine good, not merely a technique for manipulation. Targeted advertising funds content that would not otherwise exist commercially; recommendation algorithms surface content that users actually want to see; personalized services improve over time as they learn user preferences. The market defender's argument is not that personalization cannot be misused — it can — but that critics of surveillance capitalism routinely conflate the misuse of personalization with personalization itself, and that a regulatory framework that treats all behavioral data use as suspect will eliminate genuinely valuable applications alongside genuinely harmful ones. The question is not whether to permit personalization but how to distinguish its legitimate from its manipulative forms — a distinction that requires precision rather than prohibition.
What behavioral modification critics are protecting
Shoshana Zuboff's The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019) introduces a critique that does not fit comfortably in either the data rights or the market framework. Zuboff's argument is that surveillance capitalism is not primarily about data collection — it is about what data collection enables: a new form of power, the power to predict and modify behavior at scale. The behavioral futures market sells not just predictions about what people will do but interventions that make those predictions come true. Data rights frameworks address the collection end; Zuboff's critique is aimed at the modification end, which data rights legislation largely leaves untouched.
They are protecting cognitive liberty — the right not to have one's choices shaped by invisible forces exploiting information asymmetry. The asymmetry Zuboff documents is not just informational (the platform knows things about you that you do not know it knows) but operational: the platform can use that information to serve you content, shape your interface, and time its interventions to produce behavioral outcomes that you do not know you are being steered toward. This is not ordinary advertising, where the persuasive intent is visible and the audience can consciously accept or reject the pitch. It is behavioral modification at the level of choice architecture itself — manipulation of the conditions under which choices are made rather than of the choices directly. The data rights advocate's demand for consent addresses the collection; it does not touch the modification apparatus that operates on the other side of the collection.
They are protecting epistemic autonomy at the population scale. Frank Pasquale's The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015) traces what happens when opaque algorithmic systems control not just advertising but credit, employment, insurance, and the information people receive about the world. The opacity is not incidental — it is strategic: companies cannot be held accountable for systems no one can audit. The behavioral modification critic's position is that the harm is not primarily individual but collective: when the same invisible systems shape the information diets of hundreds of millions of people simultaneously, they shape what those people know, believe, and feel about the world. This is a political problem that individual data rights cannot address — because the modification happens at the population level, not the individual data record level.
They are protecting "the right to a future tense" — Zuboff's phrase for the condition in which one's future behavior is not already predicted, priced, and sold before one has acted. The behavioral futures market requires certainty: the product being sold to advertisers is a reliable prediction that user X will do action Y. Achieving that certainty requires eliminating the behavioral uncertainty that comes from genuine autonomy. Behavioral modification critics argue that surveillance capitalism is structurally incentivized to reduce human freedom — not because it wants to control people in a totalitarian sense but because prediction accuracy is its product and freedom is what makes prediction hard. The market defender's frame of "voluntary exchange" does not reach this level of the argument: the issue is not whether users consent to data collection but whether the economic logic of behavioral prediction is compatible with genuine individual freedom over time.
What structural reformers are protecting
The structural position holds that the problem with surveillance capitalism is not data practices per se but market structure: a handful of platforms have achieved dominance through network effects and data accumulation that has made genuine competition nearly impossible. The solution, on this view, is not primarily data rights legislation or behavioral modification restrictions but antitrust enforcement, interoperability mandates, data portability requirements, and the treatment of essential platforms as regulated utilities. The behavioral modification critic diagnoses a problem with what surveillance capitalism does to individuals; the structural reformer diagnoses a problem with the market conditions that allow it to happen without accountability.
They are protecting competitive markets as the precondition for accountability. Lina Khan's "Amazon's Antitrust Paradox" (Yale Law Journal, 2017) — written before her appointment as FTC chair — argued that antitrust law, built around the consumer welfare standard's focus on price and output, had systematically failed to address the new forms of market power that platform businesses accumulate. A platform that provides services below cost to foreclose competitors, or that uses its marketplace position to advantage its own products over third-party sellers, harms competition even if consumers observe no price increase. Khan's argument extends to data: a platform that accumulates proprietary behavioral data at scale has a competitive moat that new entrants cannot cross — not because their product is better but because they lack the data. Data accumulation as competitive foreclosure is an antitrust problem, not a privacy problem, and the regulatory tool that addresses it is not a consent requirement but structural intervention in market concentration.
They are protecting interoperability as the mechanism of genuine user choice. The market defender's argument that users can switch to better services rests on switching being practically possible. On social networks, the barrier to switching is not the service itself but the social graph: your friends are there, and moving means leaving them behind. Tim Wu's The Attention Merchants: The Epic Scramble to Get Inside Our Heads (Knopf, 2016) traces how the advertising business model has, throughout its history, created recurring cycles of audience capture followed by degradation followed by exodus — and argues that the current cycle is unusually difficult to break because network effects create lock-in that earlier attention merchants, from magazines to television stations, never had. Structural reformers argue that interoperability requirements — mandating that platforms allow users to communicate across networks, as email does — would restore the conditions for competition to function as a discipline on data practices. You could switch without leaving your social graph behind.
They are protecting democratic accountability over infrastructure that has become essential. Ari Ezra Waldman's Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power (Cambridge University Press, 2021), drawing on interviews with privacy engineers and policy teams inside major technology companies, documents a systematic pattern: privacy compliance is treated as a legal risk to be managed rather than a user interest to be served; privacy teams operate under conditions that make genuine protection structurally difficult to achieve; and the gap between stated privacy commitments and actual practice is not primarily the result of bad intentions but of an economic model in which privacy protection conflicts with the core business. Waldman's argument is that the problem is structural — it cannot be solved by better corporate ethics or stronger consent requirements — and that the only durable solution is democratic oversight of essential infrastructure operating in the public interest, not primarily in the interest of shareholders whose returns depend on behavioral data extraction.
Where the real disagreement lives
The surveillance capitalism debate is structured by several genuine disagreements that procedural or regulatory solutions often fail to reach.
The consent fiction problem. All four positions agree that "click Accept on forty-three pages of terms of service" is not meaningful consent. They disagree about what follows. Data rights advocates conclude that better consent mechanisms — shorter disclosures, granular opt-outs, periodic renewal — can repair the fiction. Behavioral modification critics argue that consent is insufficient even when it is genuine: consenting to data collection does not consent to the behavioral modification apparatus that operates on the other side of it. Structural reformers argue that consent is the wrong frame entirely — because the structural conditions that make surveillance capitalism possible cannot be addressed at the individual transaction level. Market defenders argue that consent mechanisms, if simplified and meaningful, are adequate. The argument is not about whether consent matters; it is about what consent is capable of accomplishing in this context.
The product question. The four positions cannot fully agree on what surveillance capitalism is selling. Is the product the services users receive — search, navigation, communication — with data collection as the price of providing them? Is the product the user's attention, sold to advertisers? Is the product behavioral prediction futures, sold to anyone who will pay? Is the product market foreclosure — the accumulation of competitive advantages that make rivals unable to enter? Each answer leads to different diagnoses and different regulatory responses. Consent requirements are designed for a product-as-services frame. Behavioral modification restrictions address the product-as-prediction frame. Antitrust enforcement addresses the product-as-market-control frame. None of these responses addresses the others' diagnosis.
The level-of-analysis problem. Data rights operate at the individual level: your data, your consent, your access. Behavioral modification concerns operate at the population level: what happens when invisible systems shape the information diets of hundreds of millions of people simultaneously. Antitrust concerns operate at the market level: what happens to competition when data is a moat. These are not competing answers to the same question. They are answers to different questions operating at different levels of abstraction. A policy framework that addresses individual data rights while ignoring population-level modification and market structure is not wrong — it is solving one problem while leaving the others intact.
The comparison problem. Market defenders compare a regulated version of the data economy to an unregulated one and ask who loses access to services. Behavioral modification critics compare the data economy to a counterfactual in which cognitive liberty is legally protected and ask how different human choice-making looks in that world. Structural reformers compare concentrated platform power to a world with genuine competition and ask whether the current arrangement was the necessary consequence of technological development or a contingent outcome of regulatory failure. These comparisons are not contradictory, but they are not measuring the same thing. Deciding which comparison is most important is not a technical judgment. It reflects a prior commitment about which dimension of the problem — individual rights, collective cognition, or market structure — is the primary site of the harm.
See also
- Who gets to decide? — the framing essay for the private-authority problem beneath this map: when firms can watch, predict, and shape behavior at scale, what kind of governing power have they acquired, and what makes that power legitimate, contestable, or politically accountable?
- Who bears the cost? — the companion framing essay for the distributional side of surveillance capitalism: the model looks frictionless because users do not pay cash, but the real costs are offloaded into captured attention, degraded privacy, discriminatory profiling, weaker bargaining power, and public institutions that inherit the harms after platforms monetize them.
- What is a life worth? — the framing essay for the dignity conflict beneath behavioral prediction: if a person's time, attention, and future choices can be treated mainly as extractable inputs for optimization, what happens to the idea that human interior life deserves protection beyond whatever can be priced and targeted?
- Digital Privacy and Surveillance: What Each Position Is Protecting — the government surveillance map: state collection of digital data, encryption policy, national security versus civil liberties. Distinct from this map in its subject (state actors, law enforcement, intelligence agencies rather than private commercial platforms) but connected through the regulatory arbitrage problem: state agencies can purchase commercial data rather than subpoenaing it, meaning the commercial and government surveillance ecosystems are not cleanly separable.
- Social Media and Democracy: What Each Position Is Protecting — the behavioral modification critique of surveillance capitalism is closely linked to the political concerns about algorithmic content curation; what platforms do with behavioral data intersects with how democratic discourse is organized; the filter bubble and radicalization debates are downstream of the same data extraction model that surveillance capitalism critics diagnose.
- AI and Labor: What Both Sides Are Protecting — the data infrastructure of surveillance capitalism is the training substrate of AI systems; the question of who owns behavioral data produced by users is closely connected to the question of who owns the AI trained on it, and the labor question of who captures the value generated by that training.
- Technology and Attention: What Both Sides Are Protecting — Tim Wu's attention merchant framing connects to the surveillance capitalism debate at its core: the business model is attention capture, and the data collection is the mechanism for optimizing that capture; the technology and attention map addresses what this model costs individual users in terms of focus, time, and psychological wellbeing.
- Predictive Policing and Surveillance Technology: What Each Position Is Protecting — the law enforcement application of the same data infrastructure: police departments purchase behavioral location data from commercial vendors rather than subpoenaing it, the same facial recognition systems trained on commercial photo databases are deployed in law enforcement contexts, and the algorithmic accountability critique that applies to COMPAS risk scoring is structurally identical to the critique of behavioral futures markets in commercial surveillance. The commercial and government surveillance ecosystems are not cleanly separable, and the predictive policing map is the clearest instance of that overlap.
- AI Governance: What Each Position Is Protecting — the institutional question upstream of both this map and the predictive policing map: who decides how AI and data systems are developed and overseen, under what accountability constraints, and whether the governance frameworks being built by wealthy nations address the distributional and structural concerns that the accountability and global governance positions in that map name. The surveillance capitalism debate is one of the inputs to AI governance; the governance outcome will determine whether the structural critique of data extraction has any institutional purchase.
- Platform Accountability and Content Moderation: What Each Position Is Protecting — the speech governance complement to this map: where surveillance capitalism asks what platforms do with behavioral data, platform accountability asks who decides what speech is permissible; the two debates are structurally linked because the recommendation systems at the center of the algorithmic governance position in the content moderation map are the same engagement-optimization systems that the behavioral modification critique of surveillance capitalism diagnoses as the primary mechanism of harm.
- Childhood and Technology: What Each Position Is Protecting — the sharpest consumer-protection application of this map's central argument: children are among the most profitable users for behavioral data extraction, they are developmentally unable to assess or resist the engagement-optimization techniques designed for them, and the children's digital rights debate is structurally a consumer protection intervention into a surveillance-capitalist system that has not priced in the developmental costs it externalizes.
- Algorithmic Hiring and Fairness: What Each Position Is Protecting — the employment-context application of the behavioral data economy: the same psychographic profiling, personality assessment, and automated scoring that surveillance capitalism enables for advertising is now applied to employment screening. The data collection infrastructure that Zuboff diagnoses as a tool for predicting and modifying behavior at scale is also the infrastructure for predicting who will be a "good hire" — and the accountability questions about who the model is optimizing for, and by whose definition, are structurally identical.
- Algorithmic Governance and Automated Decisions: What Each Position Is Protecting — the data extraction infrastructure that surveillance capitalism analyzes is the upstream condition for the automated decision systems that algorithmic governance maps: COMPAS-type recidivism predictions, benefits eligibility determinations, and credit scoring all depend on the behavioral data economy Zuboff describes. The governance gap is structural: the populations most subject to automated consequential decisions — bail, benefits, child welfare — are the most extensively surveilled, while the populations that designed both layers are subject to neither. What surveillance capitalism names as the extraction model, algorithmic governance names as the deployment model; they are the same system at different points in its operation.
- Digital Identity and Biometrics: What Each Position Is Protecting — the physical layer of the surveillance capitalism model: biometric identifiers — faces, fingerprints, gait patterns — are behavioral data that cannot be changed or revoked, making them the most consequential form of data extraction. The convergence of commercial behavioral data with persistent biometric identity infrastructure makes de-anonymization trivial and transforms every camera in the commercial environment into a surveillance input; the identity map addresses the governance implications of this convergence, where the infrastructure that proves who you are also functions as the infrastructure that tracks everywhere you go and everything you do.
Further reading
- Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019) — the foundational text for the behavioral modification critique: Zuboff documents how Google developed the insight that user behavioral data could be extracted and sold to advertisers as a byproduct of providing search — and how this logic was then generalized across the internet economy. Her concept of "behavioral surplus" (data beyond what's needed to improve the service, extracted and sold) and "behavioral futures markets" (the product being sold to advertisers is a reliable prediction that a user will click, buy, or vote in a specific way) reframes the data economy as a new form of power rather than a new form of advertising. Essential starting point for understanding why the behavioral modification critic's position is distinct from the data rights position.
- Lina Khan, "Amazon's Antitrust Paradox", Yale Law Journal 126(3): 710–805, 2017 — the article that established the structural antitrust critique of platform businesses: Khan argues that the consumer welfare standard, which measures antitrust harm primarily by price effects, systematically fails to capture how platform businesses accumulate market power by providing services below cost, using their platform positions to foreclose rivals, and accumulating proprietary data that functions as a competitive moat. The framework applies directly to surveillance capitalism: data accumulation is a form of market foreclosure that antitrust law, under the consumer welfare standard, cannot see.
- Woodrow Hartzog, Privacy's Blueprint: The Battle to Control the Design of New Technologies (Harvard University Press, 2018) — the case that privacy law has focused on the wrong thing: instead of regulating data flows after collection, regulation should target the design of systems that engineer away meaningful consent. Dark patterns, interface designs that make data sharing the default and privacy the buried exception, cognitive overload from terms of service — these are design choices, and regulation that targets design rather than disclosure can address the consent fiction at its source rather than after the fact.
- Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015) — traces the opacity of the algorithmic systems that now govern credit scores, search results, social media feeds, and employment decisions; Pasquale's central argument is that opacity is strategic rather than incidental — it insulates companies from accountability, prevents users from understanding why they received the decision they received, and allows the same system to behave differently toward different populations without detection. The accountability problem is not only about what data is collected but about what can be audited and contested once collected.
- Tim Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads (Knopf, 2016) — a historical account of the attention economy from the first newspaper advertisers through radio, television, and the internet: Wu traces recurring cycles of audience capture, attention extraction, degradation, and resistance, arguing that the current cycle is historically distinctive because the platforms that capture attention also control the social infrastructure through which people maintain their relationships and navigate their lives. The attention merchant model connects the economic logic of surveillance capitalism to the individual experience of the technology and attention debate: the data extraction and the attention capture are two aspects of the same business model.
- Paul Ohm, "Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization", UCLA Law Review 57(6): 1701–1777, 2010 — the systematic demolition of anonymization as a reliable privacy technique: Ohm reviews the computer science literature showing that supposedly anonymized datasets are regularly re-identified using modest amounts of external information; his conclusion is that the legal and policy assumption that data can be "safely" shared once de-identified is unsupported by the technical evidence, and that data rights protections must address the data itself rather than relying on pseudonymization as a substitute.
- Ari Ezra Waldman, Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power (Cambridge University Press, 2021) — based on over one hundred interviews with privacy professionals inside technology companies; Waldman documents how privacy compliance functions inside corporations: privacy teams are understaffed, operate under conditions that make genuine protection difficult, and are systematically outmaneuvered by business units whose interests conflict with theirs. His argument is that the gap between stated privacy commitments and actual practice is structural — a consequence of an economic model in which data extraction and privacy protection are in tension — and cannot be closed by regulatory frameworks that rely on corporate self-governance as an enforcement mechanism.
- Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford University Press, 2010) — the philosophical account of "contextual integrity" as the organizing principle of privacy: information is not inherently private or public but is appropriately or inappropriately shared depending on whether it flows in ways that match the norms of the context in which it was originally disclosed. Nissenbaum's framework provides precision for the intuition that commercial data flows are wrong even when technically "consented" to: the consent was given in one context (using a navigation app to get directions) and the data is used in another (tracking attendance at a political rally). The framework is widely used in both legal scholarship and technology policy.