Sensemaking for a plural world

Perspective Map

Digital Privacy and Surveillance: What Each Position Is Protecting

March 2026

A woman in her early forties reads the terms of service for a new health tracking app — or rather, she begins to read them, realizes the document is forty-three pages long, and taps Accept. She is not naive. She is a nurse practitioner who understands that data about her sleep, her heart rate, her menstrual cycle, and her location will be collected, stored, probably sold to parties she cannot identify for purposes she cannot anticipate, and retained for a period she cannot determine. She taps Accept because the alternative is to opt out of a tool that might actually help her patients understand their own bodies better. The asymmetry between what she is giving and what she is getting is not hidden from her. She has simply run out of ways to negotiate it.

A retired police detective in the same city has watched fentanyl distribute itself through his neighborhood for the better part of a decade. He knows the investigators who are trying to map the supply network. He knows that the difference between a successful prosecution and a dead end often comes down to whether the government can access the communication records of people who conduct their business on encrypted platforms. He does not think of himself as an enemy of privacy. He thinks of himself as someone who has watched people die because the surveillance tools available to law enforcement have not kept pace with the tools available to people who move drugs.

A civil liberties attorney who argued before the Supreme Court three years ago knows that the digital infrastructure built for tracking criminals does not stay confined to tracking criminals. She has seen the same cell-tower location data used to map protest movements. She has seen national security authorities define "material support for terrorism" in ways that would cover a wire transfer to a mosque whose imam later turned out to have radical sympathies. She believes the surveillance apparatus, whatever its intent, functions as a tool for controlling political life, and that this function becomes more visible when the political winds shift.

A technology policy researcher has spent years trying to explain to legislative committees that the most consequential surveillance in contemporary life is not being conducted by governments. It is being conducted by advertising technology companies whose business model depends on knowing, with increasing precision, what people want before they know they want it — and on selling that predictive capacity to the highest bidder. The state can subpoena your records. But the data broker already has them, and no one had to subpoena anything.

All four of these people are responding to something real. The debate about digital privacy has become entangled in partisanship, in tech-industry self-interest, and in the tendency to treat every hard problem as either obviously solved by more encryption or obviously solved by giving law enforcement whatever it asks for. This is an attempt to hear what each position is actually protecting.

What the privacy-as-control position is protecting

People who center individual control over personal data are not, in the main, protecting the ability to hide wrongdoing — though the worst-case versions of this argument do slip into treating any law enforcement interest as inherently suspect. At its core, the privacy-as-control position is protecting something that any serious political philosophy has to take seriously.

They're protecting the conditions for autonomous personhood. Samuel Warren and Louis Brandeis, writing in 1890 as photography and the penny press began to make public exposure of private life commercially profitable, described privacy as "the right to be let alone" — a right that they argued followed not from any statute but from the general principle of personal inviolability. The digital version of that argument runs: if everything you do, say, and feel generates data that is collected, analyzed, and acted upon by parties you cannot identify, you are not fully the author of your own life. Your choices occur within an information environment that has been shaped, in part, by what prior data revealed about you. The person who chooses the health app, the news feed, the route home, is choosing within a frame that prior surveillance helped construct. The loss is not primarily what other people know about you; it is the gradual attrition of the space within which choice occurs.

They're protecting contextual integrity. The philosopher Helen Nissenbaum has argued that privacy is best understood not as secrecy but as appropriate information flow — privacy is respected when information flows in ways that match the norms of the context in which it was originally shared. Medical information shared with a doctor appropriately flows to other treating physicians; it does not appropriately flow to an employer or an insurer. Location data shared with a mapping app to get from the airport to a hotel does not appropriately flow to a data broker who sells it to a firm trying to infer your income, your health status, or your political sympathies. What the digital surveillance economy does, on this account, is systematically strip information of its contextual meaning and reassemble it in ways that violate the expectations under which it was disclosed.
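
To make the flow-not-secrecy idea concrete, here is a minimal sketch of contextual integrity as a policy check rather than a hiding test. It is a toy illustration, not Nissenbaum's formal framework, and every name in it (Flow, CONTEXT_NORMS, is_appropriate, the example roles) is hypothetical: each context declares which flows of which information between which roles it treats as normal, and a flow counts as appropriate only if the originating context's norms include it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    context: str     # context in which the information was originally shared
    info_type: str   # e.g. "heart_rate", "location"
    sender: str      # role of the party passing the information along
    recipient: str   # role of the party receiving it

# Per-context norms: which (info_type, sender, recipient) flows are expected.
# Illustrative entries only, not a real policy.
CONTEXT_NORMS = {
    "medical": {
        ("heart_rate", "patient", "treating_physician"),
        ("heart_rate", "treating_physician", "specialist"),
    },
    "navigation": {
        ("location", "user", "mapping_app"),
    },
}

def is_appropriate(flow: Flow) -> bool:
    """A flow is appropriate only if the originating context declares a matching norm."""
    norms = CONTEXT_NORMS.get(flow.context, set())
    return (flow.info_type, flow.sender, flow.recipient) in norms

# Shared with a doctor, then passed to another treating physician: appropriate.
print(is_appropriate(Flow("medical", "heart_rate", "patient", "treating_physician")))  # True
# The same datum routed to an employer violates the originating context's norms.
print(is_appropriate(Flow("medical", "heart_rate", "patient", "employer")))            # False
# Location shared for navigation, resold onward to a data broker: no matching norm.
print(is_appropriate(Flow("navigation", "location", "mapping_app", "data_broker")))    # False
```

The sketch makes in code the point the paragraph makes in prose: the same piece of information can be appropriately shared in one flow and invasively shared in another, without any question of secrecy entering the judgment.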

They're protecting the social value of opacity. A person who knows they are being watched behaves differently than a person who does not. This is not a hypothesis; it is well documented in social psychology and is the underlying logic of the panopticon that Michel Foucault analyzed. In a society of pervasive surveillance, this dynamic operates not as a specific deterrent — "I won't do this specific thing because someone is watching" — but as a general conditioning of what feels like acceptable behavior. The writer who self-censors not because she has been told to but because she assumes her communications are logged is experiencing the chilling effect that privacy advocates identify as one of surveillance's deepest costs. The cost is not only to the individual but to the society whose intellectual life depends on people thinking thoughts that might not be approved.

What the security-and-safety position is protecting

People who defend surveillance tools for law enforcement and national security purposes are not, in the main, indifferent to civil liberties — though the worst-case versions of this argument do treat any privacy interest as a minor inconvenience to be brushed aside. At its core, the security position is protecting something that also requires serious engagement.

They're protecting the capacity of public institutions to perform their core functions. Law enforcement's ability to investigate crime has always depended on access to information — witness accounts, documentary records, physical evidence. The digital transition has not changed this logic; it has changed the location of the records. The communication that used to happen in a parking lot now happens on Signal. The financial transaction that used to happen in cash now happens in cryptocurrency. If the law's warrant requirements are calibrated for the technological infrastructure of 1970, they create a systematic gap between what investigators can access and what they need to establish culpability. This gap is not neutral; it advantages people who commit serious crimes while providing little protection to people who use encryption merely to conduct normal, legal lives.

They're protecting a non-absolutist reading of the Fourth Amendment. The Supreme Court has never held that privacy is absolute. It has held that the government's interest in public safety must be balanced against the individual's reasonable expectation of privacy, and that the balance is calibrated through warrant requirements and probable-cause standards. The argument from the security position is that these mechanisms — judicial oversight, probable cause, warrant requirements — are the appropriate place to conduct the balancing, and that declaring categories of information categorically off-limits (through strong encryption with no lawful access mechanism, for example) forecloses the balancing rather than performing it. The 2018 Supreme Court decision in Carpenter v. United States — which held that the government must obtain a warrant to access historical cell phone location records — reflects this logic: not that location data is inaccessible, but that access requires judicial authorization, which is precisely what the security position regards as the appropriate procedure.

They're protecting the public interest in outcomes that privacy absolutism cannot fully account for. The missing child whose location is identified through cell tower data. The fentanyl supply network whose logistics depend on encrypted communications. The terrorism plot that is disrupted because metadata revealed a pattern of contact that could not otherwise be seen. These are not hypothetical edge cases; they are the recurring justifications that law enforcement agencies cite, and their empirical force depends on whether one believes the claimed outcomes. The security argument is not that privacy doesn't matter; it is that privacy is one value among several, and that when privacy protection has direct costs in public safety, those costs need to be counted.

What the surveillance-capitalism critique is protecting

The critique of commercial data extraction is distinct from both the privacy-as-control position and the security debate, because its antagonist is not the state but the market. This position has been most thoroughly developed by the Harvard Business School scholar Shoshana Zuboff, whose 2019 book The Age of Surveillance Capitalism argues that a new economic logic has emerged in which private human experience is the raw material from which behavioral prediction products are manufactured and sold.

The surveillance-capitalism critique is protecting a non-commercial understanding of what human experience is for. Zuboff's argument is not simply that tech companies collect too much data; it is that they have pioneered a new form of capital accumulation that treats human behavior as a natural resource to be extracted, refined, and sold without the consent of the people whose behavior is being harvested. Users are not the customers of Google or Meta; they are the ore. The actual customers are businesses that want to influence behavior — to sell products, shift votes, change minds. The asymmetry is structural: the platforms know vastly more about users than users know about themselves, and they use that asymmetry to produce behavioral predictions that are sold into markets the user cannot see, influence, or exit.

The surveillance-capitalism critique is protecting the meaningfulness of consent. The standard defense of data collection is that users consent to it — they read the terms of service and tap Accept. The critique responds that this is not consent in any philosophically serious sense. The terms are non-negotiable, written in language designed to obscure rather than disclose, and the consequences of refusal are exclusion from services that have become, for practical purposes, prerequisites of social and economic participation. A person who cannot use Google, Facebook, or smartphone apps because they are unwilling to surrender their behavioral data is not exercising a realistic choice. The consent obtained under these conditions is formal, not substantive.

The surveillance-capitalism critique is protecting the political preconditions of self-governance. When prediction products can be used to microtarget voters with content calibrated to their individual psychological profiles, the information environment in which democratic deliberation occurs is no longer shared. Citizens are not encountering a common set of facts and arguments and forming views in response to them; they are encountering individualized streams of information shaped by what prior behavioral data suggests will be most effective at producing a desired response. The Cambridge Analytica episode — whatever its actual electoral effects — illustrated that the infrastructure of behavioral manipulation is available for political use, and that this use is difficult to detect or regulate.

What the democratic-accountability position is protecting

The democratic-accountability argument focuses less on whether surveillance is conducted and more on whether it is subject to meaningful oversight. This position is not opposed to surveillance as such; it is opposed to surveillance that operates outside of democratic control, without transparency, and without remedies for abuse.

The democratic-accountability position is protecting the right of civil society to function without state interference. The history of intelligence agencies in democratic societies is a history of surveillance tools built for one purpose being repurposed for another. The FBI's COINTELPRO program surveilled and disrupted civil rights organizations and anti-war groups on the theory that they posed national security risks. The NSA programs revealed by Edward Snowden in 2013 — bulk collection of telephone metadata, interception of international communications, tapping of fiber-optic cables — were built after September 11 to detect terrorism and were operated with minimal judicial or congressional oversight. The argument from democratic accountability is that surveillance powers reliably migrate from their stated purposes, and that the appropriate response is institutional — transparency, oversight, sunset clauses, judicial review — rather than trust in the current administration's restraint.

The democratic-accountability position is protecting the chilling effect as a concrete harm, not a theoretical one. After the Snowden revelations, there were measurable declines in searches for certain topics on search engines and in visits to Wikipedia pages covering terrorism, firearms, and drug use — not because the searchers intended to do anything illegal, but because they were aware that their searches were potentially logged and could be associated with their identities. The legal scholar Daniel Solove has documented how even people with nothing to hide have rational reasons to avoid creating records of curiosity, association, and exploration when those records may be reviewed by parties with interests they cannot anticipate. A democracy in which people self-censor their reading and their associations is a democracy that has already conceded something important.

The democratic-accountability position is protecting the principle that surveillance should be visible and contestable. The FISA court — the specialized federal court that approves national security surveillance requests — received 33,942 applications between 1979 and 2012 and denied eleven. A court that approves 99.97 percent of government requests is not providing meaningful adversarial review; it is providing legitimating cover. The democratic-accountability argument is that if surveillance is to be legitimate in a constitutional democracy, the legal framework authorizing it needs to be publicly known, the mechanisms of oversight need to have real teeth, and the people being surveilled need to have some avenue of legal recourse when the surveillance exceeds its authorized scope. Secret law — legal interpretations of surveillance authority that are themselves classified — is not compatible with a constitutional order in which law derives its force from public acceptance.

Where the real disagreements live

The four positions described here are not simply arranged on a single axis from "pro-surveillance" to "anti-surveillance." They disagree about different things, and the disagreements are layered.

Who is the primary threat? The privacy-as-control position is most concerned with how data is used to shape individual behavior and choices. The security position is most concerned with criminal and terrorist actors who exploit technological opacity. The surveillance-capitalism critique is most concerned with commercial platforms. The democratic-accountability position is most concerned with state overreach and its consequences for political freedom. These different threat models lead to different policies: end-to-end encryption helps with some threats and worsens others. GDPR-style data minimization addresses commercial surveillance but has little effect on state surveillance. Warrant requirements address state access but not what data brokers already have. The positions often talk past each other because they are worried about different adversaries.

Is the consent model salvageable? The privacy debate is substantially a debate about whether informed consent — users knowingly agreeing to data collection in exchange for services — is a workable framework for governing the data economy. The surveillance-capitalism critique says it is not, because the power asymmetry makes genuine consent impossible. The commercial data industry says it is, provided that consent is meaningful and revocable. European data protection law, embodied in the General Data Protection Regulation (GDPR), has tried to make consent more meaningful through requirements of specificity, active affirmation, and the right to erasure. Critics of GDPR argue that it has primarily produced consent fatigue — the proliferation of cookie banners that everyone dismisses without reading — rather than actual control. This is an empirical question with contested evidence, not a question that philosophy alone can settle.

What does the Third Party Doctrine mean in the digital age? The legal framework that governed government access to records for decades held that information voluntarily disclosed to a third party — a bank, a phone company — carries no Fourth Amendment protection, because by disclosing it the person assumed the risk that the third party would pass it along, including to the government. In the digital environment, nearly every interaction involves a third party: the cell carrier knows your location, the email provider has your correspondence, the search engine has your curiosity, the payment processor has your purchases. If the Third Party Doctrine applies without modification, digital life generates no Fourth Amendment protection at all. Carpenter v. United States (2018) began to modify this doctrine, holding that historical cell site location records require a warrant — but the Court explicitly declined to resolve where the modified doctrine ends. The question of what digital records the government may access without judicial authorization remains substantially unsettled.

Can surveillance infrastructure be contained to its stated purpose? This is the most fundamental disagreement between the security position and the democratic-accountability position. The security argument is that properly authorized surveillance, with appropriate oversight, can be targeted and controlled. The democratic-accountability argument is that surveillance infrastructure tends to expand — in scope, in the categories of people it targets, and in the purposes for which it is used — and that the appropriate response is institutional design that constrains this tendency, not trust in current custodians' intentions. Both positions appeal to history, and the historical record is genuinely mixed: the expansion of mass surveillance after September 11 supports the skeptical view; the fact that FISA reform after the Church Committee hearings produced meaningful constraints for two decades supports the optimistic view. The disagreement is partly about which history to weight.

What sensemaking surfaces

Holding this map whole, several things become visible that the public debate tends to obscure.

The privacy debate is several debates being conducted in the same vocabulary. "Privacy" means something different depending on whether the concern is government surveillance, commercial data extraction, social exposure, or individual informational control. These overlap but are not the same problem. A solution to one may do nothing about the others. Strong encryption protects against government access but not against the data broker who already has what the government wants. GDPR addresses the commercial data economy but has no purchase on national security surveillance. Warrant requirements constrain law enforcement but not what platforms do with behavioral data before law enforcement ever asks. Treating "privacy" as a single issue generates policies that solve one problem while leaving the others intact.

The "nothing to hide" argument is not the real argument being made by thoughtful defenders of surveillance. The actual argument is not that everyone should be visible to everyone; it is that the government should be able, through proper judicial authorization, to access specific information about specific people it has probable cause to investigate. This is a much narrower claim than "nothing to hide," and it is a claim that the civil liberties critique needs to actually engage rather than treating the worst-case version of the security argument as the whole of it. Solove's response — that the harm is not exposure per se, but the uses to which information may be put by a party whose interests are not aligned with yours — is more responsive to the actual argument than simple assertions about private citizens having nothing to fear.

The commercial surveillance economy has been largely absent from the political debate about privacy, which has focused on state surveillance because that is the frame that civil liberties law was built to address. But the scale of commercial data collection now dwarfs anything the state could accomplish through direct surveillance — and state agencies can purchase commercial data rather than subpoenaing it. This creates a regulatory arbitrage where the constitutional limits on government access can be circumvented by buying what cannot be seized. The implication is that any serious approach to digital privacy has to address the commercial ecosystem, not only the government one.

The debate has a structural asymmetry problem that none of the positions fully resolves. The people who bear the risks of surveillance — members of minority communities, political dissidents, people with health conditions they would prefer not to disclose, immigrants, people whose sexual orientation or gender identity might be used against them — are often not the people making decisions about how surveillance is conducted and to whom it applies. The people who make those decisions — legislators, judges, technology executives — are often situated such that surveillance presents minimal risk to them personally. This asymmetry is a persistent feature of the governance problem that any principled privacy framework has to reckon with rather than assume away.

Patterns at work in this piece

Several of the recurring patterns named in What sensemaking has taught Ripple so far appear in this map.

  • Who bears the costs. The costs of surveillance — chilling effects, misidentification, political targeting, behavioral manipulation — fall unevenly. The people most likely to experience surveillance as a threat are those whose identities, associations, or beliefs place them at risk from the parties conducting the surveillance. The cost-benefit calculation looks very different from inside that group than from outside it.
  • Compared to what. The security argument compares a world with surveillance tools to a world without them and centers the crimes that go unsolved. The privacy argument compares a world with surveillance infrastructure to a world without it and centers the political uses to which that infrastructure has historically been put. Neither comparison is wrong; they are emphasizing different parts of the same history.
  • The consent fiction. The gap between formal consent — tapping Accept — and substantive consent — genuinely understanding and choosing — runs through this debate in a way that mirrors similar gaps in the informed consent debates in bioethics and contract law. The question is not whether consent was obtained but whether it was meaningful given the power asymmetry and the cognitive impossibility of actually evaluating forty-three pages of terms of service.
  • Infrastructure tends to expand. Surveillance tools built for specific purposes tend to be applied to broader ones over time. This is not a conspiracy theory; it is a pattern that recurs across technologies and across political systems. The people who design the infrastructure and the people who eventually use it are often not the same people, operating under the same norms, facing the same constraints.

Further reading

  • Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019) — the foundational text for the surveillance-capitalism critique: behavioral data as the raw material of a new economic logic, behavioral prediction as the product, and the elimination of the "right to a future tense" as the cost. Dense and polemical in places, but the core argument — that the commercial data economy is not about service provision but about behavioral futures markets — is the necessary starting point for any serious engagement with commercial surveillance.
  • Daniel J. Solove, Nothing to Hide: The False Tradeoff between Privacy and Security (Yale University Press, 2011) — the most systematic dismantling of the "nothing to hide" argument: the harm of surveillance is not exposure per se but the bureaucratic power to aggregate, contextualize, and act on information in ways the subject cannot anticipate or contest. Solove's taxonomy of privacy harms — information collection, processing, dissemination, and invasion — gives the privacy argument the conceptual precision it usually lacks.
  • Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford University Press, 2010) — the philosophical case for "contextual integrity" as the organizing principle of privacy: information is appropriately shared when it flows in ways that match the norms of the context in which it was disclosed. Health data shared with physicians appropriately flows to other treating physicians; it does not appropriately flow to employers. The framework explains why the same data can be both legitimately shared and invasively used depending on context.
  • Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (W. W. Norton, 2015) — the security technologist's account of the surveillance ecosystem, written after the Snowden revelations: both government and corporate surveillance are larger than publicly known, the interests of the two frequently align, and the solutions involve technical, legal, and political reform rather than any single intervention. Schneier is one of the few voices in this debate who takes both the security argument and the privacy argument seriously on their own terms.
  • Glenn Greenwald, No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State (Metropolitan Books, 2014) — the reporter's account of the Snowden documents: bulk collection of telephone metadata, interception of international communications, infiltration of tech company infrastructure. The primary-source document for the democratic-accountability argument. Greenwald's framing is adversarial, but the underlying documents are the evidentiary basis for the claim that post-September 11 surveillance operated with minimal oversight and expanded far beyond its stated counterterrorism purpose.
  • Samuel D. Warren and Louis D. Brandeis, "The Right to Privacy", Harvard Law Review, vol. 4, no. 5 (1890) — the foundational common-law argument for privacy as "the right to be let alone," written in response to the commercialization of private life by photography and the penny press. The parallels to the current moment — new technologies enabling unprecedented collection of personal information for commercial gain — are not incidental. The essay established privacy as a legally cognizable interest rather than merely a sentiment, and its framework has structured American privacy law for 135 years.
  • Carpenter v. United States, 585 U.S. 296 (2018) — the Supreme Court's most consequential digital privacy ruling: historical cell phone location records require a search warrant; the Third Party Doctrine does not automatically apply to data that provides "an intimate window into a person's life." Chief Justice Roberts's majority opinion acknowledged that "seismic shifts in digital technology" require the Court to develop Fourth Amendment doctrine that does not simply extend pre-digital precedents. The decision is deliberately narrow — it did not resolve what other digital records require warrants — but it established that the constitutional framework is not static.
  • Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015) — the argument that opacity is now the default operating condition of the entities that most shape contemporary life: financial firms, search engines, data brokers. The surveillance-capitalism critique focuses on how data is extracted; Pasquale's argument focuses on how the resulting systems are used and whether the people affected by algorithmic decisions have any access to the reasoning behind them. Pairs with Zuboff on the systemic argument and with Nissenbaum on the legal framework for accountability.

See also

  • Who belongs here? — the framing essay for the membership conflict running through surveillance: the people most exposed to data extraction, political monitoring, and misidentification are often migrants, dissidents, minorities, queer people, and others whose safety depends on not being made maximally legible to institutions that may not protect them.
  • Who gets to decide? — the framing essay for the authority conflict running through this map: when governments and platforms can see, infer, and act on intimate behavioral data at scale, what legitimates that power, what limits should constrain it, and who gets meaningful recourse when surveillance becomes ordinary infrastructure?
  • social media and democracy map — addresses the political consequences of the same technological infrastructure: when behavioral data drives algorithmic content curation, the information environment in which democratic deliberation occurs is no longer shared. The surveillance critique and the democracy critique are addressing the same architecture from different angles — one asking what it does to individuals, the other asking what it does to collectives.
  • technology and attention map — addresses the individual-level experience of the same commercial ecosystem: the design of platforms to capture and hold attention is the mechanism through which behavioral data is generated. The attention argument is about what the extraction costs the person; the surveillance-capitalism argument is about what the extraction produces for the extractor.
  • free speech on campus map — addresses the chilling-effect argument in a smaller-scale, more bounded institutional context. The concern that surveillance constrains what people are willing to say, explore, and associate with is the same concern that appears in debates about campus speech — the difference is that the digital surveillance version operates at population scale and without the institutional visibility that campus debates have.
  • surveillance capitalism map — addresses private-sector data commodification — a distinct but connected debate: where this map focuses on government surveillance (state actors, law enforcement, national security), the surveillance capitalism map addresses commercial behavioral data extraction and the economic logic that funds the broader data ecosystem. The two are not separable: state agencies can purchase commercial data they could not legally seize, which means the commercial surveillance infrastructure directly enables the government surveillance this map addresses.
  • predictive policing and surveillance technology map — addresses the law enforcement-specific application of the surveillance debate: where this map covers government surveillance broadly (NSA, FISA, encryption policy), the predictive policing map narrows to facial recognition, algorithmic patrol, and body cameras in law enforcement — and surfaces a connected governance gap: law enforcement can purchase behavioral and location data from commercial vendors rather than subpoenaing it, meaning the Fourth Amendment's limits on direct state collection are partially circumvented through the commercial data ecosystem this map and the surveillance capitalism map address.
  • platform accountability and content moderation map — addresses the speech governance dimension of the same platform infrastructure: where this map asks what platforms do with data about individuals, the platform accountability map asks who decides what speech is permissible and under what accountability framework; both debates are structured by a governance gap between platform power and democratic institutional capacity, and both are central concerns of the EU Digital Services Act and related regulatory frameworks.
  • Digital Identity and Biometrics: What Each Position Is Protecting — the identity infrastructure layer of the broader surveillance debate: biometric systems make it possible to link data across contexts at scale, transforming surveillance from a legally gated activity into a persistent ambient capability; the identity map addresses what happens when the infrastructure that establishes who you are is also the infrastructure that tracks what you do, and what accountability mechanisms — if any — apply at that junction.
  • Algorithmic Governance and Automated Decisions: What Each Position Is Protecting — the downstream consequence of the surveillance infrastructure this map addresses: data collected through digital surveillance is the raw material for automated decision systems that determine benefits eligibility, bail conditions, credit access, and parole; the privacy debate is about collection, the algorithmic governance debate is about use — but they describe the same pipeline, and the populations most subject to surveillance are the same populations most subject to automated consequential decisions without meaningful transparency or appeal.