Perspective Map
Predictive Policing and Surveillance Technology: What Each Position Is Protecting
In 2015, a man in New Orleans was arrested and held for more than twenty-four hours after a facial recognition algorithm matched his face to a still from a robbery surveillance video. The algorithm was wrong. The actual robber was shorter, lighter-skinned, and had a visible scar on his chin. The arrested man had no scar. He was released without charge. He later described the experience as the most frightening of his life — not primarily the jail cell, but the impossibility of disproving a machine's certainty. No one could tell him how the algorithm worked, what made it confident, or how he could challenge its output. The technology had made a decision about him. He had no mechanism to make a decision back.
In Chicago in 2016, a ShotSpotter acoustic sensor detected what its algorithm classified as gunfire at 11:43 p.m. in a residential block on the South Side. Officers were on the scene within ninety seconds — far faster than the three to four minutes that typically elapse between a 911 call and dispatch. They found a man who had been shot in the leg. He was alive when they arrived. His survival, the attending physician later estimated, was contingent on the speed of response. The officers who responded did not know in advance who was in danger. They were directed by a machine. The machine worked.
In 2016, ProPublica published an analysis of COMPAS — a proprietary risk assessment algorithm used in sentencing decisions across many U.S. jurisdictions to predict which defendants were likely to reoffend. The analysis found that Black defendants who did not go on to reoffend were nearly twice as likely as White defendants who did not reoffend to be classified as high-risk. White defendants who did go on to reoffend were more often classified as low-risk than equivalent Black defendants. The company that made COMPAS disputed the analysis's methodology. The defendants whose sentences were shaped by COMPAS scores had no access to the algorithm. In most jurisdictions, their attorneys had no access to it either.
These are not arguments for or against the technology. They are three parts of the same problem: that tools capable of producing both the second and the third outcome are already deployed across American law enforcement, that the debate about them is unresolved, and that the people most affected by that irresolution are the people who live in the neighborhoods where the tools are aimed.
What public safety technology advocates are protecting
The technology-forward position holds that data-driven and algorithmic tools make policing more effective, more accountable, and — when designed well — more equitable than the human discretion they supplement or replace. The case is not that the tools are perfect. It is that the alternative — officers making decisions based on experience, intuition, and implicit bias — produces its own, well-documented harms, and that the measured harms of algorithmic tools are a tractable engineering problem in a way that human bias is not.
They are protecting the lives saved by faster and better-targeted emergency response. Acoustic gunshot detection systems like ShotSpotter are deployed in more than 150 U.S. cities and alert dispatch to gunfire locations before any 911 call is placed — relevant in neighborhoods where residents do not call 911, whether from distrust or from fear of retaliation. Body cameras, the most widely adopted police technology of the past decade, have in several studies reduced use-of-force incidents and civilian complaints; the Rialto, California study, one of the earliest randomized trials, found a 59 percent reduction in use-of-force incidents and an 88 percent reduction in citizen complaints in the year body cameras were introduced. Public safety technology advocates are protecting the population-level consequences of more efficient crime response: faster emergency medical access, earlier arrest of repeat offenders, and information-led patrol that concentrates resources where violence is likeliest to occur.
They are protecting officer accountability as a form of civilian protection. Body cameras are surveillance technology aimed, in the first instance, at officers, not at the public. The footage generated is evidence in misconduct complaints, use-of-force reviews, and prosecutions. Critics of police violence who might otherwise oppose surveillance technology have to reckon with the fact that the same cameras that record the public also record what officers do — and that some of the most consequential civil rights documentation of the last decade (the murder of George Floyd was captured on a civilian phone; subsequent investigations were aided by body camera footage from other officers on scene) depended on the existence of recording technology. The argument for police technology is not separable from the argument for officer accountability. They are the same argument.
They are protecting evidence-based policing as a replacement for bias-contaminated intuition. Hot spots policing — directing patrol resources to small geographic areas where crime is statistically concentrated, rather than distributing them evenly across precincts — has one of the strongest evidence bases in criminology. David Weisburd's decades of research on hot spots consistently finds crime reductions in targeted areas without equivalent displacement to neighboring areas. The argument from technology advocates is that algorithmic allocation of patrol is more disciplined and less discriminatory than an officer's decision to follow a young Black man because something about him seems wrong. The algorithm has a defined input. The intuition does not. Accountability requires that you be able to audit what drove a decision — and an algorithm, unlike a gut feeling, can be audited.
What civil liberties advocates are protecting
The civil liberties position holds that predictive and biometric surveillance tools constitute a categorical change in the nature of state power over individuals — one that existing legal frameworks were not designed to constrain and that democratic institutions have not adequately evaluated. The concern is not primarily that individual tools produce errors (though they do). It is that the logic of pre-crime surveillance is incompatible with foundational principles of legal innocence, and that the scale at which these tools operate creates a surveillance architecture with no historical precedent.
They are protecting the presumption of innocence as an operational principle, not just a legal formality. Andrew Guthrie Ferguson's The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement (NYU Press, 2017) argues that predictive policing tools generate what he calls "data-driven suspicion" — a new category of pre-crime attention that attaches to people based on algorithms rather than individualized facts. A resident of a "predicted hot spot" who is stopped and questioned by a patrol officer directed by an algorithm has been subjected to state attention based on a probabilistic calculation, not on anything they did or that an officer observed. The Fourth Amendment's requirement of individualized suspicion — reasonable, articulable facts about a specific person — is the legal architecture of the presumption of innocence. Data-driven suspicion substitutes population-level statistics for individual facts and calls the substitution objectivity.
They are protecting the practical consequences of facial recognition error at the scale of deployment. Joy Buolamwini and Timnit Gebru's "Gender Shades" study (Proceedings of Machine Learning Research, 2018) documented that commercial facial recognition systems in widespread use at the time had error rates for darker-skinned women that were up to 34 percentage points higher than for lighter-skinned men — meaning that the population most likely to experience a false positive identification is the population that already bears the greatest cost of wrongful police contact. The civil liberties position is not that facial recognition is imprecise in the abstract. It is that imprecision in policing contexts falls hardest on the people who can least afford it — that a false positive for a corporate executive might mean an embarrassing hour at airport security, while a false positive for a Black man in Baltimore means a cell and a record. The error rate is not uniform in its consequences; the consequences track prior disadvantage.
They are protecting the distinction between investigation and dragnet. Historical privacy law built the distinction between targeted investigation — police attention directed at a specific person based on specific evidence — and mass collection of information about populations. The dragnet was understood to be categorically different: it presumed suspicion of everyone and placed the burden of demonstrating innocence on the individual. Surveillance camera networks, license plate readers logging every vehicle that passes, mobile device location tracking collected and retained by private vendors and subpoenaed by law enforcement — these create a continuous record of civilian movement and association that is available to police without individualized suspicion. Civil liberties advocates are protecting the principle that the state's ability to investigate you should not depend on whether you have been watched continuously since birth.
What algorithmic accountability critics are protecting
Algorithmic accountability critics occupy a position distinct from civil libertarians. Civil libertarians argue that the tools are categorically problematic regardless of their accuracy. Algorithmic accountability critics argue that the tools cannot be as accurate as advertised — that the claim of objectivity is technically false — because the data they are trained on is not a neutral record of reality but a product of prior discriminatory practice. The system appears objective. The appearance is the problem.
They are protecting the integrity of the data that governs high-stakes decisions. Rashida Richardson, Jason Schultz, and Kate Crawford's "Dirty Data, Bad Predictions" (NYU Law Review Online, 2019) examined the police departments whose crime data was used to train predictive policing algorithms and found that many of those departments had documented histories of systematic misconduct: evidence planting, falsification of arrest records, racially discriminatory stop-and-frisk practices that generated arrests bearing no relationship to actual crime rates. If the training data is a record of biased arrests rather than a record of crime, then a predictive system trained on it will predict where police have previously made biased arrests, not where crime actually occurs. It will replicate bias with mathematical precision — and the mathematical precision will be cited as evidence of objectivity. Richardson and colleagues named this the "dirty data" problem: the system is only as unbiased as the human decisions that generated its training set, and no cleaning procedure can remove what is not recorded.
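A toy calculation makes the mechanism concrete. The sketch below uses invented district names and rates, chosen only for illustration; it models no vendor's algorithm and no department's data.

```python
# Illustrative only: invented numbers, not any vendor's model or any
# department's records. Two districts with identical true offense rates
# but historically unequal patrol intensity.
population = 10_000
true_offense_rate = {"district_A": 0.05, "district_B": 0.05}  # identical ground truth
patrol_share = {"district_A": 0.8, "district_B": 0.2}         # prior enforcement skew

# Recorded arrests reflect offenses police were present to observe, so the
# training data inherits the patrol skew rather than the offense rate.
training_arrests = {
    d: true_offense_rate[d] * population * patrol_share[d]
    for d in true_offense_rate
}
print(training_arrests)  # {'district_A': 400.0, 'district_B': 100.0}

# A model fit to these counts will score district_A as four times "riskier"
# even though the underlying offense rates are identical. The bias sits in
# the data's generative process, not in the model's code.
```

The point of the sketch is that "cleaning" such data is not a defined operation: the offenses that went unobserved left no record to clean.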
They are protecting the right to understand and contest algorithmic decisions that govern liberty. COMPAS and tools like it are proprietary. The algorithms are trade secrets. Defense attorneys have been denied access to the code in jurisdictions where COMPAS scores influence sentencing recommendations. Bernard Harcourt's Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age (University of Chicago Press, 2007) — written before the current generation of machine learning tools existed but structurally anticipating them — argued that actuarial methods in criminal justice are not merely a form of bias but a form of governance that is categorically incompatible with liberal principles of individual responsibility. You cannot deserve the risk score you have been assigned; the score is a prediction about a class of people, not a judgment about you. Harcourt's critique is not only that actuarial methods reproduce discrimination; it is that they undermine the conceptual foundations of individual desert on which the legitimacy of punishment depends.
They are protecting the feedback loop as a diagnostic category, not just a technical problem. Predictive systems direct police to areas where they will make arrests. Arrests in those areas are fed back into the system as evidence that predictions in those areas are accurate. The system confirms itself. Crime that occurs in areas not predicted — in low-surveillance neighborhoods, in white-collar settings, in online environments — is not recorded because no officer was directed there. The algorithm's confidence grows not because crime in non-predicted areas is declining but because no one is counting it. Cathy O'Neil's Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown, 2016) generalizes this pattern across domains — education, employment, credit — and argues that systems that are opaque, scalable, and consequential systematically amplify existing inequalities precisely because their feedback loops reward the status quo rather than correcting it. Policing is the domain where the amplification carries the most immediate costs for individuals.
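The loop itself can be written in a few lines. The following is a deliberately minimal simulation in the spirit of the feedback-loop critique, with invented seed numbers and an illustrative proportional update rule; it is a sketch of the logic, not a model of any deployed system.

```python
# Minimal feedback-loop sketch: patrol is allocated in proportion to past
# arrests, and new arrests are proportional to patrol presence times the
# true offense rate. All numbers are invented for illustration.
true_rate = [0.05, 0.05]   # two districts, identical ground truth
arrests = [40.0, 10.0]     # seed records from historically uneven enforcement

for step in range(5):
    total = sum(arrests)
    patrol = [a / total for a in arrests]                  # predict from history
    observed = [p * r * 10_000 for p, r in zip(patrol, true_rate)]
    arrests = [a + o for a, o in zip(arrests, observed)]   # feed observations back
    print(f"step {step}: patrol shares = {[round(p, 2) for p in patrol]}")

# Output: patrol shares remain [0.8, 0.2] at every step. The seed skew is
# reproduced indefinitely, because crime in the under-patrolled district is
# never observed and so never enters the data that could correct the allocation.
```

Under this proportional rule the skew persists rather than grows; published analyses of urn-model variants, in which reinforcement is stronger than proportional, show the allocation can run away toward a single district entirely.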
What community accountability advocates are protecting
The community accountability position challenges not the accuracy of specific tools but the frame that the tools are answering the right question. The public safety technology advocate asks: how can we make policing more effective and less biased? The community accountability advocate asks: why is policing the mechanism we are optimizing? The disagreement is not primarily about facial recognition error rates or COMPAS methodology. It is about what problem is being solved.
They are protecting the right not to be governed through surveillance. Sarah Brayne's Predict and Surveil: Data, Discretion, and the Future of Policing (Oxford University Press, 2021), based on three years of fieldwork inside the Los Angeles Police Department as it deployed a Palantir-powered surveillance system, documents how big data policing extends the surveillance net far beyond people who have been arrested or even stopped — to associates of the system-involved, to people whose license plates appeared near an incident, to people whose names appear in databases maintained by private vendors under contracts not subject to public disclosure. The LAPD system, Brayne found, functioned as a mechanism for creating a "system-involved" population — a large group of people not formally accused of anything but continuously watched. Community advocates are protecting the principle that continuous surveillance of a residential population is not a neutral administrative practice. It is a form of social control with its own costs, independent of whether any individual encounter goes wrong.
They are protecting the recognition that technology choice is resource allocation choice. A department that spends $400,000 on a predictive policing system does not spend that money on trauma-informed mental health crisis responders, violence interrupters with community credibility, or the upstream investments — housing, employment, educational opportunity — that decades of criminological research identifies as the primary drivers of violent crime rates. The public safety technology debate is usually framed as a technical question about tool effectiveness; community advocates argue it is a resource question about what the jurisdiction has decided to invest in. The opportunity cost is not usually calculated.
They are protecting the communities most affected by over-policing as decision-makers, not just as subjects. The National Academies of Sciences, Engineering, and Medicine's Proactive Policing: Effects on Crime and Communities (National Academies Press, 2018) — a comprehensive review of the evidence on hot spots policing, predictive policing, and related strategies — found that proactive strategies can reduce crime in targeted areas but that the communities subjected to them experience significant intrusive effects and report diminished trust in law enforcement, with consequences for crime reporting and cooperation that may partially offset crime reduction gains. The community accountability position is not that crime is not a problem; it is that the communities where surveillance technology is concentrated have been neither adequately consulted about its deployment nor adequately protected from its consequences. They are the subjects of a governance decision made about them, not with them.
Where the real disagreement lives
The predictive policing debate is structured by several genuine disagreements that technical improvements cannot resolve.
The baseline problem. All four positions agree that existing policing involves bias and imprecision. They disagree about what the relevant comparison is. Technology advocates compare algorithmic tools to unguided human discretion — an officer's gut feeling about who to stop — and find algorithms preferable. Civil libertarians compare current surveillance architecture to the constitutional framework designed around targeted investigation and find algorithmic mass collection categorically more dangerous. Algorithmic critics compare the claimed accuracy of predictive systems to what the training data can actually support. Community advocates compare the resource expenditure on surveillance technology to what the same money would produce invested differently. These are not competing answers to the same question. They are each measuring something real and not measuring the others' concerns.
The asymmetry of error costs. All four positions agree that errors occur. They disagree about who bears the cost. In the technology-optimist frame, the cost of under-deployment is borne by crime victims in under-patrolled areas; the cost of false positives is an unfortunate but manageable operational problem. In the civil libertarian frame, the cost of a wrongful stop, arrest, or detention is catastrophic for the individual and constitutionally prohibited regardless of aggregate crime statistics. The positions are not primarily disagreeing about error rates; they are disagreeing about who has standing to count as a victim — the person harmed by crime that went unprevented, or the person harmed by a wrongful algorithmic determination that went uncontested.
The accountability-through-technology claim. The strongest version of the technology-forward position is that cameras and data make officers more accountable. Civil libertarians and community advocates do not necessarily dispute that body cameras can produce accountability. What they dispute is whether accountability through surveillance is the same as accountability through democratic governance. A camera records what happened. It does not determine whether what happened was just, or who decides, or what follows. George Floyd's murder was recorded by officers' body cameras, by a city surveillance camera, and by a bystander's phone. The footage did not prevent his death. It created the evidentiary record that subsequent accountability required. Community advocates argue that the accountability problem is not primarily about evidence generation; it is about whether the institutions that govern policing respond to evidence. Technology cannot close that gap.
The democratic legitimacy of deployment decisions. In most U.S. jurisdictions, decisions about which surveillance tools police departments adopt are made by police departments, often in vendor relationships not subject to public competitive bidding, without community input and without city council approval. The communities where the tools are deployed — the communities who bear the costs of false positives and of continuous surveillance — have the least institutional access to the decisions. This is not a technical problem solvable by better algorithms. It is a governance problem about whether the communities most affected by a decision have meaningful input into it. The technology debate cannot be resolved at the technical level because it is not, at bottom, a technical question.
See also
- Who gets to decide? — the framing essay for the legitimacy conflict underneath predictive policing: once departments can quietly buy and deploy systems that shape patrol patterns, suspicion thresholds, and real-time identification, the real dispute is who has authority to set those terms and what meaningful contestation exists for the people governed by them.
- Who bears the cost? — the framing essay for the asymmetry this page keeps returning to: predictive systems promise efficiency in the abstract, but the concrete burdens of false positives, intensified surveillance, and escalated police contact fall on the neighborhoods already asked to absorb the most enforcement.
- Police Reform: What Each Position Is Protecting — the broader map of which this predictive policing debate is a specific cluster: the arguments about abolition, reform, and community safety that provide the framework context in which any technology deployment is embedded. The technology question cannot be answered independently of the question of what policing is for.
- Digital Privacy and Surveillance: What Each Position Is Protecting — the government surveillance map focused on state collection of communications data, encryption policy, and national security; distinct from this map in its primary institutions (NSA, FISA courts, intelligence agencies) but connected through the Fourth Amendment doctrine that governs both, and through the regulatory arbitrage that allows law enforcement to purchase from commercial vendors the data it cannot constitutionally collect itself.
- Surveillance Capitalism: What Each Position Is Protecting — the commercial data economy map; connected to this map because police departments increasingly purchase behavioral and location data from commercial vendors rather than subpoenaing it, and because the private facial recognition systems deployed by law enforcement are the same systems trained on consumer photos scraped from social platforms. The commercial and government surveillance ecosystems are not cleanly separable.
- Criminal Justice: What Both Sides Are Protecting — the debate about what incarceration is for and what justice requires; the COMPAS-type algorithmic risk assessment tools are used not only in policing but in bail, sentencing, and parole decisions, meaning the algorithmic accountability critique connects the policing debate to the carceral debate at multiple junctions.
- Disability and the Criminal Legal System: What Each Position Is Protecting — the intersection with disability: people with psychiatric disabilities are disproportionately represented in police encounters and in the system-involved population that predictive tools track, and the co-responder and crisis intervention alternatives being developed in both the disability and police reform debates represent the most concrete operational alternatives to sending armed officers with surveillance backing to mental health crises.
- Algorithmic Hiring and Fairness: What Each Position Is Protecting — the civil-context parallel: both maps involve algorithmic systems making consequential decisions about people's life chances based on predictions from historical data, both surface the governance gap between those most affected and those who design and deploy the tools, and both introduce the same pattern — decisions framed as technical are political decisions about accountability, and treating them as technical is itself a political choice about who gets to make them. The predictive policing map is about criminal risk scores; the algorithmic hiring map is about employability scores; the structural critique of both is the same.
- Algorithmic Governance and Automated Decisions: What Each Position Is Protecting — the generalized frame for the structural patterns this map identifies in law enforcement: when public institutions deploy automated systems to make consequential decisions about people, who bears accountability for disparate impact, who holds meaningful appeal rights, and who decides what values the system encodes. COMPAS is this map's central case; the algorithmic governance map situates it within the broader pattern — benefits eligibility, credit scoring, bail decisions — and names what the COMPAS debate made legible across the entire domain: that the choice between competing definitions of algorithmic fairness is not a technical optimization problem but a political question about whose errors are more acceptable.
- Digital Identity and Biometrics: What Each Position Is Protecting — facial recognition is the most visible joint between predictive policing and the broader biometric governance debate: the technical systems that allow police departments to identify individuals from CCTV footage in real time are the same systems whose accuracy disparities — higher false positive rates for darker-skinned women, as Buolamwini and Gebru showed — the digital identity map examines. The predictive policing debate narrows to the law enforcement context; the identity map asks the upstream question: what accountability framework should govern the deployment of these systems before they are integrated into policing infrastructure, and what remedies exist when a system misidentifies a person who then faces consequences they cannot easily contest.
Further reading
- Andrew Guthrie Ferguson, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement (NYU Press, 2017) — the most comprehensive treatment of the constitutional and civil liberties dimensions of data-driven policing: Ferguson, a law professor and former public defender, traces how predictive analytics, social network analysis, and fusion center intelligence produce a form of "data-driven suspicion" that operates outside the Fourth Amendment's requirement of individualized, articulable facts; he proposes a constitutional framework for evaluating data policing tools, organized around the degree to which they generate suspicion before rather than after a specific observable act.
- Bernard Harcourt, Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age (University of Chicago Press, 2007) — the foundational philosophical critique of actuarial methods in criminal justice, written before machine learning made the methods ubiquitous but structurally anticipating the current debate: Harcourt argues that statistical profiling creates a ratchet — as police focus more intensively on a predicted group, more members of that group are arrested, which raises the group's measured offending rate, which justifies more intensive focus, in a feedback loop that cannot be broken from within; and that beyond the ratchet, actuarial prediction fundamentally conflicts with liberal principles of individual desert because it punishes people for what a class of people like them has done, not for what they did.
- Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, "Machine Bias", ProPublica, May 23, 2016 — the investigative analysis that brought COMPAS to wide public attention: using records from Broward County, Florida, Angwin and colleagues found that Black defendants who did not reoffend were nearly twice as likely as White defendants who did not reoffend to be classified as high-risk by COMPAS, while White defendants who did reoffend were more likely than equivalent Black defendants to have received low-risk scores; Northpointe (the algorithm's maker) disputed the methodology, generating a subsequent academic debate about what "fairness" means for an algorithmic risk score — a debate whose technical content (when base rates differ, equalized false positive rates and predictive parity are mathematically incompatible; see the numeric sketch after this reading list) is itself a political argument about which population should bear the cost of algorithmic error.
- Joy Buolamwini and Timnit Gebru, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification", Proceedings of Machine Learning Research 81: 77–91, 2018 — the study that documented racially disparate facial recognition error rates in commercial systems deployed by major technology vendors: the worst-performing systems had error rates of up to 34.7 percent for darker-skinned women, compared to under 1 percent for lighter-skinned men; subsequent audits of law enforcement facial recognition deployments found comparable disparities; Buolamwini's follow-up work with the Algorithmic Justice League has documented real-world wrongful arrests attributable to facial recognition, including Robert Williams in Michigan (2020) and Nijeer Parks in New Jersey (2019), both Black men.
- Rashida Richardson, Jason M. Schultz, and Kate Crawford, "Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice", 94 NYU Law Review Online 192 (2019) — documents the specific jurisdictions whose crime data was used to train the predictive policing algorithms deployed by major vendors (PredPol, HunchLab, ShotSpotter) and finds that many of those departments had documented histories of systematic misconduct — Rampart scandal, Chicago's Homan Square facility, New Orleans' consent decree — meaning that the algorithms were trained on records of biased policing rather than records of crime; the concept of "dirty data" as training-set contamination from prior civil rights violations is the paper's central analytical contribution.
- Sarah Brayne, Predict and Surveil: Data, Discretion, and the Future of Policing (Oxford University Press, 2021) — based on three years of ethnographic fieldwork inside the Los Angeles Police Department during its adoption of a Palantir-powered data analytics system; Brayne documents how big data policing extends the surveillance perimeter far beyond the system-involved (arrestees, probationers, parolees) to include their associates, their vehicles' appearances in proximity to incidents, and individuals in commercial databases purchased by the department; she introduces the concept of "system avoidance" — the finding that people who have prior contact with the criminal legal system avoid institutions that generate data records (hospitals, banks, formal employment) in response to expanded surveillance — and argues that surveillance has health, economic, and civic costs beyond the policing encounter itself.
- Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown, 2016) — the accessible synthesis of how opaque, scalable, and consequential algorithmic systems amplify inequality across multiple domains; the criminal justice chapters (on COMPAS-type risk scoring and on police patrol optimization) are relevant here, but O'Neil's broader argument — that feedback loops in consequential systems produce self-confirming predictions that entrench status quo inequalities precisely because they mistake correlation in biased data for ground truth — is the analytical frame that connects the policing debate to the wider algorithmic accountability critique in education, employment, and credit.
- National Academies of Sciences, Engineering, and Medicine, Proactive Policing: Effects on Crime and Communities (National Academies Press, 2018) — the most comprehensive systematic review of the evidence on hot spots policing, predictive patrol, and related proactive strategies: the review concludes that hot spots policing has consistent evidence of crime reduction in targeted areas and some evidence of diffusion of benefits to surrounding areas; that proactive strategies produce measurable intrusive effects and reduced community trust in the populations subjected to them; and that the evidence base for whether crime reductions outweigh legitimacy costs is insufficient to draw policy conclusions without context-specific community engagement. Both sides in this debate regularly cite the same report for different propositions.
- David Weisburd, "The Law of Crime Concentration and the Criminology of Place", Criminology 53(2): 133–157, 2015 — the empirical foundation for the hot spots policing argument: synthesizing street-level crime data from multiple cities across multiple countries, Weisburd establishes what he calls the "law of crime concentration at place" — that in city after city, roughly 5 percent of street segments account for approximately 50 percent of all crime, a concentration pattern that is stable across years and that holds regardless of overall city crime rates; the finding is the evidentiary basis for the claim that algorithmic patrol allocation concentrates resources where they are most needed rather than distributing them in ways that generate more discretionary stops; Weisburd's experimental research on hot spots policing in Minneapolis, Jersey City, Sacramento, and other jurisdictions provides the strongest randomized evidence that targeted patrol reduces crime — the empirical counterpart that technology advocates cite against arguments for redistribution rather than optimization.
- Kristian Lum and William Isaac, "To Predict and Serve?" Significance (Royal Statistical Society), October 2016 — a rigorous technical analysis of PredPol, the most widely deployed predictive policing platform in the United States at the time, using actual drug crime data from Oakland: Lum and Isaac show that when PredPol's algorithm is trained on historical drug arrest data and used to predict future drug enforcement, it concentrates predictions in the same neighborhoods where past over-policing occurred — not because drug use is more prevalent there (self-reported use rates from survey data show comparable use across demographics) but because past enforcement concentrated arrests there; the paper demonstrates technically what Richardson et al.'s "dirty data" argument claims conceptually — that predictive systems trained on biased arrest records cannot be made unbiased by improving the algorithm, because the problem is in the data's generative process; Lum and Isaac's contribution is to show this using the algorithm's actual predictions, not hypothetical scenarios, which gives the critique a concreteness that the earlier philosophical arguments lacked.
- Priscillia Hunt, Jessica Saunders, and John S. Hollywood, Evaluation of the Shreveport Predictive Policing Experiment, RAND Corporation Report RR-531-NIJ (Santa Monica: RAND, 2014) — the first published randomized controlled trial of a dedicated predictive policing program; the Shreveport Police Department ran a six-district experiment in which the PILOT system's predictions were used to direct patrol in treatment districts but withheld from control districts; the result was no statistically significant reduction in property crime relative to standard hot-spots policing; the study is essential for calibrating proponent claims because it tests the technology in the most favorable possible framing — a randomized design in which the counterfactual is not "no policing" but "conventional targeted patrol" — and finds no measurable additional benefit, suggesting that the gains attributed to predictive systems in observational studies may be attributable to the concentration of patrol rather than to the algorithm's predictions specifically.
- David Robinson and Logan Koepke, Stuck in a Pattern: Early Evidence on Predictive Policing and Civil Rights (Upturn, August 31, 2016) — the first systematic survey of how predictive policing tools were actually deployed across U.S. cities; Robinson and Koepke documented rapid adoption with minimal public accountability, vendor agreements that shielded system details from FOIA disclosure, and police department evaluations that assessed crime statistics but not civil liberties effects; their central finding — that the governance structures surrounding predictive policing were categorically inadequate to assess whether the tools produced discriminatory effects — established the procedural accountability frame that has anchored most subsequent civil rights critiques of the technology; the report is the accountability-reform text most focused on deployment context rather than algorithmic design, making it the natural complement to the Richardson et al. "dirty data" analysis.
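A postscript to the "Machine Bias" entry above: the incompatibility between predictive parity and equalized false positive rates can be checked with elementary arithmetic. The sketch below uses invented rates, not Broward County estimates; the only machinery involved is Bayes' rule.

```python
# Why equal PPV forces unequal false positive rates when base rates differ.
# Bayes' rule gives PPV = p*TPR / (p*TPR + (1-p)*FPR), where p is a group's
# base rate (prevalence). Solving for FPR:
def implied_fpr(ppv: float, tpr: float, prevalence: float) -> float:
    p = prevalence
    return p * tpr * (1 / ppv - 1) / (1 - p)

# Hold the score's behavior fixed (PPV = 0.6, TPR = 0.7) and apply it to two
# groups whose measured reoffense base rates differ. Rates are invented.
for group, base_rate in [("group_1", 0.5), ("group_2", 0.3)]:
    print(group, round(implied_fpr(ppv=0.6, tpr=0.7, prevalence=base_rate), 3))
# group_1 0.467
# group_2 0.2
# Equalizing PPV across groups forces the higher-base-rate group to absorb a
# higher false positive rate whenever base rates differ (short of a perfect
# predictor). Which disparity to accept is the political question.
```

Running the arithmetic with the roles reversed shows the mirror image: equalizing false positive rates forces unequal predictive value. Neither choice is "more accurate," which is the sense in which the definition of fairness, not the optimization, is the contested object.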