Perspective Map
Algorithmic Pricing and Platform Monopoly Power: What Each Position Is Protecting
In October 2022, ProPublica reporter Heather Vogell published a story about a Texas software company called RealPage. Its product, YieldStar, helped landlords set rental prices by pooling data from competing property managers — not public listing prices, but non-public, forward-looking lease data — and returning algorithmic rent recommendations. In one Seattle neighborhood, 70 percent of apartments were managed by landlords using the same software. Greystar, the largest property manager in the country, reported that buildings using YieldStar "outperformed their markets" by 4.8 percent. RealPage's own marketing promised clients the software could help them outperform by 3 to 7 percent.
Within three days of publication, lawsuits were filed. Within weeks, the DOJ opened an investigation. In August 2024, the Justice Department sued RealPage under Section 1 of the Sherman Act, alleging that competing landlords sharing non-public pricing data through a common algorithm constituted unlawful price-fixing. In November 2025, a consent decree was filed: no fines, no admissions of wrongdoing, but RealPage would have to stop using competitors' current lease data, close the asymmetric price-floor loophole, and accept a compliance monitor for seven years.
The RealPage case did not begin the debate over algorithmic pricing, but it made it impossible to treat as theoretical. Around the same time, the FTC's unredacted complaint against Amazon revealed Project Nessie: a secret algorithm deployed from 2014 to 2019 that identified products where competitors would follow Amazon's price increases, raised prices on more than 8 million items in a single month in 2018, and generated an estimated $1.4 billion in excess profits over its lifetime. In 2024, the DOJ sued Live Nation and Ticketmaster, alleging that an 80 percent share of major concert venue ticketing had allowed the company to impose dynamic pricing on fans without the competitive check that would otherwise limit it.
These cases involve different mechanisms — shared data, unilateral algorithmic exploitation, and vertical monopoly — but they converge on a single question: what does it mean to "fix" a price when the fixing is done not by humans meeting in a hotel room but by software operating at the speed of computation, without explicit agreement, and sometimes without intent?
The answer depends on what you think the problem actually is. And on that question, the positions are genuinely divided.
What antitrust modernists and neo-Brandeisians are protecting
The recognition that the Sherman Act was written for a world where collusion required humans to meet, agree, and coordinate — and that algorithmic markets have made that requirement structurally obsolete. Ariel Ezrachi, the Slaughter and May Professor of Competition Law at Oxford, and Maurice Stucke, a competition law professor at the University of Tennessee, laid out the theoretical framework in Virtual Competition (Harvard University Press, 2016). Their central argument: as pricing shifts from humans to algorithms, collusion no longer requires a "meeting of minds." In what they call the "predictable agent" scenario, competitors independently deploy reactive pricing algorithms that, without any communication, converge on supracompetitive prices because each can predict how the others will respond. The economic effect is identical to a cartel. The legal mechanism — an "agreement" under Section 1 of the Sherman Act — is absent. Existing law has no tool for this, and that gap is not accidental. The Sherman Act of 1890 was designed to catch what humans could design. It was not designed to catch what machines can discover.
The empirical evidence that algorithms in oligopolistic markets systematically produce supracompetitive prices — not as a side effect but as a mathematically predictable outcome. Emilio Calvano and co-authors published the key study in the American Economic Review in 2020. Using Q-learning algorithms — a standard form of reinforcement learning — in a repeated oligopoly model, they found that algorithms "consistently learn to charge supracompetitive prices, without communicating with one another." The average profit gain upon convergence was 84.9 percent above competitive levels. The algorithms discovered tit-for-tat punishment strategies on their own, sustaining the collusive equilibrium without any human intervention. Results were robust to changes in cost structure, number of competitors, and levels of market uncertainty. A 2024 follow-up by Assad and colleagues studied retail gasoline markets in Germany after competing stations adopted algorithmic pricing software: in local duopolies where both firms adopted algorithms, margins increased by 28 percent. The effect did not appear in monopoly markets — only where mutual adoption created the conditions for tacit coordination. This is the neo-Brandeisian factual predicate: the harm is structural, not intentional, and it is happening now.
The case for Lina Khan's critique of the consumer welfare standard as structurally incapable of capturing how platform monopolies actually harm competition. Khan's 2017 Yale Law Journal article, "Amazon's Antitrust Paradox," argued that the standard antitrust focus on price effects — does this raise prices for consumers? — is the wrong lens for platforms that grow by pricing low, accumulating data, and vertically integrating until competition disappears. The RealPage case temporarily obscures this argument because its harm is straightforwardly price-based. But Project Nessie illustrates the structure Khan was describing: Amazon didn't fix prices by setting them too low, it operated an algorithm that exploited its market position to raise prices on millions of products when competitors would follow. The FTC's complaint also alleged that Amazon's Buy Box algorithm punishes third-party sellers who offer lower prices on competing platforms, creating an artificial price floor across e-commerce more broadly. What the neo-Brandeisian position protects is the recognition that power precedes pricing — and that once you have enough power, you can use algorithms to capture it quietly, without leaving the fingerprints that traditional antitrust enforcement was designed to find.
What consumer welfare standard defenders are protecting
The case that the Sherman Act, properly applied, already handles the harm that is actually occurring — and that the neo-Brandeisian alternative replaces a tractable legal standard with an untestable political one. Herbert Hovenkamp, author of the leading American antitrust treatise (the multi-volume Antitrust Law) and a University of Pennsylvania Carey Law School professor, has been the most systematic critic of the neo-Brandeisian agenda. His core argument: the consumer welfare standard (CWS) does not require the narrow, Borkian price-only reading its critics assign to it — it incorporates quality, innovation, and long-run dynamic effects. The RealPage case succeeded under existing law because the hub-and-spoke theory — competitors sharing non-public competitive data through a common intermediary — provides an "agreement" under the Sherman Act. The DOJ didn't need new legislation; it needed to bring the case. Hovenkamp's critique of the neo-Brandeisian alternative is technical and pointed: in "Is Antitrust's Consumer Welfare Principle Imperiled?" (45 Journal of Corporation Law 101, 2019), he argues that neo-Brandeisians have not provided "a calculus for determining how these goals should be applied to specific practices" and that their framework amounts to "driving antitrust by political theory rather than economics" — with the consequent risk of protecting competitors rather than competition.
The argument that algorithmic pricing, in the majority of market structures, intensifies competition rather than dampening it — and that blanket regulatory intervention could eliminate genuine consumer benefits. The pro-efficiency case is not merely the position of self-interested vendors. It has empirical support. Brown and MacKay's 2023 study of OTC drug markets found that sellers with faster algorithms charge lower prices than those with slower technology in competitive settings — algorithmic speed can be a vehicle for competitive intensity, not cartel coordination. Research on ride-sharing surge pricing finds that it improves supply-demand matching, expands driver availability, and reduces wait times — benefits that accrue disproportionately to riders who most urgently need a car. The Mercatus Center's 2025 review of the literature argues that the evidence for consumer harm is concentrated in specific structures: mutual adoption in oligopolies with limited entry, or hub-and-spoke data sharing that replicates cartel information exchange. Extending this finding to all algorithmic pricing conflates the harmful mechanism with the general class of tool.
The concern that market definition, not algorithmic coordination, is the live variable in most of these cases — and that getting it wrong will produce bad law with broad reach. Hovenkamp notes that Amazon's share of e-commerce is approximately 40 percent but its share of all commerce (including brick-and-mortar retail) is roughly 4 percent. How you define the relevant market determines whether there is a monopoly. The FTC v. Amazon complaint defines the relevant market as "online superstores" — a market Amazon dominates. Amazon defines it as retail commerce broadly — a market Amazon competes in. This is not a trivial definitional dispute. If courts adopt market definitions tailored to produce findings of dominance, the consumer welfare standard defenders argue, antitrust law will become an instrument for suppressing successful competitors rather than protecting competitive markets. The harm of the wrong standard is not only bad economics; it is a legal framework whose application will be unpredictable and politically manipulable.
What transparency and structural regulation advocates are protecting
The recognition that hub-and-spoke algorithmic collusion is a tractable problem — the mechanism is clear, the fix is known, and what's missing is political will to mandate it broadly. The DOJ's proposed RealPage consent decree operationalized a regulatory framework: use only historical data (at least 12 months old), not real-time competitor data; make price guardrails symmetric, not asymmetric; require human-in-the-loop parameters for auto-accept functions; ban shared market surveys among competitors. Law firms across the industry published client alerts within weeks characterizing the settlement as a de facto blueprint for "safer" algorithmic pricing. Senator Amy Klobuchar's Preventing Algorithmic Collusion Act (S. 232, introduced January 2025), co-sponsored by eight senators, would codify this in statute: making it unlawful to use or distribute a pricing algorithm that incorporates non-public competitor data, creating a presumption of price-fixing when competitors share such data through an algorithm, and requiring algorithmic disclosure to regulators on request. What this position protects is the structural insight that most algorithmic collusion does not happen because anyone intends it to — it happens because the architecture of shared data creates the conditions for it, and changing the architecture is the most direct response.
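The decree's terms are architectural rather than punitive, which means they can be expressed as concrete checks. The sketch below is illustrative only: the function and field names are invented for this example, and the decree itself is a legal document, not a technical specification.

```python
from datetime import date, timedelta

# Illustrative encoding of the consent decree's structural terms as the
# kind of checks a pricing vendor's compliance layer might run. All names
# here are assumptions made for the sketch, not terms from the decree text.

MIN_DATA_AGE = timedelta(days=365)  # "historical" = at least 12 months old

def lease_record_usable(lease_end: date, today: date) -> bool:
    """Competitor lease data may feed recommendations only once it is stale."""
    return today - lease_end >= MIN_DATA_AGE

def guardrails_symmetric(max_decrease_pct: float, max_increase_pct: float) -> bool:
    """Closes the asymmetric loophole: a band that constrains decreases
    more tightly than increases acts as a one-way upward ratchet."""
    return abs(max_decrease_pct) == abs(max_increase_pct)

def recommendation_allowed(lease_end: date, today: date,
                           max_decrease_pct: float, max_increase_pct: float,
                           human_reviewed: bool) -> bool:
    """A recommendation passes only if its inputs are historical, its
    guardrails are symmetric, and a human has reviewed the parameters."""
    return (lease_record_usable(lease_end, today)
            and guardrails_symmetric(max_decrease_pct, max_increase_pct)
            and human_reviewed)
```

The point of the exercise is that, unlike an intent standard, each of these conditions is mechanically auditable, which is what makes the settlement usable as a blueprint.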
The evidence that state and local action is moving faster than federal enforcement — and that legislative clarity would prevent a patchwork of inconsistent state rules. New York enacted the first state ban on algorithmic pricing in residential rental markets in 2025, prohibiting landlords from using algorithms that incorporate non-public competitor data to set rents — the same mechanism the DOJ targeted in the RealPage case. Colorado enacted transparency requirements for algorithmic pricing systems. Philadelphia and San Francisco banned revenue management software for residential rentals. By mid-2025, 51 bills targeting algorithmic pricing had been introduced across 24 states in a single year. The EU Digital Markets Act, in force since 2022, addresses related concerns through a different mechanism: requiring platform gatekeepers to disclose their ranking and pricing algorithms to regulators, prohibiting them from using data from third-party sellers to advantage their own products, and imposing fines of up to 10 percent of global annual revenue for violations. Apple was fined €500 million and Meta €200 million in April 2025 under early DMA enforcement. The structural regulation position holds that the question is not whether to regulate — states and the EU are already doing it — but whether federal clarity would produce a more coherent framework than fifty separate state laws and an Atlantic divide in approach.
The concern that the most sophisticated harms — AI agents that independently discover collusive equilibria — will outpace any enforcement framework built around human intent. A 2024 study by Fish and colleagues used GPT-4 as a pricing agent in oligopoly experiments. They found that LLM-based pricing agents "quickly and consistently collude in oligopoly settings, even when instructed only to seek long-run profits, with no explicit or implicit suggestion of collusion." This is the Ezrachi/Stucke scenario now running in a laboratory. The Stigler Center at the University of Chicago Booth School of Business has proposed a technical solution: introducing statistical "noise" into the price signals that algorithms receive, making coordination harder without banning algorithms outright. The idea is mathematically tractable — the highest "safe" level of noise can be calculated — but it requires regulatory authority to mandate its application. What the transparency/regulation position protects is the insight that enforcement after the fact catches what already happened; architecture-level intervention changes what is possible.
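The intuition behind the noise proposal can be shown in a few lines. Collusion depends on punishment, and punishment depends on reliably detecting an undercut; perturb the observed price signal and the trigger starts misfiring. The sketch below is a hedged illustration of that intuition, not the Stigler proposal's actual calibration — the noise distribution, scale, and tolerance are all invented assumptions.

```python
import random

# Toy model of the "noise" idea: a mandated layer perturbs the rival-price
# signal a pricing algorithm observes, so a tit-for-tat trigger cannot
# cleanly distinguish a real undercut from noise. All parameters here are
# illustrative assumptions, not the proposal's calibration.

def noisy_signal(true_price: float, scale: float, rng: random.Random) -> float:
    """Zero-mean Gaussian perturbation of the observed rival price."""
    return true_price + rng.gauss(0.0, scale)

def undercut_detected(observed_rival: float, my_price: float,
                      tol: float = 0.05) -> bool:
    """A punishment trigger fires when the rival appears to price below us."""
    return observed_rival < my_price - tol

def false_trigger_rate(scale: float, trials: int = 10_000, seed: int = 1) -> float:
    """How often the trigger misfires when both firms actually hold
    the same (collusive) price -- rising noise means rising misfires."""
    rng = random.Random(seed)
    my_price = rival_price = 2.0
    hits = sum(undercut_detected(noisy_signal(rival_price, scale, rng), my_price)
               for _ in range(trials))
    return hits / trials
```

With zero noise the trigger never misfires and punishment is costless to threaten; as the noise scale rises, punishment phases start erupting by accident, and the collusive equilibrium stops paying. Calculating the lowest scale that achieves this is the regulator's design problem.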
What dynamic pricing and pro-market efficiency advocates are protecting
The case that dynamic pricing solves a real problem — matching supply and demand in real time — and that fixing prices at static levels reintroduces the shortages and inefficiencies that dynamic pricing was designed to eliminate. Surge pricing's defenders often cite the New Year's Eve 2014 incident when Uber's surge system failed for 26 minutes: the result was predictable supply collapse and matching failure. In normal operation, when surge activates, more drivers come online (earnings incentive) and low-urgency riders drop out of the queue. The market clears at a higher price — but for the rider who actually needs to get somewhere urgently, the car appears. Research comparing rideshare to taxi systems found that Lyft drivers spend approximately 19 percent of time idle; taxi drivers spend 48 percent. That efficiency gain is inseparable from the pricing mechanism that produces it. The same logic applies to airline yield management, hotel room pricing, concert ticket platinum pricing, and electricity spot markets. Dynamic pricing is not a novel exploitation of consumers. It is the technological implementation of price-clearing mechanisms that economics has understood for a century.
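The clearing logic described above can be sketched in a stylized model: raise a surge multiplier until driver supply meets rider demand. The response curves and elasticities below are invented round numbers chosen for illustration, not any platform's actual calibration.

```python
# Stylized surge-pricing market clearing. All curves and constants are
# assumptions made for this sketch.

def drivers_online(multiplier: float, base_supply: float = 100.0) -> float:
    """More drivers log on as the earnings multiplier rises."""
    return base_supply * multiplier ** 0.5

def riders_requesting(multiplier: float, base_demand: float = 180.0) -> float:
    """Low-urgency riders drop out of the queue as the multiplier rises."""
    return base_demand / multiplier

def clearing_multiplier(lo: float = 1.0, hi: float = 5.0, steps: int = 60) -> float:
    """Bisection on excess demand: finds the smallest multiplier at which
    supply covers demand, given excess demand at multiplier 1.0."""
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if riders_requesting(mid) > drivers_online(mid):
            lo = mid  # still excess demand: push the multiplier up
        else:
            hi = mid
    return hi
```

In this toy model the market clears at roughly a 1.5x multiplier; hold the price static at 1.0x instead and 180 riders chase 100 drivers, leaving 80 unmatched. That gap, not the higher price itself, is the shortage the efficiency position points to.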
The argument that regulatory intervention risks eliminating the pro-competitive uses of pricing algorithms — which are the majority of uses — in order to stop the anti-competitive minority. The National Retail Federation opposed the Preventing Algorithmic Collusion Act on exactly this basis: the bill's definition of "non-public competitor data" is broad enough to capture legitimate uses of market intelligence, syndicated retail data, and competitive benchmark studies that retailers have used for decades. A blanket prohibition on algorithms trained with competitive data would prohibit not only RealPage's rent coordination but any pricing system that uses the kind of information that human pricing managers routinely obtain. The pro-market position is not that algorithmic collusion doesn't happen — the RealPage case established that it does — but that the appropriate response is targeted enforcement against the specific mechanism (shared non-public real-time competitor data, asymmetric guardrails, auto-accept functions) rather than a statutory presumption that treating competitive data as relevant information is equivalent to price-fixing.
The concern that the Live Nation case illustrates how platform dominance and algorithmic pricing get conflated — and that the remedy for monopoly is structural breakup, not pricing regulation. The DOJ's Live Nation antitrust lawsuit, filed in May 2024, argued that dynamic pricing (Ticketmaster's "Platinum" tickets capturing secondary market prices in the primary market) was enabled by market dominance — 80 percent of major concert venue ticketing, combined with ownership of the promotion and venue infrastructure. The argument was not that dynamic pricing is inherently harmful; it was that without competitive alternatives, artists and venues had no leverage to refuse it. The DOJ reached a settlement in March 2026 requiring amphitheater divestitures, service fee caps, and technology access for competitors. But more than 30 states rejected the settlement as insufficient and continued litigating toward structural breakup — separating Ticketmaster from Live Nation entirely. The pro-market efficiency position holds that this is the right instinct: fix the structure, not the pricing mechanism. A market with genuine competition self-limits dynamic pricing without regulatory intervention. The target should be the monopoly, not the algorithm.
What cuts across all four positions
- The "agreement" requirement in Section 1 of the Sherman Act is doing enormous legal work — and all four positions are either defending it, trying to work around it, or calling for its replacement. The consumer welfare standard defenders say existing law is sufficient because the hub-and-spoke theory provides an "agreement." The transparency advocates say a statutory presumption is needed precisely because the "agreement" standard fails in tacit coordination cases. The antitrust modernists say effect-based liability should supplement or replace it. The dynamic pricing advocates say abandoning the "agreement" requirement would sweep in single-firm pricing decisions that no one considers anticompetitive. Every substantive policy disagreement traces back to this legal architecture question: what does it take to make algorithmic coordination legally cognizable?
- The market structure matters more than the tool. The empirical evidence is consistent: single-firm adoption in a competitive market tends to intensify competition; mutual adoption in an oligopoly tends to produce supracompetitive prices, without any communication. This finding cuts across the debate in an inconvenient way. It means the neo-Brandeisians are right that the harm is structural and emergent, not intentional. It also means the pro-market advocates are right that the same algorithm in a competitive market produces efficiency gains. The relevant variable is market concentration — which returns the debate to the pre-digital antitrust question: how much concentration is too much, and what creates it?
- The RealPage settlement raised prices for renters for years and the penalty was zero — and what you think about that tells you a lot about what you think the purpose of enforcement is. The DOJ reached a consent decree in which RealPage admits no wrongdoing, pays no fines, and faces a seven-year compliance monitor. Eight state AGs who joined the complaint did not sign the settlement. Private class actions by renters remain active. The pro-enforcement position holds that the structural remedies — restricting data sharing, closing the asymmetric ratchet — are more valuable than a fine that would have been a litigation settlement anyway. The critics hold that a zero-penalty outcome for a system that may have raised rents for millions of tenants for years sends a signal about the cost-benefit calculation any future vendor will make. This is not primarily a technical dispute about enforcement economics. It is a dispute about whether antitrust law is primarily a corrective or a deterrent — and for whom.
- The EU and the U.S. are running different experiments, and the Trump administration has characterized the EU's approach as a trade barrier. The EU Digital Markets Act imposes structural obligations on platform gatekeepers — transparency, non-discrimination, algorithmic disclosure — that go well beyond what U.S. antitrust enforcement has required. The Trump administration's U.S. Trade Representative identified both the DMA and the Digital Services Act as "unfair trade barriers" in its 2025 National Trade Estimates Report, creating a transatlantic tension over the legitimate scope of platform regulation. This is not primarily a dispute about consumer welfare. It is a dispute about whether the entity that gets to define "fair competition" in digital markets is the market itself, the U.S. government, or a transnational regulatory body with its own conception of the public interest. The outcome will shape not just how algorithms are regulated but who gets to make that determination.
See also
- Who bears the cost? — the framing essay for the distributive conflict behind pricing systems: when firms use algorithms to extract more from fragmented, captive, or data-legible consumers, who absorbs the burden of efficiency claims, and who gets protected from market power?
- Who gets to decide? — the framing essay for the authority conflict behind this map: when pricing is delegated to opaque software and platform coordination systems, who is actually making the decision, and what kind of oversight or accountability should follow?
- Big Tech and Antitrust — the structural debate over whether dominant platforms should be broken up, regulated as common carriers, or left to market competition; this map focuses on the specific mechanism of algorithmic pricing within that broader antitrust landscape
- Surveillance Capitalism — the use of personal data to set individualized prices is the logical extension of surveillance capitalism's core model; the FTC's 2024 surveillance pricing investigation targeted this specifically, asking whether using browsing history, location, and credit data to personalize prices constitutes an unfair commercial practice
- Housing Finance and Algorithmic Discrimination — RealPage's rent coordination affected the same populations facing discriminatory automated underwriting; the intersection of algorithmic pricing and disparate racial impact is underexplored in most mainstream antitrust coverage
- Platform Labor Governance — Uber and Lyft's surge pricing is simultaneously a dynamic pricing question and a labor question: if drivers are independent contractors, the platform is technically setting prices for nominally competing businesses, raising the same "agreement" issues that animate the RealPage litigation
- Algorithmic Governance and Automated Decisions — the governance question that underlies all of these cases: who is accountable when an algorithm makes a consequential decision, and what standards of transparency and auditability should apply?
- Gig Economy and Worker Classification — whether platform workers are employees or independent contractors is the threshold question that determines whether platform pricing is lawful employer-set compensation or unlawful price-fixing among competitors
References and further reading
- Heather Vogell: "Rent Going Up? One Company's Algorithm Could Be Why," ProPublica, October 15, 2022 — the investigative report that directly preceded the DOJ's RealPage lawsuit; documented 70 percent algorithmic pricing penetration in one Seattle neighborhood and RealPage's own marketing claims of 3–7 percent market outperformance for clients
- United States v. RealPage, Inc., M.D.N.C. (filed August 23, 2024; amended January 7, 2025; proposed consent decree November 24, 2025) — the foundational federal case establishing that competing landlords sharing non-public forward-looking lease data through a common algorithm constitutes unlawful price-fixing under Sherman Act Section 1; the proposed settlement's terms (historical data only, symmetric guardrails, compliance monitor) function as a de facto regulatory standard for hub-and-spoke algorithmic pricing
- FTC v. Amazon.com, Inc., W.D. Wash. (filed September 2023) — the FTC's monopolization case alleging that Amazon used Project Nessie and related platform controls to blunt price competition and preserve its retail monopoly; a key platform-power case tying algorithmic pricing to exclusionary conduct rather than rent software alone
- United States v. Live Nation Entertainment, S.D.N.Y. (filed May 23, 2024) — DOJ suit against the dominant concert ticketing and promotion platform; by March 2026 the federal government had announced a proposed settlement, while 32 states and the District of Columbia continued toward trial, seeking stronger structural relief
- Ariel Ezrachi and Maurice E. Stucke: Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy, Harvard University Press (2016) — the foundational text of algorithmic collusion theory, introducing the taxonomy of messenger, hub-and-spoke, predictable agent, and digital eye scenarios; the primary theoretical framework for understanding why the Sherman Act's "agreement" requirement misses most algorithmic coordination
- Emilio Calvano, Giacomo Calzolari, Vincenzo Denicolò, and Sergio Pastorello: "Artificial Intelligence, Algorithmic Pricing, and Collusion," American Economic Review 110(10): 3267–3297 (2020) — the most cited empirical paper on algorithmic collusion; Q-learning algorithms converge on supracompetitive prices with 84.9 percent profit gains above competitive levels without any communication, discovering tit-for-tat punishment strategies independently
- Stephanie Assad, Robert Clark, Daniel Ershov, and Lei Xu: "Algorithmic Pricing and Competition: Empirical Evidence from the German Retail Gasoline Market," Journal of Political Economy 132 (2024) — found sizable margin increases in local duopolies where both firms adopted algorithmic pricing software, with no comparable effect in monopoly markets; key evidence that the harm is structural (mutual adoption) rather than inherent to the tool
- Lina Khan: "Amazon's Antitrust Paradox," 126 Yale Law Journal 710 (2017) — argues the consumer welfare standard (focus on price effects) is structurally incapable of capturing platform monopoly power that grows through low pricing and data accumulation; the theoretical foundation for the neo-Brandeisian antitrust agenda
- Herbert Hovenkamp: "Is Antitrust's Consumer Welfare Principle Imperiled?", 45 Journal of Corporation Law 101 (2019) — the leading defense of the consumer welfare standard against neo-Brandeisian critique; argues that the alternative frameworks "drive antitrust by political theory rather than economics" and lack workable standards for specific practices
- Tim Wu: The Curse of Bigness: Antitrust in the New Gilded Age, Columbia Global Reports (2018) — the neo-Brandeisian manifesto; argues that antitrust law has been made toothless by focusing only on short-term price effects and that platform dominance creates political as well as economic risks that the consumer welfare standard ignores
- S. 232, Preventing Algorithmic Collusion Act of 2025 (119th Congress, introduced January 23, 2025) — Sen. Amy Klobuchar's bill to codify the RealPage regulatory framework in statute: unlawful to use pricing algorithms trained on non-public competitor data, presumption of price-fixing when competitors share such data, mandatory algorithmic disclosure to regulators; referred to Senate Judiciary Committee
- Oren Bar-Gill, Cass R. Sunstein, and Inbal Talgam-Cohen: "Algorithmic Harm in Consumer Markets" (2023) — examines how ML algorithms can exploit cognitive biases and consumer misperceptions to extract surplus beyond ordinary price discrimination; the behavioral economics dimension of algorithmic pricing harm
- NBER Working Paper 32540: "Algorithmic Pricing: Implications for Marketing Strategy and Regulation" (2024; rev. 2025) — comprehensive review of the empirical literature; finds consumer harm is broader than collusion alone, with algorithmic pricing capable of raising prices even in competitive markets through single-firm dynamics
- Sara Fish, Yannai A. Gonczarowski, and Ran I. Shorrer: "Algorithmic Collusion by Large Language Models" (2024; rev. 2026) — LLM-based pricing agents autonomously collude in oligopoly settings even when instructed only to seek long-run profits; the clearest demonstration of the Ezrachi/Stucke "digital eye" scenario emerging in practice
- Niuniu Zhang, Stigler Center at Chicago Booth (ProMarket): "Preventing Algorithmic Collusion by Adding Noise to Market Data" (December 19, 2025) — proposes a precision regulatory tool: introducing calculated statistical noise into price signals that algorithms receive, making coordination harder without banning algorithms outright; the highest "safe" noise level can be mathematically determined
- EU Digital Markets Act (entered into force November 1, 2022; compliance deadline March 6, 2024) — requires designated gatekeepers (Alphabet, Amazon, Apple, ByteDance, Meta, Microsoft) to disclose ranking and pricing algorithms to regulators, refrain from self-preferencing, and allow business users to steer customers to alternative channels; first fines issued April 23, 2025 (Apple €500 million, Meta €200 million); penalties up to 10 percent of global annual revenue
- Cody Taylor: "The Case for Algorithmic Pricing: Consumer Welfare, Market Efficiency, and Policy Missteps," Mercatus Center (May 14, 2025) — the most thorough current statement of the pro-market efficiency critique of algorithmic pricing regulation; argues that harm is concentrated in specific oligopolistic structures and that blanket intervention would eliminate genuine consumer benefits
- Zach Y. Brown and Alexander MacKay: "Competition in Pricing Algorithms," American Economic Journal: Microeconomics 15(2): 109–156 (2023) — uses online OTC drug pricing data to show how differences in pricing technology shape outcomes; a key pro-efficiency reference because firms with faster pricing tools can intensify competition in some settings, even as the same analysis shows algorithmic pricing can soften competition and raise prices in others