Sensemaking for a plural world

Perspective Map

Algorithmic Pricing and Platform Monopoly Power: What Each Position Is Protecting

April 2026

In October 2022, ProPublica reporter Heather Vogell published a story about a Texas software company called RealPage. Its product, YieldStar, helped landlords set rental prices by pooling data from competing property managers — not public listing prices, but non-public, forward-looking lease data — and returning algorithmic rent recommendations. In one Seattle neighborhood, 70 percent of apartments were managed by landlords using the same software. Greystar, the largest property manager in the country, reported that buildings using YieldStar "outperformed their markets" by 4.8 percent. RealPage's own marketing promised clients the software could help them outperform by 3 to 7 percent.

Within three days of publication, lawsuits were filed. Within weeks, the DOJ opened an investigation. In August 2024, the Justice Department sued RealPage under Section 1 of the Sherman Act, alleging that competing landlords sharing non-public pricing data through a common algorithm constituted unlawful price-fixing. In November 2025, a consent decree was filed: no fines, no admissions of wrongdoing, but RealPage would have to stop using competitors' current lease data, close the asymmetric price-floor loophole, and accept a compliance monitor for seven years.

The RealPage case did not begin the debate over algorithmic pricing, but it made it impossible to treat as theoretical. Around the same time, the FTC's unredacted complaint against Amazon revealed Project Nessie: a secret algorithm deployed from 2014 to 2019 that identified products where competitors would follow Amazon's price increases, raised prices on more than 8 million items in a single month in 2018, and generated an estimated $1.4 billion in excess profits over its lifetime. In 2024, the DOJ sued Live Nation and Ticketmaster, alleging that an 80 percent share of major concert venue ticketing had allowed the company to impose dynamic pricing on fans without the competitive check that would otherwise limit it.

These cases involve different mechanisms — shared data, unilateral algorithmic exploitation, and vertical monopoly — but they converge on a single question: what does it mean to "fix" a price when the fixing is done not by humans meeting in a hotel room but by software operating at the speed of computation, without explicit agreement, and sometimes without intent?

The answer depends on what you think the problem actually is. And on that question, the positions are genuinely divided.

What antitrust modernists and neo-Brandeisians are protecting

The recognition that the Sherman Act was written for a world where collusion required humans to meet, agree, and coordinate — and that algorithmic markets have made that requirement structurally obsolete. Ariel Ezrachi, the Slaughter and May Professor of Competition Law at Oxford, and Maurice Stucke, a competition law professor at the University of Tennessee, laid out the theoretical framework in Virtual Competition (Harvard University Press, 2016). Their central argument: as pricing shifts from humans to algorithms, collusion no longer requires a "meeting of minds." In what they call the "predictable agent" scenario, competitors independently deploy reactive pricing algorithms that, without any communication, converge on supracompetitive prices because each can predict how the others will respond. The economic effect is identical to a cartel. The legal mechanism — an "agreement" under Section 1 of the Sherman Act — is absent. Existing law has no tool for this, and that gap is not accidental. The Sherman Act of 1890 was designed to catch what humans could design. It was not designed to catch what machines can discover.

The empirical evidence that algorithms in oligopolistic markets systematically produce supracompetitive prices — not as a side effect but as a mathematically predictable outcome. Emilio Calvano and co-authors published the key study in the American Economic Review in 2020. Using Q-learning algorithms — a standard form of reinforcement learning — in a repeated oligopoly model, they found that algorithms "consistently learn to charge supracompetitive prices, without communicating with one another." The average profit gain upon convergence was roughly 85 percent of the distance from the competitive benchmark to full monopoly profits. The algorithms discovered tit-for-tat punishment strategies on their own, sustaining the collusive equilibrium without any human intervention. Results were robust to changes in cost structure, number of competitors, and levels of market uncertainty. A 2024 follow-up by Assad and colleagues studied retail gasoline markets in Germany after competing stations adopted algorithmic pricing software: in local duopolies where both firms adopted algorithms, margins increased by 28 percent; where only one firm adopted, margins did not change. Only mutual adoption created the conditions for tacit coordination. This is the neo-Brandeisian factual predicate: the harm is structural, not intentional, and it is happening now.
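The dynamic Calvano and co-authors describe can be sketched in a few dozen lines. The toy simulation below is my own construction, not the paper's setup (which used logit demand and far longer training horizons): two Q-learning sellers repeatedly price a differentiated product, each observing only the rival's last price. In this calibration the one-shot competitive price is 2.0 and the joint-monopoly price is 2.5, and with enough training the agents tend to settle above the competitive level without ever exchanging a message.

```python
import random

PRICES = [round(1.0 + 0.1 * k, 1) for k in range(16)]  # price grid 1.0 .. 2.5
COST = 1.0  # marginal cost

def demand(p_own, p_rival):
    # Toy linear differentiated-products demand (illustrative, not the paper's).
    return max(0.0, 2.0 - p_own + 0.5 * p_rival)

def profit(p_own, p_rival):
    return (p_own - COST) * demand(p_own, p_rival)

def train(episodes=20000, alpha=0.15, gamma=0.95, seed=0):
    """Two independent Q-learners; state = rival's last price index."""
    rng = random.Random(seed)
    n = len(PRICES)
    q = [[[0.0] * n for _ in range(n)] for _ in range(2)]  # one Q-table per firm
    state = [0, 0]
    action = [0, 0]
    for t in range(episodes):
        eps = max(0.02, 1.0 - t / (0.8 * episodes))  # decaying exploration
        for i in range(2):
            if rng.random() < eps:
                action[i] = rng.randrange(n)          # explore
            else:
                row = q[i][state[i]]
                action[i] = row.index(max(row))        # exploit
        for i in range(2):
            reward = profit(PRICES[action[i]], PRICES[action[1 - i]])
            nxt = action[1 - i]  # rival's new price becomes the next state
            target = reward + gamma * max(q[i][nxt])
            q[i][state[i]][action[i]] += alpha * (target - q[i][state[i]][action[i]])
            state[i] = nxt
    return q, (PRICES[action[0]], PRICES[action[1]])
```

Nothing in the reward function mentions the rival's profit; any above-competitive pricing the agents reach is discovered, not designed — which is precisely the gap in the "agreement" requirement.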

The case for Lina Khan's critique of the consumer welfare standard as structurally incapable of capturing how platform monopolies actually harm competition. Khan's 2017 Yale Law Journal article, "Amazon's Antitrust Paradox," argued that the standard antitrust focus on price effects — does this raise prices for consumers? — is the wrong lens for platforms that grow by pricing low, accumulating data, and vertically integrating until competition disappears. The RealPage case temporarily obscures this argument because its harm is straightforwardly price-based. But Project Nessie illustrates the structure Khan was describing: Amazon didn't fix prices by setting them too low; it operated an algorithm that exploited its market position to raise prices on millions of products when competitors would follow. The FTC's complaint also alleged that Amazon's Buy Box algorithm punishes third-party sellers who offer lower prices on competing platforms, creating an artificial price floor across e-commerce more broadly. What the neo-Brandeisian position protects is the recognition that power precedes pricing — and that once you have enough power, you can use algorithms to exploit it quietly, without leaving the fingerprints that traditional antitrust enforcement was designed to find.

What consumer welfare standard defenders are protecting

The case that the Sherman Act, properly applied, already handles the harm that is actually occurring — and that the neo-Brandeisian alternative replaces a tractable legal standard with an untestable political one. Herbert Hovenkamp, co-author of the leading multi-volume Antitrust Law treatise and a professor at the University of Pennsylvania Carey Law School, has been the most systematic critic of the neo-Brandeisian agenda. His core argument: the consumer welfare standard (CWS) does not require the narrow, Borkian price-only reading its critics assign to it — it incorporates quality, innovation, and long-run dynamic effects. The RealPage case succeeded under existing law because the hub-and-spoke theory — competitors sharing non-public competitive data through a common intermediary — provides an "agreement" under the Sherman Act. The DOJ didn't need new legislation; it needed to bring the case. Hovenkamp's critique of the neo-Brandeisian alternative is technical and pointed: in "Is Antitrust's Consumer Welfare Principle Imperiled?" (45 Journal of Corporation Law 101, 2019), he argues that neo-Brandeisians have not provided "a calculus for determining how these goals should be applied to specific practices" and that their framework amounts to "driving antitrust by political theory rather than economics" — with the consequent risk of protecting competitors rather than competition.

The argument that algorithmic pricing, in the majority of market structures, intensifies competition rather than dampening it — and that blanket regulatory intervention could eliminate genuine consumer benefits. The pro-efficiency case is not merely the position of self-interested vendors. It has empirical support. Brown and MacKay's 2023 study of OTC drug markets found that sellers with faster algorithms charge lower prices than those with slower technology in competitive settings — algorithmic speed can be a vehicle for competitive intensity, not cartel coordination. Research on ride-sharing surge pricing finds that it improves supply-demand matching, expands driver availability, and reduces wait times — benefits that accrue disproportionately to riders who most urgently need a car. The Mercatus Center's 2025 review of the literature argues that the evidence for consumer harm is concentrated in specific structures: mutual adoption in oligopolies with limited entry, or hub-and-spoke data sharing that replicates cartel information exchange. Extending this finding to all algorithmic pricing conflates the harmful mechanism with the general class of tool.

The concern that market definition, not algorithmic coordination, is the live variable in most of these cases — and that getting it wrong will produce bad law with broad reach. Hovenkamp notes that Amazon's share of e-commerce is approximately 40 percent but its share of all commerce (including brick-and-mortar retail) is roughly 4 percent. How you define the relevant market determines whether there is a monopoly. The FTC v. Amazon complaint defines the relevant market as "online superstores" — a market Amazon dominates. Amazon defines it as retail commerce broadly — a market Amazon competes in. This is not a trivial definitional dispute. If courts adopt market definitions tailored to produce findings of dominance, the consumer welfare standard defenders argue, antitrust law will become an instrument for suppressing successful competitors rather than protecting competitive markets. The harm of the wrong standard is not only bad economics; it is a legal framework whose application will be unpredictable and politically manipulable.

What transparency and structural regulation advocates are protecting

The recognition that hub-and-spoke algorithmic collusion is a tractable problem — the mechanism is clear, the fix is known, and what's missing is political will to mandate it broadly. The DOJ's proposed RealPage consent decree operationalized a regulatory framework: use only historical data (at least 12 months old), not real-time competitor data; make price guardrails symmetric, not asymmetric; require human-in-the-loop parameters for auto-accept functions; ban shared market surveys among competitors. Law firms across the industry published client alerts within weeks characterizing the settlement as a de facto blueprint for "safer" algorithmic pricing. Senator Amy Klobuchar's Preventing Algorithmic Collusion Act (S. 232, introduced January 2025), co-sponsored by eight senators, would codify this in statute: making it unlawful to use or distribute a pricing algorithm that incorporates non-public competitor data, creating a presumption of price-fixing when competitors share such data through an algorithm, and requiring algorithmic disclosure to regulators on request. What this position protects is the structural insight that most algorithmic collusion does not happen because anyone intends it to — it happens because the architecture of shared data creates the conditions for it, and changing the architecture is the most direct response.

The evidence that state and local action is moving faster than federal enforcement — and that legislative clarity would prevent a patchwork of inconsistent state rules. New York enacted the first state ban on algorithmic pricing in residential rental markets in 2025, prohibiting landlords from using algorithms that incorporate non-public competitor data to set rents — the same mechanism the DOJ targeted in the RealPage case. Colorado enacted transparency requirements for algorithmic pricing systems. Philadelphia and San Francisco banned revenue management software for residential rentals. By mid-2025, 51 bills targeting algorithmic pricing had been introduced across 24 states in a single year. The EU Digital Markets Act, in force since 2022, addresses related concerns through a different mechanism: requiring platform gatekeepers to disclose their ranking and pricing algorithms to regulators, prohibiting them from using data from third-party sellers to advantage their own products, and imposing fines of up to 10 percent of global annual revenue for violations. Apple was fined €500 million and Meta €200 million in April 2025 under early DMA enforcement. The structural regulation position holds that the question is not whether to regulate — states and the EU are already doing it — but whether federal clarity would produce a more coherent framework than fifty separate state laws and an Atlantic divide in approach.

The concern that the most sophisticated harms — AI agents that independently discover collusive equilibria — will outpace any enforcement framework built around human intent. A 2024 study by Fish and colleagues used GPT-4 as a pricing agent in oligopoly experiments. They found that LLM-based pricing agents "quickly and consistently collude in oligopoly settings, even when instructed only to seek long-run profits, with no explicit or implicit suggestion of collusion." This is the Ezrachi/Stucke scenario now running in a laboratory. The Stigler Center at the University of Chicago Booth School of Business has proposed a technical solution: introducing statistical "noise" into the price signals that algorithms receive, making coordination harder without banning algorithms outright. The idea is mathematically tractable — the highest "safe" level of noise can be calculated — but it requires regulatory authority to mandate its application. What the transparency/regulation position protects is the insight that enforcement after the fact catches what already happened; architecture-level intervention changes what is possible.
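The logic of the noise proposal can be illustrated with a toy trigger-strategy calculation (my own construction; the parameters are invented, not the Stigler Center's calibration). Algorithmic collusion is typically sustained by punishing any observed price cut. Once the observed price carries noise, the punisher starts firing at rivals who never deviated — and spurious punishment is exactly what destabilizes a collusive equilibrium:

```python
import random

def punishment_rate(noise_sd, collusive_price=2.5, trigger_margin=0.1,
                    rounds=10000, seed=0):
    """Fraction of rounds in which a trigger-strategy firm punishes a rival
    that is in fact holding the collusive price, when the observed price
    signal carries zero-mean Gaussian noise. All parameters are invented.
    """
    rng = random.Random(seed)
    punish = 0
    for _ in range(rounds):
        observed = collusive_price + rng.gauss(0.0, noise_sd)
        if observed < collusive_price - trigger_margin:  # looks like a price cut
            punish += 1
    return punish / rounds
```

With no noise the spurious-punishment rate is zero; once the noise standard deviation reaches the trigger margin, roughly one round in six triggers punishment against a loyal rival, making the collusive strategy costly to maintain. Calibrating the noise level is what would require regulatory authority.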

What dynamic pricing and pro-market efficiency advocates are protecting

The case that dynamic pricing solves a real problem — matching supply and demand in real time — and that fixing prices at static levels reintroduces the shortages and inefficiencies that dynamic pricing was designed to eliminate. Surge pricing's defenders often cite the New Year's Eve 2014 incident when Uber's surge system failed for 26 minutes: the result was predictable supply collapse and matching failure. In normal operation, when surge activates, more drivers come online (earnings incentive) and low-urgency riders drop out of the queue. The market clears at a higher price — but for the rider who actually needs to get somewhere urgently, the car appears. Research comparing rideshare to taxi systems found that Lyft drivers spend approximately 19 percent of time idle; taxi drivers spend 48 percent. That efficiency gain is inseparable from the pricing mechanism that produces it. The same logic applies to airline yield management, hotel room pricing, concert ticket platinum pricing, and electricity spot markets. Dynamic pricing is not a novel exploitation of consumers. It is the technological implementation of price-clearing mechanisms that economics has understood for a century.
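The market-clearing claim above can be made concrete with a small search for the clearing multiplier, under invented constant-elasticity supply and demand curves (illustrative parameters, not any platform's actual model):

```python
def clearing_multiplier(base_drivers=100, base_riders=150,
                        supply_elast=0.8, demand_elast=1.2, step=0.05):
    """Find the smallest surge multiplier m >= 1 at which driver supply
    meets rider demand. Supply rises and demand falls with price under
    toy constant-elasticity curves; every parameter here is invented.
    """
    m = 1.0
    while base_drivers * m ** supply_elast < base_riders * m ** -demand_elast:
        m += step  # raise the multiplier until the market clears
    return round(m, 2)
```

With 150 waiting riders and 100 available drivers at the base fare, this sketch clears at a multiplier of 1.25; holding the multiplier at 1.0 instead would leave roughly a third of ride requests unserved, which is the shortage the paragraph above describes.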

The argument that regulatory intervention risks eliminating the pro-competitive uses of pricing algorithms — which are the majority of uses — in order to stop the anti-competitive minority. The National Retail Federation opposed the Preventing Algorithmic Collusion Act on exactly this basis: the bill's definition of "non-public competitor data" is broad enough to capture legitimate uses of market intelligence, syndicated retail data, and competitive benchmark studies that retailers have used for decades. A blanket prohibition on algorithms trained with competitive data would prohibit not only RealPage's rent coordination but any pricing system that uses the kind of information that human pricing managers routinely obtain. The pro-market position is not that algorithmic collusion doesn't happen — the RealPage case established that it does — but that the appropriate response is targeted enforcement against the specific mechanism (shared non-public real-time competitor data, asymmetric guardrails, auto-accept functions) rather than a statutory presumption that treating competitive data as relevant information is equivalent to price-fixing.

The concern that the Live Nation case illustrates how platform dominance and algorithmic pricing get conflated — and that the remedy for monopoly is structural breakup, not pricing regulation. The DOJ's Live Nation antitrust lawsuit, filed in May 2024, argued that dynamic pricing (Ticketmaster's "Platinum" tickets capturing secondary market prices in the primary market) was enabled by market dominance — 80 percent of major concert venue ticketing, combined with ownership of the promotion and venue infrastructure. The argument was not that dynamic pricing is inherently harmful; it was that without competitive alternatives, artists and venues had no leverage to refuse it. The DOJ reached a settlement in March 2026 requiring amphitheater divestitures, service fee caps, and technology access for competitors. But more than 30 states rejected the settlement as insufficient and continued litigating toward structural breakup — separating Ticketmaster from Live Nation entirely. The pro-market efficiency position holds that this is the right instinct: fix the structure, not the pricing mechanism. A market with genuine competition self-limits dynamic pricing without regulatory intervention. The target should be the monopoly, not the algorithm.

What cuts across all four positions
  • The "agreement" requirement in Section 1 of the Sherman Act is doing enormous legal work — and all four positions are either defending it, trying to work around it, or calling for its replacement. The consumer welfare standard defenders say existing law is sufficient because the hub-and-spoke theory provides an "agreement." The transparency advocates say a statutory presumption is needed precisely because the "agreement" standard fails in tacit coordination cases. The antitrust modernists say effect-based liability should supplement or replace it. The dynamic pricing advocates say abandoning the "agreement" requirement would sweep in single-firm pricing decisions that no one considers anticompetitive. Every substantive policy disagreement traces back to this legal architecture question: what does it take to make algorithmic coordination legally cognizable?
  • The market structure matters more than the tool. The empirical evidence is consistent: single-firm adoption in a competitive market tends to intensify competition; mutual adoption in an oligopoly tends to produce supracompetitive prices, without any communication. This finding cuts across the debate in an inconvenient way. It means the neo-Brandeisians are right that the harm is structural and emergent, not intentional. It also means the pro-market advocates are right that the same algorithm in a competitive market produces efficiency gains. The relevant variable is market concentration — which returns the debate to the pre-digital antitrust question: how much concentration is too much, and what creates it?
  • The conduct at issue in RealPage may have raised rents for years, yet the penalty was zero — and what you think about that tells you a lot about what you think the purpose of enforcement is. The DOJ reached a consent decree in which RealPage admits no wrongdoing, pays no fines, and faces a seven-year compliance monitor. Eight state AGs who joined the complaint did not sign the settlement. Private class actions by renters remain active. The pro-enforcement position holds that the structural remedies — restricting data sharing, closing the asymmetric ratchet — are more valuable than a fine that would have been a litigation settlement anyway. The critics hold that a zero-penalty outcome for a system that may have raised rents for millions of tenants for years sends a signal about the cost-benefit calculation any future vendor will make. This is not primarily a technical dispute about enforcement economics. It is a dispute about whether antitrust law is primarily a corrective or a deterrent — and for whom.
  • The EU and the U.S. are running different experiments, and the Trump administration has characterized the EU's approach as a trade barrier. The EU Digital Markets Act imposes structural obligations on platform gatekeepers — transparency, non-discrimination, algorithmic disclosure — that go well beyond what U.S. antitrust enforcement has required. The Trump administration's U.S. Trade Representative identified both the DMA and the Digital Services Act as "unfair trade barriers" in its 2025 National Trade Estimates Report, creating a transatlantic tension over the legitimate scope of platform regulation. This is not primarily a dispute about consumer welfare. It is a dispute about whether the entity that gets to define "fair competition" in digital markets is the market itself, the U.S. government, or a transnational regulatory body with its own conception of the public interest. The outcome will shape not just how algorithms are regulated but who gets to make that determination.

See also

  • Who bears the cost? — the framing essay for the distributive conflict behind pricing systems: when firms use algorithms to extract more from fragmented, captive, or data-legible consumers, who absorbs the burden of efficiency claims, and who gets protected from market power?
  • Who gets to decide? — the framing essay for the authority conflict behind this map: when pricing is delegated to opaque software and platform coordination systems, who is actually making the decision, and what kind of oversight or accountability should follow?
  • Big Tech and Antitrust — the structural debate over whether dominant platforms should be broken up, regulated as common carriers, or left to market competition; this map focuses on the specific mechanism of algorithmic pricing within that broader antitrust landscape
  • Surveillance Capitalism — the use of personal data to set individualized prices is the logical extension of surveillance capitalism's core model; the FTC's 2024 surveillance pricing investigation targeted this specifically, asking whether using browsing history, location, and credit data to personalize prices constitutes an unfair commercial practice
  • Housing Finance and Algorithmic Discrimination — RealPage's rent coordination affected the same populations facing discriminatory automated underwriting; the intersection of algorithmic pricing and disparate racial impact is underexplored in most mainstream antitrust coverage
  • Platform Labor Governance — Uber and Lyft's surge pricing is simultaneously a dynamic pricing question and a labor question: if drivers are independent contractors, the platform is technically setting prices for nominally competing businesses, raising the same "agreement" issues that animate the RealPage litigation
  • Algorithmic Governance and Automated Decisions — the governance question that underlies all of these cases: who is accountable when an algorithm makes a consequential decision, and what standards of transparency and auditability should apply?
  • Gig Economy and Worker Classification — whether platform workers are employees or independent contractors is the threshold question that determines whether platform pricing is lawful employer-set compensation or unlawful price-fixing among competitors

References and further reading