Algorithmic Pricing Compliance
In 2025, the FTC collected $2.56 billion in algorithmic pricing settlements from two companies. New York, California, and Colorado enacted laws that make every AI-driven price a potential violation. If your pricing engine runs on a third-party algorithm, consumer data, or reinforcement learning, the question isn't whether regulators will look. It's whether you can answer their questions when they do.
$2.56B: FTC pricing settlements, 2025 (Instacart $60M + Amazon $2.5B)
51 Bills: State algorithmic pricing proposals across 24 states in 2025
180 Days: RealPage compliance deadline (DOJ consent decree, Nov 2025)
Regulators are pursuing algorithmic pricing on two distinct fronts. Most companies prepare for one and ignore the other.
Your algorithm charges different users different prices for the same product based on personal data. This becomes illegal when those price differences correlate with protected demographics.
The Instacart case made this concrete: the Eversight pricing tool generated up to five different prices for the same item at the same store, with variation reaching 23%. The FTC's $60M settlement didn't hinge on intentional discrimination. It hinged on the outcome: consumers in certain profiles paid systematically more.
The technical trap is proxy variables. Your algorithm doesn't see race or income. But it sees ZIP code, device type, browsing time, and app version. A user browsing on an older Android device from a lower-income ZIP code at 11 PM receives different pricing treatment than an iPhone user in a high-income suburb at 2 PM. Census data shows these input clusters correlate with racial and income demographics at rates that would fail disparate impact analysis. The algorithm never intended to discriminate. The output is discriminatory anyway.
Your algorithm converges on higher prices in coordination with competitors, even without any explicit agreement. This is the theory behind FTC v. Amazon, set for trial in 2026.
Amazon's Project Nessie extracted $1.4 billion by predicting when competitors would match a price increase, then raising prices on 8 million items. The algorithm identified that most competitors ran tit-for-tat pricing rules. When Amazon raised prices, the competitor algorithm automatically followed. No meeting. No agreement. No phone call. Just two algorithms reaching the same supra-competitive equilibrium.
The risk multiplies when you use a third-party pricing vendor. If your vendor serves your competitors and its algorithm pools data across clients, you may have hub-and-spoke conspiracy exposure even if you never exchanged a word with a competitor. California's new Cartwright Act amendments (effective January 2026) codify this: a "common pricing algorithm" with two or more users that influences prices using competitor information creates statutory liability.
This table tracks every active law, settlement precedent, and enforcement action that affects algorithmic pricing. Updated April 2026.
| Jurisdiction | Law / Precedent | Key Requirement | Penalty | Status |
|---|---|---|---|---|
| New York | Algorithmic Pricing Disclosure Act | Conspicuous disclosure when prices use personal consumer data | $1,000/violation | Enacted Nov 2025; enforcement paused pending NRF injunction |
| California | Cartwright Act (AB 325 / SB 763) | Prohibits "common algorithms" using competitor data to set prices; bars coercion of algorithmic recommendations | Greater of $6M or 2x gain/loss; treble damages in private suits | Effective Jan 1, 2026 |
| Colorado | AI Act (SB 24-205) | Impact assessments for high-risk AI systems making "consequential decisions" including pricing | AG enforcement; injunctive relief | Effective June 30, 2026 |
| Federal (FTC) | FTC Act Section 5 | Prohibits "unfair methods of competition." FTC v. Amazon trial will test whether algorithmic tacit collusion qualifies | Injunctive relief + disgorgement (Amazon: $2.5B settlement) | Trial set Oct 2026 |
| Federal (DOJ) | RealPage Consent Decree | No competitor data <12 months old; no sub-state geography; symmetric guardrails; antitrust compliance officer | 7-year monitoring term | Active since Nov 2025; 180-day compliance deadline |
| Federal (Case Law) | Gibson v. Cendyn (9th Cir.) | Safe harbor: same vendor OK if no pooled non-public data, no "raise prices" marketing, no non-anonymized competitor data | Defensive precedent | Decided Aug 2025 |
| European Union | EU AI Act (High-Risk Provisions) | Impact assessments, transparency documentation, anti-discrimination measures for AI systems making consequential decisions | €35M or 7% global turnover | High-risk obligations effective Aug 2, 2026 |
| 24 States | 51 Proposed Bills (2025) | Various: disclosure mandates, surveillance pricing bans, algorithmic audit requirements | Varies by state | TN, NM bills active in 2026; more expected |
Sources: FTC press releases, DOJ Office of Public Affairs, Wilson Sonsini antitrust alerts, Cleary Gottlieb publications, Arnold & Porter advisories. Updated April 2026.
If you're evaluating options, here's what each category of provider actually delivers and where the gaps are.
| Provider Type | Examples | What They Do | Compliance Gap | Typical Cost |
|---|---|---|---|---|
| Pricing Platforms | Pricefx, PROS, Zilliant, Competera | Optimize prices using AI/ML. Some named in FTC 6(b) surveillance pricing orders. | No fairness testing. No disclosure automation. No collusion monitoring. Their algorithm may be your liability. | $200K-$1M+/yr |
| Big 4 / Large SIs | Deloitte, PwC, Accenture, McKinsey | Antitrust advisory, risk assessment memos, regulatory relationship management | Consulting-only. No automated compliance tooling. Engagements take months and deliver PDFs, not infrastructure. Some named in FTC 6(b) orders themselves. | $500K-$5M+ |
| Antitrust Law Firms | Wilson Sonsini, Cleary Gottlieb, Arnold & Porter | Legal opinions, design guidelines, litigation defense | Legal advice, not technical implementation. Can tell you what to build but not build it. Essential partners, not alternatives. | $800-$2,000/hr |
| Algorithmic Auditors | ORCAA, FTI Consulting | Point-in-time algorithmic audits, expert witness testimony, bias assessments | Snapshot audits, not continuous monitoring. No pricing-specific tooling. Valuable for litigation but not for ongoing compliance. | $100K-$400K per audit |
| Specialized AI Consultancy | Veriprajna | Build pricing compliance infrastructure: audit layers, disclosure automation, collusion monitoring, audit trails | Cannot solve organizational resistance to pricing transparency or fundamental data quality issues in your transaction logs. We build the technical layer, not the cultural change. | $150K-$500K |
We don't optimize prices. We don't compete with your pricing platform. We sit on top of whatever engine you run and make it provably compliant.
We map every data input to your pricing engine and test each for demographic proxy correlation. ZIP code, device type, browsing session length, time of day, app version: we measure correlation with race, income, and age demographics using census-linked geographic data and device ownership statistics.
Then we run counterfactual simulations. For each pricing decision in a sample set, we hold all demand drivers constant and vary only the proxy variable. If prices shift by more than 20% of the highest-group price (the four-fifths threshold adapted from EEOC disparate impact standards), that input is flagged.
The output is a risk scorecard across five dimensions drawn from the RealPage consent decree framework and the Duane Morris design guidelines: data sourcing, recommendation granularity, independence preservation, transparency, and human override capability.
We build the compliance middleware between your pricing engine and checkout. For New York: real-time classification of whether each price used personal consumer data, with conditional disclosure rendering. For California: data firewall verification confirming your vendor doesn't pool competitor data across clients.
For Colorado (effective June 2026): automated impact assessment generation tied to your model version history. For the EU (effective August 2026): Article 13/14 transparency documentation exported in the format the AI Office expects.
The middleware uses jurisdiction detection based on user geolocation, so disclosure rules adapt automatically. One API layer handles all jurisdictions. When Tennessee or New Mexico enact their pending bills, we add the rules without touching your pricing engine.
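The jurisdiction-adaptive layer can be sketched as a data-driven rule registry consulted after geolocation, so adding a state is a data change rather than a pricing-engine change. The rule fields and the `disclosure_for` helper below are illustrative names under assumed rule shapes, not a real API:

```python
from dataclasses import dataclass

# Hypothetical per-jurisdiction rule entries consulted by the middleware.
@dataclass(frozen=True)
class DisclosureRule:
    jurisdiction: str
    requires_disclosure: bool  # must a notice ever render here?
    trigger: str               # what makes a price "covered"
    notice_point: str          # where the notice must appear

RULES = {
    "NY": DisclosureRule("NY", True, "personal_consumer_data", "before_checkout"),
    "CA": DisclosureRule("CA", True, "common_pricing_algorithm", "before_checkout"),
    "DEFAULT": DisclosureRule("DEFAULT", False, "", ""),
}

def disclosure_for(state: str, used_personal_data: bool) -> bool:
    """Return True if a disclosure must render for this price."""
    rule = RULES.get(state, RULES["DEFAULT"])
    if not rule.requires_disclosure:
        return False
    if rule.trigger == "personal_consumer_data":
        return used_personal_data
    return True
```

When a new state enacts its bill, its entry is appended to `RULES`; nothing upstream of the middleware changes.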
We audit your pricing vendor relationship against the Gibson v. Cendyn three-part test: does the vendor pool non-public competitor data, does it market the ability to elevate pricing industry-wide, does it share non-anonymized competitor information? If any prong fails, your vendor relationship creates hub-and-spoke conspiracy exposure.
For your own algorithm, we run collusion simulation testing. We deploy your pricing model against three competitor agent archetypes (tit-for-tat rule matchers, Bertrand competition agents, and reinforcement learning agents) and measure whether supra-competitive equilibria emerge within 10,000 simulated market cycles.
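One simulation cycle can be sketched with a deliberately simple pair of agents: a toy pricing policy that nudges upward when matched, played against the tit-for-tat archetype. The policy, cost, and reservation price below are illustrative numbers, not a model of any real engine:

```python
COST = 10.0   # marginal cost: the competitive (Bertrand) benchmark
CAP = 25.0    # consumers' reservation price caps the escalation
START = 12.0

def our_policy(competitor_price: float) -> float:
    # Toy policy: nudge price up whenever the competitor matched us last
    # period -- the dynamic the Project Nessie complaint describes.
    return min(competitor_price + 0.5, CAP)

def tit_for_tat(our_last_price: float) -> float:
    return our_last_price  # archetype: mechanically match our previous price

def simulate(cycles: int) -> float:
    ours, theirs = START, START
    for _ in range(cycles):
        ours, theirs = our_policy(theirs), tit_for_tat(ours)
    return ours

# If price settles far above marginal cost, a supra-competitive
# equilibrium emerged with no meeting, agreement, or phone call.
final_price = simulate(10_000)
supra_competitive = final_price > 1.2 * COST
```

In this toy run the pair ratchets from $12 to the $25 reservation price and stays there; the production test plays your actual model against all three archetypes and measures the same convergence statistic.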
For ongoing monitoring, we build dashboards that flag pricing convergence patterns: simultaneous price movements, narrowing price dispersion across competitors, and margin compression that reverses without demand-side explanation.
We build the audit trail infrastructure before you need it. An event-driven logging layer captures every pricing decision in real time: data inputs used, model version, raw recommendation, constraint checks applied, whether the recommendation was overridden, disclosure status, and final displayed price.
Storage is append-only and immutable. The logging schema is modeled on what FTC Civil Investigative Demands actually request, based on the Instacart and Amazon CID structures that are now part of the public record.
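One common way to approximate append-only immutability in plain storage is a hash chain over the logged fields, which makes after-the-fact edits detectable. The class below is a minimal sketch with hypothetical field names matching the list above, not our production schema:

```python
import hashlib
import json
import time

class PricingAuditLog:
    """Append-only pricing decision log with a tamper-evident hash chain."""

    def __init__(self):
        self._entries = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, *, inputs: dict, model_version: str,
               raw_recommendation: float, constraints_applied: list,
               overridden: bool, disclosure_shown: bool,
               final_price: float) -> dict:
        entry = {
            "ts": time.time(),
            "inputs": inputs,
            "model_version": model_version,
            "raw_recommendation": raw_recommendation,
            "constraints_applied": constraints_applied,
            "overridden": overridden,
            "disclosure_shown": disclosure_shown,
            "final_price": final_price,
            "prev_hash": self._prev_hash,
        }
        # Hash the canonical serialization, then chain it forward.
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._prev_hash
        self._entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """Recompute every hash; any edited entry breaks the chain."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if prev != e["hash"]:
                return False
        return True
```

Real deployments would back this with write-once object storage; the chain simply gives auditors an independent integrity check.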
When a CID arrives, you produce compliant documentation packages in 48-72 hours. Most companies without this infrastructure spend 6-12 months in reactive forensic extraction, often discovering gaps in their data that weaken their position. The cost of building this proactively is a fraction of a single month of emergency outside counsel at CID-response rates.
Here's what happens when we audit a Multi-Armed Bandit pricing engine for proxy discrimination. This is one of four audit tracks; we walk through this one because MAB-based systems are the most common in e-commerce dynamic pricing and the architecture Instacart's Eversight used.
We extract the full feature vector from your MAB's context input. In a typical e-commerce MAB, this includes: user segment ID, session count, device type, operating system, screen resolution, geographic coordinates or ZIP code, time of day, day of week, cart composition, historical purchase frequency, and sometimes browsing dwell time.
For each feature, we compute Pearson correlation coefficients against census-derived demographic distributions at the ZIP+4 level. A feature with |r| > 0.3 against any protected-class proxy (race, income quintile, age bracket) is flagged for counterfactual testing. In our experience, ZIP code and device type almost always exceed this threshold. Session time and browsing depth often do as well.
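The screening pass is mechanically simple; the sketch below shows it on toy five-row columns (a real pass runs over full production samples, and the feature names are hypothetical):

```python
from math import sqrt
from statistics import mean

def pearson(xs, ys) -> float:
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return 0.0  # constant column carries no signal
    return cov / (sx * sy)

FLAG_THRESHOLD = 0.3  # |r| above this sends the feature to counterfactual testing

def flag_features(feature_columns: dict, demographic: list) -> list:
    """Return the names of features exceeding the correlation threshold."""
    return [name for name, col in feature_columns.items()
            if abs(pearson(col, demographic)) > FLAG_THRESHOLD]

# Toy example: a ZIP-derived index tracks income quintile exactly,
# while day-of-week is noise.
flagged = flag_features(
    {"zip_income_index": [1, 2, 3, 4, 5], "day_of_week": [2, 5, 3, 1, 4]},
    [1, 2, 3, 4, 5],  # income quintile per row (census-joined in practice)
)
```

Here only `zip_income_index` is flagged, which is the pattern we see in practice: geographic and device features clear the bar, calendar features usually don't.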
For each flagged feature, we generate counterfactual user profiles. We take 10,000 real pricing decisions from your production logs and create synthetic variants where only the flagged proxy variable changes. A user from ZIP code 10021 (Upper East Side, median household income $138K) becomes a user from ZIP code 10456 (South Bronx, median household income $27K) with all other demand signals held constant.
We feed both the original and counterfactual profiles through your MAB and measure the price delta. If the average delta exceeds 20% of the highest-group price (the four-fifths threshold), the feature creates legally actionable disparate impact. We report the exact delta, the demographic groups most affected, and the number of production transactions where this pattern occurred.
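The delta test itself reduces to a few lines. In the sketch below, `toy_price_model` stands in for the production MAB so the mechanics are visible; its ZIP-keyed pricing and the profile fields are fabricated for illustration:

```python
def toy_price_model(profile: dict) -> float:
    # Illustrative only: a model that (improperly) keys price to ZIP code.
    return 13.0 if profile["zip"] == "10456" else 10.0

def counterfactual_delta(profiles, feature, swap_value, model):
    """Average price shift when only the flagged feature changes."""
    originals = [model(p) for p in profiles]
    variants = [model({**p, feature: swap_value}) for p in profiles]
    avg_o = sum(originals) / len(profiles)
    avg_v = sum(variants) / len(profiles)
    high = max(avg_o, avg_v)
    delta = abs(avg_o - avg_v)
    # Four-fifths-style rule: actionable if the delta exceeds 20% of the
    # higher group's average price.
    return delta, delta > 0.2 * high

profiles = [{"zip": "10021", "session_count": i} for i in range(100)]
delta, actionable = counterfactual_delta(profiles, "zip", "10456", toy_price_model)
```

The toy model produces a $3.00 average delta against a $13.00 high-group price (23%), so the ZIP feature fails the test, roughly the magnitude seen in the Eversight variation.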
For features that fail the counterfactual test, we build constraint layers that bound the MAB's action space. This isn't a simple threshold (which the algorithm will optimize to the edge of). We use fairness-aware reward shaping: the MAB's reward function is modified to penalize price recommendations that create cross-group variance above the threshold. The constraint is baked into the optimization, not bolted on as a post-hoc filter. The result is a pricing engine that still optimizes revenue but cannot generate discriminatory outcomes, with the constraint's impact on revenue typically in the 1-3% range.
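Reward shaping of this kind can be sketched as a penalty term on the bandit's revenue signal, under the assumption that a running cross-group average price is tracked alongside each arm. The function below is an illustrative shape, not a specific library's API:

```python
def shaped_reward(revenue: float, group_avg_prices: dict,
                  penalty_weight: float = 5.0) -> float:
    """Penalize revenue in proportion to the cross-group price gap
    exceeding the 20% four-fifths-style bound."""
    high = max(group_avg_prices.values())
    low = min(group_avg_prices.values())
    gap = (high - low) / high           # relative cross-group price gap
    violation = max(0.0, gap - 0.2)     # only the excess over 20% is penalized
    return revenue - penalty_weight * violation * revenue
```

Because the penalty enters the reward the bandit maximizes, arms that widen the gap look less profitable to the optimizer itself; a post-hoc price clamp, by contrast, leaves the optimizer pushing against the edge of the constraint.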
A typical engagement runs 10-14 weeks from kickoff to production monitoring. The timeline depends on how many pricing systems you run, how many jurisdictions you operate in, and whether your data infrastructure can support real-time logging.
Weeks 1-3
Inventory all pricing systems, vendor relationships, and data flows. Map your jurisdiction exposure (where your customers are, not where your servers are). Review vendor contracts for data firewall provisions and CID response obligations.
Deliverable: Pricing Compliance Risk Map with severity ratings across discrimination, collusion, disclosure, and investigation readiness dimensions.
Weeks 3-7
Build the audit infrastructure: event-driven pricing decision logs, disclosure middleware, constraint validation layers. Run the discrimination audit and collusion simulation. Design the vendor risk assessment framework specific to your pricing tools.
Deliverable: Working compliance layer in staging, discrimination audit results, vendor risk assessment.
Weeks 7-10
Deploy the compliance layer in shadow mode alongside your production pricing. Every pricing decision runs through the constraint checks and disclosure logic without affecting what the customer sees. We compare constrained vs. unconstrained pricing to measure revenue impact and verify that all jurisdiction-specific disclosures trigger correctly.
Deliverable: Shadow mode validation report with revenue impact analysis and compliance coverage metrics.
Week 10+ (ongoing)
Move to production. The compliance layer enforces constraints, triggers disclosures, and logs decisions in real time. Monitoring dashboards track disparate impact metrics, pricing convergence patterns, disclosure compliance rates, and audit trail completeness.
Quarterly re-audits catch model drift. When new legislation passes (Tennessee, New Mexico, or the next state), we update jurisdiction rules without touching your pricing engine.
What this engagement does not include: We don't redesign your pricing strategy, select or replace your pricing vendor, provide legal opinions, or serve as expert witnesses. Those functions belong to your pricing team, your antitrust counsel, and your economic consultants respectively. We build the technical compliance infrastructure that makes their recommendations enforceable and auditable.
Answer seven questions about your pricing infrastructure. The assessment maps your exposure across discrimination, collusion, disclosure, and investigation readiness, with specific next steps you can take with or without external help.
The disclosure obligation falls on the business serving the consumer, not the pricing vendor. You need a real-time classification layer that determines whether each price shown was generated using personal consumer data (browsing history, location, purchase patterns) versus aggregate market data. If personal data influenced the price, the mandated disclosure must appear before the consumer commits to the transaction.
The technical challenge is that most third-party pricing tools (Pricefx, PROS, Competera) don't expose which data inputs drove each specific price recommendation. You need middleware that intercepts the pricing API response, inspects which data categories were used, and conditionally renders the disclosure.
The $1,000-per-violation penalty applies per transaction, so a high-volume e-commerce platform processing 100,000 orders per day in New York faces material exposure even at low non-compliance rates. We build the classification and disclosure layer as an API middleware that sits between your pricing engine and your checkout flow, with jurisdiction detection so the disclosure rules adapt based on the consumer's location.
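The classification step can be sketched as a set test over the data categories the pricing response reports. The response shape and category names below are hypothetical (real vendors rarely expose this, which is exactly why the middleware has to be built):

```python
# Categories that count as "personal consumer data" under the NY act;
# this taxonomy is illustrative and would be set with counsel.
PERSONAL_DATA_CATEGORIES = {
    "browsing_history", "geolocation", "purchase_history", "device_profile",
}

def needs_ny_disclosure(pricing_response: dict, consumer_state: str) -> bool:
    """True if the mandated disclosure must render before checkout."""
    if consumer_state != "NY":
        return False
    used = set(pricing_response.get("data_categories_used", []))
    return bool(used & PERSONAL_DATA_CATEGORIES)
```

A price built only from aggregate market data passes through without a notice; any intersection with the personal-data set triggers the conditional rendering path.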
The RealPage consent decree (DOJ, November 2025) established five specific technical prohibitions that antitrust attorneys are already using as a compliance template beyond multifamily housing. The core requirements: no training on competitor data less than 12 months old, no geographic analysis narrower than state level, no sharing of unaffiliated property data even in aggregated form, symmetric guardrails (if the algorithm can push prices above a ceiling, users must equally be able to push below a floor), and mandatory antitrust compliance officers with annual certification.
For e-commerce, the most immediately relevant provisions are the data firewall requirements and the symmetric guardrail mandate. If your pricing vendor ingests competitor pricing data and uses it to generate your recommendations, you likely have exposure under the same theory the DOJ used against RealPage.
We audit your vendor data flows against the consent decree framework, test whether your guardrails are symmetric, and build the data lineage documentation that demonstrates compliance.
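The guardrail symmetry test reduces, in its simplest reading, to comparing the algorithm's upward latitude against users' downward latitude. The config keys below are hypothetical and the one-sided comparison is a deliberate simplification of the decree's requirement:

```python
def guardrails_symmetric(config: dict) -> bool:
    """Simplified symmetry check: if the algorithm can push prices up by
    some percentage, users must be able to pull them down at least as far."""
    up = config.get("max_uplift_pct")      # algorithmic ceiling latitude
    down = config.get("max_markdown_pct")  # user-override floor latitude
    if up is None or down is None:
        return False                       # undocumented guardrails fail closed
    return down >= up
```

A config that lets the algorithm recommend +10% but caps user markdowns at -3% fails this check, which is the asymmetry the consent decree targets.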
The most common mistake companies make is "fairness through unawareness": removing race, gender, and income from the model's inputs and assuming the algorithm can no longer discriminate. This fails because proxy variables carry the same demographic signal. Pew Research data shows iPhone ownership is 30% higher among households earning $100K+ versus those under $30K. Census data at the ZIP+4 level correlates ZIP code with racial composition at r=0.6 or higher in most metro areas. Your algorithm never sees demographics directly, but it sees their statistical shadows.
Detection requires testing interactions between variables, not just individual inputs. ZIP code alone might show moderate demographic correlation, but ZIP code combined with device type and session time creates a compound proxy that's far more predictive. We test both individual features and feature interaction clusters using mutual information analysis, which captures nonlinear relationships that Pearson correlation misses. A common finding: browsing dwell time on product pages has near-zero standalone correlation with income, but when combined with referral source (organic search vs. price comparison site), the pair predicts income quintile with surprising accuracy.
The practical approach is to run detection before deployment (catch the obvious proxies), then continuously in production (catch emergent interactions as the model retrains). We flag proxy candidates for review but don't automatically remove them, because some proxies are also legitimate demand signals. The decision to constrain a specific input is a business and legal judgment, not a purely statistical one. We provide the evidence; your legal team makes the call.
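Discrete mutual information over a joined feature pair is what catches the compound proxies described above. The sketch below works on toy categorical columns; a production pass would bin continuous features first:

```python
from collections import Counter
from math import log2

def mutual_information(xs: list, ys: list) -> float:
    """Mutual information (in bits) between two discrete columns."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def compound_mi(col_a: list, col_b: list, target: list) -> float:
    """MI of the joined pair against the target -- tests the interaction,
    not just each feature alone."""
    return mutual_information(list(zip(col_a, col_b)), target)
```

Unlike Pearson correlation, this captures nonlinear and purely categorical dependence: a pair like (dwell-time bucket, referral source) can score high against income quintile even when each half alone scores near zero.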
Yes, depending on the vendor's data architecture. The Gibson v. Cendyn decision (Ninth Circuit, August 2025) established that merely subscribing to the same pricing software as your competitors is not automatically anticompetitive. But the court flagged three conditions that elevate risk substantially: if the vendor pools non-public competitively sensitive data from multiple clients to train or tune recommendations, if the vendor markets the tool's ability to coordinate or elevate pricing across an industry, or if the software facilitates exchange of non-anonymized competitor data.
Most e-commerce companies don't audit their pricing vendor's data architecture at this level. We conduct a vendor risk assessment that maps exactly which data flows into and out of your pricing tool, whether competitor data (even aggregated) influences your recommendations, and whether your vendor contract includes adequate data firewall provisions.
Under California's new Cartwright Act amendments (AB 325, effective January 2026), a "common pricing algorithm" with two or more users that uses competitor information creates potential liability with treble damages, and the lowered pleading standard means plaintiffs can survive a motion to dismiss more easily.
An FTC CID typically demands comprehensive documentation within 30-45 days: all data inputs to your pricing models, model architecture and training documentation, decision logs showing how prices were set for specific transactions, any A/B testing or experimentation protocols, communications about pricing strategy, and vendor contracts and data sharing agreements.
Most companies spend 6-12 months in reactive forensic data extraction because they never built the logging infrastructure to answer these questions. The practical preparation steps are: first, implement immutable audit logging on every pricing decision today. Each log entry should capture the timestamp, user context data used, model version, raw recommendation, any constraint checks applied, whether the recommendation was overridden, and the final displayed price. Second, document your model architecture and training data lineage in a format a non-technical FTC attorney can understand. Third, inventory all vendor data flows and ensure your contracts allocate CID response obligations. Fourth, run a mock CID response exercise.
We build the audit trail infrastructure as an event-driven logging layer that captures pricing decisions in real time, stores them in append-only storage, and generates CID-formatted export packages on demand. The goal is to produce compliant documentation in 48-72 hours when the demand arrives, not 6 months.
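The 48-72 hour turnaround is possible because the export step is a filter over the existing log rather than a forensic reconstruction. A minimal sketch, with hypothetical entry fields matching the logging described above and an invented package layout:

```python
import json

def build_cid_package(entries: list, start_ts: float, end_ts: float,
                      skus: set) -> str:
    """Filter logged decisions to the CID's scope (date range, SKUs) and
    serialize one JSON package with summary fields investigators request."""
    in_scope = [e for e in entries
                if start_ts <= e["ts"] <= end_ts
                and e["inputs"].get("sku") in skus]
    package = {
        "decision_logs": in_scope,
        "model_versions": sorted({e["model_version"] for e in in_scope}),
        "override_count": sum(1 for e in in_scope if e["overridden"]),
    }
    return json.dumps(package, sort_keys=True, indent=2)
```

In practice the package generator maps each demanded category (logs, model documentation, vendor agreements) to a pre-built query, so scoping a response is configuration, not engineering.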
The California Cartwright Act amendments (AB 325 and SB 763, effective January 1, 2026) create significantly higher liability exposure than federal antitrust law for companies using algorithmic pricing. Three specific changes matter.
First, the Act now expressly defines a "common pricing algorithm" as technology with two or more users that uses competitor information to influence prices, and prohibits both using such an algorithm to collude and coercing others into adopting its recommendations. This codifies liability that federal law still treats as ambiguous.
Second, the pleading standard is lower: plaintiffs no longer need to allege facts that exclude the possibility of independent action at the motion to dismiss stage. Under federal Sherman Act standards (Twombly/Iqbal), most algorithmic pricing cases get dismissed early because parallel pricing can be explained by independent algorithm behavior. California eliminated that defense at the pleading stage.
Third, penalties increased to the greater of $6 million or twice the pecuniary gain or loss (up from $1 million), with treble damages and attorneys' fees available in private litigation. For an e-commerce company operating in California, this means a class action plaintiff can now survive dismissal with weaker allegations, and the damages exposure is substantially higher. We help companies assess their California-specific exposure by mapping their pricing vendor relationships, data flows, and recommendation compliance against the new statutory definitions.
The interactive whitepapers behind this solution page. These provide the full technical analysis, case forensics, and architectural frameworks.
Forensic analysis of the Instacart/Eversight pricing collapse. Neuro-symbolic constraint architectures for pricing fairness. FTC Act and NY Disclosure Act compliance frameworks.
Post-mortem of Amazon's $2.5B settlement. Reinforcement learning collusion mechanics. RealPage consent decree analysis. Gibson v. Cendyn safe harbor framework.
Instacart's $60M settlement began with pricing experiments they assumed were routine optimization.
A compliance program costs a fraction of a single enforcement action. We start with a 3-week risk mapping engagement that inventories your pricing systems, tests for proxy discrimination, and assesses your vendor exposure across every active jurisdiction.