Housing AI Compliance

Your Screening Algorithm and Your Pricing Algorithm Are Both Liability Vectors

Property management companies face simultaneous legal exposure on two fronts: tenant screening that discriminates under the Fair Housing Act, and revenue management that coordinates pricing under the Sherman Act. We audit both, engineer compliant architectures, and map your systems against every jurisdiction that matters.

$140M+

Landlord class action settlements for algorithmic pricing

Fortune, Oct 2025

$2.275M

SafeRent settlement for discriminatory tenant screening

Cohen Milstein, Nov 2024

4 States

New housing AI laws active in 2026 (CA, NY, CO, IL)

State legislatures, 2025-2026

Two Algorithms, Two Legal Theories, One Company

Most property management companies treat screening compliance and pricing compliance as separate problems. Courts and regulators do not.

Front 1: Screening Discrimination

SafeRent's Registry ScorePLUS scored housing voucher holders low because it weighted credit history heavily without accounting for the guaranteed income stream vouchers provide. The algorithm treated credit score as a neutral predictor. It is not. Median FICO scores break along racial lines: 727 (White), 667 (Hispanic), 627 (Black). When your screening model uses credit history as a primary feature for subsidized tenants, it encodes those disparities directly into approval rates.

The court rejected SafeRent's argument that it was a "neutral vendor" not subject to the Fair Housing Act. If a landlord relies primarily on a third-party score, the provider of that score shares liability for discriminatory outcomes.

Legal theory: Fair Housing Act, disparate impact. Key test: Disparate Impact Ratio (four-fifths rule). If your approval rate for any protected group is below 80% of the highest-approval group, you have a presumptive violation.
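The four-fifths test is simple enough to sketch directly. The group names and applicant counts below are invented for illustration; a real audit runs this per protected class against actual decision logs.

```python
def disparate_impact_ratio(approvals: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Approval-rate ratio of each group against the highest-approval group.

    approvals maps group name -> (approved, total_applicants).
    A ratio below 0.8 for any group is a presumptive violation.
    """
    rates = {g: approved / total for g, (approved, total) in approvals.items()}
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

# Hypothetical counts for illustration only.
sample = {"group_a": (90, 120), "group_b": (55, 100)}
for group, ratio in disparate_impact_ratio(sample).items():
    flag = "presumptive violation" if ratio < 0.8 else "passes four-fifths"
    print(f"{group}: DIR={ratio:.2f} ({flag})")
```

Here group_b approves at 55% against group_a's 75%, a ratio of 0.73, below the four-fifths line even though neither rate looks alarming in isolation.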

Front 2: Pricing Coordination

RealPage's AIRM and YieldStar collected non-public rental rates, lease terms, and occupancy data from competing landlords, then used that data to generate pricing recommendations designed to move prices "in unison." The DOJ treated this as a "hub-and-spoke" cartel: RealPage was the hub, and each landlord sharing data through the platform was a spoke.

The auto-accept features made it worse. AIRM's defaults automatically accepted price recommendations within a 3% daily change and 8% weekly change. Most landlords never adjusted these settings, meaning the algorithm effectively set prices without human review.

Legal theory: Sherman Act Section 1, state antitrust laws. Key defense: Provable data isolation. Yardi won its California case specifically because Revenue IQ's architecture made cross-client data contamination impossible by design.

Why This Matters More in 2026: Agentic Leasing AI

The next wave of PropTech is autonomous leasing agents that handle inquiries, schedule tours, pre-screen applicants, and negotiate lease terms without human involvement. One platform, operating in one of every twelve U.S. multifamily units, claims 65% faster lead-to-lease timelines. But every decision an autonomous agent makes is a potential fair housing violation or antitrust touchpoint. An agent that varies response quality by applicant demographics, steers certain applicants toward certain properties, or applies pricing concessions unevenly creates liability that scales with every interaction. The compliance architecture for agentic leasing systems does not exist yet. That is what we build.

The Regulatory Map You Need for Internal Meetings

Housing AI compliance is not one regulation. It is a patchwork of federal statutes, DOJ settlements, state laws, and emerging international frameworks. This table covers what is enforceable right now and what takes effect in 2026.

| Regulation | Scope | Key Requirements | Penalties | Status |
| --- | --- | --- | --- | --- |
| Fair Housing Act (federal) | Tenant screening | No disparate impact on protected classes. Tech vendors share liability. HUD May 2024 guidance targets credit, eviction, and criminal background data. | $26,262 first offense, $131,308 repeat (2025 adjusted) | Active |
| Sherman Act (federal) | Algorithmic pricing | No coordination of pricing through shared algorithms using competitor data. DOJ settlement: 12-month data aging, CSI prohibition, governor symmetry, configurable auto-accept. | Criminal penalties + treble damages in private actions | Active (7-year term) |
| FCRA (federal) | Tenant screening | Two-step adverse action notice process. Specific reasons for rejection required. Algorithmic scores that function as consumer reports must comply. | $100-$1,000 per violation (statutory), actual damages, attorney fees | Active |
| California AB 325 | Algorithmic pricing | Prohibits "common" pricing algorithms (2+ users) using competitor data. Rejects federal pleading standard for plaintiffs. Dual enforcement via CalPrivacy + AG. | Cumulative with Cartwright Act remedies | Effective Jan 1, 2026 |
| New York S.7882 | Algorithmic pricing (residential) | Blanket ban on pricing tools with "coordinating function" using data from multiple owners. No public/nonpublic distinction. Tenant private right of action. | Donnelly Act penalties + private actions | Effective Dec 15, 2025 (RealPage stay pending) |
| Colorado SB 205 | Tenant screening (as "consequential decision") | Annual impact assessments. Risk management programs. Adverse decision disclosures describing AI's role, data sources, and appeal processes. | AG enforcement + consumer remedies | Effective June 30, 2026 |
| EU AI Act | Tenant screening + pricing (high-risk) | Conformity assessments. Documentation. Human oversight. Bias testing. Applies to firms with EU tenants or operations. | Up to €35M or 7% global turnover | Phased enforcement 2025-2026 |

Enforcement Reality Check

Federal enforcement has weakened under the current administration. HUD removed its AI guidance from its website in early 2025. The CFPB has reduced staff and enforcement capacity. A presidential executive order directed agencies to "deprioritize" disparate impact enforcement. But state enforcement is filling the gap aggressively. California, New York, Colorado, and Illinois are all enacting AI-specific housing laws. Tenant private rights of action under the amended Donnelly Act and Cartwright Act mean enforcement does not depend on government initiative. The $140M+ in landlord settlements came primarily through private class actions, not regulatory enforcement.

Who Does What in Housing AI Compliance

No single vendor covers both tenant screening fairness and algorithmic pricing antitrust compliance. This table shows where each approach falls short.

| Approach | What It Covers | What It Misses | Typical Cost |
| --- | --- | --- | --- |
| AI Governance Platforms (Credo AI, Holistic AI, FairNow) | General-purpose fairness metrics. Policy management. Multi-framework mapping (EU AI Act, NIST). NYC LL144 for Credo AI. | Not housing-specific. No HUD guidance mapping. No antitrust data isolation verification. No LDA search. No state-level housing AI law coverage. | $18K-$100K+/yr |
| Open-Source Toolkits (IBM AIF360, Fairlearn) | 70+ fairness metrics (AIF360). Scikit-learn integration (Fairlearn). Free. | No compliance mapping. No consulting layer. No adverse action notice generation. Requires in-house ML expertise to operate. No antitrust coverage. | Free (+ internal eng cost) |
| Big 4 / Large SIs (Deloitte, PwC, EY, KPMG) | Brand trust. Existing client relationships. Scale for large PMCs. Policy and governance frameworks. | Generalist teams staffed with juniors. Slow to deliver technical solutions. Will audit your model but not rebuild it. $300-$600/hr means a basic audit costs $100K+. Antitrust compliance is a separate practice from AI fairness, so you get two teams with two budgets. | $100K-$500K+ |
| Screening Vendors (SafeRent, TransUnion SmartMove, CoreLogic) | Built-in compliance features (SmartMove's ResidentScore predicts evictions 15% better than raw credit). FCRA compliance layers. | They are the models being audited, not the auditors. SafeRent is under a 5-year injunction. Vendor self-assessment is not independent verification. No pricing compliance. | Per-report pricing |
| Antitrust Law Firms | Legal analysis of pricing algorithm risk. Settlement compliance advisory. Defense in litigation. | Legal advice, not engineering. Cannot build data-isolated pricing architectures or run fairness metric computations. Cannot conduct LDA searches or implement technical remediation. | $500-$1,500/hr |
| Veriprajna | Both screening fairness and pricing antitrust as unified compliance. LDA search. Data isolation architecture. Multi-state regulatory mapping. Agentic AI guardrails. | Not a law firm. Cannot provide legal opinions or represent you in court. For legal interpretation of settlement terms, you need antitrust counsel working alongside us. | Engagement-based |

What We Build for Housing AI Compliance

Four capabilities that address both fronts of housing AI liability. Each engagement is custom-scoped to your portfolio size, vendor stack, and jurisdictional exposure.

Tenant Screening Fairness Audit + LDA Search

We take your screening model (whether it is SafeRent, TransUnion SmartMove, a custom model, or an AppFolio integration), run full disparate impact analysis across every protected class, and then run a Least Discriminatory Alternative search. The LDA search uses integer programming (Gurobi/CPLEX) to explore the model multiplicity space and find configurations that maintain your predictive accuracy while maximizing the Disparate Impact Ratio.

Output: Pareto frontier chart (accuracy vs. fairness), current DIR per protected class, top 5 recommended model configurations, HUD guidance compliance map, FCRA adverse action notice audit, remediation roadmap.
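The core idea of the LDA search can be shown at toy scale. A production run uses integer programming over real model artifacts; this sketch substitutes brute-force enumeration over a tiny synthetic applicant pool (all rows, weights, and the 0.75 accuracy floor are invented for illustration).

```python
import itertools

# Synthetic applicants: (credit_ok, steady_income, eviction_free, group, performed).
# "performed" = 1 means the lease worked out. All rows are invented.
APPLICANTS = [
    (1, 1, 1, "a", 1), (1, 0, 1, "a", 1), (0, 1, 1, "a", 1), (1, 1, 0, "a", 0),
    (0, 1, 1, "b", 1), (0, 0, 1, "b", 0), (1, 1, 1, "b", 1), (0, 1, 0, "b", 0),
]

def evaluate(weights, threshold):
    """Accuracy and four-fifths DIR for one scoring configuration."""
    approved, totals, correct = {"a": 0, "b": 0}, {"a": 0, "b": 0}, 0
    for *features, group, performed in APPLICANTS:
        decision = int(sum(w * f for w, f in zip(weights, features)) >= threshold)
        totals[group] += 1
        approved[group] += decision
        correct += decision == performed
    rates = [approved[g] / totals[g] for g in totals]
    benchmark = max(rates)
    dir_ = 1.0 if benchmark == 0 else min(rates) / benchmark
    return correct / len(APPLICANTS), dir_

# Enumerate the model multiplicity space: every integer weighting of the three
# features, keeping only configurations at or above the accuracy floor, then
# selecting the one with the highest DIR.
best = max(
    ((dir_, acc, w, t)
     for w in itertools.product([0, 1, 2], repeat=3)
     for t in (1, 2, 3)
     for acc, dir_ in [evaluate(w, t)]
     if acc >= 0.75),
    default=None,
)
```

On this toy data the winning configuration zeroes out the credit-like feature entirely, reaching a DIR of 1.0 without dropping below the accuracy floor, which is exactly the kind of alternative an LDA search exists to surface.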

Antitrust-Safe Pricing Architecture

We design and implement pricing systems with data isolation as a first-class engineering constraint, not a policy overlay. Each client's data resides in structurally separated environments where cross-client contamination is impossible by design. This is the architecture that won Yardi's California summary judgment.

Output: Data-isolated pricing architecture, data provenance logging for every recommendation, governor symmetry verification, auto-accept configuration audit, independent verification artifact for legal counsel.

Multi-Jurisdiction Compliance Mapping

If you manage properties in California, New York, and Colorado, you are subject to AB 325, S.7882, and SB 205 simultaneously, on top of the FHA, Sherman Act, and FCRA. Each law has different definitions of prohibited conduct, different enforcement mechanisms, and different disclosure requirements. We map your entire AI system portfolio against every applicable regulation and produce a jurisdiction-by-jurisdiction compliance matrix.

Output: Compliance matrix with gap analysis per jurisdiction, remediation priorities ranked by exposure severity, disclosure template library, impact assessment frameworks for Colorado SB 205.
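The shape of that matrix is straightforward to illustrate. This is a heavily simplified sketch: the rule table below collapses each statute to a one-line trigger, and real applicability analysis involves far more conditions than two booleans.

```python
# Illustrative mapping only; statutes and triggers are simplified.
STATE_RULES = {
    "CA": {"law": "AB 325", "trigger": "multi-tenant pricing algorithm"},
    "NY": {"law": "S.7882", "trigger": "pricing tool with coordinating function"},
    "CO": {"law": "SB 205", "trigger": "AI-driven tenant screening"},
}
FEDERAL_RULES = ["Fair Housing Act", "Sherman Act §1", "FCRA"]

def compliance_matrix(states, uses_shared_pricing, uses_ai_screening):
    """One row per state: (state, statute, applies-to-you)."""
    rows = []
    for state in states:
        rule = STATE_RULES.get(state)
        applies = rule and (
            ("pricing" in rule["trigger"] and uses_shared_pricing)
            or ("screening" in rule["trigger"] and uses_ai_screening))
        rows.append((state, rule["law"] if rule else "-", bool(applies)))
    # Federal statutes apply regardless of state footprint.
    return {"federal": FEDERAL_RULES, "state": rows}
```

A portfolio in California and Colorado that uses a shared pricing tool but screens tenants manually would see AB 325 flagged and SB 205 not (yet); the federal row never goes away.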

Agentic Leasing AI Guardrails

Autonomous leasing agents make dozens of micro-decisions per tenant interaction: which units to recommend, how quickly to respond, what concessions to offer, how aggressively to negotiate. Each decision is a potential fair housing or antitrust touchpoint. We build deterministic guardrail layers that override the neural model on protected-class decisions, with real-time fairness metrics and circuit breakers for human escalation.

Output: Policy enforcement layer, audit logging with per-interaction fairness scores, drift detection and circuit breaker configuration, steering detection module, pricing concession uniformity verification.
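The guardrail pattern is easier to see in code than in prose. This sketch is illustrative only: the attribute set, action whitelist, and field names are our own shorthand, and `model_fn` stands in for whatever neural agent the platform runs.

```python
PROTECTED_ATTRIBUTES = {"race", "color", "religion", "national_origin",
                        "sex", "familial_status", "disability"}

def guarded_decision(model_fn, applicant: dict, context: dict) -> dict:
    """Deterministic guardrail wrapping a (hypothetical) leasing agent.

    1. Strip protected attributes before the model ever sees them.
    2. Force the uniform concession policy regardless of model output.
    3. Escalate to a human when the model requests an out-of-policy action.
    """
    sanitized = {k: v for k, v in applicant.items()
                 if k not in PROTECTED_ATTRIBUTES}
    action = model_fn(sanitized, context)
    if action.get("concession_pct", 0) != context["standard_concession_pct"]:
        action["concession_pct"] = context["standard_concession_pct"]
        action["override"] = "concession normalized to uniform policy"
    if action.get("action") not in {"answer", "schedule_tour", "send_application"}:
        return {"action": "escalate_to_human", "reason": action.get("action")}
    return action
```

The key design choice: the guardrail is deterministic code that sits outside the model, so its behavior is auditable line by line even when the model's is not.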

How an Engagement Works

Every engagement starts with understanding your current exposure. Timelines vary by portfolio size and the number of jurisdictions involved.

01

Exposure Assessment (2-3 weeks)

We inventory every AI system that touches tenant screening or pricing across your portfolio. For each system, we map: what data it ingests, who else uses the same vendor, what jurisdictions it operates in, and what disclosures it currently provides. The output is a risk heat map that tells you exactly where your highest exposure sits.

02

Technical Audit (3-6 weeks)

For screening systems: we run disparate impact analysis, LDA search, FCRA adverse action review, and feature-level bias attribution. For pricing systems: we verify data isolation, test governor symmetry, audit auto-accept configurations, and trace data provenance for every recommendation in a sample period. This phase requires access to model artifacts, training data metadata, and system architecture documentation.

03

Architecture + Remediation (4-12 weeks)

Based on audit findings, we either remediate your existing systems or design new architectures. Screening remediation typically involves feature re-engineering, threshold recalibration, and LDA-guided model selection. Pricing remediation involves building data-isolated architectures, implementing provenance logging, and reconfiguring governor and auto-accept settings. For agentic systems, we build the guardrail layer as a separate service that sits between the agent and the decision point.

04

Ongoing Monitoring (continuous)

Fairness metrics drift. Regulations change. New state laws take effect. We provide continuous monitoring dashboards that track DIR, SPD, and Equalized Odds across your screening systems, and data isolation verification for pricing systems. When a new regulation takes effect (Colorado SB 205 on June 30, 2026, for example), we update your compliance matrix and flag required changes proactively.
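A circuit breaker of the kind described above can be sketched as a rolling-window DIR monitor. Window size, threshold, and class name are illustrative defaults, not prescriptions.

```python
from collections import deque

class FairnessCircuitBreaker:
    """Trips when the rolling disparate impact ratio drifts below 0.8."""

    def __init__(self, window: int = 500, threshold: float = 0.8):
        self.decisions = deque(maxlen=window)  # (group, approved) pairs
        self.threshold = threshold

    def record(self, group: str, approved: bool) -> bool:
        """Log one decision; returns True while automation may continue."""
        self.decisions.append((group, approved))
        rates = {}
        for g in {g for g, _ in self.decisions}:
            outcomes = [a for grp, a in self.decisions if grp == g]
            rates[g] = sum(outcomes) / len(outcomes)
        if len(rates) < 2:
            return True  # nothing to compare yet
        benchmark = max(rates.values())
        return benchmark == 0 or min(rates.values()) / benchmark >= self.threshold
```

When `record` returns False, the system stops auto-deciding and routes to human review; the window means a burst of skewed decisions trips the breaker even if lifetime aggregates still look fine.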

Housing AI Compliance Risk Assessment

Answer six questions about your current AI systems to see your exposure profile across both screening fairness and pricing antitrust. Results include specific regulatory citations and recommended next steps.

Questions Property Management Teams Actually Ask

How do we audit our tenant screening algorithm for Fair Housing Act compliance?

A proper screening audit goes beyond running a disparate impact ratio across one dimension. We start by mapping every feature your model uses to its predictive relationship with actual lease performance, not just creditworthiness. Credit history, eviction records, and criminal background are HUD's three high-risk categories, and each requires separate analysis. For credit scores specifically, the racial disparity is structural: median FICO scores are 727 (White), 667 (Hispanic), and 627 (Black). If your model weights credit history heavily without accounting for subsidized income like housing vouchers, you are almost certainly below the four-fifths threshold for voucher holders. We run the full battery: Statistical Parity Difference, Disparate Impact Ratio, Equalized Odds, and Counterfactual Fairness across every protected class. Then we run a Least Discriminatory Alternative search using integer programming to find model configurations that maintain your predictive accuracy while maximizing the DIR. The output is a Pareto frontier showing exactly where your current model sits and which alternatives exist. For FCRA compliance, we verify that your adverse action notices correctly attribute the specific features that drove each rejection, not generic reason codes that mask the algorithm's actual decision logic.
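Two of the metrics in that battery are worth defining precisely. The sketch below computes SPD and an Equalized Odds gap for a two-group case on synthetic triples (all records invented for illustration).

```python
def spd_and_equalized_odds(records):
    """records: (group, approved, performed) triples.

    SPD = approval-rate gap between the two groups.
    Equalized Odds gap = larger of the TPR and FPR gaps, where
    "performed" serves as the ground-truth outcome.
    """
    groups = sorted({g for g, _, _ in records})
    stats = {}
    for g in groups:
        rows = [(a, p) for grp, a, p in records if grp == g]
        approve_rate = sum(a for a, _ in rows) / len(rows)
        pos = [a for a, p in rows if p]       # applicants who performed
        neg = [a for a, p in rows if not p]   # applicants who did not
        tpr = sum(pos) / len(pos) if pos else 0.0
        fpr = sum(neg) / len(neg) if neg else 0.0
        stats[g] = (approve_rate, tpr, fpr)
    (a0, t0, f0), (a1, t1, f1) = (stats[g] for g in groups[:2])
    return a0 - a1, max(abs(t0 - t1), abs(f0 - f1))
```

SPD catches raw rate gaps; Equalized Odds catches a subtler failure, where two groups of equally qualified applicants are approved at different rates. A model can pass one and fail the other, which is why we compute both.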

What does the RealPage DOJ settlement actually require us to change in our pricing software?

The settlement establishes five technical requirements that now function as the industry baseline. First, data ingestion: you cannot use non-public competitively sensitive information (CSI) from rival properties. Second, model training: any non-public data must be at least 12 months old and not associated with active leases. Third, runtime isolation: real-time pricing recommendations cannot incorporate non-public competitor data like current occupancy or lease terms. Fourth, governor symmetry: your pricing floor and ceiling parameters must work identically. If a user can set recommendations to exceed ceilings by 5%, they must also be able to dip below floors by 5%. Fifth, auto-accept configuration: automated acceptance of pricing recommendations must be a manual opt-in by each user, not a default setting. The settlement runs for seven years. Critically, Yardi won its California state antitrust case specifically because Revenue IQ proved data isolation by design. The court found that Revenue IQ "does not, and by design cannot, use any client's confidential pricing information to recommend pricing for any other client." That architectural proof was dispositive. We help you build that same provable isolation into your pricing systems.
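Most of those requirements are checkable from configuration alone. The sketch below is our own shorthand, not the settlement's language: field names are invented, and the 365-day cutoff approximates the 12-month aging rule.

```python
from datetime import date, timedelta

def settlement_violations(config: dict, today: date) -> list[str]:
    """Flag config settings that conflict with the settlement's technical terms."""
    issues = []
    # Governor symmetry: ceiling and floor flexibility must match.
    if abs(config["ceiling_overshoot_pct"]) != abs(config["floor_undershoot_pct"]):
        issues.append("governor asymmetry: ceiling and floor flexibility differ")
    # Auto-accept must be a per-user opt-in, never a default.
    if config["auto_accept_default_on"]:
        issues.append("auto-accept must be per-user opt-in, not a default")
    # Non-public training data must be at least 12 months old.
    stale_cutoff = today - timedelta(days=365)
    for source in config["training_data_sources"]:
        if not source["public"] and source["as_of"] > stale_cutoff:
            issues.append(
                f"non-public training data newer than 12 months: {source['name']}")
    return issues
```

A check like this belongs in CI for the pricing system, so a configuration drift toward non-compliance fails a build instead of surfacing in discovery.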

Do California AB 325 and New York S.7882 apply to our property management company?

If you manage properties in California or New York and use any multi-tenant pricing tool, yes. California AB 325 (effective January 1, 2026) amends the Cartwright Act to prohibit using or distributing a "common" pricing algorithm that uses competitor data to influence pricing. A pricing algorithm is "common" if it has two or more users and incorporates competitor data. The law also makes it easier for plaintiffs to survive early dismissal by rejecting the federal pleading standard. New York S.7882 (effective December 15, 2025) is broader. It prohibits any software with a "coordinating function" that collects and analyzes data from multiple property owners for rent-setting. Unlike the federal standard, New York does not distinguish between public and nonpublic information. RealPage is currently challenging S.7882 on First Amendment grounds and has obtained a stay of enforcement pending its preliminary injunction motion. However, this stay only protects RealPage and its direct customers. If you use a different pricing vendor, or your own multi-tenant tool, the law applies to you now. Colorado's AI Act (SB 205, effective June 30, 2026) adds another layer: tenant screening is classified as a "consequential decision" requiring annual impact assessments, risk management programs, and specific adverse decision disclosures.

How do we prove data isolation if our pricing algorithm is challenged in court?

Yardi's California victory provides the template. The court granted summary judgment because Yardi demonstrated that Revenue IQ's architecture makes cross-client data contamination impossible by design. To build a comparable defense, you need three things. First, architectural separation: each client's data must reside in isolated environments where the pricing model for Client A physically cannot access Client B's non-public data. This is not just access controls; it is structural isolation at the database, compute, and model-training layers. Second, audit trails: every data input to every pricing recommendation must be logged with its provenance. When a plaintiff's attorney asks "where did this price recommendation come from?" you need to produce a complete lineage showing only your own historical data and publicly available market information. Third, independent verification: a third-party technical audit confirming that the architecture enforces isolation, not just that a policy says it should. We design pricing architectures with isolation as a first-class engineering constraint, not a policy overlay. The deliverable is both the system and the audit artifact that proves it works.

What fair housing risks do agentic AI leasing tools create?

Agentic AI in leasing multiplies every existing compliance risk. An autonomous agent that handles tenant inquiries, schedules tours, pre-screens applicants, and negotiates lease terms is making dozens of potentially discriminatory micro-decisions per interaction. Three specific risks stand out. First, steering: an agent that recommends different units or communities based on applicant characteristics violates the FHA even without explicit programming to do so. If the agent learned from historical interaction data where certain demographics were shown certain properties, it will reproduce that pattern. Second, differential treatment in communication: agents that vary response times, information depth, or follow-up frequency based on applicant profile create measurable disparate treatment. Third, pricing negotiation: an agent authorized to offer concessions or adjust lease terms has to apply those offers uniformly. If it negotiates more aggressively with certain demographic profiles because of patterns in training data, that is a fair housing violation. We build guardrail layers for agentic leasing systems: deterministic policy enforcement that overrides the neural model on protected-class decisions, audit logging of every agent action with fairness metrics computed in real time, and circuit breakers that escalate to human review when the agent's behavior drifts outside fairness bounds.

Can we use existing AI governance platforms like Credo AI or Holistic AI for housing compliance?

These platforms are strong for general-purpose AI governance but have significant gaps for housing-specific compliance. Credo AI offers policy management and regulatory mapping including NYC Local Law 144, but it does not map to HUD's tenant screening guidance, the SafeRent settlement injunctive requirements, or the DOJ's algorithmic pricing data isolation standards. Holistic AI provides multi-dimensional risk quantification across fairness, robustness, and explainability, but it is horizontal, not verticalized for the housing regulatory stack. FairNow focuses specifically on continuous fairness monitoring but is built for HR and financial services, not housing. None of these platforms address antitrust compliance for algorithmic pricing. None offer Least Discriminatory Alternative search. None map to the emerging state-level patchwork: California AB 325, New York S.7882, and Colorado SB 205 each have different definitions of prohibited conduct, different enforcement mechanisms, and different remedies. The gap is integration. Housing compliance requires simultaneously satisfying Fair Housing Act disparate impact standards, FCRA adverse action requirements, Sherman Act data isolation requirements, and state-specific prohibitions. We build compliance systems that address all of these as a unified architecture rather than separate audits against separate frameworks.

Technical Research

The interactive whitepapers behind this solution page. Each provides deep technical analysis of one dimension of housing AI compliance.

Algorithmic Integrity and the $2.2M SafeRent Precedent

Fair Housing Act liability for tenant screening algorithms, disparate impact analysis, Least Discriminatory Alternative methodology, and the SafeRent settlement's injunctive requirements.

The Sovereign Algorithm: Antitrust Liability in the Post-RealPage Era

DOJ-RealPage settlement analysis, data isolation architecture for antitrust defense, California AB 325 and New York S.7882 compliance, and differential privacy for market intelligence.

A Single Fair Housing Violation Costs $26,262. A Pricing Antitrust Class Action Starts at $2.8M.

The cost of an exposure assessment is a fraction of a single penalty.

We work with property management companies and PropTech vendors to audit screening and pricing algorithms, build compliant architectures, and map regulatory exposure across every relevant jurisdiction.

Compliance Audit

  • ✓ Screening fairness audit with LDA search
  • ✓ Pricing data isolation verification
  • ✓ Multi-state regulatory compliance mapping
  • ✓ FCRA adverse action notice review

Architecture + Engineering

  • ✓ Antitrust-safe pricing architecture design
  • ✓ Screening model remediation and LDA implementation
  • ✓ Agentic leasing AI guardrail systems
  • ✓ Continuous monitoring and compliance dashboards