AI Hiring Compliance · Multi-Jurisdictional Bias & Accessibility

Your AI hiring stack now answers to six live regimes, three class actions, and a regulator that just learned how to audit.

As of April 2026, the CHRO or General Counsel running AEDTs in New York, Colorado, Illinois, Texas, California, or the EU is inside a regulatory window most of their vendors were not built for. Illinois HB 3773 went live January 1. Texas TRAIGA went live January 1. California's FEHA ADS amendments went live last October. Colorado SB 24-205 takes force June 30. The EU AI Act treats recruitment as high-risk beginning August 2. The NY State Comptroller just published a December 2025 audit finding 17 LL144 violations where DCWP had found one, and DCWP agreed to shift to proactive enforcement. Mobley v. Workday is in discovery. Kistler v. Eightfold asks whether AI hiring platforms are FCRA consumer reporting agencies. This page exists because no single vendor on your shortlist can answer all of that for you honestly.

17 vs. 1

LL144 violations found by NY State auditors vs. DCWP in the same 32-company sample

NY State Comptroller, Dec 2, 2025

4.6%

Share of 391 NYC employers that had published a bias audit — the "Null Compliance" finding

Cornell / Data & Society / Consumer Reports, FAccT 2024

64M

Applicant records exposed when a McHire / Paradox admin account used password "123456"

Carroll & Curry disclosure, June 30, 2025

The next 120 days in AI hiring law

Four of these entries are already live, counting the enforcement shift in New York. Two go hot before the end of summer. Two are in active litigation. None of them wait for your annual compliance cycle.

LIVE · Oct 1, 2025

California FEHA Automated-Decision-System amendments

Employers must retain ADS inputs, outputs, bias-testing results, and selection criteria for at least four years. Liability attaches to discriminatory employment practices caused by an ADS, intentional or not. Applies to any employer hiring in California regardless of HQ location.

LIVE · Jan 1, 2026

Illinois HB 3773 (amends Illinois Human Rights Act)

Prohibits using AI that has a discriminatory effect on recruitment, hiring, promotion, discipline, or discharge. Explicitly bans zip codes as proxies for protected classes. Notice required whenever AI is used to "influence or facilitate" any employment decision. Enforced by the Illinois Department of Human Rights, which published draft notice rules in late 2025.

LIVE · Jan 1, 2026

Texas Responsible AI Governance Act (TRAIGA)

Bans intentional discrimination via AI. Texas explicitly rejected disparate impact as a standalone theory, diverging from the LL144 and Colorado frameworks. Enforced exclusively by the Texas Attorney General. Violators get notice and a 60-day cure period; penalties range from $12,000 for curable violations to $200,000 for uncurable ones.

LIVE ENFORCEMENT SHIFT · Dec 2, 2025

NY State Comptroller audit of LL144

State auditors found 17 potential LL144 violations in the same 32-company sample where DCWP found one. 75% of AEDT complaint calls made to NYC's 311 line were misrouted. DCWP admitted it lacked technical expertise to evaluate AEDTs and agreed to adopt proactive enforcement. Penalty structure unchanged: up to $1,500 per day per violation. "Null Compliance" — self-classifying your tool out of scope — is no longer a defensible posture in New York City.

ACTIVE LITIGATION

Mobley v. Workday, Inc. (N.D. Cal.)

Judge Rita F. Lin denied Workday's motion to dismiss, holding that an AI hiring vendor can be directly liable as an "agent" of employers when its tool participates in decision-making by recommending or filtering candidates. Preliminary collective certification granted May 16, 2025; the opt-in window for over-40 applicants closed March 7, 2026. The court subsequently ordered Workday to produce an exhaustive list of employers who enabled HiredScore Spotlight and Fetch, rejecting Workday's attempt to exclude the post-acquisition products from the collective.

NEW FRONT · Jan 20, 2026

Kistler v. Eightfold AI (Contra Costa Superior)

First test of whether AI hiring platforms are FCRA "consumer reporting agencies." Plaintiffs allege Eightfold scraped data from LinkedIn, GitHub, Stack Overflow, and public databases, built candidate dossiers from "more than 1.5 billion global data points," and produced a 0–5 "likelihood of success" score without candidate certification, notification, disclosure, authorization, or dispute process. If the court holds Eightfold is a CRA, every similar platform owes every scored candidate an adverse-action notice and a dispute workflow. Statutory damages under the FCRA are $100–$1,000 per consumer per violation.

~12 WEEKS · Jun 30, 2026

Colorado AI Act (SB 24-205), as delayed by SB 25B-004

Governor Polis signed the postponement bill on August 28, 2025, moving the effective date from February 1 to June 30, 2026. Deployers must adopt a risk-management program, run initial and annual impact assessments, issue pre-decision and adverse-decision consumer notices, and publish website disclosures. The Colorado Attorney General holds exclusive enforcement authority. The rebuttable presumption defense requires documented reasonable care.

~16 WEEKS · Aug 2, 2026

EU AI Act — Annex III high-risk obligations

Recruitment, screening, job-ad targeting, application filtering, and candidate evaluation all fall under Annex III high-risk. By August 2, 2026, providers must complete conformity assessments, draw up technical documentation (Art. 11 / Annex IV), implement a data-governance program (Art. 10), ensure human oversight (Art. 14), register in the EU database, and affix CE marking. Penalties reach €35M or 7% of global annual turnover for prohibited-practice violations, €15M or 3% for high-risk obligations. The late-2025 Digital Omnibus proposal may push Annex III to December 2027, but that extension has not been enacted and prudent compliance treats August 2 as binding.

Your HR tech vendors are defendants, witnesses, or vague

The table below is not a buyer's guide. It is a map of where each platform sits relative to the current legal regime, and what that means for a CHRO who needs to renew or replace a contract in 2026. We are vendor-neutral and have no commercial relationship with any company listed here.

Vendor / Product | Current regulatory posture | The honest gap the CHRO owns
Workday + HiredScore (Spotlight, Fetch) | Published a Secretariat third-party analysis; ships LL144 audit configs; actively defending in Mobley | Judge Lin rejected Workday's attempt to exclude HiredScore from the class. If a customer deployed Spotlight during the class period, that customer's name is in the court-ordered list.
Eightfold AI (Match) | Publishes bias-audit documentation; enterprise customers include Microsoft and Morgan Stanley | Named defendant in Kistler. If the FCRA theory survives, every customer that used Match scoring may owe candidates adverse-action notices retroactively.
HireVue | Dropped facial-coding analysis in January 2021; pivoted to structured text-and-video assessments | Named in the March 2025 ACLU complaint (D.K. v. Intuit/HireVue) on ADA, Title VII, and Colorado Anti-Discrimination Act grounds. HireVue's CEO denies AI assessment was used; the ASR pipeline itself is still subject to the disparate WER problem documented in accessibility research.
Paradox (Olivia) | No specific compliance differentiation; reactive patching | Exposed 64M records in June 2025 because a 2019 admin test account used 123456 as its password. Root cause was configuration, not ML. Your DPO and CISO need to be in every renewal conversation.
Pymetrics / Harver | Game-based assessments with a public bias-audit history | Still subject to the ADA theory in the ACLU's Aon/Cangrade FTC complaint: personality-trait instruments that mirror clinical diagnostic criteria function as disability screens.
iCIMS, Greenhouse, Lever, SmartRecruiters, Ashby | ATS layer — some export LL144 bias reports; generally not named defendants | The ATS is a data warehouse, not an AEDT by itself. The compliance question sits with whatever scoring or ranking plugin your recruiters turned on, which the ATS vendor does not audit for you.
FairNow, Holistic AI, Credo AI, Warden AI, Fairly AI | Governance platforms and audit tooling; some LL144-specialized | A platform tells you what your metrics look like. It does not replace vendor due diligence, it does not audit accessibility separately from adverse impact, and it does not reconcile conflicting regimes. Most useful as a dashboard after the strategy is set.
DCI Consulting, ORCAA, Secretariat | Legally recognized independent LL144 auditors; roughly $50K–$200K per system per year | Gold standard for the annual snapshot LL144 requires. Not continuous, not cross-jurisdictional, and not designed to rewrite your AEDT architecture.
Deloitte, KPMG, EY, PwC AI practices | Advisory arms with employment-law relationships and audit credibility | Engagements typically start at $500K and run $2M+. Strong on governance deliverables, weak on shipping working code. Good answer for a Fortune 50 with an unlimited budget, wrong answer for a mid-market CHRO with a quarter to get compliant.

Why not hire a Big 4 firm and move on?

The honest answer is that Deloitte, KPMG, EY, and PwC are the right call if you need a board-ready governance report and don't care what the number on the invoice is. Their methodology is sound, their brand protects you politically, and their regulatory relationships are real. They are the wrong call when the underlying problem is a neural network producing 0–5 scores on 2M candidates a quarter, your deadline is twelve weeks out, and you need someone who will sit with your ML team and rewrite feature-engineering pipelines. Big firms subcontract that work; small specialist firms do it directly. We charge a fraction of what a Big 4 audit engagement costs because we are not funding an office tower, and we deliver working code instead of a 200-slide deck. If you need the deck, hire the Big 4. If you need the code, keep reading.

The seven places the regimes actively conflict

A bias audit that satisfies NYC LL144 does not satisfy Colorado. An audit that satisfies Colorado does not satisfy the EU AI Act. Some requirements are technically incompatible. This is not a design flaw the market will correct — it is the legal environment you are buying into.

1. Intersectional impact ratios are mandatory in NYC, absent in Colorado

LL144 requires intersectional impact ratios computed across race × sex for every selection stage. Colorado's "reasonable care" standard in SB 24-205 does not specify methodology, and the Illinois Human Rights Act focuses on discriminatory effect without prescribing a statistical test. Running a single universal audit produces outputs that are too coarse for LL144 and too detailed to be recognized as the Colorado-specific impact assessment. Each jurisdiction gets a differently-shaped deliverable or each fails on its own terms.
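
Concretely, the LL144-shaped piece of that work reduces to a computation like the minimal sketch below, assuming a pandas DataFrame of stage-level outcomes with illustrative column names (selected, race, sex); the Colorado and Illinois deliverables start from the same table but are aggregated and formatted differently.

```python
import pandas as pd

def intersectional_impact_ratios(df: pd.DataFrame,
                                 selected_col: str = "selected",
                                 race_col: str = "race",
                                 sex_col: str = "sex") -> pd.DataFrame:
    # Selection rate and applicant count for each race x sex intersection at one stage.
    rates = (df.groupby([race_col, sex_col])[selected_col]
               .agg(selection_rate="mean", applicants="count")
               .reset_index())
    # LL144-style impact ratio: each group's rate divided by the highest group's rate.
    rates["impact_ratio"] = rates["selection_rate"] / rates["selection_rate"].max()
    # The four-fifths rule is a screening heuristic here, not a statutory pass/fail line.
    rates["below_four_fifths"] = rates["impact_ratio"] < 0.8
    return rates.sort_values("impact_ratio")
```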

2. Illinois's zip-code ban collides with the EU's representativeness mandate

Illinois HB 3773 explicitly prohibits using zip codes as proxies for protected classes. EU AI Act Article 10(3) requires that training data be "relevant, representative and to the best extent possible, free of errors and complete" — which typically means including geographic features to avoid regional coverage gaps. Remove zip codes and you fail the EU representativeness audit; keep them and you violate Illinois. The practical answer is an explicit residence feature with a coarser geographic granularity for EU training data and a full geographic mask for Illinois inference — which requires two deployment configurations of the same model, not one.
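
A hedged sketch of what those two configurations might look like; the feature names (nuts2_region, zip3, commute_distance) and the split between training-time and inference-time masking are illustrative assumptions, not a prescription.

```python
# Illustrative only: one model, two jurisdiction-specific feature configurations.
DEPLOYMENT_CONFIGS = {
    "eu": {
        # Coarse geography retained so training data stays representative (Art. 10(3)).
        "geo_feature": "nuts2_region",                 # region level, not postal code
        "masked_features": ["postal_code"],
        "mask_applies_at": "training_and_inference",
    },
    "illinois": {
        # HB 3773: zip codes and close proxies masked entirely at inference time.
        "geo_feature": None,
        "masked_features": ["postal_code", "zip3", "commute_distance"],
        "mask_applies_at": "inference",
    },
}

def features_for(jurisdiction: str, candidate: dict) -> dict:
    """Drop the masked features before the candidate record reaches the model."""
    cfg = DEPLOYMENT_CONFIGS[jurisdiction]
    return {k: v for k, v in candidate.items() if k not in cfg["masked_features"]}
```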

3. Self-classification collapses under the Mobley "agent" theory

Under LL144 the employer self-determines whether a tool "substantially assists" a hiring decision. Under Mobley the vendor is directly liable when its tool recommends or filters candidates, independent of what the employer claims. A self-classification memo from 2024 that says "our Workday Spotlight deployment is not an AEDT" is now plaintiffs' exhibit A. The only defensible posture is to treat any scoring, ranking, filtering, or routing tool as in scope and document accordingly.

4. Texas disparate-impact carve-out changes the audit logic

Texas TRAIGA is the first state law to explicitly reject disparate impact as a standalone basis for AI hiring liability. This does not mean Texas employers are safe — federal Title VII and Texas Commission on Human Rights Act claims still apply — but it means the TRAIGA compliance deliverable is intent-focused rather than statistics-focused. A federal Title VII adverse-impact analysis, a LL144 four-fifths report, and a TRAIGA intent assessment are three separate engagements with three separate evidentiary standards.

5. FCRA liability is a different theory than bias liability

Kistler v. Eightfold is not an adverse-impact case. The FCRA framework asks whether the platform functions as a consumer reporting agency, regardless of whether its scores are fair. A perfectly unbiased platform can still owe every scored candidate a pre-adverse-action notice, a copy of the "report," and a dispute pathway. Bias audits do not produce any of those artifacts. A buyer optimizing only for LL144 and Colorado compliance can still be a class-action defendant because the FCRA question was never asked.

6. Accessibility is not covered by any of the bias-audit frameworks

LL144 does not require disability-impact testing. Colorado requires it under the general "reasonable care" standard but does not specify methodology. The ACLU complaint on behalf of an Indigenous Deaf employee (D.K. v. Intuit / HireVue) argues that ASR-driven video interviews disparately disadvantage people whose speech patterns were underrepresented in training data. The underlying technical problem is measurable: published research shows Whisper's multilingual average WER is roughly three times its English WER, and the 2025 Interspeech Speech Accessibility Project Challenge winning team achieved an 8.11% WER on impaired speech — still multiples of standard benchmarks. Bias audits that only track race and sex will not surface this, and a company that passes LL144 can still face an ADA complaint.

7. Supply-chain security is a compliance problem, not a CISO problem

The Paradox / McHire breach exposed 64M candidate records because an admin test account from 2019 was still active with 123456 as both username and password, no MFA, and an IDOR in an internal API. None of this had anything to do with ML. All of it now sits on the CHRO because candidate data exposed through a hiring tool is still hiring data, subject to GDPR breach notification, CCPA private right of action, and loss of lawful basis for continued processing. Vendor due diligence on AI hiring tools has to cover credential hygiene, API authorization boundaries, and default-password audits — which is not what a standard SOC 2 letter tells you.

What we actually build

These are consulting engagements, not products. Every one is bespoke. What makes us useful is that we write code and we sit through your vendor meetings. We are vendor-neutral: we do not resell HR tech, and we will tell you when your Workday or Eightfold deployment is defensible as-is.

01 · Inventory & classification

AEDT discovery and jurisdictional mapping

We enumerate every tool that touches a hiring decision — ATS plugins, scoring engines, scheduling bots, video interviewers, reference-checking APIs, background-check integrations, LinkedIn Recruiter's native AI, anything that surfaces or hides candidates. For each tool, we classify it against the LL144 "substantially assists" definition, the Colorado deployer/developer distinction, the Illinois "influence or facilitate" test, the Texas intent-only test, and the EU Annex III high-risk classification. The output is an AEDT register that stands up in a regulatory investigation and a vendor list that tells legal which contracts need amendment.

Deliverable: AEDT register, jurisdictional exposure matrix, vendor contract amendment list, remediation priority queue.
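
For a sense of the register's shape, here is a deliberately simplified sketch; the field names and the scope tests are placeholders for illustration, not the statutory definitions.

```python
from dataclasses import dataclass, field

@dataclass
class AEDTRegisterEntry:
    tool: str                          # e.g. a scoring plugin or scheduling bot
    vendor: str
    function: str                      # "scoring", "ranking", "filtering", "routing", "scheduling"
    hiring_stages: list[str]           # sourcing, screening, interview, offer
    candidate_jurisdictions: list[str]
    scope: dict[str, bool] = field(default_factory=dict)

def classify(entry: AEDTRegisterEntry) -> AEDTRegisterEntry:
    # Crude proxy for "participates in the decision"; the real tests are legal, not boolean.
    decides = entry.function in {"scoring", "ranking", "filtering", "routing"}
    j = set(entry.candidate_jurisdictions)
    entry.scope = {
        "nyc_ll144":       decides and "NYC" in j,
        "colorado_ai_act": decides and "CO" in j,
        "illinois_hb3773": decides and "IL" in j,
        "texas_traiga":    decides and "TX" in j,
        "eu_annex_iii":    decides and "EU" in j,
    }
    return entry
```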

02 · Bias audit that actually reconciles

Multi-regime adverse impact testing

We run the full four-fifths analysis at the intersectional level LL144 requires, compute the Colorado "reasonable care" impact assessment in the format the CO AG's draft rules are gravitating toward, generate the Illinois notice artifacts, and produce the EU AI Act Article 10 data-governance documentation. Where regimes conflict we write a memo that says exactly where and why, and we make the legal-strategy recommendation (which regime you optimize for, which you accept exposure on, what the magnitude of the exposure actually is). We are not an independent auditor under LL144 — that role belongs to DCI, ORCAA, or Secretariat — but we get your system into the state where the independent audit finds nothing worth writing up.

Deliverable: LL144 intersectional report, Colorado impact assessment, Illinois notice template, EU Art. 10/11 documentation pack, conflict memo.

03 · Accessibility & ADA pipeline review

ASR, captioning, and personality-instrument disability testing

We treat this as a separate discipline because no bias audit framework covers it. We benchmark your video-interview ASR pipeline against the Speech Accessibility Project corpus and published WER disparities for Deaf, HOH, and accented speakers. We evaluate personality instruments against the ACLU's Aon theory: if the questions mirror clinical diagnostic criteria, they function as a disability screen under the ADA. We design the human-in-the-loop escalation path for candidates who disclose an accommodation need, including a CART provider SLA that does not leave candidates waiting 72 hours. This is where an ACLU complaint or a DOJ ADA investigation starts, so we document it to the evidentiary standard those proceedings demand.

Deliverable: ASR WER disparity report, personality-instrument ADA review, accommodation workflow spec, CART provider SLA template.
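
A rough sketch of how that benchmark is structured, assuming the jiwer package for WER and a transcribe callable standing in for the vendor's ASR endpoint; the cohort labels and corpus handling are placeholders.

```python
import jiwer

def wer_by_cohort(samples, transcribe):
    """samples: iterable of (cohort, reference_transcript, audio_path) tuples."""
    refs, hyps = {}, {}
    for cohort, reference, audio_path in samples:
        refs.setdefault(cohort, []).append(reference)
        hyps.setdefault(cohort, []).append(transcribe(audio_path))
    wers = {cohort: jiwer.wer(refs[cohort], hyps[cohort]) for cohort in refs}
    baseline = wers.get("standard_english")
    # The disparity ratio across cohorts, not the absolute WER, is what an ADA analysis turns on.
    return {cohort: {"wer": w, "ratio_vs_baseline": w / baseline if baseline else None}
            for cohort, w in wers.items()}
```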

04 · FCRA readiness

Kistler exposure assessment and adverse-action infrastructure

We assess whether your AI hiring vendors meet the factual pattern the Kistler plaintiffs are pursuing: scraping third-party data, building candidate profiles, producing numerical scores, and using those scores to filter. Where the pattern applies, we build the FCRA adverse-action notice pipeline, the candidate-facing dispute workflow, the data-provenance log that shows what information was used in a score, and the dispute resolution timeline your GC can defend in court. If the court in Kistler holds Eightfold is a CRA, you are ready on day one. If it doesn't, you have a candidate experience your DEI team will thank you for anyway.

Deliverable: FCRA applicability memo, adverse-action notice templates, candidate dispute portal spec, data-provenance logging architecture.
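
A hedged sketch of the provenance log and pre-adverse-action gate, with assumed field names and a hypothetical notifier interface; the real build integrates with your ATS and counsel-approved notice templates.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ScoreProvenance:
    candidate_id: str
    score: float                   # e.g. a 0-5 "likelihood of success" style output
    model_version: str
    data_sources: list[str]        # what fed the score: ATS profile, resume parse, etc.
    features_used: list[str]
    created_at: str = ""

def log_score(prov: ScoreProvenance, sink) -> None:
    # Append-only JSON lines: the record that answers "what was in the report."
    prov.created_at = datetime.now(timezone.utc).isoformat()
    sink.write(json.dumps(asdict(prov)) + "\n")

def gate_adverse_action(prov: ScoreProvenance, threshold: float, notifier) -> bool:
    # If the score would screen the candidate out, send the pre-adverse-action
    # notice and pause the decision until the dispute window closes.
    if prov.score < threshold:
        notifier.send_pre_adverse_action(prov.candidate_id, report=asdict(prov))
        return False
    return True
```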

05 · AEDT security review

Post-Paradox vendor due diligence

After McHire, a SOC 2 letter is not enough. We perform an AI-aware security review on every HR tech vendor in your stack: default credentials, MFA enforcement on admin accounts, API authorization boundaries (specifically IDOR class bugs like the one that exposed 64M records), candidate chat transcript retention, prompt-injection hardening on conversational interfaces, and data residency for GDPR lawful-basis analysis. The deliverable is a vendor risk memo the CISO can co-sign and a contractual rider your procurement team can attach to every renewal.

Deliverable: vendor risk memo, IDOR / auth-boundary test results, contractual security rider, DPIA update for GDPR.
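
One of the simplest checks in that review looks roughly like the sketch below, assuming the requests package and a placeholder endpoint path; run it only against systems you are contractually authorized to test.

```python
import requests

def object_authorization_holds(base_url: str, tenant_a_token: str,
                               tenant_b_record_id: str) -> bool:
    """True if the API refuses tenant A's token access to tenant B's candidate record."""
    resp = requests.get(
        f"{base_url}/api/candidates/{tenant_b_record_id}",   # placeholder path
        headers={"Authorization": f"Bearer {tenant_a_token}"},
        timeout=10,
    )
    # A 403 or 404 is the expected boundary; a 200 here is the IDOR class of bug
    # behind the 64M-record exposure.
    return resp.status_code in (403, 404)
```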

06 · Architectural remediation

Glass-box rewrites where the vendor can't or won't

When the audit reveals a tool that cannot be made compliant without changing the model, we have two options. Option one: replace it with a glass-box architecture we build — a knowledge-graph skill matcher or a constraint-enforced symbolic rule engine where every decision traces to an auditable node rather than a floating-point weight. Option two: layer a compliance overlay on top of the existing vendor, intercepting scores, applying jurisdiction-specific adjustments, and generating an independent audit trail that survives discovery. We do not start from scratch unless the alternative genuinely is worse; most of the time a narrow overlay solves the problem.

Deliverable: remediation option memo, prototype or overlay code, integration spec for the existing ATS.
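
A minimal sketch of the option-two overlay, with a hypothetical vendor client, policy object, and audit sink; the point is the interception-plus-independent-audit-trail shape, not any specific integration.

```python
import hashlib
import json
from datetime import datetime, timezone

class ComplianceOverlay:
    def __init__(self, vendor_client, policy, audit_sink):
        self.vendor = vendor_client   # existing scoring vendor's API client (hypothetical)
        self.policy = policy          # jurisdiction-specific adjustment rules (hypothetical)
        self.audit = audit_sink       # append-only store outside the vendor's control

    def score(self, candidate: dict, jurisdiction: str) -> float:
        raw = self.vendor.score(candidate)
        adjusted = self.policy.apply(raw, candidate, jurisdiction)
        self.audit.write(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "jurisdiction": jurisdiction,
            "raw_score": raw,
            "adjusted_score": adjusted,
            "candidate_ref": hashlib.sha256(str(candidate["id"]).encode()).hexdigest(),
            "policy_version": self.policy.version,
        }) + "\n")
        return adjusted               # only the adjusted score reaches the recruiter screen
```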

How an engagement runs

We do not sell retainers. Each phase below is its own statement of work with its own deliverable. Some clients stop after Phase 1 and feel good about it; others continue through architectural remediation. No phase depends on committing to the next one.

1

Discovery & exposure review (3–4 weeks)

We shadow one full hiring cycle, enumerate every AEDT, map your operating jurisdictions, and produce an exposure memo that says which regimes apply and where you are currently out of compliance. Working sessions with HR, Legal, Procurement, and IT Security. Fixed fee. End state: your GC can walk into a board meeting with a numbered list.

2

Multi-regime audit & documentation (6–10 weeks)

The actual statistical work. Intersectional adverse-impact tables, Colorado impact assessment, Illinois notice artifacts, EU Art. 10/11 documentation, FCRA applicability memo, ASR WER benchmarks. We produce the pre-audit package that the independent LL144 auditor of your choice (DCI, ORCAA, Secretariat) will accept without rewriting. We identify irreconcilable conflicts and write the legal-strategy memo that tells leadership which exposures they are knowingly accepting.

3

Vendor due diligence & contract amendment (4–6 weeks, parallel)

For each named vendor, we do an AI-aware security review, review DPAs against the seven collision zones, and draft the contract riders your procurement team will attach at renewal. For Workday / HiredScore customers, we specifically review the Mobley exposure and document where your deployment diverges from the fact pattern. For Eightfold customers, we flag the Kistler risk and recommend interim mitigations.

4

Architectural remediation (scope-dependent)

Only if needed. An overlay that intercepts scores before they hit a recruiter screen, a candidate-facing dispute portal integrated with your ATS, a knowledge-graph skill matcher for the one role family where the existing vendor cannot pass. We build, we hand over the code, we document it for your engineering team. Typical engagement: one to three quarters.

5

Continuous-monitoring handover (optional)

We help you stand up ongoing monitoring on top of a governance platform you already own or have chosen (FairNow, Holistic AI, Credo AI are all fine for this; we are not reselling any of them). We train your internal team on how to interpret the dashboards in the language each regulator uses. Then we go away.

On timelines and honesty: A company that has not touched this since 2024 cannot be fully compliant across six regimes by August 2, 2026. We will tell you exactly which exposures remain, write the memo, and help you make an informed legal-strategy decision about what you are accepting. The alternative — claiming full coverage by a date that was never achievable — creates the exact "bad-faith" record that regulators and plaintiffs look for.

AEDT jurisdictional exposure tool

Enter where you hire and what you use. The tool computes which regimes apply, which of your vendors have live litigation or regulatory exposure, and what the priority order of remediation is. Nothing is sent anywhere — this runs entirely in your browser. Use the output in your next compliance meeting whether you hire us or not.

Questions we get from CHROs and General Counsel

Phrased the way clients actually phrase them on the first call, not cleaned up for the website.

Do I really need a separate bias audit for each jurisdiction?

Not separate audits, but separate deliverables derived from one well-designed audit. The underlying statistical work — adverse impact ratios, intersectional analysis, feature attributions — is common. The formatting and what gets computed are not. LL144 demands intersectional race-by-sex impact ratios and a publicly posted summary in a specific format. Colorado wants a documented impact assessment as part of a risk management program, with no explicit four-fifths requirement. Illinois needs a notice artifact and proxy-variable documentation. The EU wants Art. 10 data-governance records and Art. 11 technical documentation, which is much deeper than any US framework. A single auditor running a "universal" LL144 audit and calling it good for Colorado is giving you something a regulator will reject. One audit engagement produces four or five differently-shaped deliverables if it is designed correctly. We design engagements that way; most general-purpose audit vendors do not.

We use HiredScore through Workday. Are we in Mobley?

You may or may not be. The collective that received preliminary certification in May 2025 covers job applicants 40 and over who were screened out by a Workday customer using Workday's tools. Judge Lin subsequently rejected Workday's attempt to exclude HiredScore Spotlight and Fetch from the collective even though those products were acquired post-complaint, and ordered Workday to produce an exhaustive list of employers who enabled HiredScore features. The opt-in window for over-40 applicants closed March 7, 2026, which means discovery now proceeds on identified class members. Your immediate action items are: pull your HiredScore deployment history with dates, identify the roles and geographies where Spotlight or Fetch filtered candidates, preserve all applicant-level data from that period, and document the human-review checkpoint if there was one. A "the vendor handled it" defense is exactly what Judge Lin's "agent" ruling was designed to defeat. The sooner your GC has a factual memo, the better your posture when plaintiffs' counsel reaches out.

What does a LL144 bias audit actually cost in 2026, and who can legally sign one?

Independent auditors recognized under LL144 — primarily DCI Consulting, ORCAA, Secretariat, and a handful of smaller firms — typically charge $50,000 to $200,000 per AEDT per year depending on data volume, demographic availability, and complexity of the selection workflow. Turnaround from data-ready to signed audit is 15 to 20 business days once the auditor has everything they need. The "getting to data-ready" phase is where most time disappears, and it is what we do: data pipeline, demographic imputation method choice, intersectional grouping, selection-rate computation, artifact formatting. An auditor who receives a clean pre-audit package from us can sign off in the posted 15-20 days; an auditor who has to clean your data themselves will bill against the high end of the range and take longer. We are not an independent auditor — we cannot sign the final LL144 deliverable, by design — but we are the reason the audit your independent auditor signs is clean.

If we comply with the EU AI Act, are we covered for NYC and Colorado automatically?

No. The EU AI Act is the most demanding regime on documentation, data governance, and human oversight, but it does not require the specific intersectional adverse-impact ratios that LL144 demands, and it does not produce the Colorado impact assessment format that the CO AG's draft rules contemplate. EU compliance is necessary but not sufficient for US state compliance. Going the other direction, LL144 compliance does not get you within reach of the EU bar — LL144 is essentially a narrow statistical audit and a posted summary, whereas the EU AI Act wants a full quality management system (Art. 17), a conformity assessment, a technical documentation dossier, post-market monitoring, and CE marking. If your footprint is both US and EU, plan two workstreams. The conflict zones where one regime's requirement violates another's (zip code being the clearest example) are real and need a legal-strategy decision, not an engineering one.

How do we defend against an ACLU-style accessibility complaint when LL144 doesn't cover disability?

You defend against it with a separate ADA pipeline review, which most bias audit vendors do not offer because it is a different legal theory and a different evidentiary base. For video interview tools, that means benchmarking your ASR component against the Speech Accessibility Project corpus and similar Deaf, HOH, and accented-English test sets. The published disparities are large: the 2025 SAP Challenge's winning team achieved an 8.11% WER on impaired speech, which is still several multiples of the benchmark WER for standard English, and Whisper's multilingual performance is roughly 3x worse than its English performance on average. For personality and game-based assessments, the question is whether the instrument mirrors clinical diagnostic criteria — the theory in the ACLU's Aon FTC complaint. For the candidate experience, you need an accommodation workflow that does not leave a Deaf applicant waiting 72 hours for human CART support, because "the process was available" is not a defense when the process requires the applicant to know to ask. The D.K. v. Intuit/HireVue complaint is the template plaintiffs' counsel is using now; read it and align your controls to each specific allegation, because that is exactly what discovery will walk through.

What is the actual cost of getting this wrong?

Depends on which theory catches you. LL144 is $1,500/day per violation, which runs to $547,500 per year per unaudited tool before any private right of action layers on. Texas TRAIGA tops out at $200,000 per uncurable violation. The EU AI Act caps at the greater of €15M or 3% of global annual turnover for high-risk obligations. The EEOC settled its first AI hiring case (iTutorGroup, 2023) for $365,000, which is small until you realize it is now the floor for individual discrimination claims. Class actions scale differently: Mobley's collective potentially covers a meaningful fraction of the over-40 applicants who passed through Workday's ecosystem, a pool the court noted may exceed a billion applications. Kistler's FCRA theory, if it holds, attaches $100–$1,000 statutory damages per consumer per violation to every candidate scored by a platform found to be a consumer reporting agency. A company scoring 2 million candidates a year and found to owe even the low end is looking at $200M per year of exposure. The cost of a multi-regime audit and remediation program is measured in the low six figures for most of our clients, and in the low seven figures only for the biggest. The math on insurance-versus-exposure is not close.
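
The arithmetic behind those figures is short enough to check yourself; the inputs below are the statutory numbers quoted in this answer and the illustrative 2-million-candidate volume.

```python
# LL144: $1,500 per day per violation, accruing across a full year.
ll144_annual = 1_500 * 365
print(ll144_annual)        # -> 547500, per year per unaudited tool

# FCRA statutory damages at the low end of the $100-$1,000 per-consumer range.
fcra_low_end = 2_000_000 * 100
print(fcra_low_end)        # -> 200000000, per year of scoring 2M candidates
```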

Can't we just run a FairNow or Holistic AI trial and get a dashboard?

Yes, and for ongoing monitoring a governance platform is genuinely useful. We have clients who run FairNow for continuous LL144 tracking and Holistic AI for cross-risk visibility, and we recommend both in the right context. What a platform does not do is sit through your vendor security review with your CISO, write the legal-strategy memo when Illinois and EU requirements collide, run a SAP-corpus ASR benchmark on your HireVue deployment, or rewrite a scoring pipeline to intercept outputs before they hit the recruiter screen. A platform is a dashboard; the engagement we run is what fills the dashboard with meaningful data in the first place. Run the platform and hire the specialist for the engagement. They are not substitutes.

Our vendor says the tool "doesn't substantially assist" the hiring decision. Is that good enough?

It was good enough in 2024. It is not good enough now. The Cornell / Data & Society / Consumer Reports research coined the term "Null Compliance" for exactly this pattern: 391 NYC employers studied, only 4.6% published a bias audit, and the rest relied on self-classification arguments that put the tool outside LL144's scope. The December 2025 NY State Comptroller audit then demonstrated that self-classification does not survive a rigorous third-party review — state auditors found 17 potential violations in the same 32-company sample where DCWP had found one. DCWP agreed to adopt proactive enforcement, which means the regulator now runs its own analysis rather than waiting for employers to disclose. On the vendor liability side, Mobley's "agent" theory explicitly rejects the idea that a screening tool that recommends or filters candidates is not participating in the decision. The practical posture for any CHRO is: assume every scoring, ranking, or filtering tool is in scope, treat the "vendor said we're not an AEDT" memo as a future discovery exhibit, and audit accordingly. A self-classification defense that requires the regulator and the plaintiff to agree with the vendor's characterization is not a defense, it is a hope.

Technical research behind this page

This solution page draws on eight of our interactive whitepapers. Each covers a separate slice of the AI hiring compliance problem. The serious buyer should skim all of them before any engagement conversation.

Twelve weeks to Colorado. Sixteen weeks to the EU.

An unaudited AEDT in NYC runs up to $547,500 per year in LL144 penalties alone. Mobley and Kistler add class-action exposure that scales with your application volume. The cost of a proper multi-regime audit engagement is a small fraction of one class-action defense.

We start with a fixed-fee discovery and exposure review. You walk away with a defensible AEDT register and a numbered memo — whether you continue the engagement or not.

AEDT Exposure Review

  • Full AEDT inventory across your hiring stack
  • Jurisdictional exposure matrix (NYC, CO, IL, TX, CA, EU)
  • Vendor-specific risk memos (Workday, Eightfold, HireVue, Paradox, others)
  • Prioritized 90-day remediation plan

Multi-Regime Audit & Build

  • LL144 intersectional adverse-impact pre-audit package
  • Colorado, Illinois, California, EU documentation
  • ADA / accessibility pipeline review
  • Glass-box overlay or remediation code where required