For General Counsel & Legal · 4 min read

AI Hiring Bias: Your Vendor Could Be Your Liability

A federal court just ruled AI hiring vendors are legally liable for discrimination — and your company is on the hook too.

The Problem

One applicant applied to more than 100 jobs through Workday's platform. He was rejected from nearly all of them — often within minutes, outside of business hours. No human ever reviewed his resume. Derek Mobley, an African American man over 40 with disabilities, filed a lawsuit that has now reshaped employment law in the United States. In May 2025, a federal court certified a nationwide collective action covering potentially millions of applicants over age 40 who were denied employment recommendations through Workday's AI screening tools.

The numbers are staggering. During the relevant period, Workday's own filings showed its software processed approximately 1.1 billion application rejections. That is not a typo: more than a billion automated decisions, many of them made without meaningful human oversight.

Here is what changed for your business: In July 2024, Judge Rita Lin of the Northern District of California ruled that AI vendors like Workday are not neutral software providers. They are "agents" of the employers who use them. That means they face direct liability under Title VII, the ADA, and the ADEA — the same federal anti-discrimination laws that apply to you. If your AI hiring tool discriminates, both you and your vendor can be held responsible. The court drew a clear line: an AI system that actively scores, ranks, and recommends candidates is performing a core hiring function, not just sorting a spreadsheet.

Why This Matters to Your Business

This ruling changes your risk profile in three concrete ways.

Financial exposure is massive and growing. The Workday collective action covers every applicant over 40 denied a recommendation since September 2020. The court expanded its scope in July 2025 to include applicants processed by HiredScore AI, a technology Workday acquired. Your potential liability scales with every application your AI tools reject.

Regulatory penalties are already here. New York City's Local Law 144 requires annual independent bias audits of automated employment decision tools. Penalties start at $500 for the first offense and climb to $1,500 per violation per day for repeat offenses. Multiple cities and states are following New York's lead.

Reputational damage is the hidden cost. The court can order companies to notify millions of affected applicants. Imagine your company's name on a court-ordered list sent to every person your AI rejected. The EEOC's May 2023 guidance makes your obligations explicit: you are responsible for the results of algorithmic tools, even if a third party designed or ran them.

Here is the bottom line for your budget:

  • $500–$1,500 per day in NYC penalties for missing bias audits
  • 1.1 billion rejections processed through a single platform during the litigation period
  • 100+ rejections for one applicant, often within minutes — the pattern that started it all
  • Every applicant over 40 since September 2020 is a potential plaintiff in this case

If your vendor cannot explain why a candidate was rejected — or if they disclaim all liability for bias — your company carries 100% of the risk.

What's Actually Happening Under the Hood

Most AI hiring tools run on large language models (LLMs) — systems designed to predict the most likely next word, not to make fair decisions. Think of them as autocomplete engines operating at massive scale. They are built for plausibility, not accuracy. When you wrap a thin application layer around one of these models and call it a "hiring solution," you get three dangerous failure modes.

First, they lose information in the middle. Research shows that standard AI models pay close attention to the beginning and end of documents but develop what researchers call an "attention trough" in the middle. In a 10-page resume, critical certifications or recent achievements in the middle pages are statistically more likely to be overlooked.

Second, they make things up. When an LLM cannot find a specific qualification, it often generates a plausible assumption based on surrounding text. Your system might reject a qualified candidate based on a "fact" that exists nowhere in their actual file.

Third, they discriminate through proxies. An AI model does not need to see your age to guess it. It learns that an @aol.com email address, 15+ years of experience, references to older technologies like Lotus Notes, or early-career job titles from the 1990s all correlate with being over 40. When the model is trained on a company's "high performers" who happen to skew younger, it treats these proxy signals as negative indicators. This creates a feedback loop: the system replicates and amplifies the exact biases it was supposed to eliminate.
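To make the proxy mechanism concrete, here is a minimal sketch with entirely hypothetical applicant data: a screening rule that never touches age still filters out every older candidate.

```python
# Hypothetical applicant pool. Age is never an input feature,
# but an experience cutoff acts as a proxy for it.
applicants = [
    {"id": 1, "years_exp": 4,  "over_40": False},
    {"id": 2, "years_exp": 7,  "over_40": False},
    {"id": 3, "years_exp": 12, "over_40": False},
    {"id": 4, "years_exp": 18, "over_40": True},
    {"id": 5, "years_exp": 22, "over_40": True},
    {"id": 6, "years_exp": 25, "over_40": True},
]

# A facially neutral screen learned from a younger "high performer" set.
screened_in = [a for a in applicants if a["years_exp"] <= 15]

over_40_pass = sum(a["over_40"] for a in screened_in)
print(over_40_pass)  # 0 -- every over-40 candidate rejected, age never consulted
```

The rule looks neutral on paper; the outcome is a 0% selection rate for the protected group.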

The EEOC's Uniform Guidelines use the "Four-Fifths Rule" to test for discrimination. If the selection rate for a protected group falls below 80% of the rate for the highest-selected group, that is evidence of adverse impact. In the Workday case, the patterns of rejection across age groups raised exactly these red flags.
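The test itself is one line of arithmetic. A quick sketch with hypothetical selection counts (not figures from the case):

```python
def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Compare a protected group's selection rate (group A)
    against the highest-selected group's rate (group B)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Hypothetical numbers: 30 of 200 over-40 applicants selected
# vs. 50 of 200 under-40 applicants selected.
ratio = adverse_impact_ratio(30, 200, 50, 200)
print(f"Impact ratio: {ratio:.2f}")  # Impact ratio: 0.60
print("Adverse impact" if ratio < 0.8 else "Passes four-fifths rule")
```

A ratio of 0.60 falls well below the 0.80 threshold, which is exactly the kind of pattern that triggers regulatory scrutiny.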

What Works (And What Doesn't)

Let's start with three common approaches that fail in regulated hiring environments.

"We added a system prompt telling the AI to be fair." System prompts are suggestions, not rules. They are easily overridden and provide zero legal defense. A judge will not accept "we told the AI to be nice" as a compliance strategy.

"We audit once a year." Annual audits catch problems after the damage is done. If your model drifts between audits — and models do drift — you are accumulating liability every day you are not monitoring.

"Our vendor handles compliance." The court in Mobley v. Workday ruled that delegating hiring functions to an automated system extends the chain of liability — it does not end it. Your vendor's compliance is your compliance.

What does work is an architecture that separates language processing from decision-making. Here is how it works in three steps:

  1. Input and Translation. A specialized AI reads a resume or transcript and extracts specific facts: "Candidate has 5 years of Python experience." At this stage, input guardrails check for data quality issues, personally identifiable information leaks, and adversarial manipulation attempts. The AI is the translator, not the decision-maker.

  2. Structured Reasoning. Those extracted facts are mapped to a knowledge graph — a structured map that defines how skills, roles, and requirements relate to each other. A rule engine then applies your business logic deterministically: IF experience is at least 5 years AND skill equals Python, THEN eligible equals true. The AI cannot "hallucinate" a policy because the rules are written in code, not generated by prediction.

  3. Auditable Output. Every recommendation generates a clear logic trail showing exactly which rule was triggered, by which data point, in which document. Output guardrails scan for bias, errors, and policy violations before any result reaches a recruiter. Techniques like SHAP — which assigns a specific contribution value to each factor in a decision — let you show a regulator or a judge that "Skill X contributed +15 to this candidate's score" and nothing else drove the outcome.
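Steps 2 and 3 can be sketched together in a few lines. This is a hypothetical illustration, with invented rule names and facts, of the core idea: rules live in code, and every decision carries its own logic trail.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    eligible: bool = True
    trail: list = field(default_factory=list)  # (rule name, passed?) pairs

# Business logic written as deterministic rules, not generated by a model.
RULES = [
    ("R1: minimum 5 years experience", lambda f: f["years_experience"] >= 5),
    ("R2: Python skill required",      lambda f: "python" in f["skills"]),
]

def evaluate(facts):
    """Apply every rule to the extracted facts and record the outcome."""
    decision = Decision()
    for name, check in RULES:
        passed = check(facts)
        decision.trail.append((name, passed))
        if not passed:
            decision.eligible = False
    return decision

# Facts extracted upstream by the AI translation layer.
facts = {"years_experience": 5, "skills": {"python", "sql"}}
result = evaluate(facts)
print(result.eligible)  # True
for name, passed in result.trail:
    print(f"{name}: {'pass' if passed else 'fail'}")
```

When a regulator asks why a candidate was rejected, the trail names the exact rule and the exact fact that triggered it.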

This is the difference that matters to your compliance team. When a regulator asks "why was this person rejected," you can point to a specific rule applied to a specific fact. You are not asking an AI to guess at its own reasoning after the fact.

Your organization should also implement adversarial debiasing during model training. This technique trains a secondary model to detect whether the primary model's outputs reveal protected characteristics like race or age. If the secondary model succeeds, the primary model is penalized and retrained. The result is a system that actively removes discriminatory patterns rather than hoping they do not appear.
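Adversarial debiasing can be sketched in a few dozen lines. The following is a deliberately simplified NumPy illustration on synthetic data, not a production recipe; real deployments use framework-level implementations such as gradient-reversal layers, and every name and number here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
age = rng.integers(0, 2, n).astype(float)  # protected attribute (never a direct input)
skill = rng.normal(0.0, 1.0, n)            # legitimate qualification signal
proxy = age + rng.normal(0.0, 0.3, n)      # feature that leaks age
X = np.column_stack([skill, proxy])
# Historical labels encode bias: older candidates were hired less often.
y = (skill - age + rng.normal(0.0, 0.3, n) > -0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, steps=4000, lr=0.2):
    """Train a logistic screener while an adversary tries to recover age
    from its score; lam scales the adversarial penalty."""
    w = np.zeros(2)   # predictor weights
    u, b = 0.0, 0.0   # adversary parameters: age_hat = sigmoid(u * score + b)
    for _ in range(steps):
        s = sigmoid(X @ w)
        a_hat = sigmoid(u * s + b)
        # Adversary step: get better at recovering the protected attribute.
        u -= lr * np.mean((a_hat - age) * s)
        b -= lr * np.mean(a_hat - age)
        # Predictor step: fit the labels while *increasing* the adversary's loss.
        grad_task = X.T @ (s - y) / n
        grad_adv = X.T @ ((a_hat - age) * u * s * (1 - s)) / n
        w -= lr * (grad_task - lam * grad_adv)
    return w

w_plain = train(lam=0.0)   # no debiasing: free to lean on the proxy feature
w_fair = train(lam=3.0)    # adversarial penalty discourages proxy reliance
print("proxy weight without adversary:", round(w_plain[1], 2))
print("proxy weight with adversary:   ", round(w_fair[1], 2))
```

The predictor's update subtracts the task gradient but adds the adversary's gradient, pushing the score toward carrying no information about the protected attribute, so the proxy feature's weight should shrink.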

Finally, build a "three lines of defense" model for AI risk management. Your business units own day-to-day AI risk, including training data selection and candidate anonymization. A risk and compliance function maintains model registries and monitors selection rates continuously. An independent audit function verifies everything — including the mandatory annual bias audits required under laws like NYC Local Law 144.

The shift toward sovereign AI deployment — running models inside your own infrastructure rather than through third-party APIs — gives you control over data provenance and traceability. Your proprietary hiring data stays yours. Your models do not change unpredictably because a vendor pushed an update.

The HR and talent technology industry is moving from automation to verification. The companies that build fairness audits and bias mitigation into their core architecture — not as an afterthought — will be the ones still standing when the next ruling comes down.

Key Takeaways

  • A federal court ruled in 2024 that AI hiring vendors qualify as legal "agents" — making them directly liable for discrimination under Title VII, the ADA, and the ADEA.
  • Workday's platform processed roughly 1.1 billion application rejections during the litigation period, and a nationwide collective action now covers potentially millions of applicants over 40.
  • AI models discriminate through proxy variables like email domains, years of experience, and outdated technology references — without ever seeing a candidate's age.
  • Annual bias audits alone are not enough; continuous monitoring, deterministic rule engines, and auditable logic trails are required to defend your hiring decisions in court.
  • NYC Local Law 144 penalties range from $500 to $1,500 per violation per day — but the real cost is having your company named in a court-ordered notice to millions of rejected applicants.

The Bottom Line

The Mobley v. Workday ruling means your AI hiring vendor's discrimination is now your discrimination. Every automated rejection your company cannot explain is a liability. Ask your vendor: can you show me the exact rule, applied to the exact data point, that caused each candidate rejection — and can you produce that logic trail in court?

Frequently Asked Questions

Can AI hiring tools be sued for discrimination?

Yes. In July 2024, a federal court in California ruled that AI vendors like Workday qualify as "agents" under federal anti-discrimination laws including Title VII, the ADA, and the ADEA. This means AI hiring vendors face direct liability for discriminatory outcomes, even if the employer did not intend to discriminate.

What is the Workday AI hiring lawsuit about?

Derek Mobley alleged he was rejected from over 100 jobs through Workday's AI screening tools, often within minutes and outside business hours. In May 2025, the court certified a nationwide collective action covering applicants over 40. Workday's filings showed approximately 1.1 billion applications were rejected through its platform during the relevant period.

How do AI hiring tools discriminate by age without using age as a factor?

AI models use proxy variables — neutral features that correlate with age. These include legacy email domains like @aol.com, total years of experience exceeding 15 years, references to older technologies like Lotus Notes, and early-career job titles from the 1990s. When trained on younger "high performers," the model treats these proxies as negative signals, creating a feedback loop that amplifies age bias.

Build Your AI with Confidence.

Partner with a team that has deep experience in building the next generation of enterprise AI. Let us help you design, build, and deploy an AI strategy you can trust.

Veriprajna Deep Tech Consultancy specializes in building safety-critical AI systems for healthcare, finance, and regulatory domains. Our architectures are validated against established protocols with comprehensive compliance documentation.