For General Counsel & Legal · 4 min read

AI Hiring Tools Can Discriminate Against Disabled Candidates

The ACLU's complaint against Aon reveals how personality-scoring AI screens out neurodivergent talent — and what your company should do now.

The Problem

Aon's AI hiring tool scored autistic candidates low on "liveliness." In May 2024, the ACLU filed a formal complaint with the Federal Trade Commission against Aon Consulting. The ACLU alleged that Aon marketed its AI-powered hiring assessments as "bias free" and claimed they "improve diversity" with "no adverse impact." In reality, these tools likely screen out qualified candidates based on race and disability.

Here is what happened. Aon built a personality test called ADEPT-15 that measures 15 traits, including "positivity," "emotional awareness," and "liveliness." The ACLU found that these traits track closely with clinical diagnostic criteria for autism and mental health conditions. If you score as "reserved" instead of "outgoing," the algorithm marks you down. It does not ask about your disability. It does not need to. The math does the discrimination automatically.

Aon also deployed a video interview tool called vidAssess-AI. It uses natural language processing — software that interprets spoken words — to score candidates against those same personality traits. For someone with autism whose speech includes flat intonation or atypical pauses, the AI may interpret those patterns as "lack of confidence." The tool created what the complaint describes as "double jeopardy": candidates were judged on their answers and on a machine-interpreted version of their personality filtered through biased language models.

This complaint was supported by the Autistic Self Advocacy Network and other civil rights groups. It is not a fringe filing. It is a signal that the era of unchecked algorithmic hiring is over. If your company uses AI to screen job applicants, you need to understand what went wrong here.

Why This Matters to Your Business

The financial and legal exposure from biased AI hiring tools is large and growing. Consider the numbers:

  • The EEOC secured hundreds of millions of dollars in monetary relief for discrimination victims in fiscal year 2024. Algorithmic bias is now a top enforcement priority.
  • The FTC fined DoNotPay $193,000 for unsubstantiated claims about its AI capabilities. The agency has the power to permanently ban companies from selling high-risk software.
  • By 2027, the global talent economy is projected to reach $30 billion. Companies that cannot prove their hiring tools are fair will lose access to top candidates.
  • Aon's ADEPT-15 test draws from a database of 350,000 unique items to evaluate personality. The scale of the system makes it nearly impossible to audit without specialized tools.

Your legal exposure does not stop at the vendor. The EEOC has made clear that employers are legally responsible for discrimination caused by AI tools they purchase from third parties. Buying a "bias-free" tool does not transfer your liability. It increases it.

Here is what this means for your organization:

  • Litigation risk: Class-action lawsuits and EEOC complaints targeting your hiring process, not just your vendor's product.
  • Regulatory fines: The FTC's "Operation AI Comply" initiative is actively enforcing substantiation requirements for AI claims.
  • Reputational damage: A public complaint linking your company to disability discrimination can erode employer brand overnight.
  • Talent loss: Neurodivergent individuals often excel at pattern recognition, attention to detail, and creative problem-solving. A biased screen filters out the very people who drive innovation.

If your AI vendor cannot provide empirical proof that their tools do not discriminate, you are carrying unquantified risk on your balance sheet.

What's Actually Happening Under the Hood

Think of most AI hiring tools like a fun-house mirror. They reflect something real — your qualifications — but the reflection is distorted by the mirror's shape. The "shape" in this case is the training data, which overwhelmingly reflects neurotypical behavior patterns.

The core failure is something called the "recursive bias loop." Machine learning models train on historical hiring data. That data reflects decades of preferences for neurotypical communication styles. When the AI deploys, its decisions feed back into future training sets. Candidates who "look like" past successful hires get promoted in the algorithm. Everyone else gets filtered out. The bias compounds over time.
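The feedback loop above can be sketched in a few lines of Python. This is a toy simulation, not any vendor's actual system: the two "styles," the pool sizes, and the scoring rule are all invented purely to show how the hired cohort re-enters the training data and compounds the skew.

```python
# Toy simulation of the recursive bias loop. All names and numbers are
# hypothetical; "A" stands for the historically favored communication
# style and "B" for everyone else.

# Historical hires skew heavily toward style "A".
training_set = ["A"] * 90 + ["B"] * 10

def score(candidate_style, history):
    # The model rewards similarity to past hires: a candidate's score is
    # simply the share of past hires with the same style.
    return history.count(candidate_style) / len(history)

share_B_in_training = []
for generation in range(5):
    applicants = ["A"] * 50 + ["B"] * 50          # perfectly balanced pool
    ranked = sorted(applicants,
                    key=lambda c: score(c, training_set), reverse=True)
    hires = ranked[:10]                           # top ten advance
    training_set += hires                         # hires feed back into training data
    share_B_in_training.append(
        round(training_set.count("B") / len(training_set), 3))

print(share_B_in_training)  # [0.091, 0.083, 0.077, 0.071, 0.067]
```

Even with a perfectly balanced applicant pool, style "B" is never hired, and its share of the training data shrinks every generation: the bias compounds rather than corrects.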

Research at Duke University found that large language models — the AI engines behind many hiring tools — systematically associate neurodivergent terms with negative sentiment. The phrase "I have autism" is viewed by these models as more negative than "I am a bank robber." When these same language models power hiring tools through an API connection, they embed discriminatory associations into your recruitment process. The developer never intended this. The data did it automatically.

Aon's ADEPT-15 illustrates this precisely. It maps 15 personality constructs along polarities like "Reserved vs. Outgoing" and "Stoic vs. Compassionate." The ACLU mapped these constructs against the Autism Spectrum Quotient (AQ), a standard 50-item clinical screening tool. The overlap is undeniable. When a hiring algorithm asks questions that mirror clinical criteria — "I focus intensely on details" or "I prefer working alone" — it creates a hidden path between disability status and hiring outcome. The algorithm never asks about a diagnosis. It does not have to.

A simple "wrapper" — a tool that passes your data through a foundation model like GPT-4 and presents the output — cannot fix this. Wrappers inherit the biases of their underlying models. They lack the ability to distinguish between genuine job qualifications and noise that correlates with protected characteristics.

What Works (And What Doesn't)

Three common approaches that fail:

  • "We trained on big data, so it's fair." More data does not mean less bias. If your training data reflects 30 years of neurotypical hiring preferences, your model will replicate those preferences at scale.
  • "Our vendor gave us a model card." Vendor-provided documentation is not an independent audit. The Aon case shows that marketing claims of "no adverse impact" can directly contradict how the tool actually performs.
  • "We removed protected fields from the input." Removing disability status from the data does not help when traits like "liveliness" and "emotional awareness" serve as proxies. The algorithm finds the back door.

What actually works is a three-step approach that breaks the link between protected characteristics and hiring decisions:

  1. Map the hidden paths (Input). Use Structural Causal Modeling — a method that diagrams how different data points influence each other — to identify every route through which disability status could affect the hiring outcome. This is not a statistical check. It is a causal map of your entire assessment pipeline. Your goal is to find proxy variables before regulators find them for you.

  2. Strip protected information from model logic (Processing). Deploy adversarial debiasing — a technique where a second AI model tries to guess a candidate's protected characteristic from the primary model's internal data. If the second model succeeds, the primary model is still leaking protected information. The system penalizes the primary model and forces it to unlearn biased patterns. This is especially effective for video and personality assessments where behavioral signals serve as disability proxies.

  3. Test every decision with counterfactual simulation (Output). Generate synthetic variations of a real candidate's data. Change only the sensitive attribute — for example, switch from neurotypical to neurodivergent speech patterns — while holding all other variables constant. If the AI's recommendation changes, you have a fairness failure. This gives your compliance team a concrete audit trail for every hiring decision.
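Step 3 can be sketched as a small audit function. The scoring rule below is a deliberately biased toy, and the feature names are hypothetical; a real audit would wrap your vendor's actual model behind the same interface.

```python
# Illustrative counterfactual audit. "biased_model" is a toy stand-in for
# a real assessment pipeline; feature names are hypothetical.

def biased_model(candidate):
    # Toy scoring rule that (wrongly) penalizes atypical speech patterns.
    score = candidate["years_experience"] * 10
    if candidate["speech_pattern"] == "atypical":
        score -= 15
    return score

def counterfactual_audit(model, candidate, attribute, alternative):
    """Return True if the score is invariant when only `attribute` changes."""
    twin = dict(candidate)          # identical except the sensitive attribute
    twin[attribute] = alternative
    return model(twin) == model(candidate)

candidate = {"years_experience": 5, "speech_pattern": "typical"}
fair = counterfactual_audit(biased_model, candidate,
                            "speech_pattern", "atypical")
print(fair)  # False: the score changed, so this decision fails the audit
```

Each audited decision yields a pass/fail record for that specific candidate, which is exactly the individual-level evidence a regulator will ask for.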

The audit trail is what matters most to your legal and compliance teams. Group-level statistics — "10% of our hires are disabled" — are not enough. You need individual-level evidence that each candidate received equal treatment. Counterfactual simulation provides this. When the EEOC or FTC asks why your system rejected a specific candidate, you can show the math.

Under the NIST AI Risk Management Framework, organizations in the HR and talent technology space should aim for "Adaptive" maturity. This means a documented process for tracking discrimination drift — the phenomenon where AI models become more biased over time as they interact with real-world data. Annual independent bias audits, not vendor self-reports, are essential. Every automated assessment should include a clear option for candidates to request a human alternative without penalty.
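One way to make "tracking discrimination drift" concrete is a periodic selection-rate check. This sketch applies the EEOC's four-fifths rule of thumb to invented monthly figures; the rates and the way the threshold is used here are illustrative, not legal advice.

```python
# Illustrative drift check: compare the selection rate for a protected
# group against the overall rate each month and flag any month where the
# ratio drops below 0.8 (the EEOC "four-fifths" rule of thumb).
# The monthly figures are invented for demonstration.

monthly_rates = [
    (0.45, 0.50),  # (protected-group rate, overall rate)
    (0.42, 0.50),
    (0.38, 0.50),
    (0.33, 0.50),
]

flags = [protected / overall < 0.8 for protected, overall in monthly_rates]
print(flags)  # [False, False, True, True]: disparity is drifting upward
```

A model that passed its launch audit can still fail this check six months later, which is why the framework treats drift monitoring as an ongoing obligation rather than a one-time certification.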

You can read the full technical analysis for a deeper look at the causal modeling and adversarial debiasing methods. An interactive walkthrough is also available for your team.

Key Takeaways

  • The ACLU filed an FTC complaint alleging Aon's AI hiring tools market themselves as "bias free" while likely screening out candidates based on disability.
  • Personality traits like "liveliness" and "emotional awareness" in hiring AI closely mirror clinical diagnostic criteria for autism, creating hidden discrimination.
  • Employers — not just vendors — are legally responsible for discrimination caused by the AI tools they purchase, according to the EEOC.
  • Simple fixes like removing protected fields or relying on vendor model cards do not work because the algorithm finds proxy variables automatically.
  • Counterfactual simulation — testing whether a decision changes when only the disability attribute changes — provides the individual-level audit trail regulators now demand.

The Bottom Line

The Aon-ACLU complaint proves that "bias-free" marketing claims cannot substitute for empirical evidence of fairness. If your AI hiring vendor cannot explain exactly how their system prevents personality traits from serving as disability proxies, you are carrying unquantified legal and financial risk. Ask your vendor: can you show me a counterfactual audit proving your tool gives the same score to a neurodivergent candidate as a neurotypical one with identical qualifications?

Frequently Asked Questions

Can AI hiring tools discriminate against people with disabilities?

Yes. The ACLU filed an FTC complaint in May 2024 alleging that Aon's AI hiring tools measure personality traits like "liveliness" and "emotional awareness" that closely track clinical diagnostic criteria for autism. These tools can screen out neurodivergent candidates without ever asking about a disability.

Who is legally responsible when an AI hiring tool discriminates?

The employer is responsible. The EEOC has stated that employers are legally liable for discrimination caused by AI tools they purchase from vendors. Buying a tool marketed as "bias free" does not transfer your legal responsibility.

How can companies audit AI hiring tools for disability bias?

Companies should conduct independent annual bias audits using techniques like counterfactual simulation, which tests whether the AI gives the same score when only the disability attribute changes. The NIST AI Risk Management Framework recommends tracking discrimination drift over time and providing candidates with a clear option to request a human alternative.

Build Your AI with Confidence.

Partner with a team that has deep experience in building the next generation of enterprise AI. Let us help you design, build, and deploy an AI strategy you can trust.

Veriprajna Deep Tech Consultancy specializes in building safety-critical AI systems for healthcare, finance, and regulatory domains. Our architectures are validated against established protocols with comprehensive compliance documentation.