An investigation by the Center for Democracy & Technology, shared with Autominous ahead of publication, has found that AI-powered hiring tools used by an estimated 83% of Fortune 500 companies systematically disadvantage candidates with disabilities - and that most employers using these tools are unaware of the discriminatory impact.
The study analysed eight widely used AI hiring platforms that assess candidates through video interviews, resume screening, and gamified assessments. Researchers submitted matched candidate profiles - identical qualifications, different disability presentations - through each system.
The results were consistent across platforms. Candidates with autism spectrum conditions were scored 22-35% lower on "communication" and "cultural fit" metrics during AI-assessed video interviews. The systems penalised atypical eye contact patterns, flat affect, and monotone speech - all common characteristics of autism that have no bearing on job performance for most roles.
Candidates with cerebral palsy who exhibited involuntary facial movements were flagged as "low confidence" by systems that analyse facial expressions. Candidates with speech impediments - stuttering, dysarthria - were scored lower on "clarity" and "professionalism" by voice analysis tools.
"These systems were trained on data from candidates who got hired," said Alexandra Reeve Givens, president and CEO of the Center for Democracy & Technology. "The people who got hired were disproportionately neurotypical and non-disabled. The AI learned that neurotypicality equals competence. That's not a bug. It's the training data working as designed."
HireVue, one of the largest AI video interview providers, discontinued its facial analysis features in 2021 after public backlash but still uses voice analysis. Other platforms, including Pymetrics (now Harver), myInterview, and Vervoe, continue to use various forms of behavioural assessment that rely on neurotypical baselines.
Under the Americans with Disabilities Act and the EU's Employment Equality Directive, employers are required to provide reasonable accommodations and cannot discriminate based on disability. But when the discrimination is embedded in a third-party AI tool, employers may not realise it's happening.
"We asked 200 HR directors whether their AI hiring tools had been audited for disability bias," said Dr. Lydia X.Z. Brown, an AI policy researcher who contributed to the CDT study. "Eleven percent said yes. Sixty-three percent said they didn't know. The rest said they had never considered it."
The EEOC has opened an investigation into AI hiring discrimination and issued guidance stating that employers are liable for discriminatory outcomes from AI tools they use, even if the discrimination was not intentional and the tool was developed by a third party.
Illinois, New York City, and Colorado have passed laws requiring bias audits of AI hiring tools. But enforcement is uneven, and most audits focus on race and gender - not disability.
What we know for certain
AI hiring tools used by an estimated 83% of Fortune 500 companies score candidates with autism spectrum conditions 22-35% lower on communication metrics. Systems penalise atypical facial movements and speech patterns. Only 11% of HR directors surveyed said their tools had been audited for disability bias. The EEOC has opened an investigation.
What we are inferring
The disability discrimination is systemic, not isolated to specific vendors. The root cause is training data that reflects neurotypical hiring patterns. Most employers using these tools are inadvertently violating disability discrimination law.
What we couldn't verify
The total number of disabled candidates who have been rejected due to AI bias, as rejection data is proprietary. Whether any specific candidate has been denied employment solely because of AI scoring. The vendors declined to provide model architecture or training data details.