The Algorithm Rejected You Before the Interview. Federal Regulators Are Starting to Notice.

AI screening tools filter millions of job applications before a human reads them. Federal regulators have confirmed the bias problem is real, and the legal exposure is growing.

Before a human hiring manager reads your resume, there is a reasonable chance an algorithm already decided you were not worth their time. Automated screening tools now sit at the front of the hiring process at most large companies. They score resumes, rank candidates, flag keywords, and in some cases conduct AI-analysed video interviews before a person is ever involved. The problem is that several of these tools have been found to reproduce and amplify exactly the kinds of bias employment law was designed to prevent. Federal regulators noticed. The legal exposure for companies using them is growing, and most of those companies do not realise it yet.

This matters directly to the broader pattern we have tracked: AI is not just eliminating jobs through automation, it is reshaping who gets access to the jobs that remain. As we covered in our analysis of the Stanford 2026 AI Index, entry-level roles are disappearing fastest and the workers most affected are those with the least leverage to push back. Add a discriminatory screening layer on top of a shrinking market and the structural problem gets worse.

How AI Hiring Tools Actually Work

Most large employers receive hundreds of applications for each open role. The volume alone creates pressure to automate initial screening. The tools that have emerged to handle this fall into a few categories.

Resume screening software scans applications for keywords, qualifications, and patterns associated with successful past hires. Applicant tracking systems score and rank candidates before a recruiter sees the list. Video interview platforms, of which HireVue is the most widely deployed, record candidate responses and analyse word choice, tone, and pacing against a model trained on previous hires; HireVue also analysed facial expressions until it dropped that feature in 2021 following public criticism. Predictive scoring tools claim to assess candidate fit based on psychometric testing or game-based assessments.

What all of these share is a reliance on historical data to build their models. They are trained to find people who look like the people who were previously hired and performed well. That seems logical. The problem is that historical hiring data at most organisations reflects historical bias. If a company spent a decade hiring predominantly white men for technical roles, a model trained on that company’s data learns that white men are good candidates for technical roles. It encodes the bias as a feature, not a bug.
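To make that mechanism concrete, here is a deliberately naive keyword scorer in the spirit of the models described above. It is an illustration, not any vendor's actual system; the resumes, keywords, and hiring labels are invented. Because the weights come entirely from who was hired before, a skewed history turns a word like "women's" into a penalty:

```python
# Illustrative sketch (not any real vendor's model): a naive keyword scorer
# trained on historical hiring outcomes. Weights are learned purely from who
# was hired before, so any bias in that history becomes a "signal".
from collections import Counter

def train_keyword_weights(resumes, hired):
    """Weight each keyword by how much more often it appears in hired resumes."""
    hired_counts, rejected_counts = Counter(), Counter()
    for words, was_hired in zip(resumes, hired):
        (hired_counts if was_hired else rejected_counts).update(set(words))
    vocab = set(hired_counts) | set(rejected_counts)
    n_hired = sum(hired) or 1
    n_rejected = (len(hired) - sum(hired)) or 1
    return {w: hired_counts[w] / n_hired - rejected_counts[w] / n_rejected
            for w in vocab}

def score(resume_words, weights):
    return sum(weights.get(w, 0.0) for w in set(resume_words))

# Hypothetical history: technical hires skewed male, so "women's" appears
# only on rejected resumes -- the model learns to penalise it.
history = [
    (["python", "captain", "chess", "club"], 1),
    (["java", "python", "hackathon"], 1),
    (["python", "women's", "chess", "club"], 0),
    (["java", "women's", "college", "python"], 0),
]
weights = train_keyword_weights([r for r, _ in history], [h for _, h in history])

candidate_a = ["python", "java", "chess", "club"]
candidate_b = candidate_a + ["women's"]  # identical skills, one extra word
print(score(candidate_a, weights) > score(candidate_b, weights))  # True
```

Nothing in the training code mentions gender; the penalty emerges entirely from the historical labels, which is exactly why Amazon's engineers found the bias so hard to remove.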

An estimated 99% of Fortune 500 companies now use automated screening tools at some stage of the hiring process. | Pexels

Amazon Built This Problem and Then Documented It

The clearest early case study is Amazon’s internal AI recruiting tool, which the company quietly scrapped in 2018 after discovering it systematically downgraded resumes from women. Reuters broke the story in October 2018 with reporting that remains the most detailed public account of how this failure unfolded in practice.

Amazon had trained the tool on a decade of its own hiring data, skewed heavily toward men in technical roles. The model penalised resumes that included the word “women’s” as in “women’s chess club” or “women’s college.” It downgraded graduates of two all-women’s colleges. It had learned that male candidates were preferred because male candidates had historically been selected, and it optimised for that pattern.

Amazon’s engineers tried to correct the bias but found they could not reliably do so. The company disbanded the team working on the project. Reuters confirmed the tool was never used to evaluate candidates after the bias was discovered, though it had been in development for years before the problem was identified. What Amazon found was not unique to Amazon. It is a structural property of training these models on historical data from organisations with non-representative hiring histories, which is most organisations.

Federal Regulators Have Entered the Conversation

In May 2023, the Equal Employment Opportunity Commission issued technical assistance clarifying that Title VII of the Civil Rights Act and the Age Discrimination in Employment Act apply to AI hiring tools the same way they apply to human decision-makers. The EEOC’s position is explicit: a company cannot escape liability for discriminatory screening by pointing to an algorithm. If the tool produces disparate impact on a protected group, the employer is responsible. The guidance is available through the EEOC’s AI resource page.

The EEOC had also filed its first lawsuit directly targeting AI-driven hiring discrimination: a 2022 case against iTutorGroup, an online tutoring company, alleging that the company's application software automatically rejected female applicants aged 55 and over and male applicants aged 60 and over, in violation of the Age Discrimination in Employment Act. The case settled in 2023. The significance was not the settlement terms but the signal: the EEOC had demonstrated it was prepared to pursue employers whose automated tools produced discriminatory outcomes, regardless of whether a human had made the final call.

The EEOC has signalled that existing civil rights law applies to AI hiring tools regardless of whether a human made the final decision. | Pexels

State and City Governments Are Moving Faster Than Federal Law

Several jurisdictions have moved beyond guidance into enforceable regulation. New York City’s Local Law 144, which took effect in January 2023 with enforcement beginning that July, requires employers using automated employment decision tools to conduct annual bias audits by independent assessors and to post the results publicly. Employers must also notify candidates that automated tools are being used in their evaluation. The law covers any tool that “substantially assists or replaces discretionary decision-making” in hiring. Details are available through New York City’s Department of Consumer and Worker Protection.
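The audits Local Law 144 requires centre on selection rates and impact ratios: each demographic category's selection rate divided by the rate of the most-selected category. A minimal sketch of that arithmetic, with invented numbers:

```python
# Minimal sketch of the impact-ratio calculation at the heart of a Local Law
# 144 bias audit. All applicant counts below are hypothetical.
def impact_ratios(selected, total):
    """selected/total: dicts mapping category -> selected count / applicant count."""
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}

selected = {"male": 120, "female": 60}
total = {"male": 400, "female": 300}
ratios = impact_ratios(selected, total)
print(ratios)  # male rate 0.30, female rate 0.20 -> {'male': 1.0, 'female': 0.666...}
```

A ratio well below 1.0 for any category is the kind of disparity an independent auditor has to surface publicly; under the older EEOC rule of thumb, a ratio under 0.8 is the traditional flag for disparate impact.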

Illinois passed the Artificial Intelligence Video Interview Act in 2019, requiring employers to notify candidates before using AI to analyse video interviews, explain how the AI works, obtain consent, and delete recordings on request. The law was amended in 2021 to add demographic reporting requirements for employers that rely solely on AI analysis to screen candidates. Colorado, Maryland, and Washington have passed or are advancing similar legislation targeting algorithmic decision-making in employment.

The regulatory landscape is fragmented and moving quickly. Companies that deployed these tools three or four years ago under no legal framework are now operating under obligations they may not have accounted for.

The Compliance Gap Most Companies Have Not Closed

The practical problem for employers is that many of these tools were purchased from vendors who made performance claims but did not run bias audits. The companies using them often have no visibility into how the models work, what training data was used, or what proxy variables the algorithm is using to score candidates. A tool that claims to assess “cultural fit” may be weighting factors that correlate strongly with age, gender, or race without naming them.
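A short illustration of how a proxy variable works, using invented data: graduation year is never labelled as age, yet it tracks age almost perfectly, so any model that weights it is effectively weighting age without ever seeing an age field.

```python
# Illustrative sketch of a proxy variable. "Graduation year" looks neutral,
# but in this hypothetical applicant pool it is a near-perfect stand-in for
# age, so a model weighting it can score candidates by age unnamed.
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical pool: graduation year moves in lockstep with age.
ages = [24, 31, 38, 45, 52, 59]
grad_years = [2022, 2015, 2008, 2001, 1994, 1987]
print(round(pearson(ages, grad_years), 3))  # -1.0: a perfect inverse proxy for age
```

This is why a vendor's claim that the model "does not use" protected attributes settles nothing: the legally relevant question under the disparate-impact framework is what the scores correlate with, not which columns the model was fed.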

Under the EEOC framework and the NYC law, that is not a defence. The employer is responsible for the outcomes their tools produce, and the vendor’s terms of service are not a shield against discrimination claims. The result: companies have outsourced consequential decisions to black-box systems they do not control or fully understand, and are now being told they bear full legal liability for those decisions.

This connects to the pattern documented in our reporting on Gen Z’s entry into the automated job market: the workers being filtered out by these systems are rarely told why, have no meaningful right of appeal, and in most cases do not know an algorithm was involved. The asymmetry of information between employers deploying these tools and candidates being evaluated by them is significant.

What Job Seekers Can Do With This Information

The legal developments create some limited leverage for candidates. If you applied for a role in New York City and were rejected, you have a right to know whether an automated tool was used and to see the bias audit results. If you were over 40 and rejected through an AI screening process, the Age Discrimination in Employment Act applies to that decision. The EEOC accepts complaints and has demonstrated it will investigate.

Practically, the more immediate step for candidates is understanding how these tools work well enough to navigate them. Keyword optimisation in resumes is no longer optional when the first reader is a machine. For video interviews, framing answers clearly and at a measured pace matters because the AI systems that analyse them are sensitive to speech patterns. Whether this is a reasonable standard to apply to candidates is a separate question. The tools are in use now and the legal challenges are running years behind the deployments.
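For the keyword step, the core check most applicant tracking systems approximate is a crude overlap between the posting and the resume. A rough sketch of that comparison (the tokenisation and length threshold here are assumptions for illustration, not any real vendor's logic):

```python
# Rough sketch of the keyword-overlap check many applicant tracking systems
# approximate: which terms from the job posting never appear in the resume.
import re

def missing_keywords(job_posting, resume, min_len=4):
    """Return posting terms of at least min_len characters absent from the resume."""
    def tokenise(text):
        return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) >= min_len}
    return sorted(tokenise(job_posting) - tokenise(resume))

posting = "Seeking engineer with Python, Kubernetes and Terraform experience"
resume = "Built Python services; deployed with Kubernetes"
print(missing_keywords(posting, resume))
# ['engineer', 'experience', 'seeking', 'terraform']
```

Running the job description through a check like this before applying is essentially what "keyword optimisation" means in practice: making sure the machine's first pass does not discard the application on vocabulary alone.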

The deeper issue, which regulation is beginning to address but has not resolved, is whether it is acceptable to use black-box systems to filter people out of economic opportunity at all. Amazon found its tool was discriminatory and scrapped it. Most companies running equivalent tools have not looked hard enough to find their equivalent problem. Some have found it and deployed the tools anyway. The EEOC has made clear that the legal clock on that decision is running.


This article draws on reporting by Reuters, the EEOC’s AI guidance, and NYC Local Law 144. Analysis and interpretation reflect the author’s reading of publicly available information and should not be treated as legal advice.


Synthetic Truth

Independent coverage of AI, work, and money. No corporate sponsorship, no stock portfolio, no incentive to mislead. Just honest analysis on where technology, power, and the economy are headed.
