How to combat AI bias in your hiring process

Aug 29, 2025 - 02:12

If you’ve been keeping tabs on AI news, you’ve likely heard of Mobley v. Workday, a lawsuit in which a job applicant alleges that AI-powered hiring software discriminated against him. For many companies, this ongoing case has been a prompt to carefully consider how they roll out AI in their hiring processes.

As the CEO and cofounder of an AI-native skills company, I’ve spent the last decade working with talent leaders to build better, fairer hiring processes. And here’s the uncomfortable truth: the biggest source of hiring bias isn’t AI; it’s us. While high-profile lawsuits like Mobley get all the headlines, more than 99.9% of employment discrimination claims over the past five years have centered not on AI bias but on human bias.

The conversation today isn’t about whether to use AI, but how to use it to fundamentally improve the flawed, human-driven status quo.

Are AI hiring tools biased?

The simple answer to whether AI hiring tools are biased is often “yes.” Because most AI models are trained on historical data, they can inherit and amplify existing human biases.

However, a far more relevant question is: are AI systems more biased than humans? The answer to that is a resounding “no.” The same meta-analysis of employment discrimination claims also shows that female candidates experience up to 39% fairer treatment with AI than with human evaluators, and racial minorities see up to 45% fairer treatment. This isn’t an excuse to ignore the risk of AI bias; it’s a signal that AI can and should be a tool to raise the standard for fairness in hiring.

Best practices to reduce bias when using AI hiring tools

Just because AI is usually less biased than humans doesn’t mean that it is bias-free. Here are four key ways your company and its vendors can manage and mitigate bias in your AI hiring systems.

#1: Publish AI explainability documentation

The first step for any employer or AI hiring tool is to clearly and thoroughly explain the rubric AI uses to score candidates and how candidates are evaluated against these criteria. This explainability statement should make sense to a nontechnical audience.

#2: Conduct bias testing and auditing

Bias audits were once rare, but today many companies consider them nonnegotiable. With the meta-analysis showing that 75% of AI hiring tool vendors now conduct bias testing, this has become a core expectation. Start by asking your vendors to track disparate impact for sex and race/ethnicity to comply with laws like NYC Local Law 144. Then expand your audit to other categories like age and disability status.
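At its core, a disparate-impact check compares how often each demographic group is selected. A minimal sketch of that calculation, using hypothetical audit numbers and the impact-ratio approach NYC Local Law 144 audits rely on (each group’s selection rate divided by the highest group’s rate, with the EEOC’s “four-fifths” 0.8 threshold as a common rule of thumb):

```python
# Sketch of a disparate-impact (selection-rate) check.
# All numbers below are illustrative, not real audit data.

def impact_ratios(selected, screened):
    """For each group, return its selection rate divided by the
    highest group's selection rate. Ratios under 0.8 are commonly
    flagged for review under the EEOC four-fifths rule of thumb."""
    rates = {g: selected[g] / screened[g] for g in screened}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical numbers: candidates advanced vs. total screened
selected = {"group_a": 48, "group_b": 30}
screened = {"group_a": 100, "group_b": 80}

for group, ratio in impact_ratios(selected, screened).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this made-up example, group_b’s ratio (0.375 / 0.48 ≈ 0.78) falls just under the 0.8 threshold, which is exactly the kind of signal an audit should surface for human review rather than treat as automatic proof of bias.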

#3: Stay on top of AI regulations 

Today, companies across the globe should be aware of four major regulations and standards affecting AI use in hiring processes:

  • NYC Local Law 144 regulates the use of automated employment decision tools (AEDTs) that use AI and requires audits for bias and public disclosure of the results.
  • Colorado SB 24-205 requires AI hiring tool vendors, and the companies that use them, to comply with bias monitoring and reporting requirements.
  • The EU AI Act establishes a wide-ranging legal framework for ethical AI use in the European Union.
  • ISO/IEC 42001, a global standard rather than a regulation, specifies requirements for establishing and maintaining AI management systems within organizations.

In the coming years, more regulations governing AI tool usage will likely emerge, with growing pressure for vendors to comply with international standards. To stay competitive in the hiring market, companies using AI hiring tools should ensure their vendors keep up with changes to these regulations.

#4: Keep humans in the loop

Regardless of how and where they implement AI, companies should keep a human involved in the hiring process. A best practice for AI-powered skills screening tools, for instance, is to provide a summary of how each candidate scored against the AI tool’s rubric. This allows hiring teams deeper insight into how AI makes its recommendations and provides context for teams to make their own decisions about advancing a candidate in the process.

Final words

The fear of AI risks in hiring is understandable, but it needs to be weighed against the significant, well-documented risks of not using it. The real risk for leaders today isn’t adopting AI—it’s falling behind. The tools now exist to leverage AI not just for efficiency, but to build a more equitable and skills-based hiring process. The choice for leaders is clear: We can either continue accepting the inherent biases of human-led hiring processes, or use technology to raise the bar for fairness, and build a better way to hire.

Tigran Sloyan is CEO and cofounder of CodeSignal.
