Reducing Bias in Hiring Through AI Agents

The rise of AI Agents presents exciting opportunities not just to enhance user interactions but also to help mitigate unconscious bias, particularly in high-stakes processes like hiring. As a recent Bloomberg article highlighted, even cutting-edge AI language models like OpenAI's GPT can exhibit bias when evaluating job candidates, ranking the same resume differently based solely on details like the candidate's name. Combating bias is not easy: Google found itself criticized from the opposite direction when its models produced results seen as unrealistic and therefore not useful, “because they were designed to produce diversity, but ended up doing so too bluntly”.

So how can AI Agents make a difference? By strategically deploying them to anonymize candidate details, conduct fully isolated assessments, normalize communication styles and personal-detail identifiers, and monitor outcomes for continuous improvement.

Here's how it could work:

  1. Start with the candidate pipeline and have the Agent use multiple machine-learning techniques to remove personally identifiable information, location, and any indicators of origin, race, gender, or age before the interview.

  2. When conducting the interview, keep each candidate's responses isolated: assess them only against the questions asked, and never include information about other candidates.

  3. Have the Agent normalize all responses to the same communication style prior to evaluation.

  4. Ask the Agent to remove any references to personal details that surfaced during the interview, replacing them with predefined, neutral placeholders applied uniformly to all candidates.

  5. Ensure that the Agent evaluates each candidate against objective criteria and never draws on its analysis of other candidates when doing so.

  6. After completing the evaluation of the interview, share the candidate's results with hiring managers without revealing any personally identifying details.

  7. Have a Moderating Agent conduct regular assessments of the other Agent and provide feedback to it wherever it finds evidence of disparate impact.
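A minimal sketch of the anonymization in steps 1 and 4 might look like the following. The regular-expression rules and placeholder tokens here are illustrative assumptions; a production Agent would rely on trained named-entity-recognition models rather than hand-written patterns, but the shape is the same: map each detected identifier to a neutral placeholder.

```python
import re

# Hypothetical redaction rules for illustration only -- real pipelines
# would use NER models, not regexes. Each rule maps a detected
# identifier pattern to a neutral placeholder token.
REDACTION_RULES = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"), "[PHONE]"),
    (re.compile(r"\b(19|20)\d{2}\b"), "[YEAR]"),  # graduation years hint at age
]

def redact(text: str, names: list[str]) -> str:
    """Replace known candidate names and pattern-matched PII with placeholders."""
    for name in names:
        text = re.sub(re.escape(name), "[CANDIDATE]", text, flags=re.IGNORECASE)
    for pattern, placeholder in REDACTION_RULES:
        text = pattern.sub(placeholder, text)
    return text

resume = "Jane Doe, jane.doe@example.com, graduated 2014."
print(redact(resume, names=["Jane Doe"]))
# [CANDIDATE], [EMAIL], graduated [YEAR].
```

The same `redact` pass can run again after the interview (step 4), since personal details that surface in free-form answers need the identical placeholder treatment before evaluation.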
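The isolated evaluation of steps 2 and 5 can be expressed as a scoring function that sees only one answer and one rubric criterion at a time, with no cross-candidate context. The rubric criteria and the stand-in keyword scorer below are hypothetical; in practice the `(answer, criterion)` pair would be sent to the evaluating Agent.

```python
from dataclasses import dataclass

# Hypothetical rubric: each criterion is scored 0-5 from one answer at a
# time; the scorer never sees other candidates or other answers.
RUBRIC = ["technical depth", "clarity", "problem solving"]

@dataclass
class Score:
    criterion: str
    points: int  # 0-5, assigned by the evaluating agent

def evaluate_answer(answer: str, score_fn) -> list[Score]:
    """Score one anonymized answer against each rubric criterion in isolation."""
    return [Score(c, score_fn(answer, c)) for c in RUBRIC]

# Stand-in scorer for illustration; a real system would call the
# evaluating Agent here with only (answer, criterion) as input.
def keyword_scorer(answer: str, criterion: str) -> int:
    return min(5, len(answer.split()) // 10)

scores = evaluate_answer("We sharded the queue by tenant to cut tail latency", keyword_scorer)
total = sum(s.points for s in scores)
```

Because the function signature admits only a single answer, the design itself enforces step 2's isolation rather than relying on the Agent to ignore other candidates.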
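For step 7, one concrete check the Moderating Agent could run is the EEOC's "four-fifths rule": when any group's selection rate falls below 80% of the highest group's rate, that is commonly treated as evidence of potential disparate impact. A sketch, with made-up group labels and rates:

```python
def adverse_impact_ratio(selection_rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest.

    Values below 0.8 (the EEOC "four-fifths rule") flag potential
    disparate impact for the Moderating Agent to report back.
    """
    rates = selection_rates.values()
    return min(rates) / max(rates)

# Illustrative numbers: group_b is selected at 30% vs. group_a's 50%.
rates = {"group_a": 0.50, "group_b": 0.30}
ratio = adverse_impact_ratio(rates)
print(f"{ratio:.2f}", "flag" if ratio < 0.8 else "ok")
# 0.60 flag
```

A flagged ratio would not prove bias on its own, but it gives the Moderating Agent an objective trigger for the feedback loop the step describes.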

While there is no single way to eliminate bias from hiring decisions, the hope is that those who were disadvantaged in the past when seeking employment can actually benefit from the widespread adoption of AI Agents in the hiring process.
