Uncovering AI Bias in Hiring

Uncovering AI Bias in Hiring

How AI resume screening reveals racial and gender discrimination

This research introduces FAIRE (Fairness Assessment In Resume Evaluation), a comprehensive benchmark for measuring bias in AI-driven hiring systems.

  • LLMs demonstrated significant racial and gender bias when evaluating resumes
  • Two evaluation methods (direct scoring and ranking) showed consistent bias patterns
  • Models favored certain demographic markers despite identical qualifications
  • Findings raise critical algorithmic fairness concerns for security and ethics in AI hiring

For security professionals, this research highlights the urgent need for bias detection frameworks to prevent discriminatory practices in automated hiring, protecting organizations from both ethical breaches and legal liability.

FAIRE: Assessing Racial and Gender Bias in AI-Driven Resume Evaluations

105 | 124