Concerns are mounting over AI-assisted hiring tools following allegations that Workday’s platform discriminates against job applicants. “If AI isn’t designed to address bias risks, it can perpetuate or even amplify exclusionary patterns,” law professor Pauline Kim told Fortune. Though these tools aim to streamline hiring as applicant pools grow, they may instead reinforce longstanding discrimination.

In 2024, 492 of the Fortune 500 companies used applicant tracking systems to manage recruitment, according to Jobscan. While efficient, experts warn that poorly trained AI can embed biases. A 2023 University of Washington study of 500 applications across nine occupations found that AI resume screenings favored white-associated names in 85.1% of cases and female-associated names in just 11.1%. Black male applicants were disadvantaged in up to 100% of cases compared with their white male counterparts. “Biased models trained on biased data create a feedback loop,” said lead author Kyra Wilson, noting that outcomes can worsen over time.

Real-world claims echo these findings. Five plaintiffs, all over 40, sued Workday, alleging that its AI screening discriminated on the basis of race, age, and disability. Plaintiff Derek Mobley claimed he was rejected from more than 100 jobs over seven years because of these algorithms. Workday denied the claims, stating that its tools assess only qualifications, not protected characteristics, and pointed to third-party accreditations for responsible AI development. The company emphasized that customers retain full control over hiring decisions.

Beyond hiring, AI-related complaints persist. A letter from 200 Amazon employees with disabilities accused the company of violating the Americans with Disabilities Act by using AI-driven processes that fail to meet the ADA’s standards for accommodations. Amazon denied that AI makes final accommodation decisions and stressed its commitment to responsible AI use.
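The feedback loop Wilson describes can be sketched in a few lines of Python. The data below is entirely synthetic and purely illustrative, not drawn from the study: a naive screener that scores applicants by historical hire rates simply learns whatever disparity that history contains.

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic "historical" hiring records: (group, qualified, hired).
# The historical process is biased: equally qualified applicants from
# group B were hired far less often than those from group A.
def make_history(n=10000):
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5
        if qualified:
            p_hire = 0.9 if group == "A" else 0.5
        else:
            p_hire = 0.05
        records.append((group, qualified, random.random() < p_hire))
    return records

history = make_history()

# A naive "screener" trained on this history: its score for an applicant
# is just the historical hire rate for that (group, qualified) cell.
counts = defaultdict(lambda: [0, 0])  # (hires, total) per cell
for group, qualified, hired in history:
    counts[(group, qualified)][0] += hired
    counts[(group, qualified)][1] += 1

def screener_score(group, qualified):
    hires, total = counts[(group, qualified)]
    return hires / total

# The learned scores reproduce the historical disparity for equally
# qualified applicants (roughly 0.9 vs. 0.5):
print(screener_score("A", True), screener_score("B", True))
```

Nothing in the screener references the protected attribute maliciously; it merely averages past decisions, which is enough to carry the bias forward, and any model retrained on the screener's own outputs would inherit it again.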
**How AI Hiring Tools Can Be Discriminatory**

AI hiring tools, which screen resumes or evaluate interviews, are trained on data drawn from existing hiring practices. If those practices reflect biases, such as favoring male candidates or elite universities, the AI can perpetuate them, said Elaine Pulakos, CEO of PDRI by Pearson. Without rigorous data checks, AI can “hallucinate” or produce skewed results.

A 2023 Northwestern meta-analysis of 90 studies found that employers called back white applicants 36% more often than Black applicants and 24% more often than Latino applicants with identical resumes. AI can amplify these biases, scaling discrimination faster than human processes can, said Victor Schwartz of Bold. He noted, however, that well-designed AI could also standardize fairness, which is easier than retraining thousands of HR staff.

**Combating AI Hiring Biases**

While Title VII and the EEOC protect against workplace discrimination, no federal regulations specifically address AI-driven employment bias, Kim said. Disparate impact discrimination, the unintentional bias that results from facially neutral policies such as screening out older applicants, remains actionable, but enforcement may weaken under policies hostile to that framework, such as a Trump-era executive order limiting disparate impact enforcement.

Local efforts include a New York City ordinance requiring bias audits of automated hiring tools within a year of use. Other state laws emphasize transparency and opt-out options for candidates. Firms like PDRI advocate human oversight in evaluating AI tools, while Bold’s Schwartz sees potential for AI to diversify hiring by streamlining applications for underrepresented groups, such as women, who tend to apply to fewer roles.

By prioritizing audits, transparency, and careful data curation, experts believe AI can reduce rather than reinforce workplace bias; unchecked systems, however, risk scaling discrimination rapidly.
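Bias audits of the kind the New York City ordinance requires typically start by comparing selection rates across demographic groups. A minimal sketch, using hypothetical numbers and the EEOC’s conventional “four-fifths rule,” under which a group’s selection rate below 80% of the highest group’s rate is treated as evidence of possible adverse impact:

```python
# Hypothetical audit counts: applicants screened in, per group.
outcomes = {
    "group_a": {"passed": 180, "applied": 400},
    "group_b": {"passed": 90, "applied": 400},
}

def selection_rate(group):
    return outcomes[group]["passed"] / outcomes[group]["applied"]

rates = {g: selection_rate(g) for g in outcomes}
highest = max(rates.values())

# Four-fifths rule: flag any group whose selection rate falls below
# 80% of the most-selected group's rate.
for group, rate in rates.items():
    ratio = rate / highest
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f} ratio={ratio:.2f} {flag}")
```

Here group B’s rate (0.225) is half of group A’s (0.45), well under the 0.8 threshold, so an auditor would flag the tool for closer statistical review; the rule is a screening heuristic, not a legal conclusion.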