From Deepfake Interviews to Fake Resumes, AI Job Fraud Hits 72% of Recruiters
A survey of 874 hiring professionals reveals that artificial intelligence is reshaping the job market in ways that challenge even experienced recruiters. Nearly three-quarters report encountering AI-generated resumes, and 15% have observed candidates using face-swapping technology during video interviews to alter their appearance in real time.
The findings highlight a growing issue: high-tech job fraud is on the rise, and most companies are unprepared to counter it.
The Scope of AI-Driven Job Fraud
AI deception extends beyond resumes. Over half of recruiters (51%) have identified AI-generated work portfolios, 42% have spotted fake references, and 39% have encountered fabricated credentials or diplomas. Voice manipulation is also increasing, with 17% detecting voice filters or cloning attempts.
Despite these red flags, only 20% of hiring professionals admit to lacking confidence in detecting AI fraud without specialized tools, suggesting the other 80% may be overestimating their ability to spot it manually. Half of recruiters now view AI-enhanced resumes as fraudulent, and nearly as many reject candidates suspected of using AI. Additionally, 40% have dismissed applicants over concerns about AI-driven identity manipulation in interviews.
Tech Sector Faces Highest Risk
The technology industry is the most vulnerable, with 65% of hiring professionals noting it as the prime target for AI-driven fraud. Marketing (49%) and creative/design roles (47%) are also heavily affected, as AI can easily fabricate portfolios and visual work. Finance follows at 42%.
Other sectors are less affected but not immune: 21% of respondents cite government roles as targets, 19% healthcare, and 15% education. Large companies (250+ employees) are seen as particularly at risk by 31% of respondents, though 35% believe all organizations are vulnerable regardless of size.
Detection Challenges and Gaps
While recruiters express confidence in spotting fraud, only 31% of companies use AI or deepfake detection software. Most rely on manual HR reviews (66%) or third-party background checks (53%). Biometric identity verification is used by 27%, and a troubling 10% of companies employ no detection tools at all.
Training is another weak point: 48% of HR professionals have received no guidance on AI-driven fraud, though 15% say training programs are in development. Looking ahead, 40% of companies plan to invest in detection tools within the next year, and over half are willing to pay extra for hiring platforms with built-in AI fraud detection.
Industry Response and Proposed Solutions
Hiring professionals are calling for systemic changes. Nearly two-thirds (65%) believe job platforms like LinkedIn and Indeed should flag AI-generated applications, and 62% support federal laws mandating disclosure of AI use in application materials. Additionally, 65% favor mandatory “live only” interviews to verify identities, and 54% advocate for stronger credential and background checks.
The urgency is evident: 88% predict AI-driven fraud will transform hiring within five years. Recruiters consider AI-generated resumes the bigger threat (63%) compared with deepfake video technology (37%).
Survey Details
Conducted by Software Finder, the survey included 874 HR professionals and recruiters, averaging 42 years old, with 50% female, 49% male, and 1% nonbinary respondents. The research explored the impact of AI and deepfake technology on hiring practices and the job market.