In early 2024, the U.S. Justice Department revealed a troubling case: two North Korean nationals, along with three accomplices, had allegedly infiltrated 64 U.S. companies over six years by posing as remote IT workers. Using a mix of fake websites, proxy computers, and identity fraud, they secured jobs and access to sensitive systems—all under false pretenses.
This isn’t an isolated incident, and experts warn that such deception is only going to increase. According to research firm Gartner, by 2028, one in four job applicants globally could be fake—created using AI-generated profiles and sophisticated digital tools.
So, how can companies protect themselves?
Start with the Résumé—and Your Gut
Julia Frament, head of people and culture at cybersecurity company Ironscales, helps companies spot fraudulent candidates. Her advice? Trust your instincts.
“If a résumé feels overly polished or unusually generic, that could be a red flag it was generated by AI,” Frament explains.
Watch for an overload of buzzwords, vague role descriptions, or a lack of specific accomplishments. A solid résumé should detail measurable results and name real tools, platforms, or systems the candidate has worked with. If it doesn’t, consider it a “pink flag”—something suspicious enough to investigate further.
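For teams screening résumés at volume, the buzzword check above can be roughly automated. The sketch below is a hypothetical heuristic, not a vetted detector: it counts generic buzzwords against concrete signals (named tools and numeric, measurable results), and the word lists are assumptions a hiring team would tune for its own roles.

```python
# Illustrative heuristic only: flags resumes that are heavy on
# buzzwords and light on concrete, measurable detail. The word
# lists below are assumptions, not a validated screening tool.
import re

BUZZWORDS = {"synergy", "dynamic", "passionate", "results-driven",
             "innovative", "self-starter", "detail-oriented"}
CONCRETE_TOOLS = {"python", "salesforce", "kubernetes", "sql",
                  "excel", "jira", "aws"}

def resume_flags(text: str) -> list[str]:
    # Tokenize, keeping hyphenated terms like "results-driven" intact.
    words = re.findall(r"[a-z0-9+-]+", text.lower())
    buzz = sum(w in BUZZWORDS for w in words)
    tools = sum(w in CONCRETE_TOOLS for w in words)
    # Measurable accomplishments usually include figures ("30%", "12 services").
    numbers = len(re.findall(r"\d+%?", text))
    flags = []
    if buzz >= 3 and tools == 0:
        flags.append("buzzword-heavy, no named tools")
    if numbers == 0:
        flags.append("no measurable results")
    return flags

print(resume_flags("Passionate, results-driven, innovative self-starter."))
```

A score like this only surfaces "pink flags" for a human to investigate; it cannot replace the gut check Frament describes.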
Check Their Digital Footprint
A strong LinkedIn profile is another signal of authenticity. Sparse activity, few or no connections, and missing endorsements could indicate the applicant is fake or that their profile was recently created.
“If this is a real person, they should have some kind of digital trail,” says Frament.
Don’t Skip the Face-to-Face
The best way to confirm someone’s identity? Meet them. In-person interviews—whether a formal meeting, a coffee chat, or lunch—are still the gold standard. But in today’s remote work world, video calls often have to suffice. That’s where things get tricky.
With advances in AI, scammers can now create shockingly convincing virtual avatars—complete with synthesized voices and facial expressions. But the tech isn’t flawless.
Look out for:
- Lag or desynchronization between lips and speech
- Unnatural clarity of the avatar against the background (real people tend to have blurring around hairlines with virtual backgrounds)
- Hesitations or glitches when responding to spontaneous questions
Frament recommends throwing in a few physical requests: “Ask them to wave, point to a light switch, or touch their face. AI avatars often struggle with these real-time, unscripted actions.”
Test Their Consistency
Another red flag: a mismatch between how someone writes and how they speak.
“If someone is hyper-formal in emails but totally inconsistent or robotic in conversation, that’s something to take note of,” says Frament.
Surprise Them
Finally, if suspicions remain late in the hiring process, a quick, unscheduled video call can be revealing. It doesn’t have to be formal—just a casual check-in to gauge authenticity.
“Scammers are often prepared for scheduled interviews,” says Frament, “but they usually can’t handle surprise interactions.”
As AI continues to blur the line between real and fake, HR professionals and hiring managers must adapt: the job is no longer just screening for talent, but verifying identity.
“Culture fit used to be the priority,” Frament concludes. “Now, we have to assess authenticity, too.”