The Covid-19 pandemic changed the way we work—possibly forever. As companies scrambled to keep tabs on employees working from home, many turned to digital surveillance tools. Now, even as more than half of Fortune 100 companies have brought employees back to the office full-time, that surveillance culture hasn’t gone away.
One of the most controversial technologies employers are using today is facial recognition software. According to a 2025 survey by ExpressVPN, 74% of U.S. employers use some form of employee monitoring tech, and 67% use biometric tools such as facial recognition and fingerprint scanning. These tools are being used for everything from tracking attendance and verifying identities to screening job candidates, managing deliveries, and even reducing the number of physical touchpoints in the office.
But this tech doesn’t come without serious concerns.
Facial recognition tools have well-documented flaws and biases, especially when it comes to race, gender, and disability. Multiple cases of harm have already been linked to their use in the workplace. In 2025, a complaint was filed against Intuit and the HR software company HireVue, alleging that an Indigenous and Deaf woman was denied a promotion due to biased AI decision-making. In another case, a makeup artist said she was fired in 2020 after HireVue's facial recognition software gave her low marks based on her body language in a video interview.
There’s also the case of an Uber Eats driver who, in 2024, successfully challenged his dismissal after the app’s facial recognition software failed to verify his identity. He argued the software didn’t recognize his face because of his skin tone. Researcher Dr. Joy Buolamwini, who has studied these issues extensively, has shown that leading facial recognition systems misclassify darker-skinned faces, and darker-skinned women in particular, at far higher rates. Her work, including the book Unmasking AI and the documentary Coded Bias, highlights just how deep these flaws run.
The problems don’t end there. A 2025 study found that facial recognition tools are also more likely to misidentify people with Down syndrome, and they tend to struggle with accuracy when identifying transgender and non-binary individuals.
Aside from the technical issues, there’s a bigger question: What kind of workplace are we creating when employees feel like they’re constantly being watched?
In 2023, the White House Office of Science and Technology Policy gathered public feedback about workplace surveillance. Many employees said constant monitoring made them feel anxious and untrusted, leading to lower morale and productivity. Others worried about their ability to organize or unionize under the watchful eye of these tools. And of course, there are major concerns about privacy—especially when it’s not clear how biometric data is stored or used.
So, what should companies do?
Before adopting facial recognition software, employers need to seriously consider the risks. These tools can reinforce harmful biases, alienate workers, and erode trust. If your company already uses them, it’s critical to demand transparency from vendors. Are their systems audited? How do they ensure accuracy and prevent bias? What safeguards are in place?
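For readers wondering what answering the accuracy question might actually involve, here is a minimal, illustrative sketch of a disparity audit in Python. Everything in it is hypothetical: the log records, the group labels, and the 1.5x alert threshold. A real audit would draw on the vendor’s own verification logs, collected with consent and properly anonymized, and compare results against independent benchmarks such as NIST’s face recognition vendor evaluations.

```python
# Minimal sketch of a bias audit: compare how often a face-verification
# system wrongly rejects legitimate employees, broken down by group.
# All data below is hypothetical, for illustration only.
from collections import defaultdict

# Each record: (demographic group, whether a legitimate employee was
# wrongly rejected by the face check). A real audit would use thousands
# of records from the vendor's logs.
verification_log = [
    ("group_a", False), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", True), ("group_b", False),
]

def false_rejection_rates(log):
    """Return the false-rejection rate for each demographic group."""
    totals, failures = defaultdict(int), defaultdict(int)
    for group, wrongly_rejected in log:
        totals[group] += 1
        if wrongly_rejected:
            failures[group] += 1
    return {g: failures[g] / totals[g] for g in totals}

rates = false_rejection_rates(verification_log)
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%} of legitimate checks wrongly rejected")

# Flag a large gap between the best- and worst-served groups; the 1.5x
# threshold here is an illustrative choice, not an industry standard.
worst, best = max(rates.values()), min(rates.values())
if best > 0 and worst / best > 1.5:
    print("Warning: failure-rate disparity across groups exceeds the threshold.")
```

Even a toy check like this makes the vendor questions concrete: if a supplier can’t produce these numbers, broken down by demographic group, that is itself an answer.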
Employees, too, should be informed about their rights and how their data is being collected and used. Transparency, accountability, and fairness must be at the core of any workplace tech policy.
At the end of the day, we need to ask ourselves: Do we really want a future where our face determines whether we’re hired, fired, or promoted? Because once that door is open, it’s hard to close.
