Only 1 in 5 Workers Say Their AI Use Is Checked at Work. That Needs to Change

In the push to adopt the new technology to boost productivity, companies may open themselves to serious problems if they don’t set rules for AI at work.

AI is transforming the workplace, with many employees eager to use it to streamline repetitive tasks and boost productivity. Forward-thinking companies are all-in, encouraging AI adoption to stay competitive. But every game-changing technology comes with risks, and a new survey reveals a startling gap: most employers have little to no visibility into how their workers are using AI. This lack of oversight opens the door to serious legal, financial, and reputational dangers.

A survey by EisnerAmper, a New York City-based business advisory firm, found that only 22% of U.S. desk workers using AI say their company monitors how these tools are used, according to HR Dive. That means nearly eight in 10 workers are using AI with little or no supervision. Even if a company has safety protocols or legal guidelines in place, there is no assurance employees are following them, and no audit trail if things go wrong.

AI’s imperfections are well-documented. Models can “hallucinate,” generating entirely false information while presenting it as fact. EisnerAmper’s survey of over 1,000 U.S. workers with a bachelor’s degree or higher who used AI in the past year confirms this: 68% reported frequent AI errors. Yet, 82% remained confident in AI’s overall accuracy, showing a surprising disconnect between reality and trust.

Jen Clark, EisnerAmper’s director of technology enablement, highlighted a “communication gap” between employers and employees about responsible AI use. This gap is especially concerning given recent reports of risky AI practices. A May Reddit thread titled “Copy. Paste. Breach? The Hidden Risks of AI in the Workplace” described workers uploading sensitive data to AI tools, putting companies at risk of data breaches and compliance violations. In February, another report noted employees sneaking unauthorized AI tools into the workplace, either because their company didn’t provide AI tools or because the approved ones felt subpar. An August survey added fuel to the fire: 58% of U.S. workers admitted to pasting sensitive company data, such as client records or financial details, into AI systems to get quick answers.

The problem? AI models can “leak” data. Information fed into public-facing tools like chatbots may resurface in unrelated queries or be used to train future models, potentially exposing sensitive client or company information. This could violate privacy agreements, derail major deals like IPOs, or trigger regulatory penalties and lawsuits.

What Can Employers Do?

  1. Create a Clear AI Policy: If your company doesn’t have one, it’s time to act. A simple policy could specify approved AI tools that align with your data privacy standards or restrict uploads of sensitive data to public platforms like ChatGPT (a minimal enforcement sketch follows this list).

  2. Invest in Ongoing AI Education: AI tools evolve rapidly, bringing new benefits—and risks. Regular training on responsible AI use keeps employees informed and can even reassure clients that their data is safe, giving your company a competitive edge.
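For teams that want to back a policy like No. 1 with tooling, one option is a lightweight pre-submission filter that scans prompts for obviously sensitive patterns before they ever reach a public AI tool. The sketch below is illustrative, not a production data-loss-prevention system: the regex patterns and the check_prompt, submit_to_ai, and send_to_approved_tool names are hypothetical placeholders, not part of any real product named in this article.

```python
import re

# Illustrative patterns for data a policy might bar from public AI tools.
# A real deployment would use a proper DLP or classification service;
# these regexes are assumptions for the sketch, not an exhaustive rule set.
SENSITIVE_PATTERNS = {
    "U.S. Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def submit_to_ai(text: str) -> str:
    """Block prompts that trip the filter; otherwise hand off to the tool."""
    findings = check_prompt(text)
    if findings:
        raise ValueError(
            f"Prompt blocked by AI-use policy; detected: {', '.join(findings)}"
        )
    # send_to_approved_tool(text)  # hypothetical call to a vetted AI tool
    return "submitted"

if __name__ == "__main__":
    try:
        submit_to_ai("Summarize this: client SSN 123-45-6789, Q3 revenue...")
    except ValueError as err:
        print(err)
```

A real rollout would pair a filter like this with an allowlist of approved tools and centralized logging, which also supplies the audit trail the survey found most companies lack.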

By setting clear guidelines and fostering open conversations about AI, employers can harness its power while minimizing risks. Don’t wait for a data breach to take action—start building a safer AI workplace today.
