Treating AI like an employee rather than a tool may come with risks, according to a new study from researchers at Boston Consulting Group and Boston University. The study of more than 1,200 managers found that when companies humanize AI — giving agents names, job titles, or spots on the org chart — workers catch 18% fewer errors, escalate work to supervisors 44% more often, and feel less personally accountable for outcomes. The takeaway, researchers say, is that companies should treat AI as software requiring human oversight, not as a colleague.
When companies put AI agents on the org chart and call them "employees" or "teammates," something unexpected happens to the human managers around them: personal accountability drops, escalations rise, review quality falls — and intent to adopt the technology doesn't increase.
These findings come from a randomized experiment with 1,261 managers, published today in Harvard Business Review.
The pull to humanize AI is strong, and we see it everywhere. Naming agents makes the technology feel familiar and signals to investors and employees how AI-forward a company is.
However, the research suggests that humanizing AI subtly shifts how people behave, particularly in companies that have already added AI agents to the org chart. Managers in those organizations took less ownership of decisions, leaned harder on others to double-check the work, and caught 18% fewer errors when reviewing documents framed as coming from an "AI employee" rather than an "AI tool."
The companies that will get the most from agents won't simply plug them into org charts. They'll redesign work, governance, and accountability so that humans remain responsible for outcomes as AI systems become increasingly capable.
Are you tempted to add your new AI agent to the org chart? You might want to think twice.
As agentic AI becomes more advanced and autonomous, a growing number of leaders are anthropomorphizing these systems, treating them as digital "employees" or "co-workers" rather than as advanced tools.
The hope? To make the technology feel less foreign to the team and signal a strong commitment to AI innovation.
However, fascinating new research published in the Harvard Business Review by the BCG Henderson Institute reveals that this framing has some serious unintended consequences.
According to their recent randomized experiment, treating AI like an employee can actually backfire in several ways:
⚠️ Shifted Accountability: Humanizing AI tends to shift responsibility away from your actual human workers.
⚠️ Reduced Quality & Increased Escalation: It can lower the thoroughness of human review and cause employees to unnecessarily push decisions up the chain of command.
⚠️ Eroded Trust: Treating AI as a peer can erode professional identity and trust within your team.
⚠️ The Adoption Myth: Surprisingly, calling AI an "employee" doesn't meaningfully increase people's willingness to adopt the technology and integrate it into their daily workflows.
The Bottom Line: Agentic AI has massive potential to expand what our organizations can achieve. But the real challenge isn't whether to adopt it; it's how to integrate it.
Leaders need to focus on integrating AI into workflows in ways that preserve human accountability, maintain high-quality outputs, and empower employees to work effectively alongside the technology as a powerful tool, rather than feeling replaced by a digital "co-worker."
What do you think? How is your organization framing the rollout of AI agents—are they tools, or are they teammates?