6 Questions To Ask Before Turning Management Duties Over To AI

When the Boss Is a Bot: Can AI Handle the Messy Reality of Management?

Human management is complicated. Messy, political, and deeply personal — it's the kind of work that rarely shows up in a job description but makes or breaks a team. Now, as AI agents take on increasingly prominent roles in organizations, a provocative question is emerging: can artificial intelligence handle not just the tasks of management, but the art of it?

Spoiler: it's more complicated than it sounds.

The 1% / 99% Problem

A corporate IT director recently admitted something most managers already know but rarely say out loud — running a department is about 1% technical proficiency and 99% politics. It's about reading the room, building trust, knowing when to push and when to pull back. It's deeply, irreducibly human.

AI agents don't play politics. They don't jockey for position or take credit for someone else's idea. In theory, that sounds refreshing. But does the absence of political behavior actually make for better management — or does it just create a different set of blind spots?

To find out, I asked executives and managers to share the unwritten rules that define great leadership. Then I asked the harder question: could an AI actually do any of this?



Learning to Say No

"Saying 'no' is actually more important than saying 'yes,'" says Phil Santoro, entrepreneur and co-founder at Wilbur Labs. In fast-moving environments, teams get buried under requests that look like productive work but are really just noise. The real skill is filtering ruthlessly so your team can focus on what actually matters.

It's a surprisingly strategic act. Can an AI agent develop that instinct — distinguishing high-impact work from distraction — without being told explicitly which is which? That kind of judgment requires context, organizational awareness, and a feel for shifting priorities. We're not there yet.

Praise That Actually Means Something

Tom Thomas, senior data engineering manager at Indeed, swears by the power of purposeful recognition. Genuine praise builds trust and raises the bar — but only when it's specific, timely, and tied to real expectations for growth. Empty flattery does more harm than good.

AI can generate positive feedback. But can it deliver sincere praise? Can it read the moment well enough to know that a quick "nice work" lands differently than a thoughtful acknowledgment in front of the team? The mechanics are easy. The meaning is the hard part.



Ownership and the Budget Conversation

Both Thomas and Santoro emphasize that transparency around budget decisions isn't just good housekeeping — it's a leadership tool. When teams understand the financial picture, they start thinking like owners. They find creative ways to cut costs, eliminate waste, and invest smarter.

AI is genuinely useful here. It can flag overspending, predict overruns, and surface unused subscriptions. But can it facilitate the conversation that turns a spreadsheet into a shared sense of mission? That's a different skill entirely.

The Authenticity Question

"When people feel safe bringing their true selves to the table, teams thrive," says Gil Pekelman, CEO of Atera. Fear of judgment kills innovation. Psychological safety isn't a soft perk — it's a performance driver.

Which raises an interesting question: would you feel comfortable being vulnerable with an AI manager? For some, the lack of judgment might actually feel liberating. For others, sharing real concerns with a system that logs everything could feel anything but safe. The answer probably depends less on the AI and more on the culture surrounding it.

Building Teams, Not Just Org Charts

Orla Daly, CIO at Skillsoft, puts it simply: stop focusing on hierarchy and start focusing on capability. Great teams are built on skill, adaptability, and genuine curiosity — not reporting structures.

AI can analyze team compositions, identify skill gaps, and recommend pairings. But fostering the kind of organic curiosity and collaboration that makes a team click? That's still stubbornly human territory.

Measuring Transformation, Not Just Activity

Perhaps the most sobering insight comes from Jean-Philippe Avelange, CIO at Expereo, who points to a striking Microsoft study: 80% of employees stopped using AI tools after just three weeks — not because the tools failed, but because there was a gap between what the technology could do and what people could actually accomplish with it.

The instinct is to measure logins and clicks because they're easy to count. But what actually matters is harder to quantify: are people solving problems they couldn't solve before? Are they growing? Avelange calls this "measuring capability alongside activity" — and admits it's messy and imprecise. But it's the only measurement that reflects real transformation.

Can an AI manager look beyond its own metrics to understand whether genuine change is happening? That's a level of self-awareness that even most human managers struggle with.

So, Can AI Manage?

For certain tasks — scheduling, budgeting, performance tracking, even some forms of feedback — AI is already proving useful. And as tools mature, that list will grow.

But the deeper work of management? Knowing when to say no. Making praise feel real. Creating safety. Building belief. Measuring what matters even when it can't be easily counted. That work lives in the space between data points, in the moments of human judgment that no algorithm has cleanly solved.

The future of management probably isn't AI or humans. It's figuring out, thoughtfully and honestly, which parts of the job each does best.
