The middle manager’s AI survival guide
Mass layoffs and mixed messages—but also a shot at reinvention: 35 people caught in the middle share their war stories and hard-won advice.
Caught in the Middle: How AI Is Breaking the Management Layer
The promise was productivity. The reality, for millions of middle managers, has been something closer to whiplash.
Executives have grown bewitched by AI's potential to slash costs and accelerate output. Rank-and-file employees oscillate between fear that their jobs are disappearing and overreliance on tools they only half-understand. Caught between those two forces are middle managers — responsible for executing directives they didn't write, reassuring employees they can't fully reassure, and making sense of strategies that often don't make any.
That pressure is now intensifying, because some of the most powerful voices in tech have decided that middle management itself is the problem.
The Case Against the Middle
Meta and Microsoft have both made headlines recently with workforce reductions framed, in part, as a response to ballooning AI costs. Palantir CTO Shyam Sankar argued on Fox News that "AI can eliminate bureaucracy because we've built up all these layers … to concentrate power, essentially, in the hands of a few bureaucrats." Block CEO Jack Dorsey, in the wake of laying off 40% of his workforce, went further: he wants to eventually compress management from five layers to two or three — and ultimately have all 6,000 of his employees report directly to him.
It is worth pausing on that number. Six thousand employees, one CEO. To win World War II, Dwight Eisenhower depended on an army of middle managers — sergeants and lieutenants, captains and colonels — to translate strategy into action across vast, complex operations. Dorsey's vision is the kind of magical thinking that impresses investors at pitch meetings but makes about as much sense, operationally, as eliminating offensive linemen to reduce NFL salary cap burdens.
The rhetoric, however, has real consequences. It shapes how companies deploy AI — and how they treat the people caught in the middle of that deployment.
What Chaos Actually Looks Like
At Meta, the pressure to automate was less a strategy than a scramble. Ethan, an individual contributor on Meta's product risk review team who asked to be identified only by his first name, watched his department get restructured six times in six months. He had a new manager every 30 days. None of them knew what the end goal was.
"We had two weeks of getting to know each other, and then we were trying to understand what the new objectives were for the new AI improvements," he says. "By the time we got comfortable, there would be another shift." The AI tool his team was using made frequent mistakes and couldn't factor in context absent from product development documents — things like institutional history. Ethan found himself rubber-stamping products despite the risks they presented. "It was all in the name of shipping quickly and removing the privacy and risk function, as that was seen as a blocker to development. A lot of things slipped through the cracks."
Eventually, the logic behind the chaos became clear. "It turns out what we were doing was setting up the framework for our department to be automated by AI," he says. His entire team was laid off. At the time he left, the risks he'd raised remained unaddressed. Last week, Meta announced it would begin tracking employee keystrokes and mouse movements to help train its AI.
His advice: "If it feels like the company is trying to automate your job, they probably are. Loyalty to any company — passion for the people or the product, especially in tech — means less to management than the share price."
A Meta spokesperson responded that "this AI evolution within Risk Review doesn't replace human judgement — it strengthens it."
The disorder at Meta was extreme, but the underlying dynamic — AI adoption prioritized over AI utility — was not unique to one company.
When Adoption Becomes the Goal
At Amazon, a manager who was laid off earlier this year and requested anonymity described a culture in which the appearance of AI use had become its own performance metric. "Managers are being told to hold people accountable for using AI," he says — not to improve the quality of work, but to demonstrate that the company was embracing it.
"Logins and tokens and usage were tracked and held against people during annual reviews and promotion discussions." The result was predictable: employees built redundant, low-value AI apps just to generate usage numbers. "VPs would brag about how much their developers use AI, and it's an internal contest to see which team is 'doing the most with AI.' What the builders are doing and the quality or usefulness of the output has become secondary."
An Amazon spokesperson said the company does not centrally mandate AI use or instruct managers to consider AI utilization in performance evaluations — a claim that sits uneasily alongside the behavior multiple employees described.
At a startup that builds an AI tool for engineers, Jenna (a pseudonym), a marketing manager, watched a similar dynamic unfold. The company wanted to project total AI commitment — not because it improved results, but because it made for a compelling fundraising narrative. The strategy worked: the startup raised $100 million in its latest round. A month later, Jenna and her team were laid off, replaced by a group described as more "product-forward."
She had seen it coming. "I'd ask questions like, 'What are you going to say to the junior developer who doesn't get the job because AI can do the work?'" she says. "I wanted to do good work, which is why I was asking tough questions. I wasn't trying to be a naysayer."
Asking the tough questions, in a climate like this, carries its own risk.
The Human Cost of a Bad Directive
The workers most visibly exposed to poor AI strategy are rarely the executives who designed it. Divya, a former analytics manager at Genentech, watched her company reorganize in 2024 under the banner of becoming more "AI-ready." In the run-up to July, when employees would have to re-interview for their own positions, the real competition was for proximity — to teams and managers seen as AI-competent, regardless of whether anyone actually understood what that meant.
"We were just wondering what's happening and trying to find ways to make ourselves seem important and valuable," she says. "People were not really working. They were just preparing for the interviews." Divya was laid off that July.
At Oracle, the calculus was starker still. The company has committed $300 billion to AI data center investment and is looking for ways to offset those costs. "Humans are the most expensive part of the equation," says a manager laid off there last September, who asked to be identified by the pseudonym Evan Harmer. "If an executive sees that AI tokens cost $200 to $500 a month and replace an unknown number of people, the math is difficult to ignore." Most of the layoffs he witnessed hit engineering — precisely where AI is most visibly replacing human work.
What Good Management Actually Requires
Not every story ends in layoffs, and not every AI rollout is a cynical theater exercise. The managers who have navigated this moment most effectively tend to share a few traits: they've insisted on clarity, they've protected their teams' time, and they've refused to let adoption metrics substitute for outcomes.
At Boston Consulting Group, Pragya Maini, global director of people and organization, built role-specific enablement sessions — separate training for junior consultants, project leaders, and senior staff, because the relevant use cases differ at each level. "We break it down by different use cases to show them how AI tools can be used," she says. She also cautions against the hidden cost of open-ended experimentation: "How do you make sure people are not getting into rabbit holes? I can't risk having people spend a full week just telling me, 'I learned five new tools.'" Her practical framework: assign AI to a real deliverable rather than a sandbox, accept a short productivity dip upfront as the cost of training, and build a team norm of sharing what works.
Jason Ippen, VP of brand strategy at Georgia-Pacific, learned early that mandating AI backfired. When his team was first told to experiment with tools like Midjourney and ChatGPT, "the creatives were stressed. We heard a lot about the limitations rather than how the tools could enhance their work." His revised approach has focused on creating a low-pressure environment for experimentation — and letting curiosity lead.
For managers without that organizational support, the path forward is more independent. Russell Taris, a regional manager at a civil engineering firm where leadership is neither pushing AI nor blocking it, started by experimenting only on tasks that affected himself: meeting prep, status updates, personal notes. "By the time anyone asked how I was being more productive, I already had months of practical experience," he says. He recommends finding peers in similar positions — not IT departments or tool vendors — and building lateral knowledge networks. To get senior leaders on board, he focuses on results: "Most senior leaders don't want a presentation about AI. The best way to get buy-in is to show that your work is improving without creating new problems."
Managing the Fear That Nobody's Talking About Openly
Beneath all the strategic questions is an anxiety that most managers have to navigate without a script: their direct reports are scared.
Mickael Mingot, former head of programs and content strategy at TikTok for France, Belgium, and Brussels, heard it regularly. "You have to reassure them, but you're actually not 100% sure of what's going to happen," he says. "Direct reports think you have visibility into every strategic decision, which is not true. Sometimes you have only 10% visibility." His approach was to reframe AI as an amplifier rather than a replacement. "Use AI as something that will multiply you — because you are creative, because you are intelligent. If you use it, you will be even more intelligent, even more creative."
Some managers are also having to manage upward against leaders who misunderstand what AI can and cannot do. "A lot of what we get from leadership is: 'Shouldn't AI help you do this faster? This shouldn't take 20 weeks,'" says Lyn, a product manager for a retail platform who requested a pseudonym. Her job requires understanding employee problems deeply — going out, talking to people, and uncovering institutional logic that no AI can infer from a document. An HR director in Singapore offered a blunt three-part response to executives who push too hard: be transparent about the current state of prompting, show them the actual output quality, and take the time to find out whether they've ever tried it themselves.
The Exception That Proves the Rule
The clearest counterexample to all of this dysfunction is Abhishek Chaturvedi, an engineer at Docusign who, last August, brought a bottom-up proposal to his CTO: a small internal group of AI champions who would figure out — without a top-down mandate — where AI actually made sense.
"Instead of having a mandate saying we have to use AI, we wanted to figure out where it actually makes sense," he says. His group identified specific workflows, selected the most appropriate tools, established monthly workshops, and created office hours for colleagues with questions. The emphasis was always on building trust in the output, not maximizing usage metrics. "As engineers, we are responsible for the code that we submit." Today, the group has grown to 85 people, and AI adoption among Docusign engineers sits at 95%.
The contrast with Meta's experience — where adoption was mandatory and chaotic, and ultimately ended in mass layoffs — could not be starker.
The Limits of the Middle
The managers most energized by AI are consistently those who were given room to lead its adoption on their own terms, or who created that room themselves. That is not a coincidence. The technology is genuinely useful; the question is whether the organizations deploying it are structured in ways that allow human judgment to guide it.
That question, ultimately, is not one that middle managers can answer alone. They can translate strategy into practice, protect their teams from bad mandates, and make thoughtful decisions about what to automate and what to leave to human hands. But the most consequential choices — who keeps their job, how layoffs are conducted, who gets left behind — are made above them, by people who are often insulated from the consequences.
The irony of this moment is that the executives most eager to eliminate middle management may be dismantling precisely the layer that could make AI adoption actually work. Middle managers are not bureaucratic friction. They are, as Eisenhower understood, the transmission between vision and execution. Remove them, and you don't get a flatter organization. You get an organization where nothing gets done.
