The middle manager’s AI survival guide


Pity the middle manager. Even before the emergence of AI, these jobs had increasingly become a one-way ticket to burnout and misery. Since 2013, the average number of direct reports has increased by almost 50% to twelve employees, according to Gallup. The same poll revealed that less than one-third of managers are engaged at work, while over a quarter are planning to leave their jobs.

Enter AI. The ever-changing chimera, swathed in hype, is now making life even more complicated for managers. Executives are bewitched by AI’s promise of productivity. Rank-and-file employees oscillate between fearing that AI will take their jobs and overusing it. Those sandwiched in between, the middle managers, are caught between corporate’s AI directive (or lack thereof) and the occasionally wild experimentation of their direct reports.

Unsurprisingly, some tech moguls see AI as an opportunity to eliminate the bothersome costs associated with paying human beings. Meta and Microsoft recently made headlines with new announcements about workforce reductions to counter ballooning AI costs. This follows Shyam Sankar, CTO of Palantir, telling Fox News: “AI can eliminate bureaucracy because we’ve built up all these layers… to concentrate power essentially in the hands of a few bureaucrats running organizations and away from the worker at the frontline.”

Block CEO Jack Dorsey appears to be on board with the idea. In the wake of laying off 40% of his workforce, he wrote a blog post arguing that AI will make middle managers obsolete. On Sequoia Capital’s Long Strange Trip podcast, he said he plans to reduce management from five layers to two or three, with the eventual goal of eliminating management altogether and having all 6,000 employees report directly to him.

Let us pause for a moment to consider the notion of 6,000 employees reporting directly to a CEO. Not exactly a Sun Tzu maxim. To win World War II, Dwight Eisenhower depended on the effectiveness of an army of middle managers (sergeants and lieutenants, captains and colonels). Dorsey’s assertion is the kind of magical thinking that may inspire potential investors to reach for the checkbook, but that in practice makes about as much sense as reducing salary cap burdens in the NFL by eliminating offensive linemen. 

The Challenges AI presents

Designing your own layoff: Meta

Companies like Meta have offered major-league compensation packages to AI researchers they think will help them get an edge, while middle managers scramble to implement technologies that are evolving more rapidly than projects can even be outlined.

Ethan, an individual contributor on Meta’s product risk review team, describes utter chaos in 2025 as pressure to use an internal AI tool to handle risk reviews for products under development ramped up. (Ethan requested we only use his first name.)

“My department was restructured six times within six months…I had a new manager every 30 days. None of them knew what the end goal was,” he says. “We had two weeks of getting to know each other, and then…we were trying to understand what the new objectives were for the new AI improvements. By the time we got comfortable, there would be another shift of ‘oh, we will actually want to do the process this way’ and then I would report to a different manager. A lot of people were burnt out from the constant change. There was a ton of attrition on the team.”

Work quality suffered. The AI often made mistakes and couldn’t factor in context, such as historical information, that wasn’t included in a product development document. Ethan was essentially rubber-stamping products despite the risks they presented. “It was all in the name of shipping quickly and removing the privacy and risk function as that was seen as a blocker to development,” he says. “A lot of things slipped through the cracks.”

Eventually, Ethan discovered the reasoning behind the mad rush: “It turns out what we were doing was setting up the framework for our department to be automated by AI,” he says. “I gave Meta nearly a decade of my life. It was my dream job for most of that time. In the end, everyone I worked with was laid off just so that shareholders could get a better return and Zuck could spend more on AI data centers the size of NYC.”

Ethan left Meta last June. Shortly after, his entire team was laid off. When he left, the risks he’d worried about had not been addressed. Just last week, Meta announced to employees that the company would begin tracking keystrokes and mouse movements to help train its AI.

His advice to others caught in the same trap? “If it feels like the company is trying to automate your job, they probably are. You should always be keeping your options open,” he says. “Loyalty to the company, passion for the people or the product, especially in the tech industry, means less to management than the share price.”

In response to questions, a Meta spokesperson referred Fast Company to a Meta Newsroom post, which states: “this AI evolution within Risk Review doesn’t replace human judgement—it strengthens it.”

AI for the sake of AI: Amazon

A manager at Amazon (who was laid off earlier this year and wanted anonymity so as not to jeopardize his severance) described the overall culture at major tech firms: “Managers are being told to hold people accountable for using AI,” he says, in order to “show the company is adopting AI,” regardless of what AI was doing to the actual quality of the work.

At Amazon, “logins and tokens and usage were tracked and held against people during annual reviews and promotion discussions.” The result?  

“People were building multiple highly redundant PartyRock [an Amazon AI app builder] apps to perform ‘doc writing reviews’ because writing documents is a key aspect of working at Amazon. There were many, many apps that were written to show that people were using AI, and the value of the apps themselves was super low,” he says.

“VPs [would] brag about how much their developers use AI and it’s an internal contest to see which team (based on actual monitoring of usage) is ‘doing the most with AI.’  What the builder/developers are doing and the quality or usefulness of what the output is has become secondary….”

“With a company of Amazon’s size and scale, AI adoption is going to look different across different parts of the business,” Montana MacLachlan, an Amazon spokesperson says. “…What we hear from the vast majority of our teams is that they’re getting a lot of value out of the AI tools that they use day-to-day.”

Fake it to make it: Genentech

Divya, a former analytics manager at Genentech, says the company ramped up its AI initiative in 2024 and announced a reorganization last April in the name of making the company more efficient and AI-ready. Employees were allowed to re-interview for positions in July. Before the interviews, there was a mad scramble to be associated with teams and managers who were good at using AI, rather than a focus on how to actually use it. “We were just wondering what’s happening and trying to find ways to make ourselves seem important and valuable,” she says. “People were not really working. They were just preparing for the interviews.”

Divya was laid off in July 2025.

“Genentech is hiring hundreds of new roles to embed automation, digital, and AI across the organization,” says Nadine Pinell, a spokesperson for Genentech. “Our digital transformation is as much about people as it is about technology.”

Don’t question authority: the startup

Jenna, a marketing manager at a start-up building an AI tool for engineers, describes a mounting pressure to use AI whenever possible, even if it results in lower-quality work. She says the company wanted to demonstrate it was all in on AI, even in divisions that didn’t need to use AI. The strategy was effective: The startup was able to raise $100 million for its latest round of funding.

Jenna, who requested we only use her first name, became increasingly worried about the approach. “I’d ask questions like what are you going to say to the junior developer who doesn’t get the job now because AI can do the work? Or the senior software engineer who doesn’t have a team of people to manage now because it’s exclusively AI agents?” she says. “I wanted to do good work, which is why I was asking tough questions. I wasn’t trying to be a naysayer.”

A month after the funding round, Jenna and her team were laid off.  The company is replacing them with a group that’s more “product-forward.”

Cost-benefit analysis: Oracle

Evan Harmer (a pseudonym) was a manager at Oracle who was laid off last September. “Oracle is on the hook for $300 billion worth of AI data centers, and so they’re looking for ways to cost cut. Humans are the most expensive part of the equation,” he says.

“There was no warning… Most of the folks that we see are getting laid off are in engineering, where AI is writing the code,” he says. “If an executive sees that, and they’re paying $200 to $500 a month for AI tokens that replace I don’t know how many people, the math is difficult to ignore.”

Oracle did not respond to a request for comment.

The Opportunity

Harmer found a new job at an AI startup. Within six months, he was able to vibe code three different apps, something that would have taken two teams a year to do at his old job. “The first time I created something, it was just like, Oh my goodness. This is it, I see it,” he says. “I understood why everyone was saying how AI was going to be such a big disruptor in the software industry.”

Many of the nearly three dozen workers we interviewed described AI strategies at their companies that were both confused and confusing. Many organizations aren’t building AI but are hoping to reap the productivity gains it promises. Priya, a manager at a public relations firm in India who requested a pseudonym because she did not want to jeopardize her chances of promotion, is struggling to understand how to apply AI. She says her company’s directive amounts to “use it but don’t use it.”

“Every month or every other month, we have training programs…that show us that the company has onboarded another new AI platform that can help you seamlessly do your work…from writing your content to understanding the larger client landscape and identifying misinformation.” She says: “There’s a lot of things that are dumped into these one-hour tutorials with no follow-up. Then you go back to your meeting notes…and try to figure it out.”

At the same time, Priya’s organization writes copy for caregiving brands and the directive is to “sound human.” “How do I tell my junior associate you can use [AI], but you shouldn’t use it?” she asks.

Priya says she spends three days a week rewriting her direct report’s AI slop.

Tips on managing down

At BCG, director of people and organization Pragya Maini found training sessions with very specific and clear information to be helpful. “We have enablement sessions that I’m running for project leaders as an internal AI champion, then separate ones for consultants, and then separate ones for more senior people, because the use cases would be very different,” she says. “We break it down by different use cases to show them how AI tools can be used.”

Jason Ippen, VP of brand strategy at Georgia-Pacific, learned a gentle approach works better than forcing AI. “When ChatGPT really got big, a couple years ago, we started to recognize that was going to have an impact on the content creation…We introduced AI tools (e.g. Midjourney, ChatGPT) and said, ‘Start testing these on your projects.’” 

The result? “The creatives were stressed…We heard a lot about the limitations (hands with six fingers, off-brand copy) rather than how the tools could enhance their work.”

Since then, he’s taken a different tack. “We’ve tried to create an environment that is motivating and encouraging to people, giving them time to experiment.”  

Maini pointed out that experimentation can come with some downside: “How do you make sure people are then not getting into rabbit holes of figuring out different tools? Are they also governing their time? I can’t risk having people spending a full week on just telling me, okay, I learned five new tools.  There’s no perfect formula and some investment time needs to be built in upfront,” she says. “Normalizing that upfront takes a lot of pressure off.”

Here’s what she says has been working for her:

1) Assign AI to a real task, not just a sandbox exercise. If someone has a deliverable coming up, that’s the moment to say “try using AI for this part.”

2) Accept a short productivity dip upfront as employees learn the tools.  That’s the tuition for a much bigger eventual return on the investment of time.

3) Create a team norm of sharing what works so the learning compounds across everyone, not just one person.

Reassuring Your Direct Reports

The most prevalent anxiety managers saw was the fear that AI will take jobs. Mickael Mingot, the former head of programs and content strategy at TikTok for France, Belgium and Brussels, says his team used AI for low-value tasks such as writing copy for push notifications on phones during various campaigns such as the Olympics or the Oscars.

While TikTok did not require managers to use AI, Mingot was aware that “we were working in a strategic partnership job that could easily be impacted by layoffs, easily be impacted by AI.” At the time, “TikTok had so many reorganizations.”

Direct reports would bring up their fears in meetings with him. “You have to reassure them, but you’re actually not a hundred percent sure of what’s going to happen,” he says. “Direct reports think you have the solution to everything and visibility into every strategic decision, which is not true. Sometimes you have only 10% visibility.”

“A big part of the manager job right now is to reassure people,” he says. Ultimately, he told his team: “Use AI as something that will multiply you, that will amplify you, because you are creative, because you are intelligent… If you use it, you will be even more intelligent, even more creative.”

Tips on managing up

Leaders who don’t understand AI’s limitations are one of the largest sources of stress, middle managers say. “A lot of what we get from the leadership is… Shouldn’t AI help you do this faster? This shouldn’t take 20 weeks. This should take you 10 weeks,” says Lyn, a product manager for a retail platform, who asked for a pseudonym.  

Lyn’s team figures out which tools employees need and builds them. A large part of her job is understanding employee problems. “AI does not help with everything that we need to do…We have to go out, talk to people, and do the legwork of understanding all of the logic that currently exists in the tool or is there a bit that’s redundant or obsolete?” she says.

“Some managers get it, because they use AI, but many don’t,” says an HR director working in transportation in Singapore. “At times there’s a bit of pressure: ‘I think we should just go ahead. You can do a few more prompts, and we can get it done.’”

He offered a three-part strategy:

1) Be very transparent. “Say, ‘You know, we’re still learning prompting, and even the AI takes time.’ It’s a constant dialogue with managers.”

2) When they push back, “Show them evidence. This is the output, and it’s pretty crappy. It’s not very good for us to put it out there in a senior meeting.”

3) Get to the root of the problem. “Educate people.  Have they used it and tried prompting and figuring things out?”

Invest the time to keep up

Maggie Miller, a senior director of corporate marketing at HackerOne, points out that keeping up with all the different models and their releases is a challenge. Last year, she and her team brought in an AI consultant and built several custom GPTs for writing and campaign planning, yet she already worries they’re out of date because the underlying models have been updated.

“The pace of model innovation can be distracting, but staying grounded in what’s useful, what actually creates value for the team and the business, is what matters most,” she says. “My advice to other managers is to resist the urge to chase every new release and instead focus on building systems and use cases you can use. That way, you can incorporate meaningful improvements without creating constant disruption.”

George, an engineering director at a medical device company, keeps up by living and breathing AI. “AI is literally a hobby for me,” he says. “I’m investing my commute time. I’m using that to do my micro learning on all things AI. I’m listening to podcasts and trying to keep up with the latest models and this whole transformation. Anytime there’s a new model or a new feature, I go and tinker with it. I’m always trying the new things, just to be aware of how they work and how effective they are.”

Dream it, build it

Some middle managers are taking the initiative and creating their own AI programs on their own terms. Last August, Abishek Chaturvedi, an engineer at Docusign, and a co-worker brought a proposal to create an internal group of AI champions to their CTO. “We created this bottoms-up group of five people,” he says. “Instead of having a mandate from the top down saying we have to use AI, we wanted to figure out, okay, where does it actually make sense?”

His team identified which workflows AI can help with and which tools are best to use. “Then we have a monthly workshop where we teach best practices,” he says. His team also offers office hours.

“As engineers, we are responsible for the code that we submit,” he says. His team’s job was to help other engineers build trust in AI: “how to structure your prompts and how to code in a way that you have enough time for review, so you can trust the output from AI.”

Today, Chaturvedi’s group of AI champions is 85 people strong, and AI adoption among engineers at Docusign is 95%.

Dancing on your own

Several managers are at companies where leadership is lukewarm on AI and are left to experiment on their own. Russell Taris, a regional manager for a civil engineering firm, says his leadership is “not against us using AI, but they’re not pushing us to use it. We’re left to our own devices…which is great for me because I kind of have the freedom to do what I want within the realm of, you know, confidentiality,” he says.

Taris, who also writes a blog on the best AI productivity tools for managers, finds “the safest place to experiment is on tasks that only affect you. For example, I started using AI to prep for my own meetings, draft my own status updates, and organize my own notes because nobody needed to approve them. By the time anyone asked how I was being more productive, I already had months of practical experience and could speak on what worked and what didn’t.”

For help he turned to “other team members who are figuring it out at the same pace I am. Not the IT department or the people selling the tools, but employees in my same position running into the same bottlenecks I was. Once you start those conversations, even casually, you realize you’re not the only one experimenting. And hearing what someone else tried and abandoned is just as valuable as hearing what worked.”

To get senior leaders on board, he recommends focusing on results: “Most senior leaders don’t fully understand what AI is capable of and how it can be used responsibly. Generally, I don’t make a big production out of it when I turn something around faster than expected or present data in a way that is more functional… I’ll just mention that I used AI for a first draft or to build an internal tool…Most senior leaders don’t want a presentation about AI. The best way to get buy-in is to show that your work is improving without creating new problems.”

The real magic is empowering middle management

The middle managers who were most excited about AI were the ones lucky enough to be in organizations where AI implementations have been thoughtful and led by employees, or the ones where leaders have let employees experiment on their own.

Despite all the uncertainty in the arms race over AI—how fast to go, what to compromise in the process, and whether or not the race is actually worth it—implementation is all in the hands of middle managers. They dictate what gets automated, what doesn’t, and what is an acceptable bar for quality—even if they have qualms. 

Still, regardless of where they stand on AI, or how brilliant they are at managing up and down—there’s only so much middle managers can do when faced with a confusing directive, or inhumane mandate. Middle managers may be the key to implementation, but they have only so much power to answer the pressing questions using AI raises: who gets to keep their job, how to conduct layoffs as humanely as possible, and how to make sure no one is left behind during AI rollouts.
