The hidden logic behind AI CEOs’ job loss warnings

Why do the CEOs of big AI labs like OpenAI and Anthropic so often publicly acknowledge that AI is likely to cause significant job loss? Most now concede that widespread displacement is coming, differing mainly on the timeline.

  • OpenAI CEO Sam Altman has long acknowledged that AI will displace workers. “The real impact of AI doing jobs in the next few years will begin to be palpable,” he said recently. But he often adds that AI will also create new jobs, such as for humans who manage teams of AI agents.
  • Anthropic CEO Dario Amodei has been the most frank and pessimistic when it comes to AI-driven job loss: “I would not be surprised if somewhere between one and five years we start to see big effects [including the potential to] wipe out half of all entry-level white-collar jobs,” he said in a recent interview.
  • Google DeepMind CEO Demis Hassabis believes the transition of work to AI will happen quickly. “I believe the AI transition will deliver 10 times the impact of the Industrial Revolution, happening at 10 times the speed,” he told Bloomberg at Davos in January.
  • Meta CEO Mark Zuckerberg has spoken mainly through actions at his own company. Meta recently confirmed it will cut 10% of its workforce, or 8,000 jobs, and use the savings to fund a planned $135 billion investment in AI infrastructure. “We’re starting to see projects that used to require big teams now be accomplished by a single very talented person,” Zuckerberg said during a January earnings call.

Such statements might seem likely to alienate people from the technology, and from the executives and companies bringing it into the world. Indeed, a recent Quinnipiac University poll found that a majority of Americans (55%) now believe AI will cause more harm than good.

So when people like Altman and Amodei sit before large audiences and discuss how quickly AI could displace human workers, who are they really talking to?

“It would be investors, because if all jobs are going to be taken over by AI, you better own a piece of that AI, right?” says Ben Goertzel, the scientist who coined the term “AGI” (that’s artificial general intelligence) and coauthored the 2005 book Artificial General Intelligence with DeepMind cofounder Shane Legg. Goertzel believes Amodei and Altman genuinely believe what they are saying about job losses. But investors hear the same words as opportunity, not warning.

When AI leaders talk about the large-scale impact of their products, they are also reinforcing a crucial narrative: that generative AI models will soon take over many corporate work tasks, delivering unprecedented productivity and efficiency. That narrative does more than keep investment dollars flowing into model training and data center construction. Companies representing roughly a third of U.S. stock market value are making major bets on it, so any erosion of confidence could have sweeping economic consequences.

But this is largely a narrative shared within boardrooms and among the AI community on X. The public hears it secondhand, and often hears something very different. Many worry about when waves of job losses will arrive, and how AI could be used for harmful purposes such as mass surveillance, disinformation, and cybercrime.

AI companies are not speaking directly to the public about these concerns. There is no nationally televised town hall where executives explain how they plan to keep increasingly powerful AI systems aligned with human needs and values, or how they intend to prevent those systems from being weaponized by bad actors.

Instead, AI industry leaders spend far more time engaging with business executives, politicians, lobbyists, and tech influencers like Marc Andreessen. That may help explain why much of the country increasingly views AI company leaders as affluent elites, largely insulated from mainstream American life. An April YouGov survey of 5,500 U.S. adults found that only 17% rated leaders of major AI companies as “very trustworthy” or “somewhat trustworthy.”

Meanwhile, voters across the country are increasingly using grassroots political pressure to block construction of the data centers that major AI labs urgently need. Populism is in the air in 2026, and the AI data center issue could easily become a central political flashpoint as the midterms approach. That concrete issue could evolve into a much broader national debate encompassing AI safety, labor protections, and compensation for displaced workers.

For now, the AI industry is moving aggressively to embed its models into corporate business operations. Goertzel believes the broad handoff of work tasks to AI is being slowed less by the technology itself than by organizational friction.

“There’s just a lot of friction and inertia in how people do things,” he says. “So even when a job function, in theory, 90% of it could be done by AI, organizations are just slow at reshuffling how things work.”
