Stop calling it inevitable: The AI job crisis is being built, not born


The light is shining through the windows of what looks like a well-appointed, book-lined apartment where Dario Amodei, the chief executive of AI giant Anthropic, is giving an interview. He smiles and laughs at the interviewer’s jokes, giving the impression of an approachable, amiable, ever-so-slightly unkempt scientist.

But when the questions turn to AI’s impact on humanity, Amodei’s demeanor shifts. He says that while he is not a doom-and-gloomer, he is certainly worried. Previous disruptions took place over longer timescales, and he frets that the speed and scope of this one will make it much harder to manage. His concern “is that the normal adaptive mechanisms will be overwhelmed” and that more than half of entry-level white-collar jobs are at risk.

As he speaks, Amodei sounds like a physician delivering a difficult prognosis: sober and compassionate, very concerned about the patient’s well-being, but ultimately helpless in the face of death’s inevitable arrival. Except Amodei is not just some powerless observer. He is the chief executive of one of the companies that is doing the most to bring about this jobless future. He is more architect than bystander, but you would never know it from the tone of his public utterances.

Amodei is far from alone in this stance. In the years following the launch of ChatGPT in 2022, the AI industry’s messaging tended toward the reassuring: AI is not coming to take your jobs; instead, it will be a cognitive exoskeleton that augments workers, making humans more capable and more productive, both in the workplace and beyond.

That reassurance is now being abandoned. The same week that Amodei’s interview was published, Mustafa Suleyman, Microsoft’s head of AI, told the Financial Times that most “white-collar work ... will be fully automated by an AI within the next 12 to 18 months.” In July of last year, OpenAI chief executive Sam Altman told a Federal Reserve Board conference that “there are cases where entire classes of jobs will go away,” describing some categories as “totally, totally gone.”


The Kidnapper’s Ransom

The passive voice is doing a lot of work in these prognostications—jobs will simply “be automated” and roles will just “go away.” The disruption is presented as being like the weather—something we must prepare for, adapt to, endure. Hiding behind this phrasing is a very different reality: These changes are the downstream consequences of decisions made in specific boardrooms by specific people reacting to specific financial incentives.

University of Oxford political philosopher G.A. Cohen identified this pattern of argument decades ago. His insight was that an argument changes its character entirely when the person delivering it is the same person whose choices make the premises true. Cohen’s analogy was vivid: Imagine a kidnapper who argues that children should be with their parents and, therefore, the ransom should be paid. The argument is logically valid. Its premises are true. But it is discredited by the fact that the person making the case is the one who created the crisis.

Amodei sits across from us, expressing sober concern about a future of mass displacement. But the displacement he describes is not something that is just happening. It is something his company is building. When Amodei, Suleyman, Altman, and a few others tell us that artificial intelligence will cause a bloodbath of white-collar jobs, they are not observing an unstoppable force—they are the force.

The Gravity of Automation

In his essay “The Adolescence of Technology,” Amodei argues that “the idea of stopping or even substantially slowing the technology is fundamentally untenable,” because “if one company does not build it, others will do so nearly as fast.” This is not a trivial point. As someone who has spent the last three decades deploying new technologies across large companies and major government agencies, I have experienced these competitive pressures firsthand—the cost of falling behind is not imaginary.

Amodei argues that even if all Western companies stopped their work on AI, “authoritarian countries would simply keep going.” And indeed, Beijing is pouring vast resources into AI development. So no single company, and arguably no single country, can unilaterally step back from the frontier without ceding ground that may not be recoverable. Anyone who dismisses this constraint is not being serious.

This competitive pressure extends beyond the pace of development to its direction. A product that replaces a worker outright will often deliver faster and more directly measurable savings than one that makes the same worker more capable. And since businesses are duty-bound to maximize shareholder value, they are obliged to pursue automation when it offers the clearest path to increasing profits. There is genuine economic gravity pulling toward automation over augmentation, and pretending otherwise would be naive.

Yet paths to profit do not exist in a vacuum. They are shaped by a range of incentive structures—such as tax codes, procurement standards, and regulatory frameworks—that are entirely in the hands of humans. The case for leaving these incentives untouched rests on the assumption that, when companies seek to maximize their profits, the results are economically beneficial for society as a whole. But it is hard to take that assumption seriously in the case of AI.

The most influential figures in the tech industry are predicting outcomes that would devastate the job market, collapse consumer demand, and leave gaping holes in the tax base. As Sal Khan, founder and CEO of the free digital learning platform Khan Academy, put it recently: If even 10% of white-collar jobs are lost to the AI revolution, “it’s going to feel like a depression.” Shaping the incentive structure to favor augmentation in these circumstances is not an anti-market move. It is a recognition that markets require functioning consumers to survive.

Unstacking the Deck

The incentive structures that shape how businesses respond to technological advances are not natural forces over which we have no control. They are the direct product of the political choices we make around technology adoption. A team of economists led by MIT professor Daron Acemoglu has shown that the U.S. tax system currently incentivizes automation by taxing labor at a much higher rate than the capital expenditure involved in robotic automation. Acemoglu and his team show that a rebalancing of the relevant taxes could increase human employment by as much as 4%.

Of course, the automation of white-collar work by AI agents differs from the automation of manual work by robots. AI agents will not, for the most part, be funded by capital expenditure but by the ongoing subscription model that is common in the software industry. However, the underlying insight stands. The incentive structures our governments endorse can steer companies toward or away from automation.

A wide range of tools are available to shape the business environment, and we should consider all of them in the case of AI. Some of these tools—such as changes to government procurement policies or requirements for increased transparency around long-term strategies—would aim to gently nudge businesses toward augmentation over automation. But blunter instruments, such as taxes on automation, are also available.

Many of the most influential figures in the AI world have suggested that a universal basic income may be a necessary response to AI-driven joblessness. Sam Altman himself has proposed an “American Equity Fund” that would tax companies and land at 2.5% of their value annually to fund direct payments to every adult citizen. But this concedes the central point: Left to its own devices, the market will produce outcomes so lopsided that highly intrusive state redistribution becomes necessary. If we can accept the kind of massive taxation required to fund a basic income for the entire population, why not take a proactive approach and apply much more limited taxes ahead of time to incentivize augmentation over automation?

The Free-Market Case for Augmentation

The obvious objection is that shaping incentives in this way would put the United States at a competitive disadvantage against nations that choose to automate more aggressively. As Amodei observed in his interview, when it comes to chess, AI models acting alone quickly surpassed human players who were augmented with AI. If we extend this logic to the economy, we should expect that fully automated businesses will outperform those with a hybrid human-AI workforce; a failure to follow this path will guarantee that the United States falls behind its economic competitors.

But this analogy is too weak to justify the greatest economic upheaval in human history. On a fundamental level, the economy is simply not like a game of chess. A successful economy doesn’t just rely on optimizing outputs—it requires consumers who can buy the goods and services produced by its businesses. As Henry Ford warned a century ago: “The owner, the employees, and the buying public are all one and the same.” Any industry that undermines the buying power of its workforce “destroys itself—for otherwise, it limits the number of its customers.” A model of the future that focuses only on fantastical production gains—while ignoring the destruction of the consumer base—is not a model of an economy at all. It is a piece of lazy science fiction.

What would it take for a nation to survive full automation? The answer becomes uncomfortable when we look at the competitor most frequently invoked by those who warn against falling behind. China’s AI development is largely state-directed and state-funded. If Beijing pursues full automation, it can deploy the apparatus of an authoritarian command economy to sustain a population that is no longer earning wages through traditional employment. The United States cannot.

This is where the free-market case for maximal automation collapses on its own terms. For America, full automation leads to precisely the outcome that our economic and political philosophy rejects: mass dependence on government transfers funded by massive new taxation on the companies that eliminated their workforces. The most pro-market position available is not to maximize automation. It is to ensure that AI develops in a way that keeps humans economically productive. The alternative is a path that ends not in the victory of capitalism, but in an extreme form of the kind of socialism that America has spent a century defining itself against.

Amodei and his peers have built extraordinary companies. They may even be right that the technology they are developing will ultimately benefit humanity. But when they sit across from us and describe mass job displacement as something that they are worried about but are powerless to prevent, we should demand better of them. At the very least, we should demand that they accept their own role in the process. Instead of hiding behind “jobs will be automated,” they owe it to us to say: “We are building systems that will automate these jobs, and here is what we are doing about the consequences.”

But whether or not the titans of AI rise to the moment, the rest of us will have to meet its challenges head-on. The incentive structures that will decide whether AI augments human capabilities or hollows out the economy are being shaped right now by default. If we leave these decisions to the people building the technology, we already know which direction they will choose. They have told us.
