Why focusing on cost-cutting during the AI revolution is a strategic mistake


When a new general-purpose technology emerges—be it railroads, electricity, or computers—companies react in predictable ways. A small minority tries to reinvent itself around the technology; the majority looks first for ways to cut costs.

Right now, in the middle of the most significant technological inflection since the internet, many organizations are choosing the second path. They deploy artificial intelligence to automate call centers, reduce head count in back offices, and squeeze marginal gains out of existing processes. They measure “AI ROI” in payroll savings and hours reclaimed. 

It feels rational. It feels disciplined. It feels safe. 

It is also the fastest way to miss the real opportunity. 

Innovation waves are not efficiency programs

AI is not a new SaaS tool, nor is it merely a workflow enhancement. It is a rapidly evolving general-purpose technology advancing from large language models to agentic systems and toward systems that learn from interaction with environments (the so-called world models that can simulate, plan, and act). 

When the underlying capability is shifting every few months, optimizing for cost reduction is like trying to improve the fuel efficiency of a car while its engine is being replaced with a jet turbine. 

The organizations that win in moments like this do not start by asking, “Where can we eliminate labor?” They ask, “What becomes possible that was previously impossible?” 

Those are radically different questions. 

The productivity paradox should have been a warning

In the early 1990s, economists puzzled over a surprising phenomenon: Computers were everywhere, yet productivity statistics stubbornly refused to reflect their impact. Nobel laureate Robert Solow famously quipped in 1987, “You can see the computer age everywhere but in the productivity statistics.” That observation became known as the “productivity paradox.”

At the time, many assumed the paradox was a failure of technology. My own research from that period examined why the paradox appeared at all, showing that productivity measurement lags far behind actual transformational change and that the mechanisms of value creation were not captured by conventional metrics.

The explanation was obvious only in hindsight. The gains were diffuse, uneven, and entangled with organizational change. Companies had digitized old processes instead of redesigning them. 

Today we are watching the same pattern unfold with AI. 

AI’s impact won’t show up neatly in cost metrics

Artificial intelligence does not produce clean, linear productivity gains that fit neatly into quarterly dashboards. Its effects are asymmetrical. One employee using AI effectively may outperform 10 peers. Another may misuse it, degrade quality, or even create corporate cybersecurity risks. Some teams redesign workflows entirely, while others bolt AI onto legacy processes and call it “transformation.”

The result is what researchers now call measurement myopia: the inability of traditional metrics to capture improvements that are real but not directly tied to hours worked or cost saved. 

Trying to measure AI’s value solely through immediate cost savings is like trying to measure the value of electricity by counting candles not purchased. 

Efficiency is the comfort strategy, but not the opportunity one

Cost-cutting is attractive because it fits existing governance structures. CFOs understand it. Boards reward it. Metrics are clear. 

Exploration is messier. It requires experimentation without guaranteed returns. It demands a tolerance for failure. It produces intangible benefits before visible ones. 

But in periods of fast innovation, efficiency is often the comfort strategy of laggards who don’t yet understand what is happening. 

If AI is treated primarily as a head-count-reduction tool, organizations will optimize the present and sacrifice the future. They will standardize mediocrity instead of discovering leverage. 

Exploration, not exploitation, builds capability

Advocating exploration does not mean abandoning discipline. It means redefining it.

Leaders should be asking:

  • What new products can we build with AI-native capabilities?
  • What decisions can we delegate to systems that learn from feedback?
  • How can we redesign workflows, not just automate them?

Companies should mandate controlled experimentation across teams, not restrict AI usage to narrow cost-justification pilots. They should adopt an R&D posture toward AI rather than a shrink-the-budget posture.

Organizations that treat AI as an exploratory layer—encouraging teams to test, prototype, recombine, and rethink workflows—will build institutional fluency. They will develop internal champions. They will uncover unexpected value that no top-down cost initiative would have surfaced.

The real risk isn’t overspending. It’s under-imagining

The greatest risk in this moment is not overspending on AI. It is under-imagining it. 

Companies that chase short-term efficiency gains may report modest improvements and declare success. Meanwhile, more ambitious competitors will redesign their operations, products, and customer experiences around capabilities that didn’t exist two years ago.

Over time, the gap will not be a few percentage points of margin. It will be strategic. 

In periods of rapid technological change, survival does not belong to the most efficient. It belongs to the most adaptive. 
