AI companies are tightening token limits. The last one to blink may win


For years, AI companies gave users unfettered access to the candy store, encouraging them to think of tokens, the chunks of text AI reads and writes, as effectively infinite.
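
For a concrete sense of what a token is, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer; the choice of the cl100k_base encoding is an assumption, since each model family uses its own:

    # Minimal token-counting sketch using the open-source tiktoken library.
    # cl100k_base is an assumption here; each model family has its own encoding.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    text = "Tokens are the chunks of text AI models read and write."
    token_ids = enc.encode(text)

    print(len(token_ids))         # roughly a dozen tokens for this sentence
    print(enc.decode(token_ids))  # decoding round-trips to the original text

In English prose, a token works out to roughly three to four characters, which is why long coding sessions and agent runs consume them so quickly.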

Tokens were bundled into subscriptions, hidden behind generous caps, or priced low enough that people stopped counting them. But as the cost of serving models eats into revenue, and as chip shortages, helium disruption, and data center bottlenecks constrain how much compute can come online, the big model makers are starting to ration access more aggressively. All-you-can-eat AI is disappearing. Now companies are in a contest to see who can keep subsidizing demand the longest, and whether the last to blink gets to dominate the market.

This week, Meta took offline its “Claudenomics” leaderboard, which tracked employee productivity by a crude metric: how many AI tokens each employee consumed over the past month. Employees collectively burned through more than 60 trillion tokens in a single month, equivalent to around 80 million copies of War and Peace, or the contents of 10,000 entire libraries.
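
The comparison is easy to sanity-check. Assuming War and Peace runs to roughly 587,000 words and that English averages about 1.3 tokens per word (both rough assumptions), the arithmetic lands close to the article's figure:

    # Back-of-the-envelope check of the War and Peace comparison.
    # The word count (~587,000) and tokens-per-word ratio (~1.3) are assumptions.
    tokens_used = 60e12                    # 60 trillion tokens in one month
    tokens_per_copy = 587_000 * 1.3        # ~763,000 tokens per copy
    print(tokens_used / tokens_per_copy)   # ~79 million copies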

“Leading frontier model developers are going to face trade-offs in how they use their compute resources,” explains Sam Manning, senior research fellow at GovAI, a community of researchers studying how AI is used and deployed. “It’s a super consequential decision these companies need to make.”

The global shortage of AI chips, along with a backlog in building data centers, means there is only a finite amount of hardware available to both train and run AI models. That shortage is likely to be exacerbated by the Middle East war’s impact on helium, a key input in GPU production. Dial down the training budget and you risk falling behind competitors in releasing cutting-edge models. Cut back on inference, the speed and scale at which you meet customer demand, and you frustrate users.

Different companies are taking different approaches. Earlier this month, OpenAI announced it would switch users of its Codex app to token-based pricing, replacing a per-message model that charged the same regardless of query size. That could benefit those running smaller tasks, but heavy workloads could quickly burn through a user’s token allowance. The company also ended a months-long offer to double Codex limits at the start of April.
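
To see why the switch cuts both ways, consider the hedged sketch below; every price and threshold in it is invented for illustration, since OpenAI has not published its rates in this form:

    # Hypothetical comparison of flat per-message vs. metered token pricing.
    # All numbers are invented for illustration; they are not OpenAI's rates.
    PER_MESSAGE_COST = 0.02    # flat cost per message, regardless of size
    PER_TOKEN_COST = 0.00001   # cost per token under metered pricing

    for tokens_in_query in (500, 5_000, 50_000):
        flat = PER_MESSAGE_COST
        metered = tokens_in_query * PER_TOKEN_COST
        cheaper = "metered" if metered < flat else "flat"
        print(f"{tokens_in_query:>6} tokens: flat ${flat:.2f} "
              f"vs metered ${metered:.2f} -> {cheaper} wins")

Small queries come out ahead under metered pricing, while large ones cost far more than a flat fee, which is exactly the trade-off users now face.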

Around the same time, Anthropic blocked Claude subscribers from using their plans to power OpenClaw agentic AI tools, pushing them instead toward API access. The likely reason was simple: demand. “We’ve been working hard to meet the increase in demand for Claude, and our subscriptions weren’t built for the usage patterns of these third-party tools,” said Boris Cherny, a Claude Code executive, announcing the shift. “Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API.”

The financial pressure is clear. The cost of serving AI models accounts for more than half of OpenAI and Anthropic’s revenues, according to internal data obtained by the Wall Street Journal. “There’s just been huge consumer surplus,” says Manning. “A lot of the initial motivation for pricing was to build up market share and get users onto their platforms. Maybe it’s the case that we’re seeing some sort of an inflection point there.”

The price-versus-performance trade-off is not limited to U.S. firms; it is also front of mind for China’s AI companies. Zhipu AI, which makes the GLM models, has seen token prices on its open-platform API rise 83% year-to-date in early 2026, and this week announced a further 8% increase for its latest models.
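
Compounded, those two figures imply that Zhipu AI’s prices have nearly doubled since the start of the year:

    # Compounding the reported increases: 83% year-to-date, then a further 8%.
    print(1.83 * 1.08)  # ~1.98, i.e. roughly double the start-of-year price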

The price hikes reflect accelerating demand, according to JP Morgan research. Users appear willing to absorb higher costs for higher-value workloads, particularly in coding and agent-related use cases. Rising prices and sustained demand are already reshaping unit economics for China’s AI giants, with Zhipu AI’s API gross margins expanding from 3% in 2024 to 19% in 2025.

Still, Alibaba is taking a different tack. The company has made its Qwen-3.6 model free to users through OpenRouter, a platform that routes developer requests to a wide range of AI models. Users quickly burned through nearly 1.5 trillion tokens in a single day.

That decision stands out, but the logic is clear. Alibaba is trying to win developers, workloads, and long-term cloud customers. While OpenAI and Anthropic are tightening access to protect scarce capacity and improve unit economics, Alibaba is playing a longer game, absorbing the cost now in hopes of locking in users who would be harder to win over later.

Alibaba could also benefit from the fact that most rivals cannot afford to compete on price any time soon, if ever. Pricing pressure remains unavoidable as long as compute stays scarce, according to GovAI’s Manning. “We should expect there to be this sort of scarcity of compute for the foreseeable future,” he says.
