Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon


A dispute between AI company Anthropic and the Pentagon over how the military can use the company’s technology has now gone public. Amid tense negotiations, Anthropic has reportedly called for limits on two key applications: mass surveillance and autonomous weapons. The Defense Department, which President Donald Trump renamed the Department of War last year, wants the freedom to use the technology without those restrictions.

Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic’s Claude model, but it has stayed quiet as tensions escalate. That’s even as the Pentagon, per Axios, threatens to designate Anthropic a “supply chain risk,” a move that could force Palantir to cut ties with one of its most important AI partners.

The threat may be a negotiating tactic. But if carried out, it would have sweeping consequences, potentially barring not just Anthropic but its customers from government work. “That would just mean that the vast majority of companies that now use [Claude] in order to make themselves more effective would all of a sudden be ineligible for working for the government,” says Alex Bores, a former Palantir employee who is now running for Congress in New York’s 12th district. “It would be horribly hamstringing our government’s ability to get things done.” (Palantir did not respond to a request for comment.)

[Image: Alex Bores]

Anthropic and the Pentagon’s war of words

Anthropic has, until now, maintained close ties with the military. Claude was the first frontier AI model deployed on classified Pentagon networks. Last summer, the Defense Department awarded Anthropic a $200 million contract, and the company’s technology was even used in the recent U.S. operation to capture Nicolas Maduro, the Wall Street Journal reported this week.

But the company’s commitment to certain AI safety principles has irked some people in President Donald Trump’s orbit. (Katie Miller, Stephen Miller’s wife, has publicly accused the company of liberal bias and criticized its commitment to democratic values.) Unlike rivals xAI and OpenAI, both of which also have Defense Department contracts, Anthropic is now locked in a fight with the Pentagon that is playing out in public.

“Anthropic is committed to using frontier AI in support of US national security. That’s why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers,” a company spokesperson tells Fast Company. “Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy. We are having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right.”

The Pentagon has taken a more confrontational tone. Agency officials are reviewing their relationship with Anthropic and have suggested that other contractors may also be required to stop working with the company. “The Department of War’s relationship with Anthropic is being reviewed,” Chief Pentagon spokesman Sean Parnell tells Fast Company. “Our nation requires that our partners be willing to help our warfighters win in any fight.” (Parnell did not respond to a request for clarification regarding specific concerns about autonomous weapons or surveillance.)

Palantir, the middleman

Palantir occupies a critical position in this ecosystem. A longtime government software provider, it has met a bevy of requirements allowing it to offer cloud services to support classified work. And, as is typical in the dizzying world of government technology contracting, Palantir also has key partnerships with Anthropic. 

Two years ago, the companies partnered to bring Anthropic’s technology to the government, a move that made Claude available to defense and intelligence services through Amazon Web Services. Last April, Anthropic joined Palantir’s FedStart program, which expanded the availability of its technology to government customers through Google Cloud.

Government tech contracting is a wonky business, but companies that want to sell software to the government typically need to work with a certified cloud provider like Palantir, or obtain certification themselves. “If you’ve never operated in a classified environment before, you essentially need a vehicle,” explains Varoon Mathur, who worked on AI in the Biden administration. “Palantir is a defense contractor with deep operational integration. Anthropic is an AI model provider trying to access that ecosystem.”

Growing tensions over how the Defense Department might use Claude also raise questions about how much visibility companies like Palantir and Anthropic have into the government’s use of their tools. “Anthropic and OpenAI offer Zero Data Retention usage, where they don’t store the asks made of their AI,” Steven Adler, a former OpenAI employee and AI safety expert, tells Fast Company. “Naturally this makes it harder to enforce possible violations of their terms.”

A person familiar with the matter said Anthropic does have insight into how its technology is used, regardless of whether it’s in a classified environment, and that the company is confident its partners and users have been deploying the tech in line with its policies. In its reporting, the Wall Street Journal cited people familiar with the matter who said an Anthropic employee reached out to Palantir to ask about Claude’s use in the Maduro operation, though Anthropic told the outlet it had not spoken with Palantir beyond technical discussions. The Anthropic spokesperson tells Fast Company that the company cannot comment on its technology’s use in specific military operations, but says it “work[s] closely with our partners to ensure compliance.”

More broadly, the standoff risks chilling relationships between Silicon Valley and Washington at a moment when the government is pushing to adopt AI more aggressively. “To state basically that it’s our way or the highway, and if you try to put any restrictions, we will not just not sign a contract, but go after your business, is a massive red flag for any company to even think about wanting to engage in government contracting,” says Bores.
