OpenAI’s Pentagon deal once again calls Sam Altman’s credibility into question


Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

Familiar tensions around Sam Altman

OpenAI CEO Sam Altman voiced his support for Anthropic in its dispute with the Pentagon over the use of its AI for autonomous weapons targeting and domestic mass surveillance. He did so in a company meeting and during a CNBC Squawk Box appearance last Friday, the day Anthropic was effectively blacklisted by the Trump administration.

But two days earlier, on Wednesday, Altman had reportedly already begun talking to the Pentagon about a contract that would let OpenAI effectively replace Anthropic as the sole supplier of AI models for classified work. The day after Anthropic missed its “deadline” for agreeing to the Pentagon’s terms, Altman announced on X that his company had reached an agreement with the Pentagon to provide AI for that same classified work. He added that the contract stipulated that the Pentagon wouldn’t use OpenAI’s models for autonomous weapons or domestic mass surveillance.

It seemed odd that OpenAI’s lawyers could secure those guarantees on such a tight timeline when Anthropic’s lawyers weren’t able to do so over the weeks the company spent negotiating with the Pentagon. Altman seemed to try to explain it away in a March 1 tweet: “I think Anthropic may have wanted more operational control than we did,” he wrote. (Anthropic CEO Dario Amodei, for his part, said during a company meeting that OpenAI’s negotiations with the Pentagon amounted to “safety theater,” according to The Information.)

In an internal memo that Altman tweeted this week, he acknowledged that rushing to get a deal done with the Pentagon on the same day Anthropic lost its deal was a bad look. “The issues are super complex, and demand clear communication,” he wrote. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

All of this strongly suggests that OpenAI simply accepted the same or similar alternative contract language the Pentagon offered Anthropic at the eleventh hour—language that promised, in a completely non-binding way, not to use the AI for autonomous weapons or mass surveillance.

On Monday night, Altman said on X that the Pentagon had agreed to add more explicit language, rooted in existing U.S. laws, stating that OpenAI’s models wouldn’t be used for domestic surveillance. But didn’t Anthropic object to the Pentagon’s desire to use AI models for domestic surveillance programs already permitted under existing laws?

People who have worked with Altman say the CEO often says one thing and does another. Recall that in November 2023, the OpenAI board of directors briefly fired Altman because he’d been less than honest with it about strategic decisions he’d made for the company.

In his latest Platformer newsletter, Casey Newton recalls this passage from Wall Street Journal reporter Keach Hagey’s book about Altman, The Optimist, describing how OpenAI cofounder and then–chief scientist Ilya Sutskever grew uneasy about Altman’s leadership: “It had taken [Ilya] Sutskever years to be able to put his finger on Altman’s pattern of behavior—how OpenAI’s CEO would tell him one thing, then say another and act as if the difference was an accident. ‘Oh, I must have misspoken,’ Altman would say. Sutskever felt that Altman was dishonest and causing chaos, which would be a problem for any CEO, but especially for one in charge of such potentially civilization-altering technology.”

The AI Contract Fight That Should Have Stayed Inside the Pentagon

Anthropic’s dispute with the Pentagon should never have become public. Because the matter involves defense and classified information, it should have been handled face-to-face, in private, at the Pentagon. But for some reason Defense Secretary Pete Hegseth and President Donald Trump decided to turn it into a culture war issue. Their ultimate decision to declare Anthropic a “supply chain risk” was arbitrary and capricious, and it still makes little sense. Yet the core issues at the center of the dispute were legitimate disagreements, and the way they’re being resolved could have lasting consequences for how AI is used in government, including defense.

In July 2025, Anthropic signed a $200 million contract with the Pentagon to develop AI for national security, making it the first AI company to deploy models on classified networks (through a partnership with Palantir). There was a sort of poison pill in that contract. It has now poisoned Anthropic’s relationship with the Pentagon, and arguably both parties share some of the blame.

The dispute began early this year when the Pentagon informed Anthropic that it was “reviewing its contracts.” DoD officials said that, in order to renew the agreement beyond its original term, Anthropic would need to remove any guardrails preventing its AI models from being used in operations not prohibited by law. The original contract did not expressly prohibit the use of Anthropic’s models for autonomous weapons targeting or mass surveillance, Anthropic’s two main “red line” use cases. But Anthropic’s Terms of Service did, and the contract stated that defense agencies could use the AI models for anything not prohibited in the Terms of Service.

Didn’t the DoD’s attorneys give those terms a careful read before signing the contract? And given the sensitive nature of the work its models would be doing at the Pentagon, why didn’t Anthropic put language about mass surveillance and autonomous weapons directly into the contract itself? Now, seven months later, the Pentagon says it will terminate the agreement. A lot of time, money, and effort might have been saved if the two sides had confronted their disagreements last July.

Many in the defense industry see the core dispute as a question of who gets to set policy for how the armed forces use AI. Such policies have already been dictated by Congress, the argument goes, and if new rules are needed Congress will act. Defense agencies, in this view, should not be bound by guardrails set by private AI companies.

Before the February 27 resolution deadline, Senate Armed Services Committee Chairman Roger Wicker (R-Miss.) and Ranking Member Jack Reed (D-R.I.) sent a letter to Hegseth and Amodei arguing that contract disputes are not the appropriate venue for setting national AI policy, and urging the two sides to keep negotiating.

Anthropic, for its part, argues that some of AI’s capabilities have already raced ahead of the law. For example, AI models can analyze surveillance data at an unprecedented scale, potentially threatening privacy and assembly rights in ways existing statutes do not fully anticipate, Amodei has said. By writing a rule against such uses into its Terms of Service, Anthropic says it is providing its own safeguard.

Anthropic’s objection to using its models as the brains for autonomous weapons—like the drones now active in the Ukraine conflict and in the Gaza Strip—is more technical than legal or moral. The company believes the AI is not yet reliable enough to fill that role without human supervision, raising the risk of targeting and potentially killing the wrong people.

In more civil times, the Anthropic–DoD dispute would likely have been worked out behind the scenes. A technical solution also seems readily imaginable. While Anthropic was the first AI company to install models on classified networks, it was never going to be the only one. The Pentagon always planned to approve OpenAI, xAI, and Google for classified work. One could imagine a system that calls on different models for different tasks, depending on their strengths, and their “red lines.”
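
To make that idea concrete, here is a minimal sketch, in Python, of what such a policy-aware routing layer might look like. Everything in it is hypothetical: the provider names, task types, and policy labels are invented for illustration, and it reflects no vendor’s or agency’s actual design.

```python
# A hypothetical policy-aware model router: illustrative only.
from dataclasses import dataclass


@dataclass
class ModelProvider:
    """One AI vendor, with task strengths and policy 'red lines'."""
    name: str
    strengths: frozenset                 # task types this model handles well
    red_lines: frozenset = frozenset()   # use categories the vendor forbids


# Invented providers and policies, standing in for real vendors.
PROVIDERS = [
    ModelProvider(
        name="model-a",
        strengths=frozenset({"document-analysis", "translation"}),
        red_lines=frozenset({"autonomous-targeting", "mass-surveillance"}),
    ),
    ModelProvider(
        name="model-b",
        strengths=frozenset({"logistics-planning", "translation"}),
    ),
]


def route(task_type: str, use_category: str) -> ModelProvider:
    """Return a provider that is both capable of the task and whose
    stated red lines permit this category of use."""
    for provider in PROVIDERS:
        if task_type in provider.strengths and use_category not in provider.red_lines:
            return provider
    raise LookupError(f"no provider permits {use_category!r} for {task_type!r}")


print(route("translation", "routine-analysis").name)  # -> model-a

try:
    route("document-analysis", "mass-surveillance")
except LookupError as err:
    print(err)  # every capable provider draws a red line here
```

The design point is simply that capability and policy can be checked separately: a task is dispatched only to a model that is both good at it and permitted, under its maker’s stated red lines, to perform it.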

Instead, Anthropic—whose AI is reportedly well regarded by many in defense and intelligence circles—was suddenly labeled a “woke” company led by “leftist fanatics,” as the president put it on Truth Social, and barred from use not only by the Pentagon but by the agency’s suppliers as well.

