The Pentagon wants fewer AI limits. Anthropic doesn’t. Here’s why it matters


Dario Amodei, CEO of Anthropic, will head to the Pentagon on Tuesday to meet with Defense Secretary Pete Hegseth about how the military uses the company’s artificial intelligence models. And it’s likely to be a tense meeting, as sources first told Axios.

Contract talks between the AI startup and the Department of Defense have gone off course in recent weeks as Anthropic has insisted on some safeguards for how its technology will be used. While the San Francisco-based company is willing to loosen some of its usage restrictions for the Department of Defense, it doesn’t want its models used for at least two specific purposes: spying on Americans or developing autonomous weapons.

Heading into Tuesday’s meeting, the two factions seem to have differing views on how those contract talks have been proceeding. While a spokesperson for Anthropic said in a statement Monday that the company is having “productive conversations, in good faith” with the Pentagon, a Defense Department spokesman said last week that Anthropic’s relationship with the Pentagon is under review.

“Anthropic knows this is not a get-to-know-you meeting,” a senior Defense official told Axios. “This is not a friendly meeting.”

ANTHROPIC’S ROLE IN NATIONAL SECURITY

Anthropic is currently the only AI company available on the military’s classified networks and was among several companies awarded a $200 million contract by the Defense Department in July to “advance U.S. national security.”

The company has repeatedly reiterated its commitment to supporting national security, including again on Monday. In June, it announced Claude Gov, a suite of models it built exclusively for U.S. national security customers.

And yet, Amodei has become vocal about balancing the opportunities that AI presents with the concerns that it poses. In a lengthy piece published last month, the Anthropic co-founder warned: “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.”

At the India AI Impact Summit last week, Amodei said that he’s concerned about the autonomous behavior of AI systems and the potential for misuse of AI by individuals and governments.

THE MADURO FACTOR

Another factor that’s strained the relationship between Anthropic and the Pentagon came to light last week: Claude was used in the U.S. military’s operation at the start of the year to capture former Venezuelan President Nicolás Maduro, as The Wall Street Journal reported. That mission would seem to violate Anthropic’s usage guidelines, which prohibit, among other things, using Claude to incite violence or for criminal justice and surveillance purposes.

The company’s usage policy, most recently updated in September, is intended to “strike an optimal balance between enabling beneficial uses and mitigating potential harms.”

But Anthropic also notes that the company “may enter into contracts with certain governmental customers that tailor use restrictions to that customer’s public mission and legal authorities if, in Anthropic’s judgment, the contractual use restrictions and applicable safeguards are adequate to mitigate the potential harms.”

POKING THE BEAR

Anthropic has tried to set itself apart from the rest of the universe of AI developers with a “safety-first” approach that’s even seen it take a swipe, via a Super Bowl ad, at OpenAI’s recent decision to incorporate ads into the ChatGPT platform.

While Amodei has at times emerged as a contrarian of sorts by pushing back on unrestricted military use of Anthropic’s Claude models, he is effectively poking the bear that is Hegseth.

As Axios reported last week, Hegseth has threatened that the Pentagon could declare Anthropic to be a “supply chain risk,” which would void its contracts and force other companies that work with the Pentagon to certify they aren’t using Claude in any related workflows.

“Our nation requires that our partners be willing to help our warfighters win in any fight,” chief Pentagon spokesman Sean Parnell told media outlets last week. “Ultimately, this is about our troops and the safety of the American people.” 
