Anthropic Responds to US Department of War 'Supply Chain Risk' Designation

The AI industry is buzzing after Anthropic, a leading AI safety and research company, announced that it has been formally designated a "supply chain risk" to America's national security by the US Department of War. This significant move, confirmed in a letter received on March 4, 2026, has prompted a strong, clear response from Anthropic CEO Dario Amodei.

What Happened?

On Wednesday, March 4, Anthropic received official notification from the Department of War of the designation. The company quickly responded, stating that it does not believe the action is legally sound and that it intends to challenge it in court. This follows earlier discussions in which Anthropic had already laid out its stance on the Department of War's considerations.

It's important to understand the scope of this designation. Made under 10 U.S.C. § 3252, the Department of War's action has a very narrow focus: it applies only to customers' use of Anthropic's AI model, Claude, as a direct part of contracts with the Department of War. Crucially, the designation does not limit other uses of Claude or any of Anthropic's business relationships unrelated to those specific contracts. This distinction is vital for the vast majority of Anthropic's customers, who remain unaffected.

The designation comes amidst a complex landscape, including previous public commentary from officials, such as the Secretary of War's statements, which Anthropic has previously addressed.

Why It Matters

This development matters significantly for the evolving relationship between the private AI sector and government defense. Anthropic's core concerns have consistently centered on two narrow exceptions: the use of AI in fully autonomous weapons and for mass domestic surveillance. The company maintains that these are high-level usage restrictions, distinct from operational decision-making, which it believes is solely the military's role.

Despite the legal challenge, Anthropic remains committed to supporting national security. The company has pledged to provide its models to the Department of War and the broader national security community at a nominal cost, and to offer continuing engineering support during any necessary transition period. This commitment underscores a shared objective of advancing US national security, even amid legal disagreement.

What's Next?

Anthropic's legal team is preparing to challenge the designation, arguing it lacks legal foundation. In parallel, the company will continue its dialogue with the Department of War, aiming to ensure that national security experts and warfighters are not deprived of critical AI tools during ongoing major combat operations. The situation highlights the delicate balance between technological innovation, ethical considerations, and national defense requirements in a rapidly advancing AI landscape.

Read more: Where Anthropic Stands with the Department of War