Federal Judge Scrutinizes Pentagon’s Move Against AI Company Anthropic

(MENAFN) A federal judge in the United States on Tuesday questioned the Pentagon’s decision to classify the AI company Anthropic as a “supply chain risk,” raising doubts about whether the action was truly warranted on national security grounds.

During a hearing in San Francisco, US District Judge Rita Lin described the government’s move as “troubling” and indicated that it might have exceeded the scope of legitimate security concerns.

The dispute emerged after Anthropic refused to allow its AI system, Claude, to be used for mass surveillance of American citizens or in fully autonomous weapons programs. The US government maintained that it must preserve the ability to use such technologies for “all lawful purposes.” Following unsuccessful negotiations, the Pentagon restricted the company’s participation in military-related projects.

In response, Anthropic filed a lawsuit asserting that the designation is unconstitutional and constitutes retaliation for the company’s stance on AI safety. It is seeking to block both the designation and a wider order instructing federal agencies to cease using its technology.

At the hearing, a Justice Department attorney noted that the Defense Department lacks explicit legal authority to terminate contracts solely based on other companies’ separate relationships with Anthropic.

