The increasingly ugly feud between the Pentagon and artificial intelligence (AI) company Anthropic is shining a harsh spotlight on the ethics behind how the U.S. military uses AI.

Defense Secretary Pete Hegseth has threatened to cut Anthropic out of defense use altogether unless the firm grants the military unrestricted access to its technology.

He has said that if Anthropic doesn’t give the military full control of the tech, the company represents a “supply chain risk.” He also suggested he could invoke the Defense Production Act, a decades-old law that gives the president sweeping powers over industry in the interests of national security. Anthropic has until Friday afternoon to comply.

AI is only becoming more important in military technology, but there are still serious concerns about how it’s used in missions, particularly when lives are at stake. Using AI to make lethal targeting decisions remains deeply controversial.

Worries about how reliable AI is in high-stakes operations come alongside concerns over how President Donald Trump and his administration have wielded the military, deploying soldiers in American cities and killing at least 150 people in a highly criticized strike campaign against alleged drug vessels in the Caribbean and eastern Pacific.

The spat between the Pentagon and Anthropic brings fresh attention to how the U.S. military, and other forces around the world, are rapidly integrating AI into battlefield operations, and to which safeguards should be kept in place.

Because AI is still a relatively nascent technology, there are open questions about how reliable it is, when it can be trusted, and how to regulate it.

Anthropic insists its Claude large language model shouldn’t be used for mass surveillance or in weapons where AI decides, without human input, who is targeted. These are the company’s red lines.

So, how is AI currently being used by the U.S. military, and how real are Anthropic’s concerns?

An ‘Early Adopter’

Militaries around the world are hopping on the AI bandwagon. To ignore AI is to fall behind in the defense technology race and risk adversaries developing far more effective weapons and other military systems.

The U.S. leads the pack in military AI use, and has already invested heavily in the technology, said Thomas Reinhold, a researcher specializing in the weaponization of AI and cyber at the Peace Research Institute Frankfurt (PRIF) in Germany.

“AI is already integrated into intelligence analysis, surveillance systems, autonomous platforms, logistics optimization, cyber defense, and predictive maintenance,” Reinhold told Newsweek.

“The U.S. are probably an early adopter here, but this trend is global because no military force likes to fall behind others’ abilities.”

The U.S. used Claude as part of the daring operation to capture then-Venezuelan leader, Nicolás Maduro, from his Caracas compound early last month, The Wall Street Journal first reported.

But although the U.S. has started deploying AI in military operations, it has not fully integrated AI into all of its systems and planning, Reinhold said.

Further afield, AI is actively used on the battlefields of eastern Ukraine. Both Russia and Ukraine are leading the way in using AI in active combat to lock onto targets, control drones and overcome jamming. It’s also used to gather and sift through intelligence and vast amounts of information more quickly than humans can.

While AI has its obvious benefits, it must be treated with caution, too. Ukrainian President Volodymyr Zelensky, speaking at the United Nations in September, said the world was now seeing the “most destructive arms race in human history, because this time it includes artificial intelligence.”

“It’s only a matter of time—not much—before drones are fighting drones, attacking critical infrastructure and attacking people all by themselves—fully autonomous and no human involved except the few who control [the] AI system,” the Ukrainian leader said.

Zelensky isn’t the only one to have these concerns. Human rights experts have called for a ban on weapons that don’t have “meaningful human control or that target people” and suggest autonomous weapons could violate human rights.

Anthropic’s CEO, Dario Amodei, met with Hegseth at the Pentagon on Tuesday and said no one in the AI field had yet encountered real-life issues that cross the company’s “red lines,” a source with knowledge of the matter said.

An Anthropic spokesperson said Hegseth’s meeting with Amodei featured “good faith conversations” about the company’s usage policy.

The meeting was genial and no voices were raised, the source with knowledge of the matter said.

“This is not a friendly meeting,” an unnamed senior Pentagon official had told Axios earlier this week.

Three other AI labs are also in talks with the Pentagon about how much control the military will have over their models, Axios previously reported. The Department of Defense also has contracts with OpenAI, Google and xAI.

Anthropic’s Dilemma

Anthropic’s usage policy for its Claude AI model explicitly prohibits domestic surveillance or weaponization. Company representatives have expressed concern to Pentagon officials that Anthropic’s AI tools could be used to spy on Americans or help guide weapons without enough human involvement, Reuters previously reported, citing anonymous sources.

But Anthropic is in a somewhat unique position with the U.S. military. It is already integrated into some American systems, is a leader in the field and, crucially, has been the only AI model authorized for use with classified U.S. military operations—although the Pentagon recently inked a deal with Elon Musk’s xAI to use its Grok model for classified systems.

On top of this, other major defense players, like Palantir, also use Claude in their work for the Pentagon.

“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this,” an unnamed senior Pentagon official told Axios earlier in February, talking about Anthropic.

The AI developer signed a $200 million contract with the Pentagon in July last year, a contract that would be jeopardized if the Department of Defense went through with its threats to penalize the company. In the press release confirming the contract, Anthropic described itself as a leader in “safe and responsible AI” with “strict usage policies.”

Emil Michael, the former Uber executive now serving as a senior Pentagon official, has publicly said the U.S. government and military need to be able to use AI for “all lawful use cases.”

“I believe and hope that they will ‘cross the Rubicon’ and say, ‘This is common sense. The military has certain use cases. There are laws and regulations that govern how those use cases can be done. We’re willing to comply with them,’” Michael told Defense Scoop earlier this month.
