Anthropic-Pentagon battle shows how big tech has reversed course on AI and war
## Summary

Less than a decade ago, Google employees scuttled any military use of its AI; now big tech's ties to the Pentagon look very different. Even as Anthropic has received public praise in its standoff with the Pentagon, its co-founder and chief executive Dario Amodei has emphasized that the AI company and the government largely want the same things. "Anthropic has much more in common with the Department of War than we have differences," Amodei wrote in a blogpost last Thursday. The company's lawsuit against the DoD showcases how extensively it has been willing to work with the military and alter its products for military use. "Anthropic does not impose the same restrictions on the military's use of Claude as it does on civilian customers," the California lawsuit stated. "Claude Gov is less prone to refuse requests that would be prohibited in the civilian context, such as using Claude for handling classified documents, military operations, or threat analysis." The government has reportedly been using Claude for target selection and analysis in its bombing campaign against Iran, a use case with which Anthropic has given no indication it takes issue.
## Article Content
Dario Amodei, the chief executive of Anthropic, and Donald Trump in this composite photograph.
Composite: Getty Images
Analysis
Anthropic-Pentagon battle shows how big tech has reversed course on AI and war
Nick Robins-Early
Less than a decade ago, Google employees scuttled any military use of its AI. Now Anthropic is fighting Trump officials not over if, but how
The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war – and what lines it will not cross. Amid Silicon Valley’s rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech’s answer is looking very different than it did even less than a decade ago.
Anthropic’s feud with the Trump administration escalated three days ago as the AI firm sued the Department of Defense, claiming that the government’s decision to blacklist it from government work violated its first amendment rights. The company and the Pentagon have been locked in a months-long standoff, with Anthropic attempting to prohibit its AI model from being used for domestic mass surveillance or fully autonomous lethal weapons.
Anthropic has argued that giving in to the DoD’s demands to permit “any lawful use” of its technology would violate its founding safety principles and open up its technology for potential abuse, staking an ethical boundary that others in the industry must decide whether they want to cross.
Although Anthropic’s refusal to remove safety guardrails and the Pentagon’s subsequent retaliation have highlighted longstanding concerns over the use of AI for conflict, the fight has shown how much the goal posts have moved when it comes to big tech’s ties to the military.
“If people are looking for good guys and bad guys, where a good guy is someone who doesn’t support war, then they’re not going to find that here,” said Margaret Mitchell, an AI researcher and chief ethics scientist at the tech firm Hugging Face.
Anti-military protests to military contracts
There are a number of contributing factors in big tech’s newfound embrace of militarism. Its alignment with the Trump administration, which has included shows of fealty to Trump from major CEOs, has tied tech firms to the government’s desire to expand its military capabilities. The administration’s vow to overhaul federal agencies using artificial intelligence has also specifically signaled an opportunity for AI firms to integrate their products into government and military operations in a way that could secure revenue for years to come. Looming in the background, concern over China’s technological advancement and a surge in international defense spending have also shifted attitudes in the industry.
It was not so long ago, however, that working with the military on potentially harmful technology was seen as a red line for many big tech workers. In 2018, thousands of Google employees launched a protest against a program to analyze drone footage for the DoD called Project Maven.
“We believe that Google should not be in the business of war,” over 3,000 workers stated in an open letter at the time. Google decided not to renew Project Maven following the protests and published policies that barred pursuing technology that could “cause or directly facilitate injury to people”.
In the years since the Project Maven protest, though, Google has clamped down on employee activism, removed the 2018 language from its policies that prohibited creating technology for weaponry and signed numerous contracts that allow militaries to use its products. In 2024, the tech giant fired over 50 employees in response to protests against the company’s military ties to the Israeli government. Chief executive Sundar Pichai sent a memo to employees after the firings stating that Google was a business and not a place to “fight over disruptive issues or debate politics”.
Google announced just this week that it would provide its Gemini artificial intelligence to the military as a platform for creating AI agents to work on unclassified projects.
OpenAI, too, had a blanket ban on allowing any militaries to access its models prior to 2024, but has since reversed course and now has its chief product officer serving as a lieutenant colonel in the US military’s “executive innovation corps”. The startup, along with Google, Anthropic and xAI, signed an up-to-$200m contract with the DoD last year to integrate its technology into military systems. On the day that Pete Hegseth, the defense secretary, declared Anthropic a supply chain risk, OpenAI secured a deal with the DoD allowing its tech to be used in classified military systems.
Elsewhere in the tech industry, more hawkish companies like defense tech firm Anduril, founded the year before the Google Maven protests, and surveillance tech maker Palantir have made partn