
AI firm Anthropic seeks weapons expert to stop users from 'misuse'

March 17, 2026


## Summary
The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to prevent "catastrophic misuse" of its software. The LinkedIn recruitment post asks for at least five years' experience in "chemical weapons and/or explosives defence" and knowledge of "radiological dispersal devices", also known as dirty bombs. Some experts are alarmed by the approach, warning that it gives AI tools information about weapons even when they are instructed not to use it. Anthropic is also taking legal action against the US Department of Defence, which designated it a supply chain risk after the firm insisted its systems must not be used for fully autonomous weapons or mass surveillance of Americans.

## Article Content
By Zoe Kleinman, Technology editor
The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to try to prevent "catastrophic misuse" of its software.
In other words, it fears that its AI tools might tell someone how to make chemical or radioactive weapons, and wants an expert to ensure its guardrails are sufficiently robust.
In the LinkedIn recruitment post, the firm says applicants should have a minimum of five years' experience in "chemical weapons and/or explosives defence" as well as knowledge of "radiological dispersal devices" – also known as dirty bombs.
The firm told the BBC the role was similar to jobs in other sensitive areas that it has already created.
Anthropic is not the only AI firm adopting this strategy.
A similar position has been advertised by ChatGPT developer OpenAI. On its careers website, it lists a vacancy for a researcher in "biological and chemical risks", with a salary of up to $455,000 (£335,000), almost double that offered by Anthropic.
But some experts are alarmed by the risks of this approach, warning that it gives AI tools information about weapons - even if they have been instructed not to use it.
"Is it ever safe to use AI systems to handle sensitive chemicals and explosives information, including dirty bombs and other radiological weapons?" said Dr Stephanie Hare, tech researcher and co-presenter of the BBC's AI Decoded TV programme.
"There is no international treaty or other regulation for this type of work and the use of AI with these types of weapons. All of this is happening out of sight."
The AI industry has continuously warned about the potential existential threats posed by its technology, but there has been no attempt to slow down its progress.
The issue has gained urgency as the US government calls on AI firms for support while waging war in Iran and conducting military operations in Venezuela.
The LinkedIn job advert recruits a chemical weapons and high-yield explosives expert to join Anthropic's policy team.
Anthropic is taking legal action against the US Department of Defence, which designated it a supply chain risk when the firm insisted its systems must not be used in either fully autonomous weapons or mass surveillance of Americans.
Anthropic co-founder Dario Amodei wrote in February that he didn't think the technology was good enough yet, and should not be used for these purposes.
The White House said the US military would not be governed by tech companies.
The risk label puts the US company in the same boat as the Chinese telecoms firm Huawei, which was similarly blacklisted over different national security concerns.
OpenAI said it agreed with Anthropic's position but then negotiated its own contract with the US government, which it says has not yet begun.
Anthropic's AI assistant, called Claude, has not been phased out: it remains embedded in systems provided by Palantir and deployed by the US in the US-Israel Iran war.

