Anthropic and OpenAI are hiring weapons specialists to prevent ‘catastrophic misuse’ | Euronews
Summary
By Anna Desmarais | Published on 18/03/2026 - 13:32 GMT+1

Anthropic and OpenAI are hiring weapons and explosives experts to prevent "catastrophic misuse" of their AI systems, according to job postings from both companies. Anthropic is seeking a policy expert on chemical weapons and explosives to shape how its AI systems handle sensitive information in these fields, while OpenAI is recruiting researchers for its Preparedness team, which monitors for "catastrophic risks related to frontier AI models," as well as a Threat Modeler to forecast frontier risks. Euronews Next reached out to both companies about the job postings but did not receive an immediate reply.
## Article Content
Anthropic and OpenAI are recruiting experts on chemical weapons and explosives to build safety guardrails for their AI systems.
Artificial intelligence (AI) companies Anthropic and OpenAI are looking to hire weapons and explosives experts to prevent misuse of their technology, according to job postings from both companies.
Anthropic announced in a LinkedIn post that it was searching for a policy expert on chemical weapons and explosives to prevent "catastrophic misuse" of its technology by shaping how its AI systems handle sensitive information in these fields.
The person hired at Anthropic will design and monitor the guardrails for how AI models react to prompts about chemical weapons and explosives. They will also conduct "rapid responses" to any escalations that Anthropic detects in weapons and explosives prompts.
Applicants should have a minimum of five years of experience in "chemical weapons and/or explosives defences," as well as knowledge of "radiological dispersal devices," or dirty bombs. The role involves designing new risk evaluations that the company's leadership can "trust during high-stakes launches."
OpenAI’s job posting earlier this month said it was looking for researchers to join its Preparedness team, which monitors for “catastrophic risks related to frontier AI models.”
It also advertised for a Threat Modeler, a role that would give one person primary ownership of "identifying, modelling, and forecasting frontier risks" and serve as "a central node connecting technical, governance, and policy perspectives on prioritisation, focus and rationale on our approach to frontier risks from AI."
Euronews Next reached out to Anthropic and OpenAI about the job postings but did not receive an immediate reply.
These hires come after Anthropic mounted a legal challenge against the US government after it designated the company as a “supply chain risk,” a label that allows the government to block contracts or instruct departments not to work with them.
The conflict began on February 24, when the Department of War (DOW) demanded unfettered access to Anthropic’s Claude chatbot.
CEO Dario Amodei said that DOW contracts should not include instances where Claude is deployed for mass domestic surveillance or integrated into fully autonomous weapons.
Shortly after the fallout with Anthropic, OpenAI signed a deal with the DOW to deploy its AI in classified environments. The company said the deal included strict red lines, such as no use of its systems for mass surveillance or autonomous weapons.