Anthropic
11 open roles (AI safety, policy & security)
Description
Anthropic is a frontier AI research and product company that aims to build reliable, interpretable, and steerable AI systems. Their research interests span multiple areas, including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability. Recently announced teams working on safety issues include their Frontier Red Team (working on adversarial testing of advanced ML models) and their Alignment Stress-Testing Team (working on red-teaming Anthropic's alignment and evaluation efforts).
We post specific opportunities at Anthropic that we think may be high impact. We do not necessarily recommend working in other positions at Anthropic. You can read about concerns around doing harm by working at a frontier AI company in our career review on the topic.
Open roles (AI safety, policy & security)
Some roles at Anthropic are focused on advancing AI capabilities (which helps enable some of their safety research, especially around large model evaluations and interpretability).
You can find all of Anthropic's roles on their careers page.
Learn more
80,000 Hours links
Preventing an AI-related catastrophe (problem profile)
Working at a leading AI lab (career review)
AI safety technical research (career review)
Anonymous advice on whether you should work on AI capabilities to help reduce AI risk (article)
Interview with Chris Olah, who leads Anthropic's research into interpretability (podcast)
Interview with Nova DasSarma, who works on information security at Anthropic (podcast)
Interview with Nick Joseph, Anthropic's head of training (podcast)
External content
Anthropic's core views on AI safety (information)
An interview with Anthropic's co-founders Daniela and Dario Amodei (Future of Life Institute podcast)
Anthropic's AI safety research papers (research)