Organisations

Places with roles that could be particularly promising for working on key global problems.

Anthropic

16 open roles (AI safety, policy & security)

Anthropic is a frontier AI research and product company that aims to build reliable, interpretable, and steerable AI systems. Its research spans natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability. Recently announced teams working on safety include the Frontier Red Team (adversarial testing of advanced ML models) and the Alignment Stress-Testing Team (red-teaming Anthropic's own alignment and evaluation efforts).
We post specific opportunities at Anthropic that we think may be high impact. We do not necessarily recommend working in other positions at Anthropic. You can read about concerns around doing harm by working at a frontier AI company in our career review on the topic.
