Google DeepMind
3 open roles (AI safety, policy & security)
Description
Google DeepMind is a frontier AI research and product company aiming to develop artificial general intelligence. It's known for systems like AlphaGo, AlphaFold, and Gemini. Several of its teams focus on AI safety, including the Scalable Alignment Team, which works on aligning existing state-of-the-art systems, and the Alignment Team, which pursues research bets for aligning future systems.
We post specific opportunities at Google DeepMind that we think may be high impact. We do not necessarily recommend other positions at Google DeepMind. You can read about the concerns around doing harm by working at a frontier AI company in our career review on the topic.
Open roles (AI safety, policy & security )We're only confident in recommending DeepMind roles working on safety, governance, policy, ethics and security issues.
You can find all of DeepMind's roles on their careers page.
Learn more
80,000 Hours links
Preventing an AI-related catastrophe
Problem profile
Working at a leading AI lab
Career review
AI safety technical research
Career review
Anonymous advice on whether you should work on AI capabilities to help reduce AI risk
Article
Interview with Rohin Shah, a researcher who leads DeepMind's Alignment Team
Podcast
External content
Links to DeepMind's latest research and products
Research
Holistic Safety and Responsibility Evaluations of Advanced AI Models
Research
An early warning system for novel AI risks
Research
Evaluating Frontier Models for Dangerous Capabilities
Research
Introducing the Frontier Safety Framework
Blog post
Information about DeepMind's Alignment and Scalable Alignment teams
Blog post
Preparing for Debate AI with DeepMind safety researcher Geoffrey Irving
Podcast