OpenAI
16 open roles (AI safety, policy & security)
Description
OpenAI is a frontier AI research and product company, responsible for ChatGPT and DALL-E. They have people working across Research, Engineering, Product, Operations, and more. Teams working on safety and governance include their Preparedness, Safety Systems, Governance, and Security teams.
We post specific opportunities at OpenAI that we think may be high impact. We do not necessarily recommend working in other positions at OpenAI. You can read about concerns regarding doing harm by working at a frontier AI company in our career review on the topic, including concerns about OpenAI in particular. Note that there have also been concerns around OpenAI's HR practices.
Open roles (AI safety, policy & security)
We're only confident in recommending OpenAI roles focused on safety and security issues.
You can find all of OpenAI's roles on their careers page.
Learn more
80,000 Hours links
Preventing an AI-related catastrophe
Problem profile
Working at a leading AI lab
Career review
AI safety technical research
Career review
Anonymous advice on whether you should work on AI capabilities to help reduce AI risk
Article
Interview with Richard Ngo, a researcher at OpenAI
Podcast
External content
Planning for AGI and beyond
Blog post
OpenAI's Preparedness team and an outline of its preparedness framework for frontier AI risks
Team
OpenAI's write-up of its approach to alignment research
Approach
ChatGPT, a large language model released by OpenAI in November 2022
Product
DALL-E, an AI system that creates realistic images and art from a description in natural language
Product