OpenAI trained o1 and o3 to 'think' about its safety policy

OpenAI announced a new family of AI reasoning models on Friday: o3, which the startup claims is more advanced than o1 or anything else it has released. These improvements appear to have come from scaling test-time compute, something we wrote about last month, but OpenAI also says it used a new safety paradigm to train its o-series of models.

On Friday, OpenAI released new research on "deliberative alignment," outlining the company's latest way to ensure AI reasoning models stay aligned with the values of their human developers. The startup used this method to make o1 and o3 "think" about OpenAI's safety policy during inference, the phase after a user presses enter on their prompt.

This method improved o1's overall alignment to the company's safety principles, according to OpenAI's research. This means deliberative alignment decreased the rate at which o1 answered "unsafe" questions – at least ones deemed unsafe by OpenAI – while improving its ability to answer benign ones.

Graph measuring o1's improved alignment compared to Claude, Gemini, and GPT-4o (Image Credit: OpenAI)

As AI models rise in popularity and power, AI safety research seems increasingly relevant. But at the same time, it's more controversial: David Sacks, Elon Musk, and Marc Andreessen say some AI safety measures are actually "censorship," highlighting the subjective nature of these decisions.

While OpenAI's o-series of models were inspired by the way humans think before answering difficult questions, they are not really thinking as you or I do. However, I wouldn't fault you for believing they were, especially because OpenAI uses words like "reasoning" and "deliberating" to describe these processes. o1 and o3 offer sophisticated answers to writing and coding tasks, but these models really just excel at predicting the next token (roughly half a word) in a sentence.
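
To make the token-prediction point concrete, here's a minimal toy sketch of greedy next-token generation. Everything in it – the vocabulary, the `toy_model` function, the probabilities – is invented for illustration and has nothing to do with OpenAI's actual models.

```python
import random

def toy_model(context: list[str]) -> dict[str, float]:
    """Return a made-up probability distribution over the next token.
    (A real model would actually condition on `context`.)"""
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    weights = [random.random() for _ in vocab]
    total = sum(weights)
    return {tok: w / total for tok, w in zip(vocab, weights)}

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = toy_model(tokens)             # score every candidate next token
        next_tok = max(probs, key=probs.get)  # greedy decoding: pick the most likely
        tokens.append(next_tok)
        if next_tok == ".":                   # crude end-of-sentence stop
            break
    return tokens

print(" ".join(generate(["the", "cat"])))
```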

Here's how o1 and o3 work, in simple terms: After a user presses enter on a prompt in ChatGPT, OpenAI's reasoning models take anywhere from 5 seconds to a few minutes to re-prompt themselves with follow-up questions. The model breaks down a problem into smaller steps. After that process, which OpenAI refers to as "chain-of-thought," the o-series of models give an answer based on the information they generated.
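
In code, that self-re-prompting loop might look something like the sketch below. The `complete()` function is a hypothetical stand-in for a call to a language model, and the fixed three-step loop is a simplification of whatever OpenAI actually runs internally.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for a language-model call; returns generated text."""
    return f"[model output for: {prompt[:50]}...]"

def chain_of_thought(user_prompt: str, num_steps: int = 3) -> str:
    context = user_prompt
    for step in range(num_steps):
        # The model re-prompts itself, breaking the problem into smaller steps.
        thought = complete(
            f"{context}\n\nStep {step + 1}: work through the next sub-problem."
        )
        context += "\n" + thought  # accumulate the reasoning trace
    # The final answer is conditioned on all the text generated so far.
    return complete(f"{context}\n\nNow give the final answer.")

print(chain_of_thought("How many r's are in 'strawberry'?"))
```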

The key innovation around deliberative alignment is that OpenAI trained o1 and o3 to re-prompt themselves with text from OpenAI's safety policy during the chain-of-thought phase. Researchers say this made o1 and o3 much more aligned with OpenAI's policy, but the company faced some difficulty implementing it without increasing latency – more on that later.
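
A rough way to picture this: the safety policy text shows up inside the model's reasoning trace before it commits to an answer. The sketch below compresses that into a single hypothetical prompt-assembly step; in reality, OpenAI trains the recall behavior into the model rather than pasting the policy into every request, and the policy text here is invented.

```python
# Invented stand-in for OpenAI's safety policy, for illustration only.
SAFETY_POLICY = (
    "Refuse requests that facilitate clearly dangerous or illegal activity; "
    "answer benign requests helpfully."
)

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a language-model call; returns generated text."""
    return f"[model output conditioned on: {prompt[:60]}...]"

def deliberative_answer(user_prompt: str) -> str:
    # During chain-of-thought, the model surfaces the relevant policy text
    # and reasons about whether and how the request can be answered safely.
    deliberation = complete(
        f"Policy:\n{SAFETY_POLICY}\n\nUser request:\n{user_prompt}\n\n"
        "Reason step by step about how the policy applies here."
    )
    # The final answer is conditioned on that policy-aware reasoning,
    # which is also where the extra latency comes from.
    return complete(f"{deliberation}\n\nFinal answer:")

print(deliberative_answer("How do I pick a lock?"))
```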