OpenAI's o3 suggests AI models are scaling in new ways — but so are the costs

Last month, AI founders and investors told TechCrunch that we're now in the "second era of scaling laws," noting how established methods of improving AI models were showing diminishing returns. One promising new method they suggested could keep the gains coming was "test-time scaling," which seems to be what's behind the performance of OpenAI's o3 model -- but it comes with drawbacks of its own.

Much of the AI world took the announcement of OpenAI's o3 model as proof that AI scaling progress has not "hit a wall." The o3 model does well on benchmarks, significantly outscoring all other models on a test of general ability called ARC-AGI, and scoring 25% on a difficult math test that no other AI model scored more than 2% on.

Of course, we at TechCrunch are taking all this with a grain of salt until we can test o3 for ourselves (very few have tried it so far). But even ahead of o3's release, much of the AI world already seems convinced that something big has shifted.

The co-creator of OpenAI's o-series of models, Noam Brown, noted on Friday that the startup is announcing o3's impressive gains just three months after it announced o1 -- a relatively short time frame for such a jump in performance.

"We have every reason to believe this trajectory will continue," said Brown in a tweet.

Anthropic co-founder Jack Clark said in a blog post on Monday that o3 is evidence that AI "progress will be faster in 2025 than in 2024." (Keep in mind that it benefits Anthropic -- especially its ability to raise capital -- to suggest that AI scaling laws are continuing, even if Clark is complimenting a competitor.)

Clark says that next year, the AI world will splice together test-time scaling and traditional pre-training scaling methods to eke out even more gains from AI models. Perhaps he's suggesting that Anthropic and other AI model providers will release reasoning models of their own in 2025, just like Google did last week.

Test-time scaling means OpenAI is using more compute during ChatGPT's inference phase, the period after you press enter on a prompt. It's not clear exactly what is happening behind the scenes: OpenAI may be using more computer chips to answer a user's question, running more powerful inference chips, or running those chips for longer periods of time -- 10 to 15 minutes in some cases -- before the AI produces an answer. We don't know all the details of how o3 was made, but these benchmarks are early signs that test-time scaling may work to improve the performance of AI models.
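To make the idea concrete: one simple, well-known form of test-time scaling is sampling a model several times on the same question and taking a majority vote over its answers, spending more inference compute to get a more reliable result. The sketch below is purely illustrative -- it is not OpenAI's method, and `model_answer` is a hypothetical stub standing in for a real (noisy) model call.

```python
import random
from collections import Counter

def model_answer(prompt: str, rng: random.Random) -> str:
    """Hypothetical stand-in for one model inference pass.

    A real system would query a language model here; this stub
    returns the correct answer only 60% of the time, and a random
    wrong digit otherwise.
    """
    return "42" if rng.random() < 0.6 else str(rng.randint(0, 9))

def answer_with_test_time_compute(prompt: str, samples: int, seed: int = 0) -> str:
    """Sample the model `samples` times and majority-vote the answers.

    More samples means more inference-time compute, which is the
    core trade-off behind test-time scaling: better answers at a
    higher per-query cost.
    """
    rng = random.Random(seed)
    votes = Counter(model_answer(prompt, rng) for _ in range(samples))
    return votes.most_common(1)[0][0]

print(answer_with_test_time_compute("What is 6 x 7?", samples=25))
```

Because each extra sample is another full inference pass, the cost of a query grows linearly with the compute spent on it -- which is exactly the drawback the article flags.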