Altman admitted that DeepSeek has lessened OpenAI's lead in AI, and he said he believes OpenAI has been "on the wrong side of history" when it comes to open sourcing its technologies. While OpenAI has open sourced models in the past, the company has generally favored a proprietary, closed source development approach.
"[I personally think we need to] figure out a different open source strategy," Altman said. "Not everyone at OpenAI shares this view, and it's also not our current highest priority … We will produce better models [going forward], but we will maintain less of a lead than we did in previous years."
In a follow-up reply, Kevin Weil, OpenAI's chief product officer, said that OpenAI is considering open sourcing older models that aren't state-of-the-art anymore. "We'll definitely think about doing more of this," he said, without going into greater detail.
Beyond prompting OpenAI to reconsider its release philosophy, Altman said that DeepSeek has pushed the company to consider revealing more of the "thought process" of its so-called reasoning models, like the o3-mini model released today. Currently, OpenAI's models conceal their reasoning, a strategy intended to prevent competitors from scraping training data for their own models. In contrast, DeepSeek's reasoning model, R1, shows its full chain of thought.
"We're working on showing a bunch more than we show today — [showing the model thought process] will be very very soon," Weil added. "TBD on all — showing all chain of thought leads to competitive distillation, but we also know people (at least power users) want it, so we'll find the right way to balance it."
Altman and Weil attempted to dispel rumors that ChatGPT, the chatbot platform through which OpenAI launches many of its models, would increase in price in the future. Altman said that he'd like to make ChatGPT "cheaper" over time, if feasible.
In a somewhat related thread, Weil said that OpenAI continues to see evidence that more compute power leads to "better" and more performant models. That's in large part what's necessitating projects such as Stargate, OpenAI's recently announced massive data center project, Weil said. Serving a growing user base is fueling compute demand within OpenAI as well, he continued.
Asked about recursive self-improvement that might be enabled by these powerful models, Altman said he thinks a "fast takeoff" is more plausible than he once believed. Recursive self-improvement is a process in which an AI system improves its own intelligence and capabilities without human input.
Of course, it's worth noting that Altman is notorious for overpromising. It wasn't long ago that he lowered OpenAI's bar for AGI.
One Reddit user asked whether OpenAI's models, self-improving or not, would be used to develop destructive weapons — specifically nuclear weapons. This week, OpenAI announced a partnership with the U.S. government to give its models to the U.S. National Laboratories in part for nuclear defense research.
Weil said he trusted the government.
"I've gotten to know these scientists and they are AI experts in addition to world class researchers," he said. "They understand the power and the limits of the models, and I don't think there's any chance they just YOLO some model output into a nuclear calculation. They're smart and evidence-based and they do a lot of experimentation and data work to validate all their work."
Asked about a follow-up to DALL-E 3, Weil was upbeat. "Yes! We're working on it," he said. "And I think it's going to be worth the wait."
This article originally appeared on TechCrunch at https://techcrunch.com/2025/01/31/sam-altman-believes-openai-has-been-on-the-wrong-side-of-history-concerning-open-source/