DeepSeek says it built its chatbot cheap. What does that mean for AI's energy needs and the climate?

Chinese artificial intelligence startup DeepSeek stunned markets and AI experts with its claim that it built its immensely popular chatbot at a fraction of the cost of those made by American tech titans.

That immediately called into question the billions of dollars U.S. tech companies are spending on a massive expansion of energy-hungry data centers they say are needed to unlock the next wave of artificial intelligence.

Could this new AI mean the world needs significantly less electricity for the technology than everyone thinks? The answer has profound implications for the overheating climate. AI uses vast amounts of energy, much of it generated by burning fossil fuels, a major driver of climate change. Tech companies have said their electricity use is going up, when it was supposed to be ramping down, ruining their carefully laid plans to address climate change.

“There has been a very gung ho, go ahead at all costs mentality in this space, pushing toward investment in fossil fuels,” said Eric Gimon, senior fellow at Energy Innovation. “This is an opportunity to tap the brakes.”

Making AI more efficient could be less taxing on the environment, experts say, even if its huge electricity needs are not going away.

People flock to new DeepSeek assistant

DeepSeek’s claims of building its impressive chatbot on a budget drew curiosity that helped make its AI assistant the No. 1 downloaded free app on Apple’s iPhone this week, ahead of U.S.-made chatbots ChatGPT and Google’s Gemini.

“All of a sudden we wake up Monday morning and we see a new player number one on the App Store, and all of a sudden it could be a potential gamechanger overnight,” said Jay Woods, chief global strategist at Freedom Capital Markets. “It caused a bit of a panic. These were the hottest stocks in the world.”

DeepSeek’s app competes well with other leading AI models. It can compose software code, solve math problems and address other questions that take multiple steps of planning. It's attracted attention for its ability to explain its reasoning in the process of answering questions.

Leading analysts have been poring over the startup's public research papers about its new model, R1, and its precursors. Among the details that stood out was DeepSeek’s assertion that the cost to train the flagship V3 model behind its AI assistant was only $5.6 million, a stunningly low number compared with the multiple billions of dollars spent to build ChatGPT and other well-known systems. DeepSeek hasn’t responded to requests for comment.

The $5.6 million figure covered only the final training run of the chatbot, not the cost of earlier-stage research and experiments, the paper said. DeepSeek was also working under a notable constraint: U.S. export controls on the most powerful AI chips. It said it relied on a relatively low-performing AI chip from California chipmaker Nvidia that the U.S. hasn’t banned for sale in China.