China's DeepSeek claims theoretical cost-profit ratio of 545% per day

BEIJING (Reuters) - Chinese AI startup DeepSeek on Saturday disclosed some cost and revenue data related to its hit V3 and R1 models, claiming a theoretical cost-profit ratio of up to 545% per day, though it cautioned that actual revenue would be significantly lower.

This marks the first time the Hangzhou-based company has revealed any information about its profit margins from less computationally intensive "inference" tasks, the stage after training in which trained AI models make predictions or perform tasks, for example when powering chatbots.

The revelation could further rattle AI stocks outside China, which plunged in January after web and app chatbots powered by DeepSeek's R1 and V3 models surged in popularity worldwide.

The sell-off was partly caused by DeepSeek's claims that it spent less than $6 million on chips used to train the model, much less than what U.S. rivals like OpenAI have spent.

The chips DeepSeek claims it used, Nvidia's H800, are also much less powerful than those OpenAI and other U.S. AI firms have access to, further calling into question U.S. AI firms' pledges to spend billions of dollars on cutting-edge chips.

DeepSeek said in a GitHub post published on Saturday that assuming the cost of renting one H800 chip is $2 per hour, the total daily inference cost for its V3 and R1 models is $87,072. In contrast, the theoretical daily revenue generated by these models is $562,027, leading to a cost-profit ratio of 545%. In a year this would add up to just over $200 million in revenue.
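A quick back-of-the-envelope check of those figures, using only the daily cost and revenue totals quoted above (the variable names below are illustrative, not from DeepSeek's post, and the annual figure simply multiplies the theoretical daily revenue by 365):

```python
# Sketch verifying the figures DeepSeek reported in its GitHub post.
daily_inference_cost = 87_072        # USD per day, V3 and R1 combined
daily_theoretical_revenue = 562_027  # USD per day, theoretical

# Cost-profit ratio: profit relative to cost, expressed as a percentage.
cost_profit_ratio = (daily_theoretical_revenue - daily_inference_cost) / daily_inference_cost
print(f"Cost-profit ratio: {cost_profit_ratio:.0%}")  # ~545%

# Annualized theoretical revenue: just over $200 million.
annual_revenue = daily_theoretical_revenue * 365
print(f"Annual theoretical revenue: ${annual_revenue:,.0f}")  # ~$205 million
```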

However, the firm added that its "actual revenue is substantially lower" because the cost of using its V3 model is lower than that of the R1 model, only some services are monetized while web and app access remain free, and developers pay less during off-peak hours.

(Reporting by Eduardo Baptista; Editing by Daren Butler)