Anthropic CEO issues frightening warning on Chinese AI rival

More than a month into 2025, it is already clear that companies are as focused on artificial intelligence (AI) as ever.

In fact, many Magnificent Seven tech companies, including Google (GOOGL), Microsoft (MSFT) and Meta Platforms (META), revealed high AI spending plans for the year, focusing on developing agentic AI and building data centers. But their smaller rivals are also taking key steps toward important advancements.

Leading this charge is Anthropic, the maker of the popular large language model (LLM) Claude. Founded by a team that helped grow ChatGPT maker OpenAI, Anthropic is focused on creating safe AI systems and conducting research for the industry.

This work doesn’t just apply to the startup's AI products. The startup’s CEO recently issued a frightening statement highlighting the potential dangers that a rival AI model may pose.

Anthropic Co-Founder & CEO Dario Amodei is sounding the alarm on a potential problem that he sees with an AI model made by one of his rivals. Kimberly White/Getty Images

Anthropic is sounding the alarm on a fellow AI startup

Last month, a small Chinese startup called DeepSeek sent waves of shock and fear through the tech sector, triggering a chip stock selloff in the process. The fact that the new company had produced an AI model built with less advanced Nvidia (NVDA) chips and trained it for only $5.6 million called the future of the industry into question.

Since then, experts have raised concerns that DeepSeek may be illegally harvesting data from users and sending it back to China. But Anthropic CEO Dario Amodei has revealed that his company has found reason to believe that DeepSeek’s R1 AI model is putting users at risk.

Related: Experts sound the alarm on controversial company’s new AI model

Amodei recently discussed a test conducted by Anthropic on the ChinaTalk podcast with Jordan Schneider, noting that his startup sometimes examines popular AI models to assess any potential national security risks. In the most recent test, DeepSeek generated dangerous information about a bioweapon that is reportedly hard to find elsewhere.

In this part of the safety evaluation, Anthropic's team tested DeepSeek to see if it would provide information on bioweapons that cannot be easily found by searching Google or consulting medical textbooks.

As Amodei put it, DeepSeek's model was "the worst of basically any model" that Anthropic has ever tested. "It had absolutely no blocks whatsoever against generating this information," he added.

If Amodei's findings are correct, then DeepSeek's AI model could make it easy for people with dangerous intentions to find bioweapon information that isn't readily available to the public and use it for illicit purposes.

Anthropic's experts aren't the only people testing DeepSeek and finding concerning elements in the information it provides.