Alibaba unveils Qwen3, a family of 'hybrid' AI reasoning models

Chinese tech company Alibaba on Monday released Qwen3, a family of AI models that the company claims can match and, in some cases, outperform the best models available from Google and OpenAI.

Most of the models are — or soon will be — available for download under an "open" license on AI dev platform Hugging Face and GitHub. They range in size from 0.6 billion parameters to 235 billion parameters. (Parameters roughly correspond to a model’s problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.)

The rise of Chinese model families like Qwen has increased the pressure on American labs such as OpenAI to deliver more capable AI technologies. It has also led policymakers to implement restrictions aimed at limiting Chinese AI companies' ability to obtain the chips necessary to train models.

According to Alibaba, the Qwen3 models are "hybrid" models — they can take time to "reason" through complex problems, or answer simpler requests quickly. Reasoning enables the models to effectively fact-check themselves, similar to models like OpenAI's o3, but at the cost of higher latency.

"We have seamlessly integrated thinking and non-thinking modes, offering users the flexibility to control the thinking budget," the Qwen team wrote in a blog post. "This design enables users to configure task-specific budgets with greater ease."
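Alibaba has not published the exact interface behind that quote, but the idea of a user-configurable thinking budget can be sketched in a few lines. Everything below (the `generate` function and its `thinking_budget` parameter) is illustrative, not Qwen's actual API:

```python
# Toy sketch of a hybrid "thinking budget" dispatcher. The names here
# (generate, thinking_budget) are hypothetical stand-ins for whatever
# switch Qwen3 exposes; the point is the two execution paths.

def generate(prompt: str, thinking_budget: int = 0) -> str:
    """Answer directly when the budget is 0; otherwise spend up to
    `thinking_budget` reasoning steps before answering."""
    if thinking_budget <= 0:
        return f"answer({prompt})"  # fast, non-thinking path
    steps = [f"think[{i}]" for i in range(thinking_budget)]
    # A real model could stop early once its chain of thought converges;
    # the budget is a cap, traded off against latency.
    return " -> ".join(steps) + f" -> answer({prompt})"

print(generate("2+2?"))                          # non-thinking mode
print(generate("prove it", thinking_budget=3))   # thinking mode, capped at 3 steps
```

The single entry point with a per-request budget is what "task-specific budgets" amounts to: callers pay latency only on the prompts they decide are hard.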

Some of the models also adopt a mixture of experts (MoE) architecture, which can be more computationally efficient for answering queries. MoE breaks down tasks into subtasks and delegates them to smaller, specialized "expert" models.
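The routing step that makes MoE cheaper can be shown with a toy example. This is a generic top-k MoE sketch in plain Python, not Alibaba's implementation; the router scores would come from a trained network in practice:

```python
# Minimal mixture-of-experts sketch (illustrative only). A router scores
# each "expert"; only the top-k experts run per input, which is why MoE
# answers queries without activating every parameter in the model.

def run_moe(x: float, experts, router_scores, k: int = 2) -> float:
    # Pick the k highest-scoring experts for this input.
    top = sorted(range(len(experts)),
                 key=lambda i: router_scores[i], reverse=True)[:k]
    # Normalize the selected scores into mixing weights.
    total = sum(router_scores[i] for i in top)
    weights = {i: router_scores[i] / total for i in top}
    # Only the selected experts do any work; the rest stay idle.
    return sum(weights[i] * experts[i](x) for i in top)

experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x ** 2]
scores = [0.1, 0.6, 0.3]   # in practice, produced by a learned router
print(run_moe(3.0, experts, scores))  # only experts 1 and 2 execute → 7.0
```

With three experts and `k=2`, one expert is skipped entirely; at Qwen3's scale the skipped fraction is far larger, which is where the compute savings come from.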

The Qwen3 models support 119 languages, Alibaba said, and were trained on a dataset of over 36 trillion tokens. (Tokens are the raw bits of data that a model processes; 1 million tokens is equivalent to about 750,000 words.) The company said Qwen3 was trained on a combination of textbooks, "question-answer pairs," code snippets, AI-generated data, and more.
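The article's own rule of thumb (1 million tokens ≈ 750,000 words) makes it easy to translate the training-set size into words. The conversion below is back-of-the-envelope arithmetic, not an official figure:

```python
# Rough tokens-to-words conversion, using the article's ratio of
# 1,000,000 tokens ≈ 750,000 words (i.e. about 0.75 words per token).
WORDS_PER_TOKEN = 750_000 / 1_000_000  # 0.75

def tokens_to_words(tokens: int) -> int:
    """Approximate word count for a given token count."""
    return int(tokens * WORDS_PER_TOKEN)

# Qwen3's reported 36-trillion-token training data, in approximate words:
print(f"{tokens_to_words(36 * 10**12):,}")  # 27,000,000,000,000
```

So the 36-trillion-token dataset works out to roughly 27 trillion words under that approximation.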