Microsoft Just Showed the Future of AI, and It's Great News for Intel and AMD

The most powerful generative AI models from the likes of OpenAI, Alphabet, and Anthropic require costly and power-hungry AI accelerators stuffed into data centers to produce results. OpenAI's recent GPT-4.5 model, for example, was rolled out in phases to users because it required an immense amount of computational resources.

AI models from Chinese start-up DeepSeek released earlier this year turned some assumptions about the AI infrastructure market on their heads. DeepSeek managed to produce a model that was far cheaper to train and run than top-tier models from U.S. AI companies, while producing results of similar quality. The assumption that AI models would require ever-increasing quantities of computational horsepower, the foundation of the bull case for Nvidia stock, started to look a lot less like a sure thing.


Another AI breakthrough

AI features have started to show up on PCs, smartphones, and other devices, but AI models small enough to run on those devices just aren't that capable. Tom's Hardware called Microsoft's Copilot+ PC AI features "a bad joke" when they first launched last year, and The New York Times concluded that Apple Intelligence, Apple's suite of AI-powered features, was "still half-baked" in October.

There are multiple problems with on-device AI. First, generative AI is not deterministic: because these models sample from a probability distribution over possible outputs, the same input can produce different results each time. That's fine if you're using AI to write a blog post, but it's not so great if you want it to perform a specific task on your smartphone reliably.
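To see why sampled generation is non-deterministic, here's a minimal sketch of how a language model picks its next token. The token list and scores below are invented for illustration; real models do this over vocabularies of tens of thousands of tokens.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Convert raw scores into probabilities; higher temperature
    # flattens the distribution and makes sampling more varied.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and scores for some prompt.
tokens = ["Paris", "London", "a", "the"]
logits = [5.0, 2.0, 1.0, 0.5]

probs = softmax(logits)

# The model samples from this distribution rather than always taking
# the highest-probability token, so repeated runs can disagree.
rng = random.Random()
samples = {rng.choices(tokens, weights=probs)[0] for _ in range(100)}
print(samples)
```

Even though "Paris" is overwhelmingly likely here, a lower-probability token still comes out occasionally, which is exactly the behavior that makes an AI assistant unreliable for tasks that must succeed every time.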

That first problem may never be fully solved, but the second one could be: PCs and smartphones have only so much memory and computational power, which puts a hard limit on how capable a locally running AI model can be. The AI models that power ChatGPT run in data centers and require monstrous amounts of memory, compute, and energy to produce results. Obviously, that's not feasible on a laptop running on battery.

Microsoft may have an answer. The company recently unveiled a new "1-bit" AI model that is small enough to run on a CPU and uses just 0.4 GB of memory. Amazingly, this new model matches the performance of AI models in its size class that use far more memory. What's more, running on a single CPU, the model can produce output at a speed comparable to human reading, which is fast enough to be useful.
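The 0.4 GB figure is easy to sanity-check with back-of-the-envelope math. "1-bit" models of this kind actually use ternary weights (-1, 0, +1), which take about 1.58 bits each. The sketch below assumes a 2-billion-parameter model; that parameter count is an assumption for illustration, not a figure from this article.

```python
# Assumed model size: 2 billion parameters (illustrative, not from the article).
params = 2_000_000_000

# Conventional 16-bit floating-point weights: 2 bytes per parameter.
fp16_gb = params * 2 / 1e9

# Ternary "1-bit" weights: roughly 1.58 bits (log2 of 3 states) per parameter.
ternary_gb = params * 1.58 / 8 / 1e9

print(f"fp16: {fp16_gb:.1f} GB, ternary: {ternary_gb:.2f} GB")
```

Under these assumptions the fp16 version needs about 4 GB of memory while the ternary version needs roughly 0.4 GB, which lines up with the footprint Microsoft reported and explains how such a model fits comfortably alongside other software on an ordinary CPU.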