The most powerful generative AI models from the likes of OpenAI, Alphabet, and Anthropic require costly and power-hungry AI accelerators stuffed into data centers to produce results. OpenAI's recent GPT-4.5 model, for example, was rolled out in phases to users because it required an immense amount of computational resources.
AI models from Chinese start-up DeepSeek released earlier this year turned some assumptions about the AI infrastructure market on their heads. DeepSeek produced a model that was far cheaper to train and run than top-tier models from U.S. AI companies, while delivering results of similar quality. The assumption that AI models would require ever-increasing quantities of computational horsepower, the foundation of the bull case for Nvidia stock, started to look a lot less like a sure thing.
Another AI breakthrough
AI features have started to show up on PCs, smartphones, and other devices, but AI models small enough to run on those devices just aren't that capable. Tom's Hardware called Microsoft's Copilot+ PC AI features "a bad joke" when they first launched last year, and The New York Times concluded that Apple Intelligence, Apple's suite of AI-powered features, was "still half-baked" in October.
There are multiple problems with on-device AI. First, generative AI is not deterministic: models sample from a probability distribution over possible outputs, so the same input can produce wildly different results. That's fine if you're using AI to write a blog post, but it's not so great if you want it to perform a specific task on your smartphone reliably.
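To see why, consider a stripped-down sketch of the sampling step at the heart of these models. The prompt and probabilities below are invented for illustration, not taken from any real model:

```python
import random

# Toy next-token probabilities for a voice-command prompt -- the
# numbers here are made up for this sketch, not from a real model.
next_token_probs = {"ten": 0.45, "five": 0.30, "twenty": 0.15, "the": 0.10}

def sample_token(probs):
    """Sample one token from the distribution, as chat models do."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# The same "input" three times -- the output can differ on every run.
for _ in range(3):
    print("Set a timer for", sample_token(next_token_probs), "minutes")
```

Run it twice and you may get two different answers, which is exactly the behavior you don't want from a digital assistant setting a timer.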
That first problem may never be fully solved, but the second one could be: PCs and smartphones have only so much memory and computational power, which puts a hard limit on how capable a locally running AI model can be. The AI models that power ChatGPT run in data centers and require monstrous amounts of memory, compute, and energy to produce results. Obviously, that's not feasible on a laptop running on battery.
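The constraint is easy to quantify: a model's weights alone occupy roughly its parameter count times the bits stored per weight, divided by eight, in bytes. Here's a back-of-the-envelope sketch for a hypothetical 7-billion-parameter model (the size is a round illustrative number):

```python
# Rough weight-memory math: bytes = parameters * bits_per_weight / 8.
# The 7-billion-parameter count is a hypothetical, round example.
params = 7_000_000_000

for label, bits in [("FP16", 16.0), ("8-bit", 8.0),
                    ("4-bit", 4.0), ("ternary (~1.6-bit)", 1.6)]:
    gb = params * bits / 8 / 1e9
    print(f"{label:>18}: ~{gb:4.1f} GB of weights alone")

# FP16 comes to ~14 GB -- more than the free RAM on most laptops and
# phones, before counting activations or anything else on the device.
```

At roughly 1.6 bits per weight, a 2-billion-parameter model works out to about 0.4 GB, which lines up with the memory figure cited for Microsoft's model below.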
Microsoft may have an answer. The company recently unveiled a new "1-bit" AI model that is small enough to run on a CPU and uses just 0.4 GB of memory. Amazingly, this new model matches the performance of AI models in its size class that use far more memory. What's more, running on a single CPU, the model can produce output at a speed comparable to human reading, which is fast enough to be useful.
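Microsoft's published BitNet research achieves this by restricting each weight to one of three values, -1, 0, or +1, plus a shared scale factor, which shrinks memory use and turns matrix multiplication into addition and subtraction. Here's a minimal sketch of that quantization idea; it's a simplification, not Microsoft's actual implementation:

```python
import numpy as np

def ternary_quantize(w):
    """Squash full-precision weights to {-1, 0, +1} plus one shared
    scale factor -- a simplified sketch of the '1-bit' idea, not
    Microsoft's production code."""
    scale = np.abs(w).mean() + 1e-8          # one scalar per matrix
    q = np.clip(np.round(w / scale), -1, 1)  # each weight -> -1, 0, or +1
    return q.astype(np.int8), scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
x = rng.normal(size=4).astype(np.float32)

q, scale = ternary_quantize(w)
# With ternary weights, the matrix multiply needs only additions and
# subtractions -- work an ordinary CPU handles well.
print("full precision:", np.round(w @ x, 2))
print("ternary approx:", np.round(scale * (q @ x), 2))
```

Storing a three-valued weight takes about 1.6 bits, which is how a multi-billion-parameter model can squeeze into a few hundred megabytes and run without a GPU.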
Changing the game
Nvidia dominates the market for AI accelerators, and there's essentially no chance that AMD (NASDAQ: AMD) or Intel (NASDAQ: INTC) will be able to catch up. Playing the same game as Nvidia, AMD and Intel are destined to remain far behind the market leader.
However, breakthroughs from DeepSeek and Microsoft raise the possibility that AI inference, the act of running a trained AI model to produce results, could eventually happen on CPUs in data centers and on devices without sacrificing quality. The ongoing cost of running AI models drops dramatically if you can cut Nvidia's expensive GPUs out of the equation, and on-device AI becomes far more compelling for users if more powerful models can be squeezed into the memory footprint of a PC or smartphone.
Intel and AMD both sell server and PC CPUs that feature built-in AI accelerators. Some of AMD's EPYC server CPUs excel at certain AI inference tasks, and Intel's Granite Rapids server CPUs can run 70-billion-parameter models. On the PC, both AMD and Intel now include dedicated AI processors in their CPUs.
Nvidia appears unstoppable, and it may very well be unstoppable if you assume the AI infrastructure market remains centered on ultra-powerful data center GPUs. However, the company is vulnerable if capable AI models no longer require its most powerful GPUs to run. If it starts to make more financial sense to fill an AI data center with CPUs rather than GPUs, that's what will start happening. Microsoft's new AI model is a step in that direction.
Spending could start shifting back from GPUs to CPUs in the data center, which would be great news for AMD and Intel. In the PC market, more capable and useful AI features could drive demand and pull the market out of its post-pandemic funk. More powerful hardware is part of the equation, but so are powerful AI models, like Microsoft's latest, that fit into smaller memory footprints.
Nvidia can't lose in AI unless the game changes. With Microsoft and others working to lower the computational and memory cost of running powerful AI models, it appears the game is changing as we speak. Intel and AMD, permanently behind Nvidia in the AI accelerator market, could be the big winners as AI inference moves to the CPU.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Timothy Green has positions in Intel. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Apple, Intel, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft, short January 2026 $405 calls on Microsoft, and short May 2025 $30 calls on Intel. The Motley Fool has a disclosure policy.