How IBM CEO Arvind Krishna Is Thinking About AI and Quantum Computing

Arvind Krishna, chief executive officer of IBM, at the World Governments Summit in Dubai, United Arab Emirates, on Feb. 11. Credit - Christopher Pike—Bloomberg/Getty Images


IBM was one of the giants of 20th-century computing. It helped design the modern PC, and created the first AI to defeat a human champion in the game of chess.

But when you think of AI, IBM might not be the first, or even the tenth, company to spring to mind. It doesn’t train big models, and it no longer makes consumer-facing products, focusing instead on selling to other businesses. “We are a B2B company, and explaining what we do to the average reader—we'll take all the help we can get,” IBM CEO Arvind Krishna joked ahead of a recent interview with TIME.

Still, there’s an interesting AI story lurking inside this storied institution. IBM does indeed build AI models—not massive ones like OpenAI’s GPT-4o or Google’s Gemini, but smaller ones designed for use in high-stakes settings, where accuracy is at a premium. As the AI business matures, this gets at a critical unanswered question on the minds of Wall Street and Silicon Valley investors: will the economic gains from AI mostly accrue to the companies, like OpenAI, that train massive “foundation models”? Or will they flow instead to the companies—like IBM—that can build the leanest, cheapest, most accurate models tailored for specific use cases? The future of the industry could depend on it.

TIME spoke with Krishna in early February, ahead of a ceremony during which he was awarded a TIME100 AI Impact Award.

This interview has been condensed and edited for clarity.

IBM built Deep Blue, the first chess AI to beat a human champion, in the 1990s. Then, in 2011, IBM’s Watson became the first AI to win the game show Jeopardy. But today, IBM isn’t training large AI systems in the same way as OpenAI or Google. Can you explain why the decision was made to take a backseat in the AI race?

When you look at chess and Jeopardy, the reason for taking on those challenges was the right one. You pick a thing that people believe computers cannot do, and then if you can do it, you're conveying the power of the technology.

Here was the place where we went off: We started building systems that I'll call monolithic. We started saying, let's go attack a problem like cancer. That turned out to be the wrong approach. Absolutely it is worth solving, so I don't fault what our teams did at that point. However, are we known for being medical practitioners? No. Do we understand how hospitals and protocols work? No. Do we understand how the regulator works in that area? No.