In the latest Opening Bid, Mysten Labs CEO Evan Cheng weighs in on the next big thing after DeepSeek.
For full episodes of Opening Bid, listen on your favorite podcast platform or watch on our website.
This post was written by Langston Sessoms, producer for Opening Bid.
Now that the news has settled down a little bit, what's the next big thing after DeepSeek? Because I think a lot of folks were really surprised, investors and, I would argue, people in tech; that news came out of nowhere earlier this year. Is there another DeepSeek out there?
Well, there's always going to be another DeepSeek, another foundational LLM that comes out and claims it's better in some way. It's a race. All these big companies, and some of these startups, are racing to see who is the best. But you're also seeing more specialized LLMs. And there are other developments: can I bring real-time information in to be used by the model? Otherwise, you're going to end up getting outdated information for anything that's developing quickly, right? Models are going to be context-aware, going to be used by agents, going to be embedded in hardware, in robotics. So there's not going to be an end. And, as you say, there's going to be concern on the other side about trust and security and safety and all that. So this field is going to move incredibly fast. I'm all for it. It's super exciting. It's never going to slow down. I'm having a hard time keeping up myself.
There are two opposing camps. One has seen the DeepSeek news and says, all right, we've already built too much AI capacity. But then you have Jensen Huang, of course, Nvidia founder and CEO, on the other side saying we need all of this computing power for inference and training models. Which camp do you fall in?
I think both can be true, because there's going to be a shift in demand from one area to another. Inference can probably be done with lesser hardware, so to speak; you don't need those ginormous servers all the time for everything, unless you're OpenAI trying to serve everybody who's trying to create new images. There's a lot of interest in using more local hardware, even your phone or your desktop, to do inference, to do refinement, that sort of thing. Training, the big LLM foundation models, can only be done at data centers specialized for them, and that race is still on. Will there come a day when everybody shifts direction to a different model, when, as with DeepSeek, we figure out you can train with lesser hardware requirements but smarter software? That's always going to happen. But then you're going to see this co-design of software and hardware: software gets smarter, but then everybody pushes the envelope somewhere and hardware demand increases. And there's also going to be specialized hardware. This is not going to end anytime soon.