Concerns around the commercialization of AI models have risen amid headlines of Sam Altman's firing as OpenAI CEO and his potential hiring by Microsoft (MSFT). NYU Professor of Media and Technology W. Russell Neuman outlines the problem areas surrounding OpenAI's push to commercialize its large language model, as well as certain safety concerns.
"And the third thing is the notion of commercialization of for-profit, not-for-profit. It seems to me we wouldn't have seen what's going on if there wasn't the investment from Microsoft to make it possible," Neuman explains to Yahoo Finance.
This post was written by Luke Carberry Mogan.
Video Transcript
JULIE HYMAN: Amid all this kerfuffle, there was some reporting of concern that the commercialization of AI at OpenAI was proceeding too quickly and maybe raising some safety concerns.
What do you think is going on here, Russ?
Do you think the industry is being careful enough?
W. RUSSELL NEUMAN: Well, I think we can speculate there were four issues that could have been intertwined in the boardroom dramatics out on 18th Street in San Francisco.
The first is the issue of safety and whether pausing would make any sense.
But my view is that if you stop or slow down, that's not going to make any difference.
You've got to put more energy into finding AI systems that can do the monitoring because humans can't keep up with these large-scale systems.
The second thing is that OpenAI was supposed to be open source, and it's not clear whether it makes any sense to have an open-source program when you have 1.7 trillion different parameters.
Even if you open that up, nobody can make sense of it.
So the concept of open source in this day and age makes a lot less sense.
And the third thing is the notion of commercialization of for-profit/not-for-profit.
It seems to me we wouldn't have seen what's going on if there wasn't the investment from Microsoft to make it possible.
The fourth possible issue that would generate the boardroom drama is simply the personalities.
So my take is that all four of those potential issues are, in a sense, non-issues, because if you build a company based on selling access to your foundational AI model and it's not safe, that's not economical either.
So there's a lot of economic incentive to keep things safe.
- And, Russ, what is the biggest near-term risk when it comes to AI that you see?