Why companies should care about AI regulation: Expert

Navrina Singh, White House AI advisor and founder and CEO of Credo AI, an artificial intelligence (AI) governance software company, joins Josh Lipton on Asking for a Trend to discuss AI regulation.

Singh tells Yahoo Finance that as AI tech evolves, more companies consider it a risk. “Just recently, 56% of the Fortune 500 companies have identified AI as a risk factor on their most recent annual reports, and this is up by almost 50% compared to 2022.”

“The reason that is happening is AI is becoming pervasive, whether it is in productivity tools for coding, in fraud models, or in massively used systems which could be promoting misinformation.”

When there are such powerful systems, "it becomes really important for enterprises, but also policymakers, to think about how to put guardrails, and that's where this very interesting public-private sector interplay is coming,” Singh explains.

California Governor Gavin Newsom has signed three bills into law that target the misuse of AI-created content in an effort to combat election misinformation. Singh says the debate around Newsom’s AI policies “is bringing a lot of focus on what a good governance framework looks like.”

As companies “are building very powerful foundation models that potentially could impact massive misinformation concerns [and] cyber attacks, we need to start thinking about how do you put guardrails” on the tech, Singh says.

00:00 Speaker A

Companies have been racing to deploy generative AI into their work since the launch of ChatGPT in 2022. According to Microsoft and LinkedIn's 2024 work trends report, almost four in five business leaders believe their company needs to adopt the technology to stay competitive, and three out of four people already use it in their jobs. But adopting AI in the workplace also presents real risks, with many companies still trying to understand how to identify and measure them. Joining us now is Navrina Singh, Credo AI founder and CEO, and White House AI advisor. Navrina, it's good to see you. Maybe to start, Navrina, walk us through for our audience who may not be familiar a little bit about Credo AI, your mission there, and also maybe your White House AI advisor role. Walk us through the responsibilities there.

02:04 Navrina Singh

Well, Josh, so good to see you and thank you so much for having me. So Credo AI is the leading provider of AI governance software, which basically means we help you not only identify AI risks, but manage them at scale, and also make sure they are compliant with emerging regulations. Our software currently provides oversight of artificial intelligence among Global 2000 customers, including Mastercard, Booz Allen, Northrop Grumman, and many more. In addition, we partner with technology providers like Databricks and service providers like McKinsey. So what we essentially are providing is a tool, a software that can provide a standardized way for organizations to really manage their risk across their AI systems so that they can adopt artificial intelligence with confidence. And currently I have the opportunity to serve on the National AI Advisory Committee, where my role is primarily to guide the government to think through AI policies and bring AI expertise to policymakers.

04:01 Speaker A

And so on that, on the role of sort of AI governance, Navrina, hot topic. I'm just curious how you think about the challenges of AI governance: when, you know, regulators should step in, whether they should step in and try to regulate what is a powerful and, some obviously believe, paradigm-shifting technology.

04:51 Navrina Singh

Josh, you are so right. This is one of the most transformational technologies we are living through and creating. Just recently, 56% of the Fortune 500 companies have identified AI as a risk factor on their most recent annual reports, and this is up, you know, by almost 50% compared to 2022. And the reason that is happening is AI is becoming pervasive, whether it is in productivity tools for coding, or whether it is being used in fraud models or in massively used, you know, systems which could be promoting misinformation. So when you have systems which are so powerful, it becomes really important for enterprises, but also policymakers, to think about how to put guardrails. And that's where this very interesting public-private sector interplay is coming, where AI experts like us work together with policymakers to make sure that we are thinking about the right frameworks to put oversight mechanisms in place.

06:34 Speaker A

Well, let's talk about a real-world example of AI governance there in California, Navrina. Governor Newsom was unsure about, it sounds like, signing that AI law, worried about the impact on AI innovation, but did, though, sign the anti-deepfake law. Walk me through your reaction, your response to what's going on there.

07:22 Navrina Singh

So one of the exciting things right now, and this is really what we are seeing across the board: regulators and legislators are taking a really strong stance in understanding this AI technology. And so what's happening in California right now is Governor Newsom is looking holistically across the use of artificial intelligence in different spaces. As an example, the use of AI in political ads that could potentially promote deception for voters, which, as you can imagine, is a really critical issue at this time, or the use of artificial intelligence likeness, especially for musicians and entertainers. So there's been a recent set of bills that have been passed in California, but one that has been debated extensively is something called SB 1047. And the reason it's been heavily debated is it truly is bringing a lot of focus on what a good governance framework looks like. And we can dive deeper into it, but at the highest level: for enterprises that are building very powerful foundation models that potentially could impact massive misinformation concerns, cyber attacks, we need to start thinking about how do you put guardrails, and SB 1047 is an attempt to do that. And I think that's where we are seeing both sides of some really interesting debate happening as to whether it should be passed or not.

09:32 Speaker A

You know, Navrina, sticking with politics: the election, Trump, Harris, it's a toss-up right now. Somebody's going to win. What does that outcome potentially mean in terms of impacting and shaping future AI regulation?

10:03 Navrina Singh

Josh, AI is going to be one of the most critical points moving forward, irrespective of who comes to power. This is going to be a core agenda for all the legislatures across, by the way, the globe. So what this means is, whichever government comes into power, it is going to be important for us to think about putting in place the right regulatory frameworks as well as governance structures, not only to hold Big Tech accountable, but also to make sure that we are looking at, you know, responsible use of this technology. And this is where Credo AI really comes in. We are making sure that standards like the NIST Risk Management Framework and ISO, and regulations like the EU AI Act, are easily available, not just to, you know, high-tech enterprises, but to startups and SMBs, so that they can continue to bring innovative AI technology to bear while making sure responsible use happens for humanity.

11:48 Speaker A

Final question. Navrina, you know we have a lot of investors listening to this right now. Why should they be paying attention to this topic of AI governance, Navrina?

12:12 Navrina Singh

You know, AI governance truly is the need of the hour. If you as an enterprise or as an investor want to get the real ROI from your artificial intelligence investments, governance is the way to do so. We are seeing, day in, day out, companies and enterprises using our software. They are able to build trust, as a result of which they are able to not only acquire new customers faster, they are able to retain customers, and they're able to, most importantly, bring innovative AI technology faster to their organizations, so that they can continue not only productivity gains, but also spin out many innovative technologies for their consumers. So truly, AI governance is unlocking the potential of AI.

Heading into the presidential election, Singh believes “AI is going to be one of the most critical points moving forward, irrespective of who comes to power. This is going to be a core agenda for all the legislatures across the globe.”

Singh says AI governance should matter to investors because a safe and responsible framework is the way to see AI investments pay off. “We are seeing day in, day out companies and enterprises using [Credo’s] software. They are able to build trust, [and] as a result of which they are able to not only acquire new customers faster, they are able to retain customers.”

For more expert insight and the latest market action, click here to watch this full episode of Asking for a Trend.

This post was written by Naomi Buchanan.