AI needs 'centralized' and 'universal' regulation: Appian CEO

Big tech's rise to stock market leadership over the last year has largely been driven by AI. However, crackdowns and regulations are on the way, with states like Utah and California at the front of the push to regulate the technology. Appian (APPN) CEO Matt Calkins joins Asking for a Trend to discuss the future of AI in the US.

"The regulation is piecemeal. And we would do a lot better in this industry if we were able to centralize that and create a universal expectation for what's fair and what customers can expect from AI," Calkins says. He explains that while regulation is going in the right direction, the federal government should take a broader approach:

"AI regulation today is based on fear. It's based on the things that the public is afraid of and the governments therefore feel motivated to make a rule against. And that's healthy, but sometimes the public imagination doesn't stretch far enough. We've got regulations today from Europe to the United States about what AI can do, the things that we're worried about AI will do in its actions. And that's different from the injustice that can occur merely by creating AI, the injustice that can occur from taking data from people who didn't mean to give their data."

00:00 Julie Hyman

Big Tech's rise and stock market power is largely driven by AI, with almost every company trying to jump on the AI bandwagon. Crackdowns and regulation are coming with it. States like Utah and California are leading the push to regulate the new tech. Joining us to talk more about it is Matt Calkins, CEO of Appian. Matt, it's good to see you again. So, we are seeing these efforts, and there are some efforts underway in Europe as well to regulate AI. And it's sort of a piecemeal approach, I guess. You know, it's happening in states, it's happening in Europe. There's talk about it on the federal level, but nothing happening yet. For someone who sits in the industry, how do you think about which approach from your perspective might be the best?

01:05 Matt Calkins

You're right, Julie. It's good to be back and the regulation is piecemeal, and we would do a lot better in this industry if we were able to centralize that and create a universal expectation for what's fair and what customers can expect from AI. I appreciate the progress made by Europe and Colorado and California and some states, and there's been talk about it in the federal government. But what we haven't done is come out with something that the industry can count on. I think that's the next step.

01:53 Julie Hyman

I mean, it's interesting that some of the individual states, Utah, for example, say that a company can't blame something that goes wrong on AI without taking responsibility itself. It seems like some of this stuff is going in the right direction. Does the federal government need to take a look at what some of the states are doing and adopt it on a federal basis?

02:46 Matt Calkins

I agree with you that that regulation is going in the right direction. I hope that the federal government does look at it, and I also think they should look a little bit more broadly. AI regulation today is based on fear. It's based on the things that the public is afraid of, and the governments therefore feel motivated to make a rule against. And that's healthy, but sometimes the public imagination doesn't stretch far enough. We've got regulations today from Europe to the United States about what AI can do, the things that we're worried AI will do in its actions. And that's different from the injustice that can occur merely by creating AI, the injustice that can occur from taking data from people who didn't mean to give their data. And so there are really two primary areas, in my opinion, where AI can do damage: first, in the actions it takes which it should not have taken, which cause injury to someone; and second, in the way that it's composed and trained, where it could be stepping on the property rights of the creators of that information.

04:15 Julie Hyman

Well, and that latter point seems to be, we seem to be seeing that addressed not necessarily through state or federal regulation, but almost on a company-by-company basis, right? Some of the lawsuits that we have seen, for example. It doesn't feel like we are any closer to a federal remedy for that sort of situation.

05:04 Matt Calkins

Oh, you're right on. This is taking place in the courts because the legislation is not present. And so we're letting the courts sift this out, and I don't think that's the right way to go. Courts can only handle it in a piecemeal manner, and they're not experts, for that matter. The courts are not properly lobbied by all the interested parties, so they don't come out with the best idea for a new trend. They're just the default if we don't get our act together and produce real legislation. So I would not prefer that the courts be the place where this area of law is decided, but today that's where it's happening.

05:58 Julie Hyman

And so I guess, what do you think will happen next? It seems like it's not happening, right? I mean, we're in the midst of a presidential campaign year, so there's not usually much significant legislative movement at a time like this. But are you optimistic? Is there any movement on that level?

06:39 Matt Calkins

In the short term, I'm not optimistic. I think there are a few pressures engaging our legislators right now. There's the fear of missing their chance. They regret the fact that they let social media get by them, and now there's a view in Washington that had it been regulated, it would have done less damage or been more under control. There's also the fact that they're not experts in AI. They don't understand this technology, and they're worried about making the wrong move if they try. Add to that the fact that Big Tech has more of a voice in Washington than technology companies have ever had, and with their lobbyists, they've really shaped the agenda. If you look at the statements that have come out of the White House's proclamation on AI or Schumer's committee on AI, they say almost nothing about the infringement of property rights. And the reason is that Big Tech has been successfully setting the agenda, and they don't want to talk about property rights because they're the ones doing the infringing.

Thus, Calkins sees two primary areas where AI has the potential to do damage. The first is in actions AI should not have taken that end up causing injury. The second concerns the way AI is composed and trained, where it could potentially step on property rights. He notes that, at present, these issues are being taken up by the courts because there is no clear legislation, and he adds that many politicians have dragged their feet on the issue because "they don't understand this technology and they're worried about making the wrong move."

For more expert insight and the latest market action, click here to watch this full episode of Asking for a Trend.

This post was written by Melanie Riehl