European lawmakers are set to resume discussions this week on drafting and passing the world's first law regulating artificial intelligence. Yahoo Finance Tech Editor Dan Howley highlights the provisions that have been suggested, including consumer concerns and a desire for AI source code transparency.
This post was written by Luke Carberry Mogan.
Video Transcript
DIANE KING HALL: Artificial intelligence has taken the limelight this year, but not everyone is buying into the hype.
The EU is taking charge on AI regulation this week.
After a nearly 24-hour negotiating session, regulators will continue drafting the most comprehensive AI regulation on Friday.
For more on what we can expect from the act, Yahoo Finance's Dan Howley is here with us to break it all down.
Dan, it seems like these were pretty tense negotiations.
So what are we looking for come Friday?
DAN HOWLEY: This was, like you said, a marathon negotiating session on Wednesday with the EU over these AI regulations.
Going into Friday, you can expect them to continue to try to hash out some of the disagreements that they currently have.
This was something that was put together in 2021, before OpenAI had released ChatGPT, before Microsoft was on board with its own Copilot, and prior to Google's release of its Bard software. I mean, before the big GenAI explosion.
The fact that the technology has changed so quickly and so recently, though, is throwing a wrench into the work.
So just to break it down, the EU's original proposals have AI broken down into a few categories.
The first is unacceptable risk, covering AI uses that countries would not be able to deploy at all. Those include things like social scoring, where you would classify people based on their behavior, socioeconomic status, or personal characteristics; real-time biometric identification systems; and behavioral manipulation.
So one of the examples the EU gives is voice-activated toys that would encourage dangerous behavior in children.
So think of, I guess, a nefarious Tickle Me Elmo or something like that.
High-risk systems would be those involving biometric identification, education and training, things along those lines. Those would have to be monitored by the EU and then continually evaluated.
Generative AI is on this list of items, but those platforms would basically just have to disclose to EU regulators what's going on with them.
And then there are limited-risk AI systems, which would have to be assessed and then allowed to function.