AI systems with 'unacceptable risk' are now banned in the EU

As of Sunday, regulators in the European Union can ban the use of AI systems they deem to pose an "unacceptable risk" of harm.

February 2 is the first compliance deadline for the EU's AI Act, the comprehensive AI regulatory framework that the European Parliament approved last March after years of development. The act officially entered into force on August 1; this is the first of its compliance deadlines to arrive.

The specifics are set out in Article 5, but broadly, the act is designed to cover myriad use cases where AI might appear and interact with individuals, from consumer applications to physical environments.

Under the bloc’s approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have a light-touch regulatory oversight; (3) high risk — AI for healthcare recommendations is one example — will face heavy regulatory oversight; and (4) unacceptable risk applications — the focus of this month's compliance requirements — will be prohibited entirely.

Some of the unacceptable activities include:

  • AI used for social scoring (e.g., building risk profiles based on a person's behavior).

  • AI that manipulates a person's decisions subliminally or deceptively.

  • AI that exploits vulnerabilities like age, disability, or socioeconomic status.

  • AI that attempts to predict people committing crimes based on their appearance.

  • AI that uses biometrics to infer a person's characteristics, like their sexual orientation.

  • AI that collects "real time" biometric data in public places for the purposes of law enforcement.

  • AI that tries to infer people’s emotions at work or school.

  • AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.

Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (~$36 million), or 7% of their annual revenue from the prior fiscal year, whichever is greater.
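As a quick illustration of how that penalty cap works (a sketch only; the function name is hypothetical and the figures simply mirror the thresholds described above, not official enforcement guidance):

```python
def max_fine_eur(annual_revenue_eur: float) -> float:
    """Maximum fine for prohibited AI practices under the AI Act:
    EUR 35 million or 7% of prior-year annual revenue,
    whichever is greater."""
    FLAT_CAP_EUR = 35_000_000
    return max(FLAT_CAP_EUR, 0.07 * annual_revenue_eur)

# A company with EUR 1 billion in prior-year revenue:
# 7% (EUR 70M) exceeds the EUR 35M flat cap, so 7% applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0

# A smaller company with EUR 100 million in revenue:
# 7% (EUR 7M) is below the flat cap, so EUR 35M applies.
print(max_fine_eur(100_000_000))  # 35000000
```

In other words, the percentage-based cap only dominates once prior-year revenue exceeds €500 million.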

The fines won't kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch.

"Organizations are expected to be fully compliant by February 2, but … the next big deadline that companies need to be aware of is in August," Sumroy said. "By then, we’ll know who the competent authorities are, and the fines and enforcement provisions will take effect."