EU AI Act: First Compliance Deadline Reached
This blog post was automatically generated (and translated). It is based on the following original, which I selected for publication on this blog:
AI systems with ‘unacceptable risk’ are now banned in the EU | TechCrunch.
As of February 2, 2025, the European Union's AI Act has reached its first compliance deadline, marking a significant step in regulating artificial intelligence within the bloc. This comprehensive framework aims to govern the use of AI systems based on their potential risk to individuals and society.
Risk-Based Approach
The AI Act categorizes AI systems into four risk levels, each subject to varying degrees of regulatory oversight:
- Minimal Risk: Systems like email spam filters face no regulatory oversight.
- Limited Risk: Customer service chatbots have light-touch regulatory oversight.
- High Risk: AI for healthcare recommendations, for instance, is subject to heavy regulatory oversight.
- Unacceptable Risk: Applications deemed to pose unacceptable risks are prohibited outright. This first compliance deadline focuses on these prohibited systems.
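The four tiers above can be sketched as a simple mapping. This is purely illustrative: the tier names and example systems mirror the list in this post, not terminology defined in the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative labels for the AI Act's four risk levels."""
    MINIMAL = "no regulatory oversight"
    LIMITED = "light-touch oversight"
    HIGH = "heavy oversight"
    UNACCEPTABLE = "prohibited outright"

# Hypothetical example systems mapped to tiers, mirroring the list above
EXAMPLES = {
    "email spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "healthcare recommendation system": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

print(EXAMPLES["social scoring system"].value)  # prohibited outright
```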
Prohibited AI Activities
The AI Act explicitly bans several AI applications considered to pose unacceptable risks, including:
- AI used for social scoring (building risk profiles based on behavior).
- AI that manipulates decisions subliminally or deceptively.
- AI that exploits vulnerabilities like age, disability, or socioeconomic status.
- AI that attempts to predict crime based on appearance.
- AI that uses biometrics to infer characteristics like sexual orientation.
- AI that collects real-time biometric data in public for law enforcement (with exceptions).
- AI that tries to infer emotions at work or school.
- AI that creates facial recognition databases by scraping images online.
Companies violating these prohibitions face substantial fines, up to €35 million or 7% of annual revenue, whichever is greater.
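The "whichever is greater" penalty rule is a simple maximum. A minimal sketch, assuming annual revenue is expressed in euros (the function name and the worked example are mine, not from the Act):

```python
def max_fine_eur(annual_revenue_eur: float) -> float:
    """Maximum penalty for a prohibited-use violation:
    €35 million or 7% of annual revenue, whichever is greater."""
    return max(35_000_000, 0.07 * annual_revenue_eur)

# For a hypothetical company with €1 billion in revenue,
# 7% (€70M) exceeds the €35M floor:
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies, the €35 million floor dominates: 7% of €100 million in revenue is only €7 million, so the maximum penalty would still be €35 million.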
Enforcement and Compliance
While the February 2nd deadline is largely a formality, the next key date is in August, when competent authorities will be identified and enforcement provisions take effect. Organizations are expected to be fully compliant by then.
Notably, over 100 companies, including Amazon, Google, and OpenAI, signed the EU AI Pact, voluntarily pledging to apply AI Act principles ahead of the deadline. However, some tech giants like Meta and Apple, as well as French AI startup Mistral, did not sign the Pact.
Exceptions and Further Guidelines
The AI Act includes exceptions for certain prohibited uses. For instance, law enforcement can use biometric systems for targeted searches in specific, urgent situations with appropriate authorization. Similarly, systems inferring emotions in workplaces and schools may be allowed with medical or safety justifications.
The European Commission plans to release additional guidelines in early 2025 to clarify the AI Act's provisions, following stakeholder consultation.
Overlapping Regulations
The AI Act does not exist in isolation: it interacts with existing legal frameworks like GDPR, NIS2, and DORA, potentially creating challenges due to overlapping incident notification requirements. Understanding how these laws fit together is as important as understanding the AI Act itself.
Looking Ahead
The EU AI Act represents a pioneering effort to regulate AI and its impact on society. A key open question is how the Act will affect innovation in the field. As the enforcement window approaches, its impact on the global AI landscape will be closely watched.