ALBAWABA - The European Union (EU) has made significant progress in drafting the EU AI Act, a regulatory framework aimed at addressing the risks associated with artificial intelligence (AI) and safeguarding human interests.
The Act, agreed in draft form by lawmakers, will now undergo negotiations with the Council of the European Union and EU member states before becoming law.
While prominent figures in the AI industry, including Microsoft President Brad Smith and OpenAI CEO Sam Altman, have called for greater regulation, the EU has taken the initiative to propose a concrete response to the emerging risks posed by AI. The Act aims to promote the adoption of human-centric and trustworthy AI while ensuring the protection of health, safety, fundamental rights, democracy, the rule of law, and the environment from harmful effects.
Key takeaways from the EU AI Act include the classification of AI systems into prohibited, high-risk, and low-risk categories. Systems in the prohibited category, such as real-time facial recognition in public spaces, predictive policing tools, and social scoring systems, will be banned outright because of their potential adverse impacts. High-risk applications, including AI used to influence elections and systems deployed by social media platforms with more than 45 million users, will face tight restrictions. The Act also sets out transparency requirements for AI systems, such as disclosing when content is AI-generated, distinguishing deepfake images from real ones, and implementing safeguards against the generation of illegal content. Providers will also need to publish detailed summaries of any copyrighted data used to train their systems.
Non-compliance with the regulations can result in significant penalties. Prohibited AI practices may draw fines of up to €40 million ($43 million) or up to 7% of a company's worldwide annual turnover, whichever is higher. These figures exceed the fines imposed under the General Data Protection Regulation (GDPR), signaling how seriously legislators are treating AI regulation.
Despite the strict penalties, the Act also protects innovation: it takes into account the market position of small-scale providers and allows for regulatory "sandboxes" in which AI systems can be tested before deployment. It grants citizens the right to file complaints against AI system providers and establishes an EU AI Office to monitor enforcement, while member states must designate their own national supervisory authorities for AI.
Technology giants such as Microsoft and IBM have welcomed progress on the EU AI Act while suggesting areas for further refinement. Microsoft emphasizes the need for legislative guardrails, international alignment efforts, and voluntary actions by AI developers and deployers. IBM calls for a risk-based approach and clarity around high-risk AI to ensure that only genuinely high-risk use cases are addressed.
Although the Act may not come into force until 2026, revisions are expected to keep pace with the rapid advancements in AI technology. The legislation has already undergone several updates since its drafting began in 2021, highlighting the intention to adapt the law to the evolving AI landscape.