EU AI Act Sets Global Standard

In a groundbreaking move, European Union lawmakers have finalised the terms of a historic piece of legislation regulating artificial intelligence (AI). This landmark legislation, known as the AI Act, sets the stage for the world’s most restrictive regime governing the development and deployment of AI technologies. EU Commissioner Thierry Breton underlined the significance of the agreement, calling it a “historic moment” as the EU becomes the first continent to establish clear rules for the use of AI.

The journey toward the AI Act has been a lengthy one, marked by years of discussion among member states and politicians. The recent deal, confirmed by Commissioner Breton, follows marathon negotiations that unfolded over several days. The European Parliament, member states, and the European Commission all played pivotal roles in shaping the legislation, and each must agree on the final text before it becomes law.

The road to the AI Act was not without its challenges. European companies, particularly in industries where AI innovation is thriving, expressed concerns about the potential impact of overly restrictive rules. Companies such as Airbus and Siemens voiced apprehension that stringent regulations might impede innovation. These concerns intensified as AI gained traction globally, most notably with the popularisation of OpenAI’s ChatGPT.

An insightful article from The Technocrat, MIT Technology Review’s weekly tech policy newsletter, delves into three key issues that significantly influenced the development of the EU AI Act: disagreements over foundation models, how industry-friendly the rules should be, and the regulation of biometric data and AI in policing.

The debate over foundation models, the core of powerful general-purpose AI, took centre stage during negotiations. Initial versions of the legislation did not explicitly address these models, but their proliferation pushed lawmakers to integrate them into the risk framework. Disagreements arose over whether foundation models should be tightly regulated regardless of their risk category or usage. France, Germany, and Italy argued for exemptions, suggesting that these models should remain largely free from AI Act regulation. The debate reflects the delicate balance between regulating AI to protect citizens and fostering an environment conducive to industry growth.

Another significant bone of contention was the use of biometric data and AI in policing. The European Parliament advocated stricter restrictions to prevent mass surveillance and protect citizens’ privacy and rights, while countries such as France, which hosts the Olympics in 2024, pressed for the use of AI in fighting crime and terrorism. This divergence underscored the challenge of finding a middle ground between security concerns and individual freedoms.

A deadline of December 6 was initially set for finalising the AI Act, but negotiations extended beyond that date. The urgency stems from the need to reach consensus several months before the EU elections in June; otherwise, the legislation risks being delayed or shelved altogether. The implementation and enforcement aspects still require careful consideration and planning.

The recently agreed AI Act includes crucial provisions that will shape the future of AI in the European Union:

1. Banned Applications:

The legislation prohibits the use of biometric categorisation systems based on sensitive characteristics, untargeted scraping of facial images for recognition databases, emotion recognition in workplaces and educational institutions, social scoring, and AI systems that manipulate human behaviour.

2. Law Enforcement Exemptions:

Safeguards and narrow exceptions for biometric identification systems in law enforcement were agreed upon, subject to judicial authorisation and for specific, well-defined crimes. Predictive policing is restricted, requiring clear human assessment and objective facts.

3. Obligations for High-Risk Systems:

AI systems classified as high-risk are subject to clear obligations, including mandatory fundamental rights impact assessments. Citizens have the right to launch complaints and receive explanations about decisions made by high-risk AI systems that impact their rights.

4. Guardrails for General Artificial Intelligence Systems:

General-purpose AI systems and their models must adhere to transparency requirements, with additional obligations for high-impact models. Regulatory sandboxes and real-world testing are promoted to support innovation, particularly for SMEs.

5. Sanctions and Entry into Force:

Non-compliance with the AI Act can result in fines ranging from 1.5% to 7% of a company’s global turnover (illustrated in the sketch below), underscoring the importance of adherence. The agreed text is yet to be formally adopted by both Parliament and Council.
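To put that percentage range into perspective, here is a minimal, purely illustrative Python sketch. The 1.5% and 7% figures come from the range cited above; the company turnover is a hypothetical number, and the final text of the AI Act defines precise tiers, fixed minimum amounts, and caps per violation type that this sketch does not attempt to reproduce.

```python
# Illustrative only: fine ceilings implied by the 1.5%-7% range cited above,
# applied to a hypothetical global annual turnover. The AI Act's final text
# sets exact tiers and fixed amounts per violation type, not modelled here.

def fine_ceiling(global_turnover_eur: float, rate: float) -> float:
    """Return the fine ceiling for a given percentage of global turnover."""
    return global_turnover_eur * rate

turnover = 50_000_000_000  # hypothetical company with EUR 50 bn turnover
for label, rate in [("lower bound (1.5%)", 0.015), ("upper bound (7%)", 0.07)]:
    print(f"{label}: EUR {fine_ceiling(turnover, rate):,.0f}")
```

For a company of that hypothetical size, the ceiling spans roughly EUR 750 million to EUR 3.5 billion.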

As the AI Act progresses toward becoming law, key takeaways from MIT Technology Review shed light on its broader implications:

The AI Act imposes legally binding rules on transparency and ethics, compelling tech companies to notify users when they are interacting with an AI system and to label AI-generated content. This marks a significant step beyond voluntary commitments and sets a new standard for responsible AI practices.

While the AI Act introduces rules for foundation models and AI systems, it also allows flexibility based on the computing power needed to train them. This nuanced approach acknowledges the evolving nature of AI technology and aims to balance regulation with innovation.

The creation of the European AI Office establishes the world’s first body to enforce binding rules on AI. The fines for non-compliance underscore the EU’s commitment to ensuring adherence to these rules, potentially making them a global standard.

The AI Act places strict regulations on the use of AI in various domains but does not apply to AI systems developed exclusively for military and defence purposes. This underscores the delicate balance between security needs and civilian rights.

As the EU moves closer to formalising the AI Act, it stands on the cusp of setting a global standard for AI regulation. The legislation not only addresses immediate concerns but also lays the foundation for responsible AI development, fostering innovation while prioritising fundamental rights. The coming months will see the technical fine-tuning of the legislation and its formal adoption, marking a transformative moment in the regulation of artificial intelligence.

For all my daily news and tips on AI and emerging technologies, sign up for my FREE newsletter at www.robotpigeon.beehiiv.com