Inside the AI Act: Unpacking the Layers of European AI Regulation

Those who've been closely tracking the European Union's path to AI regulation are well aware of the long journey of negotiations. In April 2021, the Commission published the first draft of the Act, the EU's first proposed regulatory framework for AI. From there, negotiators and lawmakers embarked on an extensive round of discussions.

The process demonstrated the inherent challenges of crafting suitable rules for a technology that evolves rapidly and shifts in unforeseeable directions. Finally, on December 8th, 2023, EU lawmakers reached a political agreement on the AI Act, paving the way for subsequent discussion of the technical details needed for its implementation.

So, what key aspects does this regulation cover? Who is subject to its jurisdiction? And how does it outline the governance of AI systems?

Let's dive into the specifics!

European approach to AI regulation

What key aspects does this regulation cover?

The legal framework proposed by the Commission defines four risk levels, each subject to a different type of regulation: the higher the risk, the more stringent the rules.

Classification system based on the risk posed to the user

Minimal risk: A free pass

The majority of AI systems comfortably fit into the minimal risk category. Applications like AI-enabled recommender systems or spam filters pose negligible threats to citizens' rights or safety, so these systems enjoy a free pass with minimal obligations. Companies may, however, voluntarily commit to additional codes of conduct for these AI systems.

Limited risk or specific transparency risk: Navigating the controversy

This category includes generative AI systems, such as GPT and chatbots in general, deep fakes, and any system that generates or manipulates text, image, audio, or video content.

The landscape for this part of the regulation clearly evolved during the negotiations. Bear in mind that when the first AI Act proposal was drafted, ChatGPT was not yet in the picture, so fresh considerations had to be folded in as the discussions progressed. Negotiators ultimately agreed to impose transparency obligations on these models.

Users engaging with these AI systems must be made aware that they are interacting with a machine. Key transparency measures include labeling requirements for deep fakes and other AI-generated content, disclosure when biometric categorization or emotion recognition systems are in use, and designing systems so that synthetic content can be detected in a machine-readable way.
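
To illustrate what a machine-readable disclosure might look like in practice, here is a minimal sketch of a provider attaching a label to generated content. The schema, field names, and `label_synthetic_content` helper are all hypothetical; the Act mandates detectability, not any particular format.

```python
import json
from datetime import datetime, timezone

def label_synthetic_content(payload: bytes, model_name: str) -> dict:
    """Wrap AI-generated content with a hypothetical machine-readable
    disclosure label. The schema below is illustrative only; the AI Act
    requires detectability, not this specific format."""
    return {
        "content": payload.decode("utf-8"),
        "disclosure": {
            "ai_generated": True,          # explicit synthetic-content flag
            "generator": model_name,       # which system produced the content
            "labeled_at": datetime.now(timezone.utc).isoformat(),
        },
    }

# Example: a chatbot response carrying its own disclosure metadata.
labeled = label_synthetic_content(b"Here is your summary...", "example-llm-v1")
print(json.dumps(labeled, indent=2))
```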

High-risk: Rigorous Requirements for Critical AI Systems

AI systems identified as high-risk face stringent compliance requirements. These include the implementation of risk-mitigation systems, high-quality data sets, activity logging, detailed documentation, transparent user information, human oversight, and robust cybersecurity measures. Regulatory sandboxes are envisioned to facilitate responsible innovation and ensure the development of compliant high-risk AI systems.

Examples of high-risk AI systems include critical infrastructures in sectors such as water, gas, and electricity; medical devices; systems that determine access to educational institutions or recruitment outcomes; and certain applications in law enforcement, border control, the administration of justice, and democratic processes. In addition, biometric identification, categorization, and emotion recognition systems also fall within this high-risk ambit.

Unacceptable risk: Banning AI Systems Threatening Fundamental Rights

AI systems considered a clear threat to fundamental rights will face a categorical ban. This includes systems that manipulate human behavior to circumvent users' free will, such as voice-assisted toys encouraging dangerous behavior in minors or applications enabling 'social scoring' by governments or companies. Certain uses of biometric systems, including emotion recognition in the workplace and specific real-time remote biometric identification for law enforcement in publicly accessible spaces, especially systems that rely on sensitive characteristics, will be outright prohibited.

Intense debates during negotiations revolved around the regulation of facial recognition software by police and governments. The final proposal struck a delicate balance, restricting its use outside safety and national security exemptions, which include situations like terrorist attacks or targeted searches for victims of human trafficking. This nuanced approach reflects the European Union's commitment to harnessing the benefits of AI while safeguarding its citizens' rights and well-being.
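
With all four tiers on the table, a compact way to summarize the taxonomy is as a lookup from risk level to obligations. The sketch below is purely illustrative: the tier names and obligation strings paraphrase the categories described above, and nothing in the Act prescribes such a structure.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters, recommender systems
    LIMITED = "limited"            # e.g. chatbots, deep fakes: transparency duties
    HIGH = "high"                  # e.g. medical devices, critical infrastructure
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: banned outright

# Illustrative mapping from tier to the obligations sketched in this post.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
    RiskTier.LIMITED: ["disclose AI interaction", "label synthetic content"],
    RiskTier.HIGH: [
        "risk-mitigation system", "high-quality data sets", "activity logging",
        "detailed documentation", "human oversight", "cybersecurity measures",
    ],
    RiskTier.UNACCEPTABLE: ["prohibited"],
}

for tier in RiskTier:
    print(f"{tier.value}: {', '.join(OBLIGATIONS[tier])}")
```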

Who is subject to the jurisdiction of this law? 

In its pursuit of fostering transparency, comprehension, and verifiability in the development and deployment of AI systems, the European Union (EU) has enacted a comprehensive set of regulations. Designed to instill trust among the public, the EU's new rules are poised to exert a uniform influence across all Member States, directly impacting any tech company seeking to engage in business within the 27-nation bloc.

Under these regulations, every tech company operating within the EU will be obligated to adhere to the AI Act's provisions. This entails comprehensive measures, including the disclosure of data and rigorous testing, especially for applications categorized as "high-risk," such as those embedded in self-driving cars and medical equipment. The aim is to establish a standardized approach, ensuring that AI technologies meet stringent criteria and prioritize the safety and well-being of the community.

Exemptions

While transparency remains a central tenet of the legislation, certain exemptions carve out space for specific models. The regulations introduce restrictions for foundation models, yet they extend broad exemptions to open-source models. Because these models are developed using freely available code that developers can inspect and modify for their own products and tools, they already offer much of the transparency regulators seek, while maintaining a rapid development pace.

Notably, the legislation's exemptions prove to be a boon for open-source AI companies in Europe, many of which actively lobbied against certain aspects of the law. Companies such as Mistral in France, Aleph Alpha in Germany, and Meta, which released the open-source model LLaMA, stand to benefit.

The AI Act, in its current form, exempts free and open-source licenses from its purview unless, for instance, they are deemed high-risk or are being employed for purposes already prohibited by the legislation.

This strategic move not only safeguards the principles of transparency but also fuels innovation in the open-source AI sector. Despite criticisms, investors have rallied behind these models, recognizing their potential to align with regulatory goals while sustaining a swift development pace. Consequently, the legislation strikes a balance between stringent oversight and the imperative to encourage innovation, presenting a progressive stance in the ever-evolving landscape of AI governance.

How does it outline the governance of AI systems?

The first draft, presented in 2021, laid out a comprehensive two-sided plan to ensure compliance and put the system into practice. This included active participation from authorities, users, and suppliers, coupled with the implementation of a meticulous monitoring and categorization system.

Brief description of the monitoring and categorization system proposed by the first AI Act Draft 

The essence of the initial proposal was rooted in collaboration. Authorities, users, and suppliers were envisioned as key players in the governance structure, each contributing to the effective implementation of the AI Act. A specific monitoring system was proposed, aiming to provide a robust mechanism for overseeing the compliance of AI systems within the EU. However, the final text of the deal was not immediately available, and the news release did not specify what the criteria would be or whether this process was subject to change.

Fines

One of the pivotal aspects of the AI Act is its commitment to enforcing compliance through penalties. The final legislation makes it clear that companies failing to adhere to the stipulated rules will face fines. The severity of fines is tiered, with distinct caps for different types of violations (a short sketch after the list shows how these caps combine):

  • Banned AI Applications: Violations related to banned AI applications could result in fines of up to €35 million or 7% of the global annual turnover, whichever is higher.
  • Other Obligations: Companies found in violation of other obligations outlined in the AI Act may face fines of up to €15 million or 3% of their global annual turnover, whichever is higher.
  • Incorrect Information: Supplying incorrect information, a breach of transparency obligations, could lead to fines of up to €7.5 million or 1.5% of global annual turnover, whichever is higher.
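
To make the "whichever is higher" rule concrete, here is a minimal sketch of the cap arithmetic. The tier amounts come straight from the list above; the `FINE_TIERS` table and `max_fine` helper are hypothetical illustrations, not an official calculation method.

```python
# Fine caps per violation tier: (fixed amount in EUR, share of global turnover).
FINE_TIERS = {
    "banned_application":    (35_000_000, 0.07),
    "other_obligation":      (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the cap for a violation: the fixed amount or the
    turnover share, whichever is higher."""
    fixed, share = FINE_TIERS[violation]
    return max(fixed, share * annual_turnover_eur)

# Example: a company with EUR 2 billion global annual turnover.
# 7% of 2 billion (140 million) exceeds the 35 million floor.
print(f"{max_fine('banned_application', 2_000_000_000):,.0f}")  # 140,000,000
```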

Recognizing the diverse landscape of businesses, the AI Act incorporates proportional caps for administrative fines applicable to small and medium-sized enterprises (SMEs) and start-ups. This nuanced approach aims to balance the need for stringent enforcement with an understanding of the varying capacities of businesses within the AI ecosystem.

Final considerations 

Next steps: Transitioning timelines

As the European Union takes a significant leap towards regulating artificial intelligence, final considerations shed light on the next steps in the journey to enforce the AI Act. The recently reached political agreement, the culmination of negotiations and deliberations, now awaits formal approval by the European Parliament and the Council. Once approved, the AI Act is set to enter into force 20 days after its publication in the Official Journal.

The AI Act's applicability is slated to commence two years after its formal entry into force. However, specific provisions will see a more accelerated timeline. Prohibitions, for instance, are poised to take effect after a mere six months, while regulations pertaining to General Purpose AI are scheduled to apply after 12 months.
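
The staggered timeline lends itself to a quick date calculation. In the sketch below, the publication date is a placeholder (the actual Official Journal date was not yet known when this was written); only the offsets, 20 days to entry into force and then 6, 12, and 24 months, come from the agreement.

```python
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

# Placeholder publication date; the real Official Journal date is not yet known.
publication = date(2024, 1, 1)

# The Act enters into force 20 days after publication.
entry_into_force = publication + timedelta(days=20)

# Provisions apply at staggered offsets from entry into force.
milestones = {
    "prohibitions apply": entry_into_force + relativedelta(months=6),
    "general-purpose AI rules apply": entry_into_force + relativedelta(months=12),
    "full applicability": entry_into_force + relativedelta(months=24),
}

print(f"entry into force: {entry_into_force}")
for label, when in milestones.items():
    print(f"{label}: {when}")
```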

To navigate the transitional period leading up to the Act's broader application, the European Commission is poised to introduce an innovative solution—the AI Pact. This initiative aims to bring together AI developers from Europe and beyond, rallying them to voluntarily commit to implementing key obligations outlined in the AI Act ahead of the legally stipulated deadlines.

The AI Pact represents a collaborative effort to bridge the gap between political agreement and practical implementation. By convening AI developers on a voluntary basis, the Commission endeavors to instill a proactive approach to compliance, fostering a community dedicated to adhering to the core tenets of the AI Act. This proactive stance is crucial in setting the tone for responsible and accountable AI development and deployment.

The road ahead

With the political agreement secured, EU lawmakers now pivot towards the next phase—negotiating the technical intricacies that will bring the AI Act to life.

While the "why," "what," and "who" have been delineated, the focus now shifts to the critical details of the "how," "when," and "where".

Undoubtedly, this marks a crucial juncture in the evolution of AI governance within the EU. The legislative groundwork has been laid, but the fine-tuning of technical aspects and logistical considerations is where the real work lies. Negotiating and finalizing the granular details could span up to two years, a significant timeframe in the dynamic and rapidly evolving realm of AI.

In conclusion, the EU's strides in regulating AI underscore a commitment to responsible and transparent innovation. As lawmakers embark on the intricate process of detailing the operational aspects of the AI Act, the global AI community eagerly awaits the unveiling of a framework that balances progress with ethical considerations. The journey has set its course, and the forthcoming technical legislation will play a pivotal role in shaping the landscape of AI governance in the EU for years to come.

---

As it remains to be seen how the law will be implemented in practice, how do you envision this regulatory framework shaping the future of AI governance in the EU and beyond?

Let us know your thoughts! 
