Innovation, Technology & Law

Blog on Artificial Intelligence, Quantum, Deep Learning, Blockchain and Big Data Law

Blog on the legal, social, ethical and policy aspects of Artificial Intelligence, Quantum Computing, Sensing & Communication, Augmented Reality and Robotics, Big Data legislation and Machine Learning regulation. Knowledge articles on the EU AI Act, the Data Governance Act, cloud computing, algorithms, privacy, virtual reality, blockchain, robot law, smart contracts, information law, ICT contracts, online platforms, apps and tools. European rules, copyright, chip rights, database rights and legal services in AI law.

EU Artificial Intelligence Act: The European Approach to AI

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021

New Stanford tech policy research: “EU Artificial Intelligence Act: The European Approach to AI”.

By Mauritz Kop

Download the article here: EU AI Act: The European Approach to AI

Citation: Kop, Mauritz, EU Artificial Intelligence Act: The European Approach to AI (September 21, 2021). Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021. https://law.stanford.edu/publications/eu-artificial-intelligence-act-the-european-approach-to-ai/

Please find a short abstract below:

EU regulatory framework for AI

On 21 April 2021, the European Commission presented its proposal for the Artificial Intelligence Act. This contribution outlines the main points of this novel regulatory framework for AI.

The EU AI Act sets out horizontal rules for the development, commodification and use of AI-driven products, services and systems within the territory of the EU. The draft regulation provides core artificial intelligence rules that apply to all industries. The EU AI Act introduces a sophisticated ‘product safety framework’ built around four risk categories. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to machine learning training, testing and validation datasets. The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust, while respecting democratic values, human rights and the rule of law.


Pyramid of criticality: stricter regulations as risk increases

The Artificial Intelligence Act draft combines a risk-based approach based on the pyramid of criticality, with a modern, layered enforcement mechanism. This means, among other things, that a lighter legal regime applies to AI applications with a negligible risk, and that applications with an unacceptable risk are banned. Between these extremes of the spectrum, stricter regulations apply as risk increases. These range from non-binding self-regulatory soft law impact assessments accompanied by codes of conduct, to heavy, externally audited compliance requirements throughout the life cycle of the application.

The Pyramid of Criticality for AI Systems.
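For illustration only, the pyramid can be read as a simple lookup from risk tier to regulatory regime. The Python sketch below is a hypothetical paraphrase condensed from the draft; the tier names, example systems and one-line obligations are my shorthand, not the Act's legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories of the draft AI Act's pyramid of criticality."""
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring, manipulative systems
    HIGH = "high"                  # e.g. AI in medical devices, hiring, credit
    LIMITED = "limited"            # e.g. chatbots, deepfakes
    MINIMAL = "minimal"            # e.g. spam filters, AI in video games

# Illustrative, condensed mapping from risk tier to regulatory regime.
REGIME = {
    RiskTier.UNACCEPTABLE: "Prohibited: banned from the EU market outright.",
    RiskTier.HIGH: "Permitted only after ex ante conformity assessment, "
                   "CE marking and life-cycle compliance obligations.",
    RiskTier.LIMITED: "Permitted subject to transparency duties "
                      "(users must know they are interacting with AI).",
    RiskTier.MINIMAL: "Permitted; voluntary codes of conduct and soft law.",
}

for tier in RiskTier:
    print(f"{tier.value:>12}: {REGIME[tier]}")
```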

Enforcement at both Union and Member State level

The draft regulation provides for the establishment of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisors, similar to the GDPR’s oversight mechanism. Fines for violations can run up to 30 million euros or, for companies, up to 6% of total worldwide annual turnover, whichever is higher.
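To make that ceiling concrete, here is a minimal sketch of the fine calculation, assuming the draft's "whichever is higher" rule for the most serious infringements; the function name and example figures are mine:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound on administrative fines for the most serious violations
    under the draft AI Act: EUR 30 million or 6% of total worldwide annual
    turnover, whichever is higher."""
    FLAT_CAP_EUR = 30_000_000
    TURNOVER_SHARE = 0.06
    return max(FLAT_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

# A firm with EUR 100 million in turnover faces the flat EUR 30M ceiling,
# while one with EUR 2 billion faces 6% x 2bn = EUR 120 million.
print(f"{max_fine_eur(100_000_000):,.0f}")    # 30,000,000
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 120,000,000
```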


CE-marking for High-Risk AI Systems

In line with my recommendations, Article 49 of the Artificial Intelligence Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires AI-infused products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive a CE marking that allows them to enter the European markets. This pre-market conformity and legal compliance mechanism works in the same way as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).


Sector-specific regulations

On top of the new AI rules, AI-infused systems, products and services must also comply with sector-specific regulations such as the Machinery Directive and the Regulations for medical devices (MDR) and in vitro diagnostics (IVDR). Furthermore, besides the General Data Protection Regulation (GDPR) for personal data, the Free Flow of Non-Personal Data (FFD) Regulation for non-personal data, and both regimes for mixed datasets, the upcoming Data Act will apply. In addition, audits of products and services equipped with AI must fit into the existing quality management systems of industries and economic sectors such as logistics, energy and healthcare.

Innovation-friendly flexibilities: Legal Sandboxes

The draft aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. This is ensured by introducing various flexibilities and exceptions, including the application of legal sandboxes that afford breathing room to AI developers such as research institutions and SMEs. The concept thus seeks to balance divergent interests, including democratic, economic and social values, which means that trade-offs have to be made. Further, an IP Action Plan has been drawn up to modernize technology-related intellectual property laws.

EU Artificial Intelligence Act: The European Approach to AI.

Setting global standards for AI

It takes courage and creativity to legislate through this stormy, interdisciplinary matter, forcing US and Chinese companies to conform to values-based EU standards before their AI products and services can access the European market with its 450 million consumers. Consequently, the proposal has extraterritorial effect.

By drafting the Artificial Intelligence Act and embedding humanist norms and values into the architecture and infrastructure of our technology, the EU provides direction and leads the world towards a meaningful destination, as the Commission did before with the GDPR, which has since become the international blueprint for privacy, data protection and data sovereignty.

While enforcing the proposed rules will be a whole new adventure, the novel legal-ethical framework for AI enriches the way of thinking about regulating the Fourth Industrial Revolution (4IR).


Trustworthy AI by Design: ex ante and life-cycle auditing

Responsible, trustworthy AI requires awareness from all parties involved, from the first line of code. The way we design our technology is shaping the future of our society. In this vision, democratic values and fundamental rights play a key role. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct. These tools are executed by inclusive, multidisciplinary teams that use them to monitor, validate and benchmark AI systems. It will all come down to ex ante and life-cycle auditing.

The new European rules will forever change the way AI is developed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.