Innovation, Quantum-AI Technology & Law

Blog on Artificial Intelligence, Quantum, Deep Learning, Blockchain and Big Data Law

Blog on the legal, social, ethical and policy aspects of Artificial Intelligence, Quantum Computing, Sensing & Communication, Augmented Reality and Robotics, Big Data legislation and Machine Learning regulation. Knowledge articles on the EU AI Act, the Data Governance Act, cloud computing, algorithms, privacy, virtual reality, blockchain, robot law, smart contracts, information law, ICT contracts, online platforms, apps and tools. European rules, copyright, chip rights, database rights and legal services in AI law.

Montreal World Summit AI 2022 Features Mauritz Kop Keynote on EU AI Act

Montreal, Canada – May 4, 2022 – Today, at the prestigious World Summit AI Americas held at the Palais des congrès, Mauritz Kop, TTLF Fellow at Stanford Law School and Director of AIRecht, provided a concise overview of the proposed EU Artificial Intelligence Act. He was a featured panellist in a critical discussion titled "Does the proposed EU Artificial Intelligence Act provide a regulatory framework for AI that should be adopted globally?". The summit, themed "AI with impact: for crisis response and business continuity and recovery," brought together leading AI researchers and enterprise leaders.

Mr. Kop joined fellow distinguished panellists Professor Gillian Hadfield from the University of Toronto and Dr. José-Marie Griffiths, President of Dakota State University and former NSCAI Commissioner. The session was moderated by Meredith Broadbent, Former Chairman of the U.S. International Trade Commission and Senior Adviser at CSIS.

Novel Legal Framework for AI

During the panel, Mr. Kop outlined the main points of the novel legal framework for AI presented by the European Commission on April 21, 2021. He explained that the EU AI Act sets out horizontal rules applicable to all industries for the development, commodification, and use of AI-driven products, services, and systems within the EU's territory.

A core component of the Act is its sophisticated ‘product safety framework’, which is constructed around four distinct risk categories in a "pyramid of criticality". This risk-based approach dictates that AI applications with unacceptable risks are banned, while lighter legal regimes apply to low-risk applications. As the risk level increases, so does the stringency of the rules, ranging from non-binding self-regulation and impact assessments for lower-risk systems to potentially heavy, externally audited compliance requirements throughout the lifecycle of high-risk AI systems.
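The tiered logic described above can be sketched in code. This is an illustrative simplification only: the four tier names follow the proposal's "pyramid of criticality", but the example systems and the one-line summaries of the obligations are paraphrases for exposition, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories of the proposed EU AI Act's 'pyramid of criticality'."""
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring by public authorities
    HIGH = "high"                  # e.g. CV-screening or credit-scoring systems
    LIMITED = "limited"            # e.g. chatbots interacting with natural persons
    MINIMAL = "minimal"            # e.g. spam filters, AI in video games

# Paraphrased summaries of the regime attached to each tier (not statutory text).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "banned from the EU market",
    RiskTier.HIGH: "externally audited conformity assessment, CE marking, "
                   "and compliance duties throughout the system lifecycle",
    RiskTier.LIMITED: "transparency duties (disclose that users interact with AI)",
    RiskTier.MINIMAL: "non-binding self-regulation and voluntary codes of conduct",
}

def regime_for(tier: RiskTier) -> str:
    """Return the (summarised) regulatory regime for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {regime_for(tier)}")
```

The point of the mapping is the pyramid's design choice: stringency is a function of risk classification, so a deployer's first compliance question under the proposal is simply which tier a system falls into.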

EU "Trustworthy AI" Paradigm

Mr. Kop emphasized that the Act aims to codify the high standards of the EU’s "trustworthy AI" paradigm, which mandates that AI systems must be legal, ethical, and technically robust, all while respecting democratic values, human rights, and the rule of law. A crucial aspect highlighted was the requirement for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. This pre-market conformity regime also extends to the machine learning training, testing, and validation datasets used by these systems. Only after a declaration of conformity is signed and the CE marking is affixed can these high-risk systems enter and be traded on the European markets.

Enforcement will be managed by a new Union-level body, the European Artificial Intelligence Board (EAIB), supported by national supervisors in each Member State, similar to the GDPR's oversight structure. Mr. Kop noted the seriousness of non-compliance, with potential fines reaching up to 6% of a company's global turnover.

Balancing regulation with innovation, the EU AI Act also introduces regulatory sandboxes. These are designed to provide AI developers with "breathing room" to test new inventions and foster a flourishing AI ecosystem in Europe.
