Innovation, Quantum-AI Technology & Law

Blog on Artificial Intelligence, Quantum, Deep Learning, Blockchain and Big Data Law

Blog on the legal, social, ethical and policy aspects of Artificial Intelligence, Quantum Computing, Sensing & Communication, Augmented Reality and Robotics, Big Data legislation and Machine Learning regulation. Knowledge articles on the EU AI Act, the Data Governance Act, cloud computing, algorithms, privacy, virtual reality, blockchain, robot law, smart contracts, information law, ICT contracts, online platforms, apps and tools. European rules, copyright, chip rights, database rights and legal services in AI law.

Posts tagged Data Act
Mauritz Kop Consults Senator Mark Warner on AI & Quantum Technology Policy

Washington D.C., January 4, 2022—As the United States Congress grapples with the complex challenges of regulating artificial intelligence and quantum technology, leading policymakers are seeking expert guidance to inform a robust and forward-thinking national strategy. On January 4, 2022, Mauritz Kop, a distinguished scholar in the field of technology law and governance, was consulted by the legal team of U.S. Senator Mark Warner (D-VA) to provide strategic insights on both AI and quantum technology policy.

This consultation highlights the growing recognition in Washington of the need for deep, interdisciplinary expertise to navigate the geopolitical, economic, and security dimensions of these transformative technologies. Senator Warner's team reached out to Kop based on his influential scholarship, including his extensive work at Stanford on the EU AI Act and the need for a strategic democratic tech alliance, his advisory role for the European Commission led by Ursula von der Leyen on the AI Act and Data Act, and his foundational article in the Yale Journal of Law & Technology proposing a comprehensive legal-ethical framework for quantum technology.

Senator Mark Warner: A Leader on Technology and National Security

Senator Mark Warner's engagement on these issues is both significant and timely. As the Chairman of the Senate Select Committee on Intelligence, he is at the forefront of addressing the national security implications of emerging technologies. His work involves overseeing the U.S. Intelligence Community and ensuring it is equipped to handle the threats and opportunities of the 21st century, where technological competition with nations like China is a central concern.

The Senate Select Committee on Intelligence has a broad mandate that includes analyzing intelligence on the technological capabilities of foreign powers and assessing the vulnerabilities of U.S. critical infrastructure. Senator Warner has been a vocal proponent of developing a national strategy for AI and quantum to maintain the United States' competitive edge and to ensure that these technologies are developed and deployed in a manner consistent with democratic values. This consultation with Mauritz Kop reflects the Senator's commitment to drawing on leading academic research to shape sound, bipartisan policy.

AI Policy: A Transatlantic, Risk-Based Approach that Lets Innovation Breathe

A key focus of the consultation was Kop's analysis of the European Union's AI Act. His Stanford publications argue for a balanced, pro-innovation regulatory model that can serve as a blueprint for international cooperation. Good governance and sensible legislation should incentivize desired behavior and simultaneously create breathing room for sustainable, beneficial innovation to flourish.

Quantum Governance: Establishing a Legal-Ethical Framework

The discussion also delved into the governance of quantum technology, drawing on Kop's seminal work in the Yale Journal of Law & Technology. Recognizing that quantum is rapidly moving from the theoretical to the practical, he stressed the urgency of establishing a legal-ethical framework before the technology is widely deployed and locked in.

The consultation with Senator Warner's office represents a critical intersection of academic scholarship and high-level policymaking. As the United States charts its course in the era of AI and quantum, the insights provided by experts like Mauritz Kop are invaluable in ensuring that the nation's strategy is not only competitive but also responsible, ethical, and firmly rooted in democratic principles.

Read more
EU Artificial Intelligence Act: The European Approach to AI

Stanford-Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021

New Stanford tech policy research: “EU Artificial Intelligence Act: The European Approach to AI”.

Download the article here: Kop_EU AI Act: The European Approach to AI

EU regulatory framework for AI

On 21 April 2021, the European Commission presented its proposal for the Artificial Intelligence Act. This Stanford Law School contribution lists the main points of the proposed regulatory framework for AI.

The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust while respecting democratic values, human rights and the rule of law. The draft regulation sets out core horizontal rules, applicable to all industries, for the development, commodification and use of AI-driven products, services and systems within the territory of the EU.

Legal sandboxes fostering innovation

The EC aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. This is ensured by introducing various flexibilities, including the application of legal sandboxes that afford breathing room to AI developers.

Sophisticated ‘product safety regime’

The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around a set of four risk categories. It imposes requirements for market entry and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to machine learning training, testing and validation datasets.

Pyramid of criticality

The draft AI Act combines a risk-based approach built on the pyramid of criticality with a modern, layered enforcement mechanism. This means, among other things, that a lighter legal regime applies to AI applications with negligible risk, and that applications with an unacceptable risk are banned. Stricter regulations apply as risk increases.

Enforcement at both Union and Member State level

The draft regulation provides for the establishment of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisors, similar to the GDPR’s oversight mechanism. Fines for violation of the rules can run up to 30 million euros or 6% of global annual turnover, whichever is higher.

CE-marking for High-Risk AI Systems

In line with my recommendations, Article 49 of the Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires AI-infused products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive a CE marking that allows them to enter the European markets. This pre-market conformity mechanism works in the same manner as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).

Trustworthy AI by Design: ex ante and life-cycle auditing

Responsible, trustworthy AI by design requires awareness from all parties involved, from the first line of code. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct. These tools are applied by inclusive, multidisciplinary teams that use them to monitor, validate and benchmark AI systems. Ultimately, it all comes down to ex ante and life-cycle auditing.

The new European rules will forever change the way AI is developed and deployed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.

Read more
Shaping the Law of AI: Transatlantic Perspectives

Stanford-Vienna Transatlantic Technology Law Forum, TTLF Working Papers No. 65, Stanford University (2020).

New Stanford innovation policy research: “Shaping the Law of AI: Transatlantic Perspectives”.

Download the article here: Kop_Shaping the Law of AI-Stanford Law

The race for AI dominance

The race for AI dominance is a competition in values as much as a competition in technology. In light of global power shifts and changing geopolitical relations, it is indispensable for the EU and the U.S. to build a transatlantic sustainable innovation ecosystem together, based on strategic autonomy, mutual economic interests and shared democratic and constitutional values. Discussing the available informed policy variations to achieve this ecosystem will contribute to the establishment of an underlying, unified, innovation-friendly regulatory framework for AI and data. In such a unified framework, the rights and freedoms we cherish play a central role. Designing joint, flexible governance solutions that can deal with rapidly changing exponential innovation challenges can help restore harmony, confidence, competitiveness and resilience across the transatlantic markets.

25 AI & data regulatory recommendations

Currently, the European Commission (EC) is drafting its Law of AI. This article offers 25 AI & data regulatory recommendations to the EC in response to its Inception Impact Assessment on the “Artificial intelligence – ethical and legal requirements” legislative proposal. In addition to a set of fundamental, overarching core AI rules, the article suggests a differentiated, industry-specific approach regarding incentives and risks.

European AI legal-ethical framework

Lastly, the article explores how the norms, standards, principles and values of the upcoming European AI legal-ethical framework can be connected to the United States, from a transatlantic, comparative law perspective. When shaping the Law of AI, we should have a clear vision of the type of society we want, and of the things we care so deeply about in the Information Age, on both sides of the ocean.

Read more