Innovation, Quantum-AI Technology & Law

Blog on Artificial Intelligence, Quantum, Deep Learning, Blockchain and Big Data Law

Blog on the legal, social, ethical and policy aspects of Artificial Intelligence, Quantum Computing, Sensing & Communication, Augmented Reality and Robotics, Big Data legislation and Machine Learning regulation. Knowledge articles on the EU AI Act, the Data Governance Act, cloud computing, algorithms, privacy, virtual reality, blockchain, robot law, smart contracts, information law, ICT contracts, online platforms, apps and tools. European rules, copyright, chip rights, database rights and legal services in AI law.

Posts in European Law
Law, Ethics and Policy of Quantum & AI in Healthcare and Life Sciences published at Harvard, Stanford and European Commission

A collaborative research initiative by scholars from Stanford, Harvard, and MIT, published by the Petrie-Flom Center at Harvard Law School, the Stanford Center for Responsible Quantum Technology, and the European Commission, delves into the complex regulatory and ethical landscape of integrating quantum technologies and artificial intelligence (AI) into the healthcare and life sciences sectors. This series of policy guides and analyses, authored by an interdisciplinary team including Mauritz Kop, Suzan Slijpen, Katie Liu, Jin-Hee Lee, Constanze Albrecht, and I. Glenn Cohen, offers a comprehensive examination of the transformative potential and inherent challenges of this technological convergence.

Regulating Quantum & AI in Healthcare and Medicine: A Brief Policy Guide

This body of research, examining the entangled legal, ethical, and policy dimensions of integrating quantum technologies and AI into healthcare, is articulated across a series of publications in leading academic and policy forums. These works collaboratively build a comprehensive framework for understanding and navigating the future of medicine. A related policy guide was also published on the European Commission's Futurium platform, further disseminating these findings to a key international policymaking audience. The specific publications include:

1. A Brief Quantum Medicine Policy Guide: https://blog.petrieflom.law.harvard.edu/2024/12/06/a-brief-quantum-medicine-policy-guide/

2. How Quantum Technologies May Be Integrated Into Healthcare, What Regulators Should Consider: https://law.stanford.edu/publications/how-quantum-technologies-may-be-integrated-into-healthcare-what-regulators-should-consider/

3. EU and US Regulatory Challenges Facing AI Health Care Innovator Firms: https://blog.petrieflom.law.harvard.edu/2024/04/04/eu-and-us-regulatory-challenges-facing-ai-health-care-innovator-firms/

4. Regulating Quantum & AI in Healthcare: A Brief Policy Guide: https://futurium.ec.europa.eu/en/european-ai-alliance/document/regulating-quantum-ai-healthcare-brief-policy-guide

by Mauritz Kop, Suzan Slijpen, Katie Liu, Jin-Hee Lee, Constanze Albrecht & I. Glenn Cohen

Forging the Future of Medicine: A Scholarly Perspective on the Law, Ethics, and Policy of Quantum and AI in Healthcare

The research posits that the fusion of AI with second-generation quantum technologies (2G QT)—which harness quantum-mechanical phenomena like superposition and entanglement—is poised to revolutionize precision medicine. This synergy of quantum computing, sensing and simulation with artificial intelligence promises hyper-personalized healthcare solutions, capable of tackling intricate medical problems that lie beyond the grasp of classical computing. The potential applications are vast, spanning accelerated drug discovery and development workflows, enhanced diagnostic imaging, rapid genome sequencing and real-time health monitoring. For instance, quantum simulations could model molecular interactions to create more effective pharmaceuticals, while quantum dots may offer novel platforms for targeted cancer therapies and treatments for neurodegenerative conditions by overcoming the blood-brain barrier.

However, the authors caution that these groundbreaking advancements are accompanied by significant ethical, legal, socio-economic, and policy (ELSPI) implications. The emergence of Quantum Artificial Intelligence (QAI), Quantum Machine Learning (QML), and Quantum Large Language Models (QLLM) is expected to amplify these ELSPI concerns. The dual-use nature of these technologies, such as their potential application in gain-of-function research, necessitates a principled and human-centric governance approach.

Read more
Mauritz Kop and Mark Lemley Host High-Level EU Cybersecurity Delegation at Stanford Law

Stanford, CA – On February 26, 2024, the Stanford Center for Responsible Quantum Technology (RQT), a leading interdisciplinary hub operating under the aegis of the Stanford Program in Law, Science & Technology, had the distinct honor of hosting a high-level cybersecurity delegation from the European Commission. The meeting, led by the Center’s Founding Director, Mauritz Kop, and Professor Mark A. Lemley, Director of the Stanford Program in Law, Science & Technology, underscored the growing importance of transatlantic dialogue in shaping the future of digital security and responsible innovation in the quantum age.

The Stanford Center for RQT is dedicated to steering the development and application of quantum technologies toward outcomes that are not only innovative but also equitable, transparent, and beneficial for society at large. Its mission is to proactively address the complex ethical, legal, societal, policy and interoperability implications of quantum advancements, fostering a global ecosystem grounded in democratic values and human rights. The Center was officially inaugurated on December 6, 2023, by His Excellency Mark Rutte, then Prime Minister of the Netherlands and the current Secretary General of NATO, a testament to the geopolitical significance of its work. This recent meeting with the EU delegation builds on that foundation, reinforcing the Center’s role as a crucial bridge between Silicon Valley’s technological frontier and the world’s leading policymakers.

The dialogue centered on some of the most pressing challenges and opportunities at the intersection of quantum technology and cybersecurity, including building global capacity for responsible innovation and aligning EU and US national security strategies.

The EU Cybersecurity Delegation at Stanford RQT

The European Commission’s Cybersecurity Delegation was led by Gerard de Graaf, the Senior Envoy for Digital to the U.S. and Head of the European Union Office in San Francisco. A veteran of the European Commission with a distinguished career spanning several key digital policy areas, Mr. de Graaf is at the forefront of the EU’s efforts to promote a human-centric, ethical, and secure digital transition. His role involves strengthening transatlantic cooperation on digital regulation, from data governance and AI to cybersecurity and platform accountability. Mr. de Graaf, who was also present at the Center’s inauguration, has been a pivotal figure in shaping the EU’s landmark digital policies, including the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA). His leadership in the San Francisco office is instrumental in fostering dialogue between European regulators and the heart of the global tech industry.

Accompanying Mr. de Graaf were Joanna Smolinska, Deputy Head of the EU Office in San Francisco and a key figure in transatlantic tech diplomacy, and Ilse Rooze, a Seconded National Expert at the EU Office who brings deep expertise in digital policy and international relations.

Representing Stanford were Mauritz Kop and Professor Mark A. Lemley. Mr. Kop is a pioneering scholar in the governance of emerging technologies, with a focus on quantum, AI, and intellectual property. As the Founding Director of the RQT Center, his work is dedicated to creating robust legal and ethical frameworks to ensure that transformative technologies are developed and deployed responsibly. Professor Lemley is the William H. Neukom Professor of Law at Stanford Law School and one of the world's most cited scholars in intellectual property and technology law. His extensive work on innovation, competition, and the digital economy provides a critical legal and economic lens through which to view the challenges of the quantum era.

The Quantum Cybersecurity Challenge: Preparing for Q-Day

A central theme of the discussion was the looming threat that fault-tolerant quantum computers pose to global cybersecurity. The immense processing power of these future machines will render much of the world’s current cryptographic infrastructure obsolete. This critical juncture, often referred to as “Q-Day” or the “Quantum Apocalypse,” is the moment when a quantum computer will be capable of breaking widely used encryption standards like RSA and ECC, which protect everything from financial transactions and government communications to personal data and critical infrastructure.

The implications of Q-Day are profound. Malicious actors could potentially decrypt vast archives of stolen encrypted data—a scenario known as "harvest now, decrypt later." This retroactive decryption capability poses a severe threat to long-term data security, national security, and economic stability.

In his opening remarks, Mauritz Kop emphasized the urgency of a proactive, coordinated global response. The conversation explored the transition to Post-Quantum Cryptography (PQC), a new generation of cryptographic algorithms designed to be resistant to attacks from both classical and quantum computers. The U.S. National Institute of Standards and Technology (NIST) is in the final stages of standardizing a suite of PQC algorithms, a process closely watched by governments and industries worldwide. The delegation discussed the immense logistical, technical, and financial challenges of migrating global IT systems to these new technical standards—a process that is expected to take more than a decade and require unprecedented public-private collaboration.

The discussion also touched upon other quantum security technologies, such as Quantum Key Distribution (QKD), which uses the principles of quantum mechanics to create secure communication channels. While PQC focuses on developing new mathematical problems that are hard for quantum computers to solve, QKD offers a physics-based approach to security. The participants explored how these different technologies could complement each other in a future-proof security architecture.

Read more
EU Artificial Intelligence Act: The European Approach to AI

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021

New Stanford tech policy research: “EU Artificial Intelligence Act: The European Approach to AI”.

Download the article here: Kop_EU AI Act: The European Approach to AI

EU regulatory framework for AI

On 21 April 2021, the European Commission presented the Artificial Intelligence Act. This Stanford Law School contribution lists the main points of the proposed regulatory framework for AI.

The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust while respecting democratic values, human rights and the rule of law. The draft regulation sets out core horizontal rules, applicable to all industries, for the development, commodification and use of AI-driven products, services and systems within the territory of the EU.

Legal sandboxes fostering innovation

The EC aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. This is ensured by introducing various flexibilities, including the application of legal sandboxes that afford breathing room to AI developers.

Sophisticated ‘product safety regime’

The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around four risk categories. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to machine learning training, testing and validation datasets.

Pyramid of criticality

The AI Act draft combines a risk-based approach, structured around the pyramid of criticality, with a modern, layered enforcement mechanism. This means, among other things, that a lighter legal regime applies to AI applications posing negligible risk, and that applications posing an unacceptable risk are banned. Stricter regulations apply as risk increases.
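The tiered logic of the pyramid of criticality can be sketched as a simple lookup. This is an illustrative summary only; the category labels follow the draft Act, but the obligations listed are paraphrased, not the statutory text:

```python
# Illustrative sketch of the draft AI Act's four-tier risk pyramid.
# The obligations are simplified summaries, not legal language.
RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social scoring by public authorities)",
    "high": "pre-market conformity assessment and CE-marking required",
    "limited": "transparency obligations (e.g. disclosing that a user interacts with a chatbot)",
    "minimal": "no additional obligations; voluntary codes of conduct",
}

def obligations(tier: str) -> str:
    """Return the simplified obligation attached to a risk tier."""
    return RISK_TIERS[tier.lower()]
```

The lookup mirrors the regulatory design: the legal burden is a function of the risk classification, not of the underlying technology.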

Enforcement at both Union and Member State level

The draft regulation provides for the establishment of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisors, similar to the GDPR’s oversight mechanism. Fines for violating the rules can reach 6% of global annual turnover or 30 million euros, whichever is higher, for private entities.
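As a rough arithmetic illustration, the fine ceiling in the draft works as the higher of two amounts. This sketch is for intuition only; the statute attaches different ceilings to different categories of infringement:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of an administrative fine under the draft AI Act:
    the higher of 6% of global annual turnover or EUR 30 million.
    Illustrative only; the statute defines the precise conditions."""
    return max(0.06 * global_annual_turnover_eur, 30_000_000.0)

# A firm with EUR 1 billion turnover faces a ceiling of EUR 60 million,
# while for a small firm the flat EUR 30 million figure is the binding cap.
```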

CE-marking for High-Risk AI Systems

In line with my recommendations, Article 49 of the Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires AI-infused products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive a CE marking that allows them to enter the European markets. This pre-market conformity mechanism works in the same manner as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).

Trustworthy AI by Design: ex ante and life-cycle auditing

Responsible, trustworthy AI by design requires awareness from all parties involved, from the first line of code. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct. These tools are executed by inclusive, multidisciplinary teams that use them to monitor, validate and benchmark AI systems. It will all come down to ex ante and life-cycle auditing.

The new European rules will forever change the way AI is developed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.

Read more
Safeguards for accelerated market authorization of vaccines in Europe

by Suzan Slijpen & Mauritz Kop

This article was published by the Stanford Law School Center for Law and the Biosciences, Stanford University, 15 March 2021. Link to the full text: https://law.stanford.edu/2021/03/15/safeguards-for-accelerated-market-authorization-of-vaccines-in-europe/

Download the article here: Slijpen_Kop_Manufacturing Licenses and Market Authorization Vaccines EU-Stanford Law

The first COVID-19 vaccines have been approved

People around the globe are concerned about safety issues encircling the accelerated introduction of corona vaccines. In this article, we discuss the regulatory safeguards for fast-track market authorization of vaccines in Europe. In addition, we explain how the transposition of European Union law into national Member State legislation works. We then clarify what happens before a drug can be introduced into the European market. We conclude that governments should build bridges of mutual understanding between communities and increase trust in the safety of authorized vaccines across all population groups, using the right messengers.

Drug development normally takes several years

Drug development normally takes several years. A timeline of just a few months therefore seems remarkably short. How are the quality and integrity of the vaccine ensured? That people - on both sides of the Atlantic - are concerned about this is entirely understandable. How does one prevent citizens from being harmed by vaccines and medicines that do not work for everyone because the authorization procedures have been simplified too much?

The purpose of this article is to shed a little light upon the accelerated market authorization procedures on the European continent, with a focus on the situation in the Netherlands.

How a vaccine is introduced into the market

In June 2020, the Dutch government, in close cooperation with Germany, France and Italy, formed a Joint Negotiation Team which, under the watchful eye of the European Commission, has been negotiating with vaccine developers. Its objective: to conclude agreements with drug manufacturers at an early stage about the availability of vaccines for European countries. If these manufacturers succeed in developing a successful vaccine for which Market Authorization (MA) is granted by the EMA or the CBG, this could lead to the availability of about 50 million vaccines (for the Netherlands alone).

Who is allowed to produce these vaccines?

Who is allowed to produce these vaccines? The Dutch Medicines Act is very clear about this. Only "market authorization holders" are allowed to manufacture medicines, including vaccines. These are parties that have gone through an extensive application procedure, demonstrably have a solid pharmaceutical quality management system in place, and have obtained a pharmaceutical manufacturing license (the MIA, short for Manufacturing and Importation Authorisation). This license is granted by Farmatec after assessment by the Health and Youth Care Inspectorate of the Ministry of Health, Welfare & Sport (IGJ). Farmatec is part of the CIBG, an implementing body of the Ministry of Health, Welfare and Sport (VWS). The M-license is mandatory for parties who prepare or import medicines.

Read more at the Stanford Center for Law and the Biosciences!

Read more on manufacturing licenses, fast track procedures and market authorization by the European Medicines Agency (EMA) and the EC, harmonisation and unification of EU law, CE-markings, antigenic testing kits, mutations, reinfection, multivalent vaccines, mucosal immunity, Good Manufacturing Practices (GMP), pharmacovigilance, the HERA Incubator, clinical trials, compulsory vaccination regimes and continuous quality control at Stanford!

Read more
Shaping the Law of AI: Transatlantic Perspectives

Stanford-Vienna Transatlantic Technology Law Forum, TTLF Working Papers No. 65, Stanford University (2020).

New Stanford innovation policy research: “Shaping the Law of AI: Transatlantic Perspectives”.

Download the article here: Kop_Shaping the Law of AI-Stanford Law

The race for AI dominance

The race for AI dominance is a competition in values as much as a competition in technology. In light of global power shifts and changing geopolitical relations, it is indispensable for the EU and the U.S. to build a sustainable transatlantic innovation ecosystem together, based on strategic autonomy, mutual economic interests and shared democratic & constitutional values. Discussing the available informed policy options to achieve this ecosystem will contribute to the establishment of an underlying, unified, innovation-friendly regulatory framework for AI & data. In such a unified framework, the rights and freedoms we cherish play a central role. Designing joint, flexible governance solutions that can deal with rapidly changing exponential innovation challenges can help restore harmony, confidence, competitiveness and resilience across the various areas of the transatlantic markets.

25 AI & data regulatory recommendations

Currently, the European Commission (EC) is drafting its Law of AI. This article offers 25 AI & data regulatory recommendations to the EC, in response to its Inception Impact Assessment on the “Artificial intelligence – ethical and legal requirements” legislative proposal. In addition to a set of fundamental, overarching core AI rules, the article suggests a differentiated, industry-specific approach to incentives and risks.

European AI legal-ethical framework

Lastly, the article explores how the norms, standards, principles and values of the upcoming European AI legal-ethical framework can be connected to the United States, from a transatlantic, comparative law perspective. When shaping the Law of AI, we should have a clear vision in our minds of the type of society we want, and of the things we care so deeply about in the Information Age, on both sides of the Atlantic.

Read more
We urgently need a right to data processing

This column was published on the VerderDenken.nl platform of the Center for Postacademic Legal Education (CPO) of Radboud University Nijmegen. https://www.ru.nl/cpo/verderdenken/columns/we-dringend-recht-dataprocessing-nodig/

5 legal obstacles to a successful AI ecosystem

I have previously written that questions surrounding the (intellectual) ownership of data, data protection and privacy form an obstacle to the (re)use and sharing of high-quality data between citizens, companies, research institutions and government. Europe does not yet have a well-functioning legal-technical system that offers legal certainty and a favorable investment climate, and that, above all, was designed with the data-driven economy in mind. We are dealing here with a complex problem that stands in the way of exponential innovation.

Copyright, privacy and legal uncertainty about data ownership

The first legal hurdle in data sharing is copyright in nature. Second, third parties may hold (sui generis) database rights in (parts of) the training, testing or validation dataset. Third, after a strategic trade-off, companies will opt for secrecy rather than patenting their technical invention. The fourth problem is legal uncertainty about the legal ownership of data. A fifth obstacle is fear of the General Data Protection Regulation (GDPR). Ignorance and legal uncertainty result in risk-averse behavior here. It does not produce spectacular European unicorns that can compete with America and China.

What is machine learning, exactly?

Familiarity with the technical aspects of data in machine learning enables lawyers, data scientists and policymakers to communicate more effectively about future AI regulation and data sharing.

Machine learning and data sharing are of elementary importance to the birth and evolution of AI, and thereby to the preservation of our democratic values, prosperity and well-being. A machine learning system is not programmed, but trained. During the learning process, a computer equipped with artificial intelligence receives both input data (training data) and the expected answers belonging to that input data. The AI system must formulate the corresponding rules and regularities itself with an artificial brain. Algorithmic, predictive models can then be applied to new datasets to produce new, correct answers.
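The training process described above can be illustrated with a minimal sketch: the system is given inputs together with the expected answers, and derives the underlying rule (here y = 2x + 1) by itself through gradient descent. This toy example is our own illustration, not taken from the column:

```python
# Minimal supervised-learning sketch: the system is trained, not programmed.
# It receives inputs plus expected answers and infers the rule y = 2x + 1.
def train(inputs, targets, lr=0.01, epochs=2000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(inputs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(inputs, targets)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(inputs, targets)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]   # expected answers, generated by y = 2x + 1
w, b = train(xs, ys)   # the model recovers w close to 2 and b close to 1
```

The fitted model can then be applied to new inputs it has never seen, which is exactly the step that makes access to high-quality training data so important.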

Urgently needed: the right to data processing

The European Commission has the ambition to reclaim data sovereignty. Europe must become an international data hub. This requires a modern legal framework in the form of the European Data Act, expected in the course of 2021. In my view, it is crucial that the Data Act contains an explicit right to data processing.

Technology is not neutral

At the same time, the architecture of digital systems can regulate the societal impact of digital transformation. A digitally inclusive society must actively shape technology, because technology as such is never neutral. Societal values such as transparency, trust, justice, control and cybersecurity must be built into the design of AI systems and the required training datasets, from the first line of code.

Read more
Mauritz Kop becomes TTLF Fellow at Stanford University

AIRecht Partner joins Stanford Law School’s Transatlantic Thinktank

Honoured and thrilled to join Stanford Law School’s transatlantic think tank and become a TTLF Fellow at Stanford University. The Transatlantic Technology Law Forum, based in Silicon Valley, California, aims to raise professional understanding and public awareness of transatlantic challenges in the field of law, science and technology, as well as to support policy-oriented research on transatlantic issues in the field.

Human Centred AI & IPR policy

My comparative, interdisciplinary research project focuses on Human Centred AI & IPR policy. How can we realize an impactful, transformative-tech-related IP (intellectual property) policy that facilitates an innovation optimum while protecting our common Humanist moral values at the same time?

Focus beyond Intellectual Property Law

With an additional focus beyond IP, the research presents ideas on how Europe and the United States could apply sustainable disruptive innovation policy pluralism (i.e. mix, match and layer IP alternatives such as competition law and government-market hybrids) to enable fair trading conditions and balance the effects of exponential innovation within the transatlantic markets. The research envisages that the ideas and viewpoints presented will be refined into concrete policies in Brussels and Washington.

Read more