Innovation, Technology & Law

Blog on Artificial Intelligence, Quantum, Deep Learning, Blockchain and Big Data Law

Blog on the legal, social, ethical and policy aspects of Artificial Intelligence, Quantum Computing, Sensing & Communication, Augmented Reality and Robotics, Big Data legislation and Machine Learning regulation. Knowledge articles on the EU AI Act, the Data Governance Act, cloud computing, algorithms, privacy, virtual reality, blockchain, robot law, smart contracts, information law, ICT contracts, online platforms, apps and tools. European rules, copyright, chip rights, database rights and legal services in AI law.

Posts in Stanford
Quantum ELSPI: Ethical, Legal, Social and Policy Implications of Quantum Technology

Call for Papers Quantum ELSPI

Delighted to announce that the Quantum ELSPI call for papers is now open! AIRecht Director & Stanford Law School TTLF Fellow Mauritz Kop has the honor of guest-editing a Topical Collection for Digital Society, a new journal edited by Luciano Floridi (Oxford Internet Institute). This project is a Stanford/Oxford collaboration that aims to explore uncharted territories of the Ethical, Legal, Social and Policy Implications of Quantum Technology. Articles should be submitted before 15 February 2022 and will undergo double-blind peer review. Accepted articles will be published by Springer Nature.

You can find the Quantum ELSPI collection page here: https://link.springer.com/collections/eiebhdhagd.

Download the Springer Nature Quantum-ELSPI Call for Papers here: TC_Quantum ELSPI_Call for papers

ELSPI stratagems for quantum technology

Anticipating spectacular advancements in real-world quantum-driven products and services, the time is ripe for governments, academia and the market to prepare regulatory and business strategies that balance the societal impact of these technologies. This topical collection seeks to provide informed suggestions on how to maximize the benefits and mitigate the risks of applied quantum technology. It intends to deliver insights and actionable recommendations on how and when to address identified opportunities and challenges, which stakeholders across the world can then refine into plausible, evidence-based policy decisions.

Special edition of Digital Society

In this special edition of Digital Society, we invite scholars to reflect on the multifaceted questions associated with Quantum ELSPI. In addition to learning from history and connecting quantum to other big-picture trends, contributors should treat quantum as something unique and unprecedented. We especially welcome cross-disciplinary contributions that look beyond research silos and integrate law, economic theory, ethics, sociology, philosophy of science, quantum information science, and sustainable innovation policy, and that consider how to improve ELSPI stratagems for quantum technology. We encourage authors to be pioneers in this complex, and at times counterintuitive, field.

Multifaceted questions associated with Quantum ELSPI

Questions and topics that contributions to the topical collection could address include, but are not restricted to, the following:

-Potential strategies for industries facing disruption, such as the cybersecurity industry and financial institutions. What role could antitrust law, intellectual property, prizes, fines, funding, taxes, lifelong learning and labor mobility play in incentivizing innovation?

-How should dual-use applications be managed? How do we balance freedom with control? What role could a Quantum Treaty play in making our world a safer place?

-The creation of a list of quantum-specific themes, goals, benefits and risks that need to be addressed by universal, overarching principles of responsible quantum design and application, including a definition of high-risk quantum systems.

-How can policy makers learn from history and adjacent fields, such as AI, biotechnology, nanotechnology, semiconductors and nuclear technology, when regulating exponential innovation and ensuring equal access to quantum computing, sensing and the quantum internet? How can winner-take-all effects and a quantum divide be prevented? To what extent does governing digitization driven by classical computing paradigms (binary digits) differ from governing quantum computing (qubits)?

-It is not inconceivable that the development and uptake of transnational quantum principles will run along the lines of democratic and authoritarian tech governance models. Against that background, how can we embed cultural norms, liberal values, democratic principles, human rights and fundamental freedoms in globally accepted interoperability standards?

-How can we implement ethically aligned design into our quantum systems architecture and infrastructure? How can quantum technology impact assessments help achieve these goals?

Guest-Editor Quantum ELSPI: Mauritz Kop (Stanford Law School, Stanford University)

Editor-in-Chief Digital Society: Luciano Floridi (Oxford Internet Institute, Oxford University)

EU Artificial Intelligence Act: The European Approach to AI

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021

New Stanford tech policy research: “EU Artificial Intelligence Act: The European Approach to AI”.

Download the article here: Kop_EU AI Act: The European Approach to AI

EU regulatory framework for AI

On 21 April 2021, the European Commission presented the Artificial Intelligence Act. This Stanford Law School contribution lists the main points of the proposed regulatory framework for AI.

The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust, while respecting democratic values, human rights and the rule of law. The draft regulation sets out core horizontal rules for the development, commodification and use of AI-driven products, services and systems within the territory of the EU, applicable to all industries.

Legal sandboxes fostering innovation

The EC aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. This is ensured by introducing various flexibilities, including the application of legal sandboxes that afford breathing room to AI developers.

Sophisticated ‘product safety regime’

The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around four risk categories. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to the datasets used for machine learning training, testing and validation.

Pyramid of criticality

The AI Act draft combines a risk-based approach based on the pyramid of criticality, with a modern, layered enforcement mechanism. This means, among other things, that a lighter legal regime applies to AI applications with a negligible risk, and that applications with an unacceptable risk are banned. Stricter regulations apply as risk increases.
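The tiered logic of the pyramid can be sketched as a small mapping. The tier names, example systems and obligations below are illustrative paraphrases of the draft, not official legal categories:

```python
# Illustrative sketch of the draft AI Act's pyramid of criticality.
# Tier names and example applications are paraphrases, not legal definitions.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = 4  # e.g. social scoring by public authorities
    HIGH = 3          # e.g. AI used in critical infrastructure
    LIMITED = 2       # e.g. chatbots interacting with humans
    MINIMAL = 1       # e.g. spam filters, video games

# Simplified regulatory treatment per tier: banned at the top,
# progressively lighter obligations toward the base of the pyramid.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited from the EU market",
    RiskTier.HIGH: "pre-market conformity assessment and CE-marking",
    RiskTier.LIMITED: "transparency obligations",
    RiskTier.MINIMAL: "voluntary codes of conduct",
}

def regime_for(tier: RiskTier) -> str:
    """Return the (simplified) regulatory regime for a risk tier."""
    return OBLIGATIONS[tier]

print(regime_for(RiskTier.HIGH))
# pre-market conformity assessment and CE-marking
```

The design point the pyramid captures is that regulatory burden scales with risk, rather than applying one uniform regime to all AI systems.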

Enforcement at both Union and Member State level

The draft regulation provides for the establishment of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisors, similar to the GDPR’s oversight mechanism. Fines for violations of the rules can run up to 6% of global turnover, or 30 million euros for private entities.
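As a back-of-the-envelope illustration, the draft’s penalty ceiling can be read as the higher of the two amounts, mirroring the GDPR’s approach to administrative fines:

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound of an administrative fine under the draft AI Act:
    the higher of EUR 30 million and 6% of global annual turnover
    (illustrative reading of the draft's penalty provisions)."""
    return max(30_000_000.0, 0.06 * global_turnover_eur)

# A company with EUR 1 billion in global turnover: 6% (EUR 60M)
# exceeds the EUR 30M floor, so the turnover-based ceiling applies.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```

For smaller companies the fixed EUR 30 million ceiling dominates, which is why the turnover-based percentage matters mainly for large multinationals.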

CE-marking for High-Risk AI Systems

In line with my recommendations, Article 49 of the Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires AI-infused products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive a CE marking that allows them to enter the European markets. This pre-market conformity mechanism works in the same manner as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).

Trustworthy AI by Design: ex ante and life-cycle auditing

Responsible, trustworthy AI by design requires awareness from all parties involved, from the first line of code onward. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct. These tools are executed by inclusive, multidisciplinary teams that use them to monitor, validate and benchmark AI systems. It will all come down to ex ante and life-cycle auditing.

The new European rules will forever change the way AI is formed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.

The Artificial Intelligence Act

An edited version of this contribution was published on the VerderDenken.nl platform of the Center for Professional Legal Education (CPO) at Radboud University Nijmegen. https://www.ru.nl/cpo/verderdenken/columns/wet-artificiele-intelligentie-belangrijkste-punten/

New rules for AI-driven products, services and systems

On 21 April 2021, the European Commission presented its long-awaited Artificial Intelligence Act. This draft Regulation sets out rules for the development, commodification and use of AI-driven products, services and systems within the territory of the European Union. It was encouraging to see that President Ursula von der Leyen’s team has adopted a significant number of our strategic recommendations on the regulation of AI, or independently reached the same conclusions.

Objectives of the legal framework for AI

The draft Regulation provides horizontal, overarching core rules for artificial intelligence that apply to all industries (verticals). The Act seeks to codify the high standards of the EU Trustworthy AI paradigm, which requires AI to be lawful, ethical and technically robust, and sets out seven requirements to that end.

The Artificial Intelligence Act has the following four objectives:

1. ensure that AI systems placed on the market and used in the Union are safe and respect existing law on fundamental rights and Union values;

2. guarantee legal certainty to facilitate investment and innovation in AI;

3. improve the governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;

4. facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

Risk-based approach to artificially intelligent applications

To achieve these objectives, the draft Artificial Intelligence Act combines a risk-based approach built on the pyramid of criticality with a modern, layered enforcement mechanism. Among other things, this means that a light legal regime applies to AI applications with a negligible risk, while applications with an unacceptable risk are banned. Between these two extremes, stricter requirements apply as risk increases. They range from non-binding, self-regulatory soft-law impact assessments with codes of conduct to heavy, externally audited multidisciplinary compliance requirements on quality, safety and transparency, including risk management, monitoring, certification, benchmarking, validation, documentation duties and market surveillance throughout the application’s life cycle.

Enforcement and governance

The definition of high-risk AI applications within the various industrial sectors is not yet set in stone. An unambiguous risk taxonomy will contribute to legal certainty and give stakeholders adequate answers to questions about liability and insurance. To preserve room for innovation by SMEs, including tech startups, flexible AI regulatory sandboxes are being introduced and an IP Action Plan for intellectual property has been drawn up. Finally, the draft Regulation provides for the establishment of a new enforcement body at Union level: the European Artificial Intelligence Board. The EAIB will be flanked at Member State level by national supervisors.

Quantum Computing and Intellectual Property Law

Berkeley Technology Law Journal, Vol. 35, No. 3, 2021

New Stanford University Beyond IP Innovation Law research article: “Quantum Computing and Intellectual Property Law”.

By Mauritz Kop

Citation: Kop, Mauritz, Quantum Computing and Intellectual Property Law (April 8, 2021). Berkeley Technology Law Journal, Vol. 35, No. 3, 2021, pp. 101-115, published February 8, 2022, https://btlj.org/2022/02/quantum-computing-and-intellectual-property-law/

Download the article here: Kop_QC and IP Law BTLJ

Please find a short abstract below:

Intellectual property (IP) rights & the Quantum Computer

What types of intellectual property (IP) rights can be vested in the components of a scalable quantum computer? Are there sufficient market-set innovation incentives for the development and dissemination of quantum software and hardware structures? Or is there a need for open source ecosystems, enrichment of the public domain and even democratization of quantum technology? The article explores possible answers to these tantalizing questions.

IP overprotection leads to exclusive exploitation rights for first movers

The article demonstrates that strategically using a mixture of IP rights to maximize the value of the quantum computer owner’s IP portfolio potentially leads to IP protection in perpetuity. Overlapping IP protection regimes can result in global exclusive exploitation rights of unlimited duration for first movers: a handful of universities and large corporations. The ensuing IP overprotection in the field of quantum computing leads to an unwanted concentration of market power. Overprotection of information creates market barriers and hinders both healthy competition and industry-specific innovation. In this particular case, it slows down progress in an important application area of quantum technology, namely quantum computing.

Fair competition and antitrust laws for quantum technology

In general, our current IP framework was not written with quantum technology in mind. IP should be an exception (limited in time and scope) to the rule that information goods can be used for the common good without restraint. IP law cannot incentivize creation, prevent market failure, fix winner-takes-all effects, eliminate free riding and prohibit predatory market behavior all at the same time. To encourage fair competition and correct market skewness, antitrust law is the instrument of choice.

Towards an innovation architecture that mixes freedom and control

The article proposes a solution tailored to the exponential pace of innovation in The Quantum Age: shorter IP protection durations of 3 to 10 years for quantum- and AI-infused creations and inventions. These shorter terms could apply to both the software and the hardware side. Clarity about the recommended limited durations of exclusive rights, in combination with compulsory licenses or fixed-price statutory licenses, encourages legal certainty, knowledge dissemination and follow-on innovation within the quantum domain. In this light, policy makers should build an innovation architecture that mixes freedom (e.g. access, public domain) and control (e.g. incentive & reward mechanisms).

Creating a thriving global quantum ecosystem

The article concludes that, anticipating spectacular advancements in quantum technology, the time is now ripe for governments, research institutions and the markets to prepare regulatory and IP strategies that strike the right balance between safeguarding our fundamental rights and freedoms and our democratic norms and standards, and pursuing policy goals that include rapid technology transfer, the free flow of information and the creation of a thriving global quantum ecosystem, while encouraging healthy competition and incentivizing sustainable innovation.

Democratic Countries Should Form a Strategic Tech Alliance

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 1/2021

New Stanford innovation policy research: “Democratic Countries Should Form a Strategic Tech Alliance”.

Download the article here: Kop_Democratic Countries-Strategic Tech Alliance-Stanford Law

Exporting values into society through technology

China’s relentless advance in Artificial Intelligence (AI) and quantum computing has engendered a significant amount of anxiety about the future of America’s technological supremacy. The resulting debate centres on the impact of China’s digital rise on the economy, security, employment and the profitability of American companies. Absent from these predominantly economic disquiets is what should be a deeper, existential concern: what are the effects of authoritarian regimes exporting their values into our society through their technology? This essay addresses that question by examining how democratic countries can, or should, respond, and what you can do to influence the outcome.

Towards a global responsible technology governance framework

The essay argues that democratic countries should form a global, broadly scoped Strategic Tech Alliance, built on mutual economic interests and common moral, social and legal norms, technological interoperability standards, legal principles and constitutional values: an Alliance committed to safeguarding democratic norms, as enshrined in the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). The US, the EU and their democratic allies should join forces with countries that share our digital DNA, institute fair reciprocal trading conditions, and establish a global responsible technology governance framework that actively pursues democratic freedoms, human rights and the rule of law.

Two dominant tech blocks with incompatible political systems

Currently, two dominant tech blocks exist that have incompatible political systems: the US and China. The competition for AI and quantum ascendancy is a battle between ideologies: liberal democracy mixed with free market capitalism versus authoritarianism blended with surveillance capitalism. Europe stands in the middle, championing a legal-ethical approach to tech governance.

Democratic, value-based Strategic Tech Alliance

The essay discusses the political feasibility of cooperation along transatlantic lines and examines arguments against the formation of a democratic, value-based Strategic Tech Alliance that would set global technology standards. It then weighs the advantages of establishing an Alliance that aims to win the race for democratic technological supremacy against the disadvantages, unintended consequences and the harms of doing nothing.

Democracy versus authoritarianism: sociocritical perspectives

Further, the essay attempts to approach the identified challenges in light of the ‘democracy versus authoritarianism’ discussion from other, sociocritical perspectives, and inquires whether we are democratic enough ourselves.

How Fourth Industrial Revolution (4IR) technology is shaping our lives

The essay maintains that technology is shaping our everyday lives, and that the way in which we design and utilize our technology is influencing nearly every aspect of the society we live in. Technology is never neutral. The essay describes that regulating emerging technology is an unending endeavour that follows the lifespan of the technology and its implementation. In addition, it debates how democratic countries should construct regulatory solutions that are tailored to the exponential pace of sustainable innovation in the Fourth Industrial Revolution (4IR).

Preventing authoritarianism from gaining ground

The essay concludes that to prevent authoritarianism from gaining ground, governments should do three things: (1) inaugurate a Strategic Tech Alliance, (2) set worldwide core rules, interoperability & conformity standards for key 4IR technologies such as AI, quantum and Virtual Reality (VR), and (3) actively embed our common democratic norms, principles and values into the architecture and infrastructure of our technology.

Safeguards for accelerated market authorization of vaccines in Europe

by Suzan Slijpen & Mauritz Kop

This article was published by the Stanford Law School ‘Center for Law and the Biosciences’, Stanford University, 15 March 2021. Link to the full text: https://law.stanford.edu/2021/03/15/safeguards-for-accelerated-market-authorization-of-vaccines-in-europe/

Download the article here: Slijpen_Kop_Manufacturing Licenses and Market Authorization Vaccines EU-Stanford Law

The first COVID-19 vaccines have been approved

People around the globe are concerned about safety issues surrounding the accelerated introduction of corona vaccines. In this article, we discuss the regulatory safeguards for fast-track market authorization of vaccines in Europe. In addition, we explain how the transposition of European Union law into national Member State legislation works. We then clarify what happens before a drug can be introduced into the European market. We conclude that governments should build bridges of mutual understanding between communities and increase trust in the safety of authorized vaccines across all population groups, using the right messengers.

Drug development normally takes several years

Drug development normally takes several years. Against that backdrop, a development time of just a few months seems remarkably short. How is the quality and integrity of the vaccine ensured? That people on both sides of the Atlantic are concerned about this is entirely understandable. How does one prevent citizens from being harmed by vaccines and medicines that do not work for everyone because the admission procedures have been simplified too much?

The purpose of this article is to shed a little light upon the accelerated market authorization procedures on the European continent, with a focus on the situation in the Netherlands.

How a vaccine is introduced into the market

In June 2020, the Dutch government, in close cooperation with Germany, France and Italy, formed a Joint Negotiation Team which, under the watchful eye of the European Commission, has been negotiating with vaccine developers. Its objective: to conclude agreements with drug manufacturers at an early stage about the availability of vaccines for European countries. If these manufacturers succeed in developing a successful vaccine for which a so-called Market Authorization (MA) is granted by the EMA or the CBG, this could lead to the availability of about 50 million vaccines (for the Netherlands alone).

Who is allowed to produce these vaccines?

Who is allowed to produce these vaccines? The Dutch Medicines Act is very clear about this. Only "market authorization holders" are allowed to manufacture medicines, including vaccines. These are parties that have gone through an extensive application procedure, demonstrably have a solid pharmaceutical quality management system in place, and have obtained a pharmaceutical manufacturing license (the MIA, short for Manufacturing and Importation Authorisation). This license is granted by Farmatec, after assessment by the Health and Youth Care Inspectorate (IGJ) of the Ministry of Health, Welfare & Sport. Farmatec is part of the CIBG, an implementing body of the Ministry of Health, Welfare and Sport (VWS). The M-license is mandatory for parties that prepare or import medicines.

Read more at the Stanford Center for Law and the Biosciences!

Read more on manufacturing licenses, fast track procedures and market authorization by the European Medicines Agency (EMA) and the EC, harmonisation and unification of EU law, CE-markings, antigenic testing kits, mutations, reinfection, multivalent vaccines, mucosal immunity, Good Manufacturing Practices (GMP), pharmacovigilance, the HERA Incubator, clinical trials, compulsory vaccination regimes and continuous quality control at Stanford!
