Innovation, Quantum-AI Technology & Law

Blog on Artificial Intelligence, Quantum, Deep Learning, Blockchain and Big Data Law

Blog on the legal, social, ethical and policy aspects of Artificial Intelligence, Quantum Computing, Sensing & Communication, Augmented Reality and Robotics, Big Data legislation and Machine Learning regulation. Knowledge articles on the EU AI Act, the Data Governance Act, cloud computing, algorithms, privacy, virtual reality, blockchain, robot law, smart contracts, information law, ICT contracts, online platforms, apps and tools. European rules, copyright, semiconductor topography rights, database rights and legal services in AI law.

Posts in Certification
Quantum Technology Impact Assessment (EU AI Alliance, European Commission)

Brussels, 20 April 2023 – The emergence of powerful new capabilities in large AI models such as Generative Adversarial Networks (GANs) underscores the critical need to continuously improve and update technology impact assessment tools so that they keep pace with rapid technological development. As defined in recent scholarship, technology impact assessment is the systematic process of monitoring and determining the unintended, indirect, or delayed societal impacts of a future technological innovation. Crucially, it is also about capitalizing on opportunities and enabling responsible innovation from the outset.

An article by Stanford Law’s Mauritz Kop on this topic is also featured on the European Commission's Futurium website.

Shaping the Quantum Innovation Process

Quantum Impact Assessments (QIAs) are emerging as vital practical tools to facilitate the responsible adoption of quantum technologies. There are several related approaches to this assessment: (1) interactive QIA, which seeks to influence and shape the innovation process; (2) constructive QIA, where social issues guide the design of the technology from its earliest stages; and (3) real-time QIA, which connects scientific R&D with social sciences and policy from the start, before a technology becomes locked in.

Often taking the form of codes of conduct, best practices, roadmaps, and physics de-risking tools, QIA instruments can be used by governments, industry, and academia. These soft law tools allow stakeholders to explore how current technological developments affect the world we live in and to proactively shape the innovation process toward beneficial, societally robust outcomes.

Exploratory Quantum Technology Assessment

Implementing interdisciplinary, expert-based QIAs can help raise awareness about the ethical, legal, socio-economic, and policy (ELSPI) dimensions of quantum technology, including quantum-classical hybrid systems. For instance, QIAs cultivate a deeper understanding of the potential dual-use character of quantum technology, where beneficial applications (such as quantum sensing for medical diagnostics) can exist alongside potentially harmful ones (such as the same sensors being used for autocratic surveillance).

Building on the foundational work of the 2018 AI Impact Assessment developed by ECP | Platform voor de InformatieSamenleving under the chairmanship of Prof. Kees Stuurman, this work presents a prototype of a QIA instrument: the Exploratory Quantum Technology Assessment (EQTA). This pioneering initiative was made possible through a collaboration between the Dutch Ministry of Economic Affairs & Climate Policy, Quantum Delta NL (QDNL), and ECP. The EQTA will be presented by Eline de Jong and Mauritz Kop at the inaugural Stanford Responsible Quantum Technology Conference in May 2023.

Guidance for Responsible Quantum Technology Implementation

The EQTA provides a comprehensive, practical step-by-step plan that encourages stakeholders to initiate a dialogue to clarify which ethical, legal, and social aspects are important in the creation and application of quantum systems and their interaction with classical technologies. This structured approach helps make the use of quantum technology—as well as the data and algorithms that power it—more transparent and accountable from an early stage.
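
As a concrete illustration, the sketch below shows one way such a step-by-step, dialogue-based assessment could be captured in software. It is a minimal sketch under our own assumptions: the dimensions, questions and field names are hypothetical and do not reproduce the actual EQTA questionnaire.

```python
from dataclasses import dataclass, field

# Hypothetical representation of a dialogue-based assessment step;
# the dimensions and questions are illustrative, not the EQTA text.
@dataclass
class AssessmentStep:
    dimension: str                      # e.g. "ethical", "legal", "social"
    question: str                       # prompt for the stakeholder dialogue
    findings: list[str] = field(default_factory=list)

def report(steps: list[AssessmentStep]) -> dict[str, list[str]]:
    """Group documented findings per dimension for the final report."""
    grouped: dict[str, list[str]] = {}
    for step in steps:
        grouped.setdefault(step.dimension, []).extend(step.findings)
    return grouped

steps = [
    AssessmentStep("ethical", "Could the system be repurposed for harmful ends?"),
    AssessmentStep("legal", "Which existing rules apply to the quantum-classical hybrid?"),
    AssessmentStep("social", "Which stakeholder groups are affected, and were they heard?"),
]
steps[0].findings.append("Dual-use risk identified; mitigation required.")
print(report(steps))
```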

Looking forward, establishing a risk-based legal-ethical framework in combination with standardization, certification, technology impact assessment, and life-cycle auditing of quantum-driven systems is crucial to stewarding society towards responsible quantum innovation. Mauritz Kop’s research group has written more on this framework in their seminal article Towards Responsible Quantum Technology (Harvard).

Read more
EU Artificial Intelligence Act: The European Approach to AI

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021

New Stanford tech policy research: “EU Artificial Intelligence Act: The European Approach to AI”.

Download the article here: Kop_EU AI Act: The European Approach to AI

EU regulatory framework for AI

On 21 April 2021, the European Commission presented the Artificial Intelligence Act. This Stanford Law School contribution lists the main points of the proposed regulatory framework for AI.

The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust while respecting democratic values, human rights and the rule of law. The draft regulation sets out core horizontal rules for the development, commodification and use of AI-driven products, services and systems within the territory of the EU, applicable to all industries.

Legal sandboxes fostering innovation

The EC aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. This is ensured by introducing various flexibilities, including the application of legal sandboxes that afford breathing room to AI developers.

Sophisticated ‘product safety regime’

The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around four risk categories: unacceptable, high, limited and minimal risk. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to machine learning training, testing and validation datasets.

Pyramid of criticality

The draft AI Act combines a risk-based approach, built on this pyramid of criticality, with a modern, layered enforcement mechanism. Among other things, this means that a lighter legal regime applies to AI applications posing only negligible risk, that requirements become stricter as risk increases, and that applications posing an unacceptable risk are banned outright.
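
As a rough illustration of this layered logic, the sketch below models the pyramid of criticality as a mapping from risk tier to regulatory consequence. The tier names follow the draft's four categories; the obligation descriptions are simplified paraphrases, not the legal text.

```python
from enum import Enum

# Illustrative sketch of the pyramid of criticality, assuming a
# simplified four-tier taxonomy; the obligation texts are paraphrases,
# not the wording of the draft regulation.
class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

REGIME = {
    RiskTier.MINIMAL: "lighter regime: no new obligations, voluntary codes of conduct",
    RiskTier.LIMITED: "transparency duties (e.g. disclosing that a chatbot is an AI)",
    RiskTier.HIGH: "market entry only after conformity assessment and CE marking",
    RiskTier.UNACCEPTABLE: "banned from the EU market",
}

def regime(tier: RiskTier) -> str:
    """Stricter regulation applies as the risk level increases."""
    return REGIME[tier]

print(regime(RiskTier.HIGH))
```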

Enforcement at both Union and Member State level

The draft regulation provides for the establishment of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisors, similar to the GDPR’s oversight mechanism. For the most serious violations, fines can run up to 30 million euros or, for companies, up to 6% of total worldwide annual turnover, whichever is higher.
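
In arithmetic terms, the ceiling for the most serious infringements is simply the higher of the two amounts; a minimal sketch:

```python
# Minimal sketch of the fine ceiling for the most serious infringements
# under the 2021 draft: 30 million EUR or 6% of total worldwide annual
# turnover, whichever is higher (simplified; the draft distinguishes
# several infringement categories with lower ceilings).
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# A company with 2 billion EUR turnover faces a ceiling of 120 million EUR:
print(max_fine_eur(2_000_000_000))  # 120000000.0
```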

CE-marking for High-Risk AI Systems

In line with my recommendations, Article 49 of the Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires AI-infused products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive the CE marking that allows them to enter the European markets. This pre-market conformity mechanism works in the same manner as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).

Trustworthy AI by Design: ex ante and life-cycle auditing

Responsible, trustworthy AI by design requires awareness from all parties involved, from the first line of code onward. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct. These tools are executed by inclusive, multidisciplinary teams that use them to monitor, validate and benchmark AI systems. Ultimately, it will all come down to ex ante and life-cycle auditing.
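
The sketch below illustrates one possible way to record such life-cycle auditing: an ex ante checkpoint at design time followed by scheduled re-validation. All names and fields are hypothetical, offered only to make the auditing cadence tangible.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical audit record: one entry per checkpoint in the system's
# life cycle, from the first line of code to decommissioning.
@dataclass
class AuditEntry:
    checkpoint: str          # e.g. "ex-ante design review", "pre-market", "post-market"
    performed_on: date
    team: list[str]          # multidisciplinary team (law, ethics, engineering, ...)
    passed: bool

def next_audit_due(last: AuditEntry, interval_days: int = 365) -> date:
    """Life-cycle auditing: schedule the next periodic re-validation."""
    return last.performed_on + timedelta(days=interval_days)

entry = AuditEntry("ex-ante design review", date(2021, 6, 1),
                   ["legal", "ethics", "ML engineering"], passed=True)
print(next_audit_due(entry))  # 2022-06-01
```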

The new European rules will permanently change the way AI is developed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.

Read more
Democratic Countries Should Form a Strategic Tech Alliance

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 1/2021

New Stanford innovation policy research: “Democratic Countries Should Form a Strategic Tech Alliance”.

Download the article here: Kop_Democratic Countries-Strategic Tech Alliance-Stanford Law

Exporting values into society through technology

China’s relentless advance in Artificial Intelligence (AI) and quantum computing has engendered significant anxiety about the future of America’s technological supremacy. The resulting debate centres on the impact of China’s digital rise on the economy, security, employment and the profitability of American companies. Absent from these predominantly economic disquiets is what should be a deeper, existential concern: what are the effects of authoritarian regimes exporting their values into our society through their technology? This essay addresses that question by examining how democratic countries can, or should, respond, and what you can do to influence the outcome.

Towards a global responsible technology governance framework

The essay argues that democratic countries should form a global, broadly scoped Strategic Tech Alliance, built on mutual economic interests and common moral, social and legal norms, technological interoperability standards, legal principles and constitutional values: an Alliance committed to safeguarding democratic norms, as enshrined in the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). The US, the EU and its democratic allies should join forces with countries that share our digital DNA, institute fair reciprocal trading conditions, and establish a global responsible technology governance framework that actively pursues democratic freedoms, human rights and the rule of law.

Two dominant tech blocks with incompatible political systems

Currently, two dominant tech blocks exist that have incompatible political systems: the US and China. The competition for AI and quantum ascendancy is a battle between ideologies: liberal democracy mixed with free market capitalism versus authoritarianism blended with surveillance capitalism. Europe stands in the middle, championing a legal-ethical approach to tech governance.

Democratic, value-based Strategic Tech Alliance

The essay discusses the political feasibility of cooperation along transatlantic lines and examines arguments against the formation of a democratic, value-based Strategic Tech Alliance that would set global technology standards. It then weighs the advantages of establishing an Alliance that aims to win the race for democratic technological supremacy against its disadvantages, unintended consequences and the harms of doing nothing.

Democracy versus authoritarianism: sociocritical perspectives

Further, the essay approaches the identified challenges in the ‘democracy versus authoritarianism’ debate from other, sociocritical perspectives, and asks whether we are democratic enough ourselves.

How Fourth Industrial Revolution (4IR) technology is shaping our lives

The essay maintains that technology shapes our everyday lives, and that the way in which we design and utilize our technology influences nearly every aspect of the society we live in. Technology is never neutral. The essay describes regulating emerging technology as an unending endeavour that follows the lifespan of the technology and its implementation. In addition, it discusses how democratic countries should construct regulatory solutions tailored to the exponential pace of sustainable innovation in the Fourth Industrial Revolution (4IR).

Preventing authoritarianism from gaining ground

The essay concludes that to prevent authoritarianism from gaining ground, governments should do three things: (1) inaugurate a Strategic Tech Alliance, (2) set worldwide core rules, interoperability & conformity standards for key 4IR technologies such as AI, quantum and Virtual Reality (VR), and (3) actively embed our common democratic norms, principles and values into the architecture and infrastructure of our technology.

Read more
Regulating Transformative Technology in The Quantum Age: Intellectual Property, Standardization & Sustainable Innovation

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2020

New Stanford cutting-edge tech law research: “Regulating Transformative Technology in The Quantum Age: Intellectual Property, Standardization & Sustainable Innovation”.

Download the article here: Kop_Regulation Standardization Innovation Quantum Age-Stanford Law

Quantum technology has many legal aspects

The behavior of nature at the smallest scale can be strange and counterintuitive. In addition to unique physical characteristics, quantum technology has many legal aspects. In this article, we first explain what quantum technology entails. Next, we discuss implementation and areas of application, including quantum computing, quantum sensing and the quantum internet. Through an interdisciplinary lens, we then focus on intellectual property (IP), standardization, ethical, legal & social aspects (ELSA) as well as horizontal & industry-specific regulation of this transformative technology.

The Quantum Age raises many legal questions

The Quantum Age raises many legal questions. For example, which existing legislation applies to quantum technology? What types of IP rights can be vested in the components of a scalable quantum computer? Are there sufficient market-set innovation incentives for the development and dissemination of quantum software and hardware structures? Or is there a need for open source ecosystems, enrichment of the public domain and even democratization of quantum technology? Should we create global quantum safety, security and interoperability standards and make them mandatory in each area of application? In what way can quantum technology enhance artificial intelligence (AI) in a manner that is legal, ethical and technically robust?

Regulating quantum computing, quantum sensing & the quantum internet

How can policy makers realize these objectives and regulate quantum computing, quantum sensing and the quantum internet in a socially responsible manner? Regulation that addresses risks proportionally, whilst optimizing the benefits of this cutting-edge technology? Without hindering sustainable innovation, including the apportionment of rights, responsibilities and duties of care? What are the effects of standardization and certification on innovation, intellectual property, competition and market entrance of quantum startups?

The article explores possible answers to these tantalizing questions.

Read more