Innovation, Quantum-AI Technology & Law

Blog on Artificial Intelligence, Quantum, Deep Learning, Blockchain and Big Data Law

Blog on the legal, social, ethical and policy aspects of Artificial Intelligence, Quantum Computing, Sensing & Communication, Augmented Reality and Robotics, Big Data legislation and Machine Learning regulation. Knowledge articles on the EU AI Act, the Data Governance Act, cloud computing, algorithms, privacy, virtual reality, blockchain, robot law, smart contracts, information law, ICT contracts, online platforms, apps and tools. European rules, copyright, chip rights, database rights and legal services in AI law.

Posts in Artificial Intelligence
Quantum Event Horizon: Addressing the Quantum-AI Control Problem through Quantum-Resistant Constitutional AI

What happens when AI becomes not just superintelligent, but quantum-superintelligent? QAI agents with both classical and quantum capabilities? How do we ensure we remain in control?

This is the central question of my new article, where I introduce the concept of the Quantum Event Horizon to frame the urgency of the QAI control problem. As we near this point of no return, the risk of losing control to misaligned systems—machines taking over, or being weaponized—becomes acute.

Simple guardrails are not enough. The solution must be architectural. I propose a new paradigm: Quantum-Resistant Constitutional AI, a method for engineering our core values into the foundation of QAI itself. This is a crucial discussion for policymakers, researchers, builders, and industry leaders.

Navigating the Quantum Event Horizon

This paper addresses the impending control problem posed by the synthesis of quantum computing and artificial intelligence (QAI). It posits that the emergence of potentially superintelligent QAI agents creates a governance challenge that is fundamentally different from and more acute than those posed by classical AI. Traditional solutions focused on technical alignment are necessary but insufficient for the novel risks and capabilities of QAI. The central thesis is that navigating this challenge requires a paradigm shift from reactive oversight to proactive, upfront constitutional design.

The core of the argument is framed by the concept of the ‘Quantum Event Horizon’—a metaphorical boundary beyond which the behavior, development, and societal impact of QAI become computationally opaque and practically impossible to predict or control using conventional methods. Drawing on the Collingridge dilemma and the Copenhagen interpretation, this concept highlights the risk of a "point of no return," where technological lock-in, spurred by a "ChatGPT moment" for quantum, could cement irreversible geopolitical realities, empower techno-authoritarianism, and present an unmanageable control problem (the risk of machines taking over). Confronting this requires a new philosophy for governing non-human intelligence.

Machines Taking Over

The urgency is magnified by a stark geopolitical context, defined by a Tripartite Dilemma between the existential safety concerns articulated by figures like Geoffrey Hinton, the geopolitical security imperative for rapid innovation voiced by Eric Schmidt, and the builder’s need to balance progress with safety, as expressed by Demis Hassabis. This dilemma is enacted through competing global innovation models: the permissionless, market-driven US system; the state-led, top-down Chinese system; and the values-first, deliberative EU model. In this winner-takes-all race, the first actor to achieve a decisive QAI breakthrough could permanently shape global norms and our way of life.

An Atomic Agency for Quantum-AI

Given these stakes, current control paradigms like human-in-the-loop oversight are inadequate. The speed and complexity of QAI render direct human control impossible, a practical manifestation of crossing the Quantum Event Horizon. Therefore, governance must be multi-layered, integrating societal and institutional frameworks. This includes establishing an "Atomic Agency for Quantum-AI" for international oversight and promoting Responsible Quantum Technology (RQT) by Design, guided by principles such as those outlined in our '10 Principles for Responsible Quantum Innovation' article. These frameworks must be led by robust public governance—as corporate self-regulation is insufficient due to misaligned incentives—and must address the distributive justice imperative to prevent a "Quantum Divide."

Towards Quantum-Resistant Constitutional AI

The cornerstone of our proposed solution is Quantum-Resistant Constitutional AI. This approach argues that if we cannot control a QAI agent tactically, we must constrain it architecturally. It builds upon the concept of Constitutional AI by designing a core set of ethical and safety principles (a 'constitution') that are not merely trained into the model but are formally verified and made robust against both classical and quantum-algorithmic exploitation. By hardwiring this quantum-secure constitution into the agent's core, we can create a form of verifiable, built-in control that is more likely to endure as the agent's intelligence scales.
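To make the architectural idea concrete, here is a minimal, purely illustrative sketch (all names and rules are hypothetical): a 'constitution' whose integrity is verified before any action is permitted, so that tampering is detectable at the architectural level rather than the training level. A simple SHA3-256 digest stands in for the formal verification and quantum-resistant signature scheme a real system would require.

```python
import hashlib

# Hypothetical illustration: constitutional rules fixed at deployment time.
CONSTITUTION = (
    "Preserve meaningful human control",
    "Refuse actions that cause irreversible societal harm",
    "Report capability gains to the oversight body",
)
# Integrity fingerprint computed once at deployment.
CONSTITUTION_DIGEST = hashlib.sha3_256("\n".join(CONSTITUTION).encode()).hexdigest()

def constitution_intact(rules) -> bool:
    """Verify the rules have not been altered since deployment."""
    return hashlib.sha3_256("\n".join(rules).encode()).hexdigest() == CONSTITUTION_DIGEST

def propose_action(action: str, rules=CONSTITUTION) -> str:
    """Gate every proposed action behind the integrity check and the rules."""
    if not constitution_intact(rules):
        return "HALT: constitution integrity check failed"
    if "disable oversight" in action.lower():
        return "REFUSED: violates constitutional rule"
    return f"PERMITTED: {action}"

print(propose_action("optimize chip layout"))
print(propose_action("Disable oversight channel"))
```

The point of the sketch is the ordering: the integrity check runs before any rule is consulted, so a QAI agent cannot satisfy a rewritten constitution.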

Self-Aware Quantum-AI Agents

Looking toward more speculative futures, the potential for a Human-AI Merger or the emergence of a QAI Hive Mind—a networked, non-human consciousness enabled by quantum entanglement—represents the ultimate challenge and the final crossing of the Quantum Event Horizon. The foundational governance work we do today, including projects like Quantum-ELSPI, is the essential precursor to navigating these profound transformations.

In conclusion, this paper argues that for the European Union, proactively developing and implementing a framework centered on Quantum-Resistant Constitutional AI is not just a defensive measure against existential risk. It is a strategic necessity to ensure that the most powerful technology in human history develops in alignment with democratic principles, securing the EU’s role as a global regulatory leader in the 21st century.

Read more
Music Law and Artificial Intelligence: From Cloned Artists to AI-Generated Works

The rise of artificial intelligence (AI) in the music industry is sparking a revolution, profoundly changing how music is created. This development raises complex legal questions concerning AI and copyright, including related rights. How can we protect the creative rights of artists and composers while simultaneously allowing room for technological innovation? In this comprehensive yet accessible legal overview, we explore key issues regarding AI and music. These include whether AI can legally train on copyrighted materials without consent, TDM exceptions, how various rights organizations (such as Buma/Stemra and Sena) approach AI, the status of AI-generated musical works, the threshold of human creativity required, protection against AI voice cloning via privacy laws and moral rights, contractual implications, new obligations under the EU AI Act, differences between European and American law, and ongoing lawsuits. This article is tailored for artists, composers, music publishers, labels, voice actors, producers, and AI companies seeking clarity on their legal standing.

AI Training on Protected Music and Video Materials: Legal Framework and Debate

Can an AI model in the Netherlands and the EU train on copyrighted material (such as music or video) without permission from the rights holders? Generally, using protected material beyond private use or citation requires permission. Scraping or using data for AI training without permission is typically considered infringement unless a specific legal exception applies.

Buma/Stemra’s Opt-Out Policy

In the Netherlands, Buma/Stemra explicitly uses its opt-out rights, requiring prior consent for TDM on its repertoire, thus ensuring fair compensation for composers and lyricists.

EU AI Act: Transparency Obligations and System Monitoring

The EU AI Act, whose transparency obligations for providers of general-purpose AI models apply from August 2025, obliges developers of generative AI models to:

• Publish a sufficiently detailed summary of the content used for training, including copyrighted music or texts.

• Maintain policies ensuring compliance with EU copyright law.

• Respect explicit opt-out signals from rights holders during training.

The Act doesn't prohibit using protected material for training outright but enforces transparency and compliance through oversight and penalties.
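As an illustration of what honouring a machine-readable opt-out can look like in practice, the sketch below checks a rights holder's robots.txt before collecting training material. It is a simplified example: opt-outs under EU law can be expressed in several machine-readable ways, and the bot names here are hypothetical.

```python
from urllib import robotparser

# Hypothetical robots.txt of a rights holder opting out of AI training crawlers.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

def may_crawl_for_training(robots_txt: str, agent: str, page: str) -> bool:
    """Check the machine-readable opt-out before collecting training material."""
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, page)

print(may_crawl_for_training(ROBOTS_TXT, "ExampleAIBot", "/music/track.mp3"))  # False: opted out
print(may_crawl_for_training(ROBOTS_TXT, "SearchBot", "/music/track.mp3"))     # True
```

A compliant pipeline would run a check like this (plus any other opt-out signals the rights holder uses) before each item enters the training set.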

Composition, Lyrics, and Master Recordings: Different Rights Regimes

Music rights in the Netherlands broadly split into:

Copyright: Protects compositions and lyrics, managed by organizations like Buma/Stemra.

Neighboring Rights: Protect recordings and performances, managed by Sena.

AI-Generated Compositions and Lyrics

Completely AI-generated works often fail to meet traditional copyright criteria, as human creativity is essential.

Neighboring Rights

It remains uncertain whether AI-generated performances and recordings attract neighboring rights, as these typically rely on human involvement.

Copyright Status of AI-Generated Music

In the U.S., fully AI-generated works explicitly do not receive copyright protection. While European law has not addressed the point explicitly, the prevailing legal view aligns with this stance: AI-generated works likely fall into the public domain unless significant human creativity is involved.

Hybrid Creations

Music combining human and AI input may qualify for copyright protection depending on the human creative contribution's significance.

AI Voice Cloning: Personality Rights and Privacy

AI voice cloning technology poses challenges regarding personal rights and privacy. Artists may invoke:

• Privacy rights under EU law (Article 8 ECHR).

• Personality rights.

• Potential trademark and image rights analogously.

The EU AI Act mandates transparency in AI-generated content, aiming to mitigate unauthorized use and deepfake concerns.

Music Contracts in the AI Era

Existing music contracts require updates addressing AI-specific matters:

• Explicit licensing terms for AI training.

• Ownership clarity of AI-generated content.

• Liability assignment for copyright infringements involving AI.

Conclusion: Balancing Innovation and Rights—Be Prepared

The intersection of AI and music law presents both opportunities and challenges. Stakeholders should proactively:

• Clearly define rights in AI-generated music contractually and update existing music contracts.

• Specify permissions (licenses) and restrictions (opt-out) regarding AI training explicitly.

• Seek specialized music & AI legal advice to navigate evolving regulations.

By strategically addressing these issues, artists, companies, and AI developers can legally and effectively harness AI innovations, maintaining both creative and commercial control.

Read more
Hoover Institution Invites Mauritz Kop to Speak on Quantum, Democracy and Authoritarianism

Professor Mauritz Kop Addresses Quantum Technology's Role in the Era of Digital Repression at Hoover Institution Workshop

Palo Alto, CA – April 22, 2024 – Professor Mauritz Kop, Founding Director of the Stanford Center for Responsible Quantum Technology (RQT), delivered insightful opening remarks at a breakout session on Quantum Technology as part of the two-day closed-door workshop, "Getting Ahead of Digital Repression: Authoritarian Innovation and Democratic Response." The workshop, held on April 22-23, 2024, at the Hoover Institution, Stanford University, was a collaborative effort by the National Endowment for Democracy’s International Forum for Democratic Studies, Stanford University’s Global Digital Policy Incubator, and the Hoover Institution’s China’s Global Sharp Power Project.

The event convened leading researchers and advocates to map how digital authoritarians are innovating globally and to identify new strategies for ongoing knowledge-sharing and cooperation to confront this deepening challenge. The agenda focused on understanding how autocrats leverage emerging technologies—from AI and digital currencies to quantum technology—for social control, censorship, and to export their governance models.

Guardrails Against Digital Authoritarianism

Professor Kop's address served as a crucial discussion starter for the breakout session, which aimed to brainstorm how advances in quantum technology might alter the dynamics of the struggle against digital authoritarianism and to explore potential guardrails. His remarks underscored the profound societal impact of quantum technologies and the imperative for proactive, principles-based governance to ensure they are developed and deployed responsibly, safeguarding human rights and democratic values on a global scale.

Read more
Why Quantum Computing Is Even More Dangerous Than Artificial Intelligence (Foreign Policy)

Washington DC, August 21, 2022. Foreign Policy just published an article about regulating quantum technology authored by Vivek Wadhwa and Mauritz Kop. https://foreignpolicy.com/2022/08/21/quantum-computing-artificial-intelligence-ai-technology-regulation/

United States and other democratic nations must prepare for tomorrow's quantum era today

To avoid the ethical problems that went so horribly wrong with AI and machine learning, democratic nations need to institute controls that both correspond to the predicted power of the emerging suite of second-generation quantum technologies, and respect & reinforce democratic values, human rights, and fundamental freedoms. In fact, the quantum community itself has issued a call for action to immediately address these matters. We argue that governments must urgently begin to think about regulation, standards, and responsible use—and learn from the way countries handled or mishandled other revolutionary technologies, including AI, nanotechnology, biotechnology, semiconductors, and nuclear fission. Benefits and increased quantum-driven prosperity should be equitably shared among members of society, and risks equally distributed. The United States and other democratic nations must not make the same mistake they made with AI—and prepare for tomorrow's quantum era today.

Read more
Intellectual Property in Quantum Computing and Market Power: A Theoretical Discussion and Empirical Analysis (Oxford University Press)

Delighted to see our article ‘Intellectual Property in Quantum Computing and Market Power: A Theoretical Discussion and Empirical Analysis’, co-authored with my talented friends Prof. Mateo Aboy, PhD, SJD, FIT and Prof. Timo Minssen, published in the Journal of Intellectual Property Law & Practice (Oxford University Press), OUP's flagship peer-reviewed IP journal, edited by Prof. Eleonora Rosati. Thanks to the JIPLP team for excellent editorial support! Our article: https://academic.oup.com/jiplp/article/17/8/613/6646536

This piece is the sister paper of our Max Planck @ Springer Nature published article titled ‘Mapping the Patent Landscape of Quantum Technologies: Patenting Trends, Innovation and Policy Implications’, which we wrote in parallel. The IIC quantum-patent study can be found here: https://link.springer.com/article/10.1007/s40319-022-01209-3. Our teamwork was absolutely gratifying and we hope it will inform strategic, evidence-based transatlantic policy making.

IP and Antitrust Law

Please find a short synopsis of our work below:

We are on the verge of a technological revolution associated with quantum technologies, including quantum computing and quantum/artificial intelligence hybrids. Its complexity and global significance are creating potential innovation distortions, which could not have been foreseen when current IP and antitrust systems were developed.

Potential IP Overprotection

Using quantitative methods, we investigated our hypothesis that IP overprotection requires a reform of existing IP regimes for quantum tech, to avoid or repair IP thickets, fragmented exclusionary rights and anticommons concerns, lost opportunity costs, and an unwanted concentration of market power.

Perhaps counter-intuitively, we found that, at least so far, there appear to be no overprotection problems in the real-world quantum computing field severe enough to hinder exponential innovation in this specific branch of applied quantum technology, as more and more quantum patent information enters the public domain.

Patents versus Trade Secrets and State Secrets

However, developments taking place in secrecy, whether under trade secrets or state secrets, remain the Achilles' heel of our empirical approach, as information about these innovations is not represented in our dataset, and thus cannot be observed, replicated or generalized.

Interplay between IP and Antitrust Law: Open or Closed Innovation Systems

Policymakers should urgently decide whether to push for open or closed innovation systems, including the interplay between IP and antitrust law, taking into account dilemmas pertaining to equal and equitable access to benefits, risk control, ethics, and overall societal impact. Crucially, intellectual property in quantum technology has a national safety and (cyber)security dimension that often lies beyond the IP toolkit.

Read more
Montreal World Summit AI 2022 Features Mauritz Kop Keynote on EU AI Act

Montreal, Canada – May 4, 2022 – Today, at the prestigious World Summit AI Americas held at the Palais des congrès, Mauritz Kop, TTLF Fellow at Stanford Law School and Director of AIRecht, provided a concise overview of the proposed EU Artificial Intelligence Act. He was a featured panellist in a critical discussion titled, "Does the proposed EU Artificial Intelligence Act provide a regulatory framework for AI that should be adopted globally?". The summit, themed "AI with impact: for crisis response and business continuity and recovery," brings together leading AI brains and enterprise leaders.

Mr. Kop joined fellow distinguished panellists Professor Gillian Hadfield from the University of Toronto and Dr. José-Marie Griffiths, President of Dakota State University and former NSCAI Commissioner. The session was moderated by Meredith Broadbent, Former Chairman of the U.S. International Trade Commission and Senior Adviser at CSIS.

Novel Legal Framework for AI

During the panel, Mr. Kop outlined the main points of the novel legal framework for AI presented by the European Commission on April 21, 2021. He explained that the EU AI Act sets out horizontal rules applicable to all industries for the development, commodification, and use of AI-driven products, services, and systems within the EU's territory.

A core component of the Act is its sophisticated ‘product safety framework’, which is constructed around four distinct risk categories in a "pyramid of criticality". This risk-based approach dictates that AI applications with unacceptable risks are banned, while lighter legal regimes apply to low-risk applications. As the risk level increases, so does the stringency of the rules, ranging from non-binding self-regulation and impact assessments for lower-risk systems to potentially heavy, externally audited compliance requirements throughout the lifecycle of high-risk AI systems.
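The pyramid of criticality can be sketched as a simple mapping from risk tier to legal consequence. This is a schematic only: the tier descriptions are paraphrased, and the example systems are illustrative, not classifications under the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Paraphrased consequences per tier of the Act's pyramid of criticality."""
    UNACCEPTABLE = "banned from the EU market"
    HIGH = "pre-market conformity assessment + CE marking + lifecycle audits"
    LIMITED = "transparency obligations (e.g. disclose AI interaction)"
    MINIMAL = "voluntary codes of conduct"

# Illustrative examples only; actual classification follows the Act's annexes.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening software for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```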

EU "Trustworthy AI" Paradigm

Mr. Kop emphasized that the Act aims to codify the high standards of the EU’s "trustworthy AI" paradigm, which mandates that AI systems must be legal, ethical, and technically robust, all while respecting democratic values, human rights, and the rule of law. A crucial aspect highlighted was the requirement for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. This pre-market conformity regime also extends to the machine learning training, testing, and validation datasets used by these systems. Only after a declaration of conformity is signed and the CE marking is affixed can these high-risk systems enter and be traded on the European markets.

Enforcement will be managed by a new Union-level body, the European Artificial Intelligence Board (EAIB), supported by national supervisors in each Member State, similar to the GDPR's oversight structure. Mr. Kop noted the seriousness of non-compliance, with potential fines reaching up to 6% of a company's global turnover.

Balancing regulation with innovation, the EU AI Act also introduces legal sandboxes. These are designed to provide AI developers with "breathing room" to test new inventions and foster a flourishing AI ecosystem in Europe.

Read more
Mauritz Kop Lecturer AI Regulation and Intellectual Property Law at CEIPI, University of Strasbourg

Strasbourg, France – We are pleased to feature insights from a lecture on "Intellectual Property and Ownership of AI Input and Output Data" delivered by Professor Mauritz Kop at the Centre for International Intellectual Property Studies (CEIPI), University of Strasbourg. This session was part of the University Diploma in Artificial Intelligence and Intellectual Property.

Rights and responsibilities pertaining to AI and data

Professor Kop, a Fellow at Stanford University and a strategic IP lawyer, shared his expertise on the rights and responsibilities pertaining to AI and data, offering both theoretical perspectives and practical tips reflecting the current state of technological and legal development. The lecture aimed to equip attendees with a bird's-eye view of the intertwined key elements of this multidimensional topic.

AI, data governance, and intellectual property law

Professor Kop's session underscored the dynamic interplay between AI advancement, data governance, and intellectual property law. It highlighted the necessity for legal professionals to be "double or triple educated" to navigate this complex field and for ongoing efforts to create legal frameworks that foster responsible innovation while addressing societal and ethical considerations.

The lecture concluded by stressing that AI literacy and awareness, continuous learning, and proactive legal strategies are essential for all stakeholders in the AI ecosystem.

Read more
EU Artificial Intelligence Act: The European Approach to AI

Stanford - Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021

New Stanford tech policy research: “EU Artificial Intelligence Act: The European Approach to AI”.

Download the article here: Kop_EU AI Act: The European Approach to AI

EU regulatory framework for AI

On 21 April 2021, the European Commission presented the Artificial Intelligence Act. This Stanford Law School contribution lists the main points of the proposed regulatory framework for AI.

The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust, while respecting democratic values, human rights and the rule of law. The draft regulation sets out core horizontal rules for the development, commodification and use of AI-driven products, services and systems within the territory of the EU, that apply to all industries.

Legal sandboxes fostering innovation

The EC aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. This is ensured by introducing various flexibilities, including the application of legal sandboxes that afford breathing room to AI developers.

Sophisticated ‘product safety regime’

The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around a set of 4 risk categories. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to machine learning training, testing and validation datasets.

Pyramid of criticality

The AI Act draft combines a risk-based approach based on the pyramid of criticality, with a modern, layered enforcement mechanism. This means, among other things, that a lighter legal regime applies to AI applications with a negligible risk, and that applications with an unacceptable risk are banned. Stricter regulations apply as risk increases.

Enforcement at both Union and Member State level

The draft regulation provides for the installation of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisors, similar to the GDPR’s oversight mechanism. Fines for violation of the rules can be up to 6% of global turnover, or 30 million euros for private entities.

CE-marking for High-Risk AI Systems

In line with my recommendations, Article 49 of the Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires AI-infused products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive a CE marking that allows them to enter the European markets. This pre-market conformity mechanism works in the same manner as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).

Trustworthy AI by Design: ex ante and life-cycle auditing

Responsible, trustworthy AI by design requires awareness from all parties involved, from the first line of code. Indispensable tools to facilitate this awareness process are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct. These tools are executed by inclusive, multidisciplinary teams that use them to monitor, validate and benchmark AI systems. It will all come down to ex ante and life-cycle auditing.
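A minimal sketch of what such life-cycle auditing could look like in software, assuming a hypothetical record format and check names: a high-risk system is cleared for CE marking only once every required assessment has been logged as passed.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only; field names and check names are hypothetical.
@dataclass
class AuditRecord:
    system_id: str
    check: str          # e.g. "dataset validation", "bias assessment"
    passed: bool
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def ready_for_ce_marking(records: list[AuditRecord], required: set[str]) -> bool:
    """Ex ante gate: every required check must exist in the log and have passed."""
    passed = {r.check for r in records if r.passed}
    return required <= passed

records = [
    AuditRecord("hr-screening-v2", "dataset validation", True),
    AuditRecord("hr-screening-v2", "bias assessment", True),
]
print(ready_for_ce_marking(records, {"dataset validation", "bias assessment"}))  # True
print(ready_for_ce_marking(records, {"dataset validation", "robustness test"}))  # False
```

The same log would then continue to accumulate records after market entry, which is what makes the auditing life-cycle rather than one-off.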

The new European rules will forever change the way AI is formed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.

Read more
Cyber Week 2021 Tel Aviv University Israel

AIRecht Director Mauritz Kop will speak at Cyber Week 2021 Tel Aviv University Israel, and participate in the Panel 'Debating Collective Cyber Defense for Democracies'. He will present his Stanford essay ‘Democratic Countries Should Form a Strategic Tech Alliance’ on July 22nd at 20:00 Israel time, see: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3814409

Cyber Week 2021 hosts a range of distinguished speakers from across the globe, including the Prime Minister of Israel Naftali Bennett, see: https://cw2021.b2b-wizard.com/expo/speakers

Debating Collective Cyber Defense for Democracies

Line-up and speakers of the ‘Debating Collective Cyber Defense for Democracies’ panel (notice the strong Dutch@Stanford representation):

Keynote: Ambassador Heli Tiirmaa-Klaar, Ambassador-at-Large for Cyber Diplomacy at the Estonian Ministry of Foreign Affairs

Lectures by:

Prof. Chris Demchak, Strategic and Operational Research Department, U.S. Naval War College

Dr. Lior Tabansky, Ph.D., (Moderator), Head of Research Development, Blavatnik Interdisciplinary Cyber Research Center, Tel Aviv University

Mauritz Kop, Stanford Law School TTLF Fellow, Founder of MusicaJuridica, and Strategic Intellectual Property Lawyer at AIRecht

Marietje Schaake, International Policy Director at the Cyber Policy Center; International Policy Fellow at the Institute for Human-Centered Artificial Intelligence, Stanford University

See the complete agenda at: https://cw2021.b2b-wizard.com/expo/agenda

Democratic Countries Should Form a Strategic Tech Alliance

Kop’s essay titled ‘Democratic Countries Should Form a Strategic Tech Alliance’ concludes that to prevent authoritarianism from gaining ground, democratic governments should do four things: (1) inaugurate a Strategic Tech Alliance, (2) set worldwide core rules, interoperability & conformity standards for key 4IR technologies such as AI, quantum, 6G and Virtual Reality (VR), (3) win the race for 4IR technology supremacy, and (4) actively embed our common democratic norms, principles and values into the architecture and infrastructure of our technology.

REGISTER for the conference following the link: https://cw2021.b2b-wizard.com/expo/home

Read more
Quantum Computing and Intellectual Property Law

Berkeley Technology Law Journal, Vol. 35, No. 3, 2021

New Stanford University Beyond IP Innovation Law research article: “Quantum Computing and Intellectual Property Law”.

By Mauritz Kop

Citation: Kop, Mauritz, Quantum Computing and Intellectual Property Law (April 8, 2021). Berkeley Technology Law Journal 2021, Vol. 35, No. 3, pp 101-115, February 8, 2022, https://btlj.org/2022/02/quantum-computing-and-intellectual-property-law/

Download the article here: Kop_QC and IP Law BTLJ

Please find a short abstract below:

Intellectual property (IP) rights & the Quantum Computer

What types of intellectual property (IP) rights can be vested in the components of a scalable quantum computer? Are there sufficient market-set innovation incentives for the development and dissemination of quantum software and hardware structures? Or is there a need for open source ecosystems, enrichment of the public domain and even democratization of quantum technology? The article explores possible answers to these tantalizing questions.

IP overprotection leads to exclusive exploitation rights for first movers

The article demonstrates that strategically using a mixture of IP rights to maximize the value of the IP portfolio of the quantum computer’s owner potentially leads to IP protection in perpetuity. Overlapping IP protection regimes can result in unlimited duration of global exclusive exploitation rights for first movers, namely a handful of universities and large corporations. The ensuing IP overprotection in the field of quantum computing leads to an unwanted concentration of market power. Overprotection of information causes market barriers and hinders both healthy competition and industry-specific innovation. In this particular case it slows down progress in an important application area of quantum technology, namely quantum computing.

Fair competition and antitrust laws for quantum technology

In general, our current IP framework is not written with quantum technology in mind. IP should be an exception, limited in time and scope, to the rule that information goods can be used for the common good without restraint. IP law cannot simultaneously incentivize creation, prevent market failure, fix winner-takes-all effects, eliminate free riding and prohibit predatory market behavior. To encourage fair competition and correct market skewness, antitrust law is the instrument of choice.

Towards an innovation architecture that mixes freedom and control

The article proposes a solution tailored to the exponential pace of innovation in the Quantum Age by introducing shorter IP protection durations of 3 to 10 years for Quantum- and AI-infused creations and inventions. These shorter terms could be made applicable to both the software and the hardware side of things. Clarity about the recommended limited durations of exclusive rights, in combination with compulsory licenses or fixed-price statutory licenses, encourages legal certainty, knowledge dissemination and follow-on innovation within the quantum domain. In this light, policy makers should build an innovation architecture that mixes freedom (e.g. access, public domain) and control (e.g. incentive & reward mechanisms).

Creating a thriving global quantum ecosystem

The article concludes that, anticipating spectacular advancements in quantum technology, the time is ripe for governments, research institutions and the markets to prepare regulatory and IP strategies that strike the right balance between safeguarding our fundamental rights & freedoms and our democratic norms & standards, and pursuing policy goals that include rapid technology transfer, the free flow of information and the creation of a thriving global quantum ecosystem, while encouraging healthy competition and incentivizing sustainable innovation.

Read more