Innovation, Technology & Law

Blog on Artificial Intelligence, Quantum, Deep Learning, Blockchain and Big Data Law

Blog on the legal, social, ethical and policy aspects of Artificial Intelligence, Quantum Computing, Sensing & Communication, Augmented Reality and Robotics, Big Data legislation and Machine Learning regulation. Knowledge articles on the EU AI Act, the Data Governance Act, cloud computing, algorithms, privacy, virtual reality, blockchain, robot law, smart contracts, information law, ICT contracts, online platforms, apps and tools. European rules, copyright, chip rights, database rights and legal services in AI law.

What are the main requirements for AI systems in Healthcare?

1. Main barriers to adoption of Artificial Intelligence in healthcare

Absence of a specific AI law, or clear legal framework from the perspective of both professional users (A) and patients (B).

When constructing such a framework, it is important to distinguish between the various sub-areas of healthcare, such as research and development (products such as eHealth apps, wearables, MRI scanners, smart medicine), professional care providers (primary care, drug distribution, complex surgery) and recipients of care (patients), because each sub-area has different needs.

Moreover, carefully constructed European and worldwide legal and ethical frameworks already exist per sector/sub-area. Where possible and applicable, these existing good practices should be built upon.


A. Barriers for professional users: unfamiliarity with AI systems, their advantages and their legal requirements/boundaries, and fear of the unknown.

- If a system (any system) has long operated according to well-known principles, standards or established methods, it is ‘difficult’ to change this (path dependency).

- This is even more so in healthcare, since traceability is key within any healthcare system. Changing one thing within the system means that everything else has to be reassessed and/or changed as well. Blockchain could play an important role in breaking down these barriers.

- It is simply unclear to companies and to private and academic research institutes in the medical sector what is and is not allowed in the field of AI, blockchain, virtual reality, big data, deep learning algorithms, cognitive computing, computer & machine vision and robotics, both at European and at national level. This knowledge is important for the commercialisation of their inventions/creations. Two practical examples are permission from Farmatec and obtaining a CE marking.

- These stakeholders already experience a lot of uncertainty about legal matters such as liability (professional indemnity, insurance, product liability, statutory and strict liability, punitive damages) and intellectual property (copyrights, patents, trade secrets, database rights, sui generis rights on computer generated works).

- These stakeholders already experience a lot of uncertainty about the new Medical Devices Regulation (EU) 2017/745 of 5 April 2017 (which replaces Council Directive 93/42/EEC of 14 June 1993 concerning medical devices, mid-2020), and about the Machinery Directive (2006/42/EC).

It is important that a new law (AI Regulation or Directive) does not add to the confusion.

B. Barriers for patients: unfamiliarity with, or inability to work with, AI-driven technology, and concerns about the consequences for privacy.

2. Requirements for sustained use of AI in healthcare

- It should fit into existing QA/QC systems (quality assurance and quality control).

- It should be able to implement and/or adhere to the principles of EudraLex (the body of European Union legislation in the pharmaceutical sector), in particular Good Manufacturing Practice (GMP) and Good Distribution Practice (GDP).

- It should be easy to adjust/correct ‘bugs’ in the system.

- Privacy of both patients/consumers and users/businesses is a top priority (GDPR compliance).

- It should be practical and easy to use.

- Overly stringent and complex legal requirements hinder innovation (incentive & reward) and create market barriers for tech startups.

- Enforcement should be carried out by a government agency/public body such as Farmatec, with a multidisciplinary approach: healthcare experts, IT experts, ethicists and privacy experts working together, coordinated by this central body. This is preferable to enforcement by notified bodies, which have commercial interests when they issue a CE marking. Compare the way the FDA (Food and Drug Administration) operates in the United States.

3. Steps to overcome barriers

- Inform and teach.

- Start actual pilots with AI-driven technology.

- Make sure that AI is a help within the sector. Let the AI process run in parallel with the old process, where relevant.

- Include AI, robotics and DLT/blockchain in the curricula of medical schools, law schools, business schools, and primary and secondary education.

- Perform an Artificial Intelligence Impact Assessment.

- Make sure privacy and other fundamental rights are respected.
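The steps above are organisational rather than technical, but the Artificial Intelligence Impact Assessment lends itself to a small illustration. Below is a minimal sketch of how such an assessment could be tracked programmatically; all checklist items, class and field names here are hypothetical and do not come from any official AIIA template, which would be drawn up by legal, ethical and domain experts:

```python
from dataclasses import dataclass, field

# Hypothetical checklist items for illustration only; a real AI Impact
# Assessment follows an official template and expert review.
CHECKLIST = [
    "Purpose and intended use of the AI system documented",
    "Legal basis for processing personal data identified (GDPR)",
    "Risks to fundamental rights assessed",
    "Human oversight mechanism in place",
    "Procedure for correcting errors ('bugs') defined",
]

@dataclass
class ImpactAssessment:
    """Records yes/no answers against the checklist for one AI system."""
    system_name: str
    answers: dict = field(default_factory=dict)  # item -> bool

    def record(self, item: str, passed: bool) -> None:
        if item not in CHECKLIST:
            raise ValueError(f"Unknown checklist item: {item}")
        self.answers[item] = passed

    def open_items(self) -> list:
        # Items not yet answered, or answered negatively.
        return [i for i in CHECKLIST if not self.answers.get(i, False)]

    def complete(self) -> bool:
        return not self.open_items()

# Example usage: four of five items answered positively.
aia = ImpactAssessment("eHealth triage app")
for item in CHECKLIST[:4]:
    aia.record(item, True)
print(aia.complete())      # the fifth item is still open
print(aia.open_items())
```

The point of the sketch is that an impact assessment is a living record: items can be re-answered as the system changes, and the assessment only counts as complete when every item has been positively addressed.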


Mauritz Kop & Suzan Slijpen (European AI Alliance)