
Comparison of MDR/IVDR and EU AI Act Requirements

  • Writer: Dalia Givony
  • Oct 23
  • 4 min read
Artificial Intelligence (AI) is increasingly integrated into medical devices and in vitro diagnostic medical devices (IVDs), creating overlap between the regulatory frameworks of the Medical Device Regulation (EU) 2017/745 (MDR), the In Vitro Diagnostic Regulation (EU) 2017/746 (IVDR), and the Artificial Intelligence Act (EU) 2024/1689. The MDCG 2025-6 guidance document provides detailed clarification on how these frameworks interact.

This comparative summary aims to assist manufacturers in understanding how AI-specific obligations under the AI Act complement the existing MDR/IVDR requirements. It highlights areas of alignment, such as quality management, post-market monitoring, and technical documentation, and identifies additional obligations introduced by the AI Act, including data governance, human oversight, and fundamental-rights impact considerations.

The comparison below covers the key regulatory domains, outlining the relevant requirements under each framework and highlighting their interdependencies and distinctions as interpreted in MDCG 2025-6.

Scope / Applicability

  • MDR/IVDR: Apply to medical devices and IVDs placed on the EU market. Devices must meet the general safety and performance requirements and are classified by risk class (MDR: I, IIa, IIb, III; IVDR: A, B, C, D).
  • AI Act: Applies to all AI systems, including those forming part of medical devices, whether or not they undergo Notified Body (NB) assessment. Only AI systems in devices requiring NB assessment under the MDR/IVDR are automatically classified as "high-risk" under Article 6(1).
  • Differences / Notes: The MDR/IVDR classification remains unchanged; AI components in devices subject to NB review are also high-risk under the AI Act. The AI Act does not affect the MDR/IVDR risk classification.

Quality Management System (QMS)

  • MDR/IVDR: Manufacturers must implement a QMS (per MDR/IVDR Annex IX) aligned with ISO 13485, addressing safety, performance, and post-market surveillance (PMS).
  • AI Act: Providers must establish a QMS covering the AI lifecycle, including data governance, human oversight, transparency, and logging.
  • Differences / Notes: The AI Act adds AI-specific QMS requirements such as data and data governance, record-keeping, transparency, and human oversight (non-exhaustive). The development of harmonized European and international standards for quality management systems for high-risk AI systems is ongoing.

Risk Management & Safety

  • MDR/IVDR: Require continuous risk management (ISO 14971) and clinical/performance evaluation to demonstrate safety and performance.
  • AI Act: Requires AI-specific risk management covering bias, robustness, accuracy, transparency, and impacts on fundamental rights.
  • Differences / Notes: The MDR/IVDR focus on patient safety and device performance; the AI Act expands the scope to ethical and societal risks.

Data Governance & Training/Validation/Testing Datasets

  • MDR/IVDR: Require clinical/performance evidence based on appropriate data, representative of the device's intended-use population and environment, for the evaluation of device safety and effectiveness. Data governance and management must be fully compliant with the GDPR.
  • AI Act: Requires high-quality datasets, representative of the intended-use environment and population (for example, in terms of age, gender, sex, race, ethnicity, geographical location, and medical condition), as well as traceable datasets for AI training, validation, and testing, to ensure the device operates as intended and meets safety and performance requirements. Data governance and management must be fully compliant with the GDPR.
  • Differences / Notes: The AI Act introduces stricter requirements for dataset quality (to ensure generalization of results), labeling and bias control, and traceability through record-keeping and logging.

Clinical/Performance Evaluation and Testing

  • MDR/IVDR: Require clinical/performance evaluation to demonstrate safety, performance, and effectiveness.
  • AI Act: Requires testing against predefined metrics and probabilistic thresholds to ensure the system performs consistently for its intended purpose and complies with the applicable safety and performance requirements.
  • Differences / Notes: The AI Act does not explicitly use the terms "clinical evaluation" or "performance evaluation," but it mandates requirements such as accuracy, robustness, and cybersecurity, which are essential aspects of performance and safety.

Transparency / Human Oversight

  • MDR/IVDR: Labeling, instructions for use (IFU), and usability engineering ensure user understanding and safe use; oversight is implied for professional users.
  • AI Act: Imposes explicit requirements for transparency, human oversight, and user awareness of AI involvement, logic, limitations, and behavior. Requires the capability to override system decisions.
  • Differences / Notes: The AI Act imposes explicit user-disclosure, explainability, and override obligations absent from the MDR/IVDR.

Post-Market Monitoring / Surveillance

  • MDR/IVDR: Require post-market surveillance systems to monitor performance and safety, including the systematic collection and analysis of data on device performance, risk analysis, adverse-event assessment and reporting, and other safety-related issues.
  • AI Act: Requires post-market monitoring and a post-market monitoring plan: continuous AI monitoring, event logging, and corrective-action mechanisms for high-risk AI systems.
  • Differences / Notes: The AI Act adds algorithm monitoring, model-drift detection, and detection of interactions with other AI systems, including other devices and software. AI Act post-market monitoring requirements shall be integrated into the existing MDR/IVDR PMS plan.

Cybersecurity

  • MDR/IVDR: Require robust cybersecurity measures in both the pre-market and post-market stages. The measures implemented by manufacturers must prevent unauthorized access, cyberattacks, exploits, and manipulation, and ensure operational resilience.
  • AI Act: Requires the implementation of technical measures to address AI-specific vulnerabilities, including safeguards for AI-specific assets such as training datasets and trained models, as well as for the underlying information and communication technology (ICT) infrastructure.
  • Differences / Notes: The AI Act adds requirements for solutions addressing AI-specific vulnerabilities.

Conformity Assessment / Notified Body Involvement

  • MDR/IVDR: Higher-risk devices require NB assessment; conformity is based on technical documentation and audits.
  • AI Act: For non-high-risk AI systems, the "internal control" procedure described in Annex VI of the AI Act may apply. High-risk AI systems require a conformity assessment involving a Notified Body. Where applicable, it is recommended that the same Notified Body perform both the MDR/IVDR and AI Act conformity assessments.
  • Differences / Notes: If MDR/IVDR NB involvement is required, the AI system is automatically high-risk under the AI Act, and both assessments apply concurrently. The AI Act indicates that a single set of technical documentation shall be drawn up to cover both MDR/IVDR and AI Act requirements.

Significant Modifications / Change Management

  • MDR/IVDR: Device changes affecting safety or performance require a new conformity assessment.
  • AI Act: Defines "substantial modification": AI system updates beyond a pre-defined scope trigger reassessment.
  • Differences / Notes: The AI Act formalizes predetermined change control plans, similar to the FDA's PCCP (i.e., the plan is assessed during the initial conformity assessment and forms part of the technical file), adding AI-specific change governance on top of the MDR/IVDR.

In summary, AI-enabled medical devices must comply with both the MDR/IVDR and the AI Act. Manufacturers are encouraged to integrate AI Act obligations into their existing MDR/IVDR technical documentation and QMS to streamline conformity assessment.


Dalia Givony, Regulatory & Clinical Consulting, provides guidance and support for regulatory compliance, with over 20 years of experience in the medical device and IVD fields.


We're here to help with any questions you may have. Don't hesitate to get in touch with us today!
