EU AI Act's Impact On Medical Devices
- Dalia Givony
- Nov 19, 2024
- 3 min read
Updated: Jan 6

The EU Artificial Intelligence Act, approved by the European Parliament and the Council, is a pioneering regulatory framework aimed at overseeing artificial intelligence technologies within the EU. The Act entered into force on 1 August 2024. This landmark legislation categorizes AI systems by risk level - unacceptable, high, limited, and minimal - and introduces stringent requirements, particularly for high-risk applications. The AI Act's primary goals are to ensure safety, promote transparency, and enhance public trust in AI systems while fostering innovation and protecting fundamental rights.
Medical devices increasingly incorporate AI to enhance diagnostic accuracy, personalize treatments, and improve patient outcomes.
Medical devices that must undergo a third-party conformity assessment by a Notified Body under the MDR or IVDR are classified as high-risk under the new AI Act (AIA) and must adhere to rigorous standards concerning risk management, data governance, and transparency. Compliance with these requirements necessitates thorough conformity assessments, ongoing post-market surveillance, and robust documentation practices.
The intersection of the AI Act (AIA) with the existing Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) adds a further layer of complexity to the regulatory landscape.
High-risk medical device AI systems must comply with the following key requirements:
- Integration of AI-specific obligations within the MDR Quality Management System (QMS): the quality management system obligations under the AIA are specifically targeted at the AI system. The AIA requirements therefore complement the QMS required under the MDR or IVDR and include data and data governance, record-keeping, transparency, and human oversight (a non-exhaustive list).
- Lifecycle management covering safety, performance, robustness, and data governance.
- Comprehensive technical documentation, including AI algorithm transparency, data handling and bias mitigation strategies, and detailed information about the device's design, development, key design choices, functionality, performance characteristics, system architecture, and the computational resources used to develop, train, and test the system.
- Transparency: systems must be designed, documented, and deployed in a transparent and explainable manner. Verification of these requirements collectively ensures that users, deployers, and patients are adequately informed about the nature, operation, and limitations of the AI system.
- Human oversight requirements ensuring operators can interpret, override, or intervene in AI decisions (a minimal sketch of such a gate follows this list).
- Rigorous risk management, explicitly covering AI-related risks (bias, drift, cybersecurity, robustness).
- Data governance obligations: training, validation, and testing datasets must be relevant, representative, and of high quality, with traceability, logging mechanisms, and automatic recording of events (logs) over the lifetime of the system, including post-market. Manufacturers must implement procedures ensuring data transparency and integrity, examine the data for possible biases, and document compliance in detail (see the logging sketch after this list).
- Post-market surveillance systems adapted to monitor AI-specific risks such as performance drift (see the drift-monitoring sketch after this list).
- Combined conformity assessment under the MDR and the AI Act, ideally with a Notified Body designated for both frameworks.
- A clear strategy for managing modifications and updates, with substantial changes requiring re-assessment under the AI Act.
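
To make the human oversight item above more concrete, here is a minimal Python sketch of a human-in-the-loop gate. It is illustrative only, not a pattern mandated by the Act: the names `AiSuggestion` and `release_result`, and the 0.90 confidence floor, are hypothetical choices for this example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiSuggestion:
    finding: str        # e.g. "lesion detected"
    confidence: float   # model's self-reported confidence, 0..1

def release_result(suggestion: AiSuggestion,
                   operator_decision: Optional[str] = None,
                   confidence_floor: float = 0.90) -> str:
    """Gate every AI output behind a human operator.

    The AI result is advisory: the clinician's decision always wins, and
    low-confidence outputs are never released without explicit review.
    (Illustrative sketch only - thresholds and routing are hypothetical.)
    """
    if operator_decision is not None:
        return operator_decision  # human override takes precedence
    if suggestion.confidence < confidence_floor:
        raise RuntimeError("Confidence below floor: human review required")
    return suggestion.finding     # AI suggestion accepted as-is

# Example: the operator reviews the case and overrides the AI finding.
print(release_result(AiSuggestion("lesion detected", 0.97), operator_decision="no lesion"))
```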
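The logging obligation can likewise be pictured as a small piece of infrastructure. The sketch below assumes a simple append-only JSON-lines file; the function name `log_inference_event`, the log location, and the record fields are illustrative assumptions, not terms from the Act, and a real device would need tamper-evident, retention-managed storage.

```python
import hashlib
import json
import time
from pathlib import Path

LOG_PATH = Path("device_events.jsonl")  # illustrative location only

def log_inference_event(model_version: str, input_bytes: bytes, output: dict) -> None:
    """Append one structured record per inference (automatic event logging).

    Inputs are stored as hashes so events stay traceable without
    retaining raw patient data in the log itself.
    """
    record = {
        "ts": time.time(),              # event timestamp
        "model_version": model_version, # which model produced the output
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),  # traceable reference
        "output": output,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")  # append-only, one JSON object per line

log_inference_event("v2.1.0", b"<dicom bytes>", {"finding": "lesion", "confidence": 0.97})
```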
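Finally, post-market monitoring for performance drift might look something like the following sketch: a rolling comparison of field agreement against the accuracy demonstrated at conformity assessment. The `DriftMonitor` class, the baseline, tolerance, and window values are all assumptions made for illustration, not prescribed figures.

```python
import random
from collections import deque

class DriftMonitor:
    """Track rolling agreement between AI outputs and confirmed ground truth.

    A sustained drop below the validated baseline is a post-market signal
    that the deployed model may be drifting and needs investigation.
    (Illustrative sketch - parameters are hypothetical.)
    """
    def __init__(self, baseline: float, tolerance: float, window: int):
        self.baseline = baseline    # accuracy demonstrated at conformity assessment
        self.tolerance = tolerance  # allowed drop before alerting
        self.results = deque(maxlen=window)

    def record(self, ai_correct: bool) -> None:
        self.results.append(ai_correct)

    def drifting(self) -> bool:
        if len(self.results) < self.results.maxlen:
            return False            # not enough field data yet
        return sum(self.results) / len(self.results) < self.baseline - self.tolerance

monitor = DriftMonitor(baseline=0.95, tolerance=0.05, window=200)
for _ in range(1000):                       # simulated post-market case stream
    monitor.record(random.random() < 0.88)  # field accuracy has slipped to ~88%
    if monitor.drifting():
        print("Performance drift detected - open an investigation")
        break
```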
Overall, the AI Act represents a major milestone in establishing comprehensive AI governance, with important implications for the medical device industry. Additional horizontal guidance and harmonized standards, still to be published, are expected to provide further clarification and support for its implementation.
The implementation timeline for medical devices classified as high-risk under the EU AI Act is set at 36 months from the Act's entry into force.
While the MDR/IVDR provide the foundation for safety and performance, the AI Act introduces specific provisions addressing AI-related risks such as bias, transparency, and cybersecurity.
Manufacturers should proactively integrate these AI-specific requirements into their quality management systems, technical documentation, and lifecycle processes to ensure full readiness by the compliance deadline of 2 August 2027.
Turn regulatory complexity into a competitive advantage.
With over 20 years of global regulatory and clinical experience, Dalia Givony, Regulatory & Clinical Consulting helps innovative medical device and AI companies reach the market faster and with less risk.
Contact us to schedule a free consultation and move your product forward with confidence.



