The potential of AI in medicine and how it's changing regulation


Isabel Losantos, senior consultant - Design Assurance, and Joe Corrigan, head of Medical Technology, at Cambridge Consultants examine the potential AI represents in life sciences and the regulatory changes it brings.

Digital health is growing, but relatively slowly compared with other digital applications: fewer than 100 approved products have been placed on the market in the last five years. Many of these are limited to decision-support technologies rather than true diagnostics, and so fall short of their potential. So why is the market moving slowly?

Part of the reason is that Artificial Intelligence/Machine Learning (AI/ML) systems intended for the medical field are not just another software product to be marketed. Their application can bring lifelong benefits to their users, but a poorly-thought-through implementation can also bring harm, so caregivers, developers and regulators have moved with caution.

As experience with AI systems grows, innovative new players in medical AI are emerging, bringing accelerated workflows and improved outcomes to areas that were previously out of reach, from AI-based image enhancement for radiology to cognitive behavioural therapy. With increasing demand for high-quality AI systems, US regulators are responding by developing more agile approval processes. These involve interactive and regular reviews to facilitate continuous monitoring of the safety, effectiveness and performance of marketed AI systems.

For new entrants to regulated markets, regulation can seem daunting, as it means developing new skills and processes to manage their business, not just their products. For more established players, existing processes will still function, but adapting to new regulation can accelerate approval.

AI and Machine Learning systems

In the US, the Food and Drug Administration (FDA) defines an AI/ML system as one that has the capacity to learn from training on a specific task by tracking performance measure(s). Medical AI/ML systems are considered “software as a medical device” (SaMD), and particular regulatory requirements apply to them as defined by the Code of Federal Regulations (CFR) and supplemented by standards and guidance documents.

For the purpose of this discussion, we can consider that there are two different types of AI/ML systems: “Locked” and “Adaptive”. The vast majority of applications are trained and tested on one set of data and the algorithm is “Locked”. The output in the market will be the same as that provided at the time of submission. But for applications where continuous learning is an advantage, “Adaptive” algorithms can change their behaviour using a defined learning process; the output after a time on the market will be different from that provided at the time of submission, hopefully improved.
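The Locked/Adaptive distinction can be made concrete with a minimal sketch. The toy threshold classifier below is purely illustrative (the class names, the threshold model and the update rule are all assumptions, not part of any FDA framework): the locked model's behaviour is frozen at submission, while the adaptive model changes its behaviour in the field via a defined learning process.

```python
# Illustrative contrast between a "Locked" and an "Adaptive" algorithm.
# The threshold classifier and update rule are hypothetical examples.

class LockedModel:
    """Behaviour is frozen at submission; changes need a new verified version."""
    def __init__(self, threshold):
        self.threshold = threshold  # fixed after verification

    def predict(self, value):
        return value >= self.threshold


class AdaptiveModel:
    """Behaviour may change post-market via a defined learning process."""
    def __init__(self, threshold, learning_rate=0.1):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def predict(self, value):
        return value >= self.threshold

    def update(self, value, label):
        # Defined learning process: nudge the threshold to reduce the
        # observed error on each labelled sample seen in the field.
        error = (1.0 if label else 0.0) - (1.0 if self.predict(value) else 0.0)
        self.threshold -= self.learning_rate * error


locked = LockedModel(0.5)
adaptive = AdaptiveModel(0.5)
adaptive.update(0.4, True)  # field data shifts the adaptive model's behaviour
print(locked.predict(0.45), adaptive.predict(0.45))  # the two now disagree
```

The same input now yields different outputs from the two models, which is exactly why the adaptive case needs a different regulatory approach: the product on the market is no longer identical to the product that was reviewed.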

From a development and regulatory perspective, Locked systems follow the normal submission procedure, with changes/updates made following a defined change control process before or after deployment and new software versions deployed only after all changes have been successfully verified. 

By contrast, because the risk from an Adaptive system can change as it learns, existing regulation does not effectively manage potential risks and a new approach is required. The FDA has published a proposed regulatory framework for modifications to AI/ML-based SaMD, but this is not yet implemented.

Precertification pilot program

Until the new modifications are implemented, the FDA is piloting a precertification program for SaMD. The precertification program focusses on developers that manufacture standalone (not embedded) software. 

The program follows the Total Product Life Cycle (TPLC) approach to the regulation of software products, enabling the evaluation of developers and their products throughout their lifetimes, with precertification being awarded to the developer, not the product. The process works like this:

First, developers must demonstrate a culture of quality and organisational excellence. The appraisal is carried out by a review of Key Performance Indicators (KPIs) and post-market product performance, with KPI reports collected by the FDA at regular intervals. There are two levels of precertification: Level 1 for software developers with little or no experience of medical devices and targeting low-risk SaMDs, and Level 2 for experienced SaMD developers targeting low and moderate risk SaMDs.  

Secondly, the level of review is determined: there are two levels of review for the SaMD, Streamlined Review (SR) or no review, based on the precertification level (as above) and the risk category. For the risk category, the FDA uses criteria based on the “healthcare situation or condition” that drives the need for accurate and/or timely diagnosis or treatment, which aims to help developers provide a comprehensive risk-based definition of the product.
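The determination described above is essentially a decision table over two inputs. The helper below sketches one plausible mapping; the FDA has not published the exact table, so the specific combinations returned here are hypothetical assumptions for illustration only.

```python
# Hypothetical decision helper for the review-level determination:
# precertification level x risk category -> review path.
# The exact mapping is NOT published by the FDA; this table is illustrative.

STREAMLINED = "streamlined review"
NO_REVIEW = "no review"

def review_path(precert_level, risk):
    """precert_level: 1 or 2; risk: 'low' or 'moderate'."""
    if precert_level == 1:
        if risk != "low":
            raise ValueError("Level 1 organisations target low-risk SaMDs only")
        return STREAMLINED  # assumed: little/no device experience -> review
    if precert_level == 2:
        # Assumed: experienced developers skip review for low-risk products.
        return NO_REVIEW if risk == "low" else STREAMLINED
    raise ValueError("unknown precertification level")

print(review_path(2, "low"))
```

Whatever the final mapping, the structure is the point: the review burden is a function of both the organisation's demonstrated maturity and the product's risk category, rather than of the product alone.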

Thirdly, the review itself focuses on the product-related elements of the submission, including the clinical algorithm, cybersecurity, hazard analysis/risk management, IFU/labelling, regulatory pathway (such as 510(k) or de Novo), requirements, revision history, architecture and validation. The FDA also requires a list of product-specific elements to be provided by pre-certified organisations, from the significance of the information provided by the SaMD to the healthcare decision to SaMD performance. 

Finally, the FDA believes organisations can demonstrate excellence through proactive monitoring of Real-world Performance (RWP) data related to their products, in a similar way to post-market surveillance of traditional medical devices. Organisations are expected to collect and analyse RWP data following product launch, to ensure that the product remains safe and effective and continues to perform as expected. The data collected should be comprehensive and cover real-world health analytics, user experience, and product performance.
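In practice, RWP monitoring of the product-performance strand amounts to comparing live field data against the performance verified at submission and flagging drift. The sketch below shows one minimal way to do this; the windowed error rate, the baseline figure and the tolerance are all illustrative assumptions, not FDA-specified values.

```python
# Sketch of real-world performance (RWP) monitoring: compare the live
# error rate of a deployed SaMD against its verified baseline and flag
# drift beyond a tolerance. Thresholds and the simple sliding window
# are illustrative assumptions only.

from collections import deque

class RWPMonitor:
    def __init__(self, baseline_error, tolerance=0.05, window=100):
        self.baseline_error = baseline_error
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = error, 0 = correct

    def record(self, prediction, ground_truth):
        self.outcomes.append(0 if prediction == ground_truth else 1)

    @property
    def error_rate(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def performing_as_expected(self):
        return self.error_rate <= self.baseline_error + self.tolerance


monitor = RWPMonitor(baseline_error=0.02)
# 90 correct predictions followed by 10 errors: 10% observed error rate.
for pred, truth in [(1, 1)] * 90 + [(1, 0)] * 10:
    monitor.record(pred, truth)
print(monitor.performing_as_expected())  # drift beyond tolerance is flagged
```

A real programme would of course cover the other two strands as well (health analytics and user experience) and feed flagged drift back into the organisation's KPI reporting to the FDA.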

Conclusion

While the evaluation programme is not yet concluded, the FDA has indicated that regulatory decisions could be made from the mock reviews already carried out, with only fine-tuning remaining before adoption. The FDA’s new regulatory approaches are intended to accelerate the development of AI medical products to market without exposing patients to unnecessary risk. The FDA has also indicated that the pilot Precertification program may be extended to embedded software, broadening the scope of this fast-track process. In the European Union there is no such accelerated review process, and there is a lack of guidance specific to medical AI/ML. However, applications are still being approved under existing regulations.

For developers, the lack of clarity around regulations can make them seem cumbersome and approval a frustrating process, particularly with AI. Regulators frequently reject submissions with poorly justified approaches to risk, and because ML approaches are often novel, the chance of initial rejection is higher than normal. To mitigate this, it is essential to approach the regulator early in the development process with an open attitude and well-thought-out procedures to uncover and manage potential risks. Once a product is approved, there are many benefits to the business, from demonstrating credibility to opening high-value markets and delivering patient benefits and healthcare solutions more widely and at a lower cost than has ever been possible before.
