Learning, Potential, and Retention: An Approach for Evaluating Adaptive AI-Enabled Medical Devices
2026-04-06 • Artificial Intelligence
Artificial Intelligence · Performance
AI summary
The authors tackle the problem of checking how well adaptive AI in medical devices works when both the AI and data change over time. They propose three measurements—learning (how much the AI gets better on current data), potential (how dataset changes affect performance), and retention (how well the AI remembers past knowledge)—to separate effects from AI updates versus environment shifts. Using examples with simulated population changes, they show these measures help understand how quickly AI adapts and what trade-offs happen. Their approach can help regulators confidently evaluate adaptive AI safety and effectiveness as it evolves.
adaptive AI, medical devices, performance evaluation, learning measurement, potential measurement, retention measurement, population shift, plasticity, stability, regulatory science
Authors
Alexis Burgon, Berkman Sahiner, Nicholas A Petrick, Gene Pennello, Ravi K Samala
Abstract
This work addresses challenges in evaluating adaptive artificial intelligence (AI) models for medical devices, where iterative updates to both models and evaluation datasets complicate performance assessment. We introduce a novel approach with three complementary measurements, learning (model improvement on current data), potential (dataset-driven performance shifts), and retention (knowledge preservation across modification steps), which disentangle performance changes caused by model adaptations from those caused by dynamic environments. Case studies using simulated population shifts demonstrate the approach's utility: gradual transitions enable stable learning and retention, while rapid shifts reveal trade-offs between plasticity and stability. These measurements provide practical insights for regulatory science, enabling rigorous assessment of the safety and effectiveness of adaptive AI systems over sequential modifications.
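The abstract does not give the formulas for the three measurements, but one plausible reading of the descriptions is to compare entries of a performance matrix indexed by model version and dataset version. The sketch below is purely illustrative: the function name, the difference-based definitions, and the use of a generic scalar metric (e.g. AUC) are all assumptions, not the authors' actual method.

```python
import numpy as np

def learning_potential_retention(perf):
    """Hypothetical illustration of the three measurements.

    perf[i][j] = performance (e.g. AUC) of model version i evaluated on
    dataset version j, for sequential modification steps 0..T-1.
    Returns three lists, one value per modification step t >= 1.
    """
    perf = np.asarray(perf, dtype=float)
    n_steps = perf.shape[0]
    learning, potential, retention = [], [], []
    for t in range(1, n_steps):
        # learning: how much the updated model improves on the current data,
        # relative to the previous model on that same data
        learning.append(perf[t, t] - perf[t - 1, t])
        # potential: performance shift attributable to the dataset change,
        # measured with the model held fixed at the previous version
        potential.append(perf[t - 1, t] - perf[t - 1, t - 1])
        # retention: how well the updated model preserves performance on the
        # previous data, relative to the previous model on that data
        retention.append(perf[t, t - 1] - perf[t - 1, t - 1])
    return learning, potential, retention

# Example with one modification step: a population shift lowers the old
# model's score on the new data, the update recovers it, at a small cost
# on the old data.
L, P, R = learning_potential_retention([[0.80, 0.70],
                                        [0.78, 0.85]])
```

Under these assumed definitions, a negative potential with a positive learning value would indicate the model update compensated for a dataset-driven drop, while a negative retention value would quantify the plasticity-stability trade-off the case studies describe.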