October 08, 2025

FDA Requests Public Comment on Metrics and Methods for Evaluating Performance of AI-Driven Medical Devices Deployed in Clinical Settings

Written Responses Due by December 1, 2025

At a Glance

  • The U.S. Food & Drug Administration (FDA) is soliciting public input on how to measure and evaluate the safety, effectiveness and reliability of AI-enabled medical devices in real-world clinical use.
  • The agency is particularly interested in methods, metrics, best practices and challenges encountered by industry and stakeholders.
  • Comments received will help shape future FDA guidance and regulatory policy.

The U.S. Food & Drug Administration (FDA) is inviting public comment to inform future guidance on the measurement and evaluation of artificial intelligence (AI)-enabled medical devices in real-world clinical settings. The request comes from FDA’s Digital Health Center of Excellence, which serves as the central hub for the agency’s activities and leadership in AI-related medical device regulation. The agency seeks stakeholder feedback on performance metrics, monitoring methods, data sources, response protocols and best practices for ensuring the safety, effectiveness and reliability of these rapidly evolving technologies.

Industry participants, clinicians, researchers and other interested parties are encouraged to submit written responses to FDA’s detailed questions on postmarket evaluation, human-AI interaction and implementation challenges by December 1, 2025. This input will help shape regulatory science and policy for AI in medical devices moving forward.

FDA Seeks Public Comment on the Following Multipart Questions

  1. What metrics and methods should be used to measure and evaluate the performance of AI-enabled medical devices, including during premarket review and postmarket surveillance?
  2. What best practices exist for validation and ongoing monitoring of AI algorithms, including adaptive or continuously learning algorithms?
  3. What approaches can be used to ensure transparency, explainability and reliability of AI outputs in clinical settings?
  4. What considerations should be made to promote equitable outcomes and minimize bias in AI-enabled devices?
  5. What unique challenges do AI-enabled medical devices present in terms of cybersecurity, and how should these be addressed?
  6. How can real-world data and evidence be leveraged to evaluate and monitor the performance of AI-enabled medical devices?
  7. Are there additional factors FDA should consider related to the measurement and evaluation of AI-enabled medical devices?

Responses

We encourage health and life sciences stakeholders to comment, as responses will help shape future FDA guidance and regulatory policy. Visit FDA’s official webpage for further information and submission details.

The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.