FDA joins international push for transparency in AI development

The FDA is joining forces with regulators in Canada and the U.K. on policies to ensure transparency in the development of medical devices powered by machine learning.

Together with Health Canada and the U.K.'s Medicines and Healthcare products Regulatory Agency (MHRA), the agency in 2021 published a set of 10 guiding principles for applying artificial intelligence in patient care, covering practices such as employing multidisciplinary expertise in development, using diverse, representative datasets and adhering to common cybersecurity standards.

The new update underlines the importance of delivering transparency as well, not just for the health professionals who use these systems but also for the patients who receive care, along with administrators, payors and governing bodies.

That includes clearly communicating a product's intended use, its performance and, when possible, its internal logic.

“The responsible development of AI in health care is a central focus for regulators both in the U.S. and around the globe,” Troy Tazbaz, director of the FDA’s Digital Health Center of Excellence, said in an agency statement. “This information holds the potential to influence the trust of healthcare professionals and patients toward a medical device and inform decisions regarding its use.”

Regulators also highlighted the significance of pursuing human-centered design in product development, which accounts for the entire user experience. Effective transparency can also help control risk, the FDA said, by enlisting users to detect errors and investigate declines in performance.

“This joint publication is our third international collaboration on Guiding Principles for AI-enabled devices between the FDA, Health Canada, and the MHRA. Each of these documents demonstrates how the FDA is thinking globally about AI in health care and health equity,” Tazbaz said.

“The comprehensive integration of these guiding principles of transparency across the entirety of the product lifecycle serves to ensure that informational requirements are adequately addressed, thereby promoting the safe and effective utilization of [machine learning-enabled medical devices],” he added.

AI safety has been a focus of the Biden administration, which late last year announced a federal organization dedicated to evaluating AI in healthcare and beyond. Operating under the National Institute of Standards and Technology, the new U.S. AI Safety Institute will develop technical guidance for future regulatory rulemaking. Last fall, President Biden also signed an executive order to establish security and equity standards for AI systems.
