AI’s Impact on Pharma and MedTech Compliance

An article in partnership with Corporate Compliance Insights, written by Caroline Shleifer, the CEO and founder of RegASK, a RegTech company that employs cutting-edge technology to tackle challenges in the regulatory intelligence and compliance industry.

Practitioners in the pharmaceutical and medical technology sector are no strangers to regulation, and to be sure, rules governing AI in medicine are coming. RegASK’s Caroline Shleifer examines efforts by states and the FDA to put up guardrails around the technology.

In the dynamic world of pharmaceuticals and medical technology, innovation is constant and regulations are ever-evolving. This interplay has been turbo-charged in recent years by the arrival of artificial intelligence (AI), a force with the potential to streamline processes, enhance productivity and improve patient outcomes.

However, the advent of AI has been accompanied by fears that if left unchecked, it could undermine core democratic values of transparency, accountability, privacy and equality. As a result, governments across the world are developing regulations to harness AI’s power safely — moves that have wide-ranging consequences for pharmaceutical and medical technology (medtech) companies.

In light of these developments, it is imperative that companies effectively track and comply with these regulations. Even amid growing calls for regulatory coordination on the global stage, companies operating in multiple jurisdictions have no choice but to stay abreast of multiple evolving landscapes.

The delicate balance of regulation

From medical imaging and diagnosis to surgery and clinical-trial design, the promise of AI is immense. For companies seeking to optimize its use, it is important to understand the dual purposes of regulators in democratic societies: to foster innovation and improvement, while ensuring data privacy, accountability and patient safety. Thus, emerging regulation will seek to be robust, precise and clear — even while rapid advancement in AI use will likely lead to frequent regulatory updates.

The guiding principles of these regulations are beginning to crystallize. Legislative initiatives governing AI are prioritizing interdisciplinary collaboration, the protection of individuals from unintended effects, safeguards against abusive data practices, transparency, nondiscrimination and accountability for AI developers and deployers. Prospective laws aim to align AI systems with democratic values and to avert the risk that automated systems undermine civil rights.

EU regulations and their influence

The European Union has been at the forefront of AI regulation, alert to both the risks and benefits of its widespread adoption. The EU’s proposed Artificial Intelligence Act aims to establish a comprehensive regulatory scheme, encompassing areas like data governance, algorithmic transparency and risk analysis. The act proposes impact-assessment and compliance-evaluation mechanisms, outlines an AI governance program and suggests areas for global coordination and voluntary commitments. It also offers a framework for the identification of high-risk systems and specifically prohibits certain practices and functions — e.g., facial recognition and so-called “social scoring.”

This framework is underpinned by a set of democratic values that will inform the regulations of other countries as well, particularly the United States. We can see this in the Biden administration’s blueprint for an AI bill of rights and in the bills enacted by multiple U.S. states.

State regulations on AI in the U.S.

Over the past five years, 17 states have enacted 29 bills regulating artificial intelligence. While California, Colorado and Virginia have so far offered the most comprehensive guidelines, almost all states emphasize data privacy and accountability. Other themes also reflect the principles specified above: multiple-stakeholder collaboration in AI development and use, safety guardrails, transparency of use and nondiscrimination/equitable treatment of citizens.

FDA strategy on AI in medical products

The U.S. Food and Drug Administration’s recently released regulatory strategy for AI in medical products encompasses any AI application that supports drug development and compliance: from drug creation to clinical-trial design to post-market surveillance. It will have a particular impact on drug design and on the processes used to collect post-market data.

The FDA schematic includes four priorities — broad collaboration among…

Read the full article on corporatecomplianceinsights.com
