The pharma industry received long-awaited insights on how the FDA plans to regulate AI in drug development last week — but many are still left wanting more.
The draft guidance offers a seven-step framework to assess the risk of AI models in drug and biological product development. While it doesn’t address AI models in the drug discovery process, where AI is often used, it provides the industry its first look at the FDA’s priorities as the technology is integrated more deeply into R&D.
“The FDA is committed to supporting innovative approaches for the development of medical products by providing an agile, risk-based framework that promotes innovation and ensures the agency’s robust scientific and regulatory standards are met,” FDA Commissioner Dr. Robert Califf said in a press release accompanying the guidance. “With the appropriate safeguards in place, [AI] has transformative potential to advance clinical research and accelerate medical product development to improve patient care.”
For AI companies partnering with pharmas on clinical trials and data management, the guidance is a jumping-off point, though the need for more detail persists.
A risk-based approach
Through the draft guidance, the FDA makes it clear that the relative risk of AI models used in drug development needs to be explained. For example, the agency flagged the risks of using datasets that carry the potential for bias, which could raise questions about the reliability of results. In the framework, several steps are dedicated to defining the AI model and its data, and assessing its risk.
“If you train the model using data that's not cleaned [or is] biased in some way, then it's not going to yield very good results,” said Miruna Sasu, CEO of COTA, an AI tech company that supports oncology research. “And the FDA is not cool with that in the context of medicine.”
The pharma industry has long awaited clarity from the FDA on how it will assess drugs developed with AI tools, but this first guidance fell short of the “revolutionary” changes previously indicated by Califf. The guidance’s scope is also limited, according to other AI companies.
“It's about [20] pages for something as complex as artificial intelligence and the regulation of it as it pertains to drug development. Something like that is never going to bring enormous amounts of clarity,” said Dr. Adam Petrich, chief medical officer of QuantHealth, which offers an AI clinical simulator that predicts how trial participants will respond to treatments. “It's a step in the right direction, but there's still a long way to go.”
Needed input and flexibility
With the draft guidance published, the FDA is now seeking feedback during its public comment period from industry stakeholders on how well it aligns with companies’ experience. AI is already ingrained in the drug development process for many companies, and the agency has seen AI modeling in submissions growing since 2016.
As the FDA’s views on the technology take shape, pharma and AI companies will play a big part, Petrich predicted.
“We're going to see the FDA guidance evolve towards [being] highly dependent upon these use cases that sponsors, along with companies like us, bring forward to help them constantly refine and revisit their guidance,” Petrich said.
According to Sasu, the FDA’s initial crack at AI regulation is too rigid and may not fully recognize the nuances of AI and large language models. The assessment framework may also be over-reliant on more traditional statistical approaches in analyzing clinical trial data, she said.
“To allow these models to be developed and have investment dollars behind the companies that are developing these models, you [have to] have a little bit of flexibility in how we achieve the markers that we're looking for, because we're not going to always understand everything about the models as we have with traditional statistical techniques,” she said. “There are so many models that are not just working with one type of statistics, but many statistical methodologies. It's an innovative way of thinking, and it can't be captured in the box.”
Petrich, however, viewed the guidance differently.
“It's too brief and too high level to be considered overbearing or rigid at this point,” Petrich said.
As the industry waits for more information, the draft guidance provides a peek into the regulatory framework to come. Although AI adoption has picked up steam in pharma, many of the models are relatively new, and for now, the FDA has steered clear of giving specific criteria for each use case. Instead, the guidance advises companies to discuss plans with the agency and be prepared to face stringent criteria if the agency deems an AI model’s inherent risks high.
For now, this regulatory strategy leaves a lot up in the air.
“They've stuck to their corner,” Sasu said. “They say not everything is encompassed in this guidance. So because of that, there's a lot that falls into [the] gray area.”