There’s incredible potential for machine learning and AI in healthcare, and the FDA will play a major role in how that potential is realized. But the agency can’t manage that new world on its own, said FDA Commissioner Dr. Robert Califf during the Consumer Electronics Show (CES) in Las Vegas last week.
“The digitization of almost everything is a phenomenon that I don't think we've fully grasped yet what it means,” Califf told interviewer Lisa Dwyer, a partner with the law firm King & Spalding. “It has a huge impact at FDA.”
Califf is no stranger to emerging health tech. He was head of medical strategy and a senior advisor at Alphabet, Google’s parent company, in between his first stint as head of the FDA during the Obama administration and his current one. At Alphabet, Califf said he “was immersed in the changes of technology.”
Despite that potential, though, implementing advanced technology in healthcare comes with challenges.
“Federal computing is not quite at the same level as Alphabet,” Califf noted in a 2022 interview with Health Affairs’ editor-in-chief Alan Weil.
Califf echoed that sentiment during his remarks in Las Vegas while discussing the need to continually assess and update algorithms that are used in healthcare.
“The algorithm’s not only living, but the assessment of the algorithm needs to be continuous,” Califf said.
However, “the FDA can't do this alone. We need another doubling of size, and last I looked, the taxpayer’s not very interested in doing that. So we’ve got to have a community of entities that do the assessments in a way that gives us certification that the algorithm’s actually doing good and not harm, and that’s an active piece of work in process.”
Here are two additional takeaways from Califf’s remarks at CES.
“The one thing that I’m 100% sure of is [if] you put an algorithm in a healthcare system or healthcare environment and leave it there, it's gonna get worse.”
When asked about how the FDA regulates AI and machine learning within medical products, Califf made the distinction between fixed, or locked, algorithms that don’t change and adaptive algorithms that learn and change based on data.
While fixed algorithms don’t lend themselves to the most exciting, cutting-edge applications, adaptive algorithms can only be used successfully with continual assessment and tuning, Califf said.
“If we develop a system that has tuning of algorithms, I think it'll be an amazing time for medicine and health care,” he said.
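To make the distinction concrete, here is a minimal sketch of the two kinds of algorithms, assuming a scikit-learn-style workflow. The model choice and the synthetic data are purely hypothetical illustrations, not anything the FDA has evaluated:

```python
# Illustrative sketch of Califf's distinction: a "locked" model is
# trained once and frozen, while an "adaptive" model keeps updating
# as new data arrives. Data and model are hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def new_batch(n=200):
    """Synthetic patient features and binary outcomes, for illustration only."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

X0, y0 = new_batch()

# Locked algorithm: fit once at "clearance" time, never updated afterward.
locked = SGDClassifier(loss="log_loss", random_state=0).fit(X0, y0)

# Adaptive algorithm: starts from the same data but keeps learning
# from each new batch it sees in deployment.
adaptive = SGDClassifier(loss="log_loss", random_state=0)
adaptive.partial_fit(X0, y0, classes=np.array([0, 1]))

for _ in range(5):                      # simulated deployment batches
    X_new, y_new = new_batch()
    adaptive.partial_fit(X_new, y_new)  # the adaptive model updates,
    # while `locked` stays exactly as it was when first trained.
```

The adaptive version is what makes continual assessment essential: its behavior tomorrow is not the behavior a regulator reviewed yesterday.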
Equally important is the data that’s fed into the algorithms. Without complete data, any predictions or conclusions the technology produces will be flawed.
“To make it better, you've got to have complete outcomes in the population to which it's applied,” he said.
However, the U.S. healthcare system isn’t built to keep track of people’s health outcomes over time, including whether or not they’re even still alive, Califf noted.
“You would think in our health system we'd be able to tell who’s dead and alive, but believe it or not, when people drop off the map in a health system, there's no record of it anywhere for the most part,” he said.
That’s also true when patients move between states, change healthcare providers or make other life changes.
“In our entire healthcare system, if you ask the question, ‘Can I follow an individual person over time to find out what happened to them?’ the answer is the whole system is built in a way that doesn't allow that to happen effectively,” he said. “We've got to fix that — otherwise, as the algorithms adapt, we won't know if they're getting better or worse.”
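Here is a hedged sketch of what that continuous assessment might look like in practice: tracking a deployed model’s discrimination month over month, using only the patients whose outcomes were actually captured. The monthly cadence, the 60% follow-up rate and all of the data are invented for illustration, and none of this reflects the FDA’s actual methods:

```python
# Sketch of continuous assessment: measure a deployed model's
# performance period by period. Patients "lost to follow-up" have no
# recorded outcome, so they silently drop out of the evaluation;
# that is the gap Califf says must be fixed.
# All data here is synthetic and for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

for month in range(1, 7):
    scores = rng.uniform(size=500)                 # model risk scores
    outcomes = (scores + rng.normal(scale=0.4, size=500) > 0.5).astype(int)
    followed_up = rng.uniform(size=500) < 0.6      # only 60% have a known outcome

    # Assessment can only use patients whose outcomes were actually captured.
    auc = roc_auc_score(outcomes[followed_up], scores[followed_up])
    print(f"month {month}: AUC on followed patients = {auc:.2f} "
          f"({followed_up.sum()} of 500 outcomes known)")
```

If the patients who disappear from the data differ systematically from those who stay, the measured performance can look stable even as the algorithm degrades, which is exactly the failure mode Califf warns about.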
“We have a health system in the U.S. which is structurally designed to advantage people with money and power.”
Another challenge in using AI and machine learning in healthcare is the potential for racial, gender and other biases to be built into the technology.
In her interview with Califf, Dwyer asked how the FDA intends to prevent such biases and pointed out that in 2022, California Attorney General Rob Bonta launched an inquiry into how healthcare providers identify and address racial and ethnic disparities in the algorithms that power healthcare decision-support tools.
Califf said preventing biases through consistent review “should be part of the standard assessment of any algorithm applied in healthcare.”
He also pointed to other biases, such as those against people living in rural areas.
“People who are highly educated [and] tech savvy already always take advantage of things first,” he said. “So we’ve got to take all these things into account in the assessments.”