When the generative AI program ChatGPT rolled out last year, H1 CEO and co-founder Ariel Katz instantly saw its potential to improve his company’s Trial Landscape platform, a database designed to accelerate clinical trial planning and design.
“I thought, this is amazing. It doesn’t have H1’s data, but what if it did?” Katz said, positing that users could quickly get answers to simple requests like finding doctors best suited to run a clinical trial. H1, a healthcare data and analytics company, moved quickly to fund the project in January and launched the program, called GenosAI, last month.
But before that could happen, company leaders had to overcome several challenges often encountered by AI developers — most importantly, keeping the system from generating false responses without compromising usability, a problem that even the most seasoned industry titans are still grappling with, Katz said.
A few tweaks away from transforming trials
Generative AI, which has only been on the scene for a few years, is already making its mark on the industry. Several companies, including Insilico Medicine and Adaptyv Bio, use generative AI programs to speed drug discovery. One analysis found that efficient AI programs could slash drug discovery costs by 70%, a potential boon for an industry facing steep patent cliffs and cost pressures.
But while conversational AI tools like H1’s have advantages, they also carry risks: inaccurate or biased results and data security concerns, Katz said. To ensure seamless conversations, developers need to train the system on everything from industry jargon to regional slang, and they also need to guard data privacy. To avoid these problems, developers should invest time and money in training, system planning and testing, he said.
"We're going to have a lot of examples for how the technology is better at designing a protocol, predicting enrollment rate, predicting the sites, identifying patients and picking the investigator than any human being."
Ariel Katz
CEO, co-founder, H1
A more complex challenge is keeping the system from generating fake responses, called hallucinations. If a generative AI system doesn’t know the answer to a question, it will sometimes make one up, Katz said.
“It's not okay if I tell [the system] to choose 100 doctors to run a clinical trial and it makes up three doctors who are not real human beings,” Katz said.
But at the same time, it can be frustrating for a user if the system can’t respond at all in the face of uncertainty. For example, if someone prompts, “Tell me the best doctor to work on my atopic dermatitis clinical trial,” the system might respond, “I don't know what you mean by best,” Katz said.
“It's not a great experience,” he said, noting that finding a balance between allowing for some uncertainty and still maintaining accuracy is tricky. “Everyone is dealing with it — Google, [OpenAI, which developed] ChatGPT and Microsoft, every tech company. They’re lying to you if they say they don't have this problem.”
That balance is something H1 will continue to monitor and tweak going forward. And although the generative AI interface will continue to evolve, it’s already helping users save time, Katz said.
“Over the next year, we're going to have a lot of examples for how the technology is better at designing a protocol, predicting enrollment rate, predicting the sites, identifying the patients and picking the investigator than any human being,” Katz said. “And we're going to back test that — I think it's going to transform the industry pretty quickly.”