This article is an installment of Future Explored, a weekly guide to world-changing technology. You can get stories like this one straight to your inbox every Thursday morning by subscribing here.
The FDA approved 55 brand-new drugs in 2023, and thanks to these approvals, countless people with cancer, Alzheimer’s, ALS, and other conditions will have a better shot at a longer, healthier life than they would have had just a year earlier.
The number of new drugs reaching the market might have been even higher, though, if finding participants for clinical trials weren’t such a challenge. AI might be able to help fix that.
Missed connections
All the lab tests and animal studies in the world can’t tell us for sure whether a drug candidate is going to work in people — to find that out, we need to actually test it in people, and that’s what clinical trials are for.
In the current clinical trial process, drugs are tested first in a small number of people to make sure they’re safe and to determine an ideal dosage. Depending on how that goes, larger trials follow, typically testing the drug against a placebo or an existing treatment to measure its efficacy.
The FDA will not approve new drugs that haven’t proven themselves in clinical trials, but recruiting and retaining enough people to effectively test drugs has long been a major challenge for developers — an estimated 90% of clinical trials experience delays due to low enrollment, and failure to enroll enough participants is the number one reason given for early trial termination.
Surprisingly, the problem goes the other way, too — people with rare diseases, terminal illnesses, or conditions that aren’t responding to existing treatments will often seek out clinical trials, only to discover that finding one they qualify for is a huge challenge.
“Despite the stakes, from the patient’s perspective, the clinical trial process is impressively broken, obtuse, and confusing,” Bess Stillman, an ER doctor, writes of her experience trying to find a clinical trial for her husband, Jake, who is facing an aggressive type of cancer.
“[The process is] one that I gather no one likes,” she continues. “Patients don’t, their families don’t, hospitals and oncologists who run the clinical trials don’t, drug companies must not, and the people who die while waiting to get into a trial probably don’t.”
Neither issue — drug developers’ trouble finding trial participants, and patients’ problems finding trials — has a single cause. One hurdle they share, however, is the fact that current approaches for matching patients and trials are time consuming and labor intensive — even for medical professionals.
“Although I’m a doctor,” writes Stillman, “I’ve been stymied by the clinical trial process.”
Trials have very strict criteria on who can and can’t participate, and study staff will sometimes start their search for people who qualify by hunting through the electronic health records (EHRs) of patients at their study sites.
The challenge with this is that you might be able to filter EHRs to show you people of a certain age or with a certain condition, but crucial details on, say, how they responded to a specific treatment might have been typed into a text box by whoever treated the patient.
Two healthcare professionals might use different language to describe the same thing, so if that treatment response is key to trial eligibility, the only way to screen for it might be to have someone who understands medical lingo manually review EHRs and match them with trial criteria.
Not only is that inefficient, but human reviewers might also accidentally overlook qualifying patients. After all, they’re only human.
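To make the gap concrete, here’s a minimal, hypothetical sketch (the records, field names, and criteria are all invented for illustration): a structured query can narrow EHRs by age and diagnosis, but the detail that actually decides eligibility, how a patient responded to a prior treatment, sits in a free-text note that a simple filter can’t reliably evaluate.

```python
# Hypothetical EHR records: structured fields plus a free-text clinician note.
records = [
    {"id": 1, "age": 62, "diagnosis": "heart failure",
     "note": "Pt tolerated beta-blocker poorly; d/c'd after 2 wks due to fatigue."},
    {"id": 2, "age": 58, "diagnosis": "heart failure",
     "note": "Responding well to ACE inhibitor, EF improved on follow-up echo."},
]

# Structured filtering is easy: age and diagnosis are discrete fields.
candidates = [r for r in records if r["age"] >= 50 and r["diagnosis"] == "heart failure"]

# But a criterion like "must have failed beta-blocker therapy" lives in the note,
# written in whatever shorthand the clinician used ("tolerated poorly", "d/c'd",
# "discontinued", "intolerant"), so a keyword match misses most of the variants.
failed_beta_blocker = [r for r in candidates if "failed beta-blocker" in r["note"].lower()]
print(failed_beta_blocker)  # [] -- patient 1 arguably qualifies, but the exact phrase never appears
```

This is exactly the situation where a human reviewer who understands the shorthand currently has to step in.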
Patients trying to find a clinical trial on their own, meanwhile, will typically be directed to ClinicalTrials.gov, the NIH’s online database of trials, which they can filter by their condition, location, or the name of an in-development drug (if they happen to know about it).
Their challenge is the same one recruiters face. Because the information in the database is entered by the humans running each trial, it isn’t fully standardized, and even the standardized fields aren’t always used the same way. If you’re looking to fight a certain kind of cancer, for example, a search for that specific cancer can easily miss a drug designed to fight all solid tumors.
As Stillman shared in her article, she got a different list of results from the database if she changed her wording even slightly: “head and neck cancer,” for example, returned nearly 7,000 trials, while “cancer of the head and neck” delivered about 4,300 results.
She then needed to manually sift through the results to figure out which ones her husband met the criteria to join. While users can filter results by basic info, such as acceptable age ranges, they can’t filter out trials that, say, don’t accept people who have already undergone chemo.
AI to the rescue?
After Stillman shared her story, she was contacted by companies that use algorithms to match patients to trials, but their systems didn’t seem any better than her manual attempts: one sent her a list of five trials, two of which Jake didn’t even qualify for.
“The ideas sound good … Based on what we’ve seen so far, however, the ‘AI’ tech isn’t there yet,” she writes, adding, “I’d like to see a scalable AI solution, but as long as the data the AI trains on (what’s currently available on ClinicalTrials.gov) is incomplete and poorly structured, I don’t see it happening.”
Advances in AI, though, could be chipping away at this problem, thanks to new tools called large language models (LLMs).
These AIs learn to understand and generate natural language (the kind we use to talk to one another or, say, jot notes in medical records) by training on huge datasets of text.
They’re the basis for ChatGPT and other AI chatbots, and because they can understand data that isn’t highly structured, some scientists believe they have great potential to help overcome the challenge of matching patients to trials and vice versa, using info written in EHRs and in trial databases.
A team led by researchers from Harvard Medical School, for example, recently shared a paper on the preprint server medRxiv detailing RECTIFIER, an AI they trained to identify people who met the criteria to join an actual heart failure trial.
This trial had 6 inclusion criteria (patients must have these specific conditions) and 17 exclusion criteria (patients cannot have any of these). Ten of these 23 requirements could be determined by looking at simple, structured data in EHRs (like age, for example), so the Harvard team decided to focus on the other 13, designing prompts for their AI such as “Is the patient currently undergoing dialysis?” and “Is the patient currently pregnant or breastfeeding?”
To get the AI to accurately answer these questions, they had to train it to find the relevant data within clinician notes in patients’ EHRs. It then needed to understand that information well enough to decide whether a patient did or didn’t meet the criteria.
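To make that concrete, here’s a rough sketch of the criterion-by-criterion approach the paper describes, asking the model one eligibility question at a time against a clinician note. It is not the study’s actual code: the note, the prompt wording, and the model settings below are invented for illustration, and the sketch assumes the OpenAI Python client with an API key in the environment.

```python
# Rough sketch of criterion-by-criterion screening with an LLM, in the spirit of
# RECTIFIER. The note and prompts are invented; the real study's prompt design,
# note handling, and model configuration are more involved.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

clinician_note = (
    "68F with HFrEF, EF 30%. Not on dialysis. Denies pregnancy. "
    "Started on sacubitril/valsartan at last visit, tolerating well."
)

criteria_questions = [
    "Is the patient currently undergoing dialysis?",
    "Is the patient currently pregnant or breastfeeding?",
]

for question in criteria_questions:
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": (
                "You are screening a patient for a clinical trial. Answer the question "
                "using only the clinician note provided. Reply with 'Yes', 'No', or "
                "'Cannot determine', followed by the supporting text from the note."
            )},
            {"role": "user", "content": f"Clinician note:\n{clinician_note}\n\nQuestion: {question}"},
        ],
    )
    print(question, "->", response.choices[0].message.content)
```

Asking one narrow yes/no question per criterion, rather than asking for a single overall eligibility verdict, makes each answer easier to check against the note it came from.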
To test the AI, they compared its determinations to the conclusions reached by the heart failure study’s human staff, as well as an expert clinician. The expert’s answers were considered the “gold standard,” and while both the AI and the staff closely matched the expert’s judgment, the AI was in slightly better agreement.
More importantly, it was also cheap. While Microsoft donated access to OpenAI’s GPT-4 LLM for the study, the team determined that it would cost a drug developer about 10 cents to screen a patient using their system with GPT-4 Turbo (an even more capable version of the model) at current prices.
LLMs might also be able to help patients find a clinical trial.
In April 2023, myTomorrows, a healthcare company that helps match patients and trials, beta-launched TrialSearch AI, an LLM-based tool designed to help doctors quickly find worthwhile trials for their patients. The company details the system in a paper on the preprint server arXiv.
To use TrialSearch AI, a doctor feeds the system a summary of the patient’s medical history written in unstructured text. The AI then searches ClinicalTrials.gov and EudraCT (the European Union’s online database of clinical trials) to identify trials the patient might qualify to join.
Within minutes, the doctor will have a list of trials to review.
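myTomorrows hasn’t published TrialSearch AI’s code, so the sketch below only illustrates the retrieval step that any tool like it has to start with: pulling candidate trials from ClinicalTrials.gov’s public API. The condition string and the handful of fields printed are arbitrary examples, and the hard part, using an LLM to judge a patient’s unstructured history against each trial’s eligibility text, isn’t shown.

```python
# Rough sketch of the retrieval step only: pull candidate trials for a condition
# from ClinicalTrials.gov's public v2 API. Matching the results against a specific
# patient's unstructured history is the harder, LLM-driven part and isn't shown.
import requests

condition = "head and neck cancer"  # example condition, not a real patient query

resp = requests.get(
    "https://clinicaltrials.gov/api/v2/studies",
    params={"query.cond": condition, "pageSize": 20},
    timeout=30,
)
resp.raise_for_status()

for study in resp.json().get("studies", []):
    ident = study.get("protocolSection", {}).get("identificationModule", {})
    status = study.get("protocolSection", {}).get("statusModule", {})
    print(ident.get("nctId"), "|", status.get("overallStatus"), "|", ident.get("briefTitle"))
```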
“[T]he tool saves precious time for physicians,” writes myTomorrows. “The physician can still make an informed decision, based on transparent and personalized results, and is then able to use the myTomorrows end-to-end platform to make patient referrals directly.”
The big picture
Neither of these LLMs is perfect.
RECTIFIER, for example, sometimes missed small nuances in physician notes that were key to trial eligibility. TrialSearch AI, meanwhile, could only evaluate trials that had separate inclusion and exclusion criteria listed in the databases, and about 30% of all posted trials don’t, a point in favor of Stillman’s observation that bad underlying databases are still a huge challenge for AI.
“The incidence of patient availability sharply decreases when a clinical trial begins and returns to its original level as soon as the trial is completed.”
Lasagna’s Law (1970)
Even if they worked exactly as designed, these AIs couldn’t begin to solve all of the problems with the current clinical trial system.
Low trial enrollment, for example, isn’t just due to doctors having trouble finding people whose medical histories are a good fit — potential participants often face financial and logistical barriers to joining trials, so addressing those could improve enrollment rates.
For patients, identifying some trials they qualify for isn’t the same as finding the best trial for their specific situation. Stillman and her husband ended up choosing one only after hiring an expensive expert, Eileen, who spent many hours researching and narrowing their options down to the most promising trials.
In the future, though, experts like Eileen might be able to help many more patients if AI tools really can speed up the matching process, and many other groups are exploring ways to use LLMs to match patients and trials, too.
Based on early results, it seems the tech has the potential to help more drug trials avoid failure and to help more patients find trials — and if LLMs live up to this potential, it could mean more approved drugs to help all of us live longer, healthier lives.
We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at [email protected].