LAKE BUENA VISTA, FLA. – Digital health technology is vastly expanding the real-world data pool for clinical and comparative effectiveness research, according to Jeffrey Curtis, MD.
The trick is to harness the power of that data to improve patient care and outcomes, and that can be achieved in part through linkage of data sources and through point-of-care access, Dr. Curtis, professor of medicine in the division of clinical immunology and rheumatology at the University of Alabama at Birmingham (UAB), said at the annual meeting of the Florida Society of Rheumatology.
“We want to take care of patients, but probably what you and I also want is to have real-world evidence ... evidence relevant for people [we] take care of on a day-to-day basis – not people in highly selected phase 3 or even phase 4 trials,” he said.
Real-world data gained particular cachet through the 21st Century Cures Act, which permits the Food and Drug Administration to consider real-world evidence as part of the regulatory process and in post-marketing surveillance. Such data include information from electronic health records (EHRs), health plan claims, traditional registries, and mobile health technology, explained Dr. Curtis, who also is codirector of the UAB Pharmacoepidemiology and Pharmacoeconomics Unit.
“And you and I want it because patients are different, and in medicine we only have about 20% of patients where there is direct evidence about what we should do,” he added. “Give me the trial that describes the 75-year-old African American smoker with diabetes and how well he does on biologic du jour; there’s no trial like that, and yet you and I need to make those kinds of decisions in light of patients’ comorbidities and other features.”
Generating real-world evidence, however, requires new approaches and new tools, he said, explaining that efficiency is key for applying the data in busy practices, as is compatibility with delivering an intervention and with randomization.
Imagine using the EHR at the point of care to look up what happened to “the last 10 patients like this” based on how they were treated by you or your colleagues, he said.
“That would be useful information to have. In fact, the day is not so far in the future where you could, perhaps, randomize within your EHR if you had a clinically important question that really needed an answer and a protocol attached,” he added.
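What might that "last 10 patients like this" lookup look like? Here is a minimal sketch, assuming a flat EHR extract with invented column names and toy records; no EHR vendor exposes exactly this interface, and the criteria are purely illustrative:

```python
# Hypothetical "last 10 patients like this" lookup against a flat EHR
# extract. Column names, records, and criteria are invented for
# illustration; this is not any vendor's actual schema or API.
import pandas as pd

ehr = pd.DataFrame([
    ("2019-05-01", 67, "RA", "adalimumab", "improved"),
    ("2019-05-14", 72, "RA", "etanercept", "no change"),
    ("2019-06-02", 58, "RA", "adalimumab", "improved"),
    ("2019-06-20", 75, "RA", "adalimumab", "worsened"),
], columns=["visit_date", "age", "diagnosis", "therapy", "outcome"])

def last_similar(df, diagnosis, min_age, therapy, n=10):
    """Return the n most recent patients resembling the index patient."""
    matches = df[(df.diagnosis == diagnosis)
                 & (df.age >= min_age)
                 & (df.therapy == therapy)]
    return matches.sort_values("visit_date").tail(n)

# How did my last patients like this 65-plus RA patient on adalimumab fare?
print(last_similar(ehr, "RA", 65, "adalimumab").outcome.value_counts())
```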
Real-world data collection
Pragmatic trials offer one approach to garnering real-world data by addressing a simple question – usually with a hard outcome – using very few inclusion and exclusion criteria, Dr. Curtis said, describing the recently completed VERVE Zoster Vaccine trial.
He and his colleagues randomized 617 patients from 33 sites to look at the safety of the live-virus Zostavax herpes zoster vaccine in rheumatoid arthritis patients over age 50 years on any anti–tumor necrosis factor (anti-TNF) therapy. Half of the patients received saline, the other half received the vaccine, and no cases of varicella zoster occurred in either group.
“So, to the extent that half of 617 people with zero cases was reassuring, we now have some evidence where heretofore there was none,” he said, noting that those results will be presented at the 2019 American College of Rheumatology annual meeting. “But the focus of this talk is not on vaccination, it’s really on how we do real-world effectiveness or safety studies in a way that doesn’t slow us way down and doesn’t require some big research operation.”
One way is through efficient recruitment, and depending on how complicated the study is, qualified patients may be easily identifiable through the EHR. In fact, numerous tools are available to codify and search both structured and unstructured data, Dr. Curtis said, noting that he and his colleagues used the web-based i2b2 Query Tool for the VERVE study.
The study sites that did the best with recruiting had the ability to search their own EHRs for patients who met the inclusion criteria, and those patients were then invited to participate. A short video was created to educate those who were interested, and a “knowledge review” quiz was administered afterward to ensure informed consent, which was provided via digital signature.
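As a rough illustration of that kind of eligibility screen, the sketch below filters a hypothetical EHR extract against VERVE-style criteria (RA diagnosis, age over 50 years, current anti-TNF therapy). The column names and drug list are assumptions, and the actual study used the web-based i2b2 Query Tool rather than ad hoc code like this:

```python
# Hypothetical eligibility screen mirroring VERVE-style inclusion
# criteria. Field names and the ANTI_TNFS list are illustrative
# assumptions, not the study's actual query.
import pandas as pd

ANTI_TNFS = {"adalimumab", "etanercept", "infliximab",
             "certolizumab pegol", "golimumab"}

patients = pd.DataFrame([
    ("pt-001", "RA", 63, "adalimumab"),
    ("pt-002", "RA", 47, "etanercept"),
    ("pt-003", "OA", 71, None),
    ("pt-004", "RA", 55, "infliximab"),
], columns=["patient_id", "diagnosis", "age", "current_biologic"])

eligible = patients[
    (patients.diagnosis == "RA")
    & (patients.age > 50)
    & (patients.current_biologic.isin(ANTI_TNFS))
]
print(eligible.patient_id.tolist())  # candidates to invite: ['pt-001', 'pt-004']
```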
Health plan and other “big data” can also be very useful for answering certain questions, such as how soon biologics should be stopped before elective orthopedic surgery. Dr. Curtis and colleagues looked at this using claims data for nearly 4,300 patients undergoing elective hip or knee arthroplasty and found no evidence that administering infliximab within 4 weeks of surgery increased the risk of serious infection within 30 days or of prosthetic joint infection within 1 year.
“Where else are you going to go run a prospective study of 4,300 elective hips and knees?” he said, stressing that it wouldn’t be easy.
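A hedged sketch of the kind of claims-data derivation behind such a study follows: compute the gap between the last infliximab infusion and surgery, define the exposure window, and flag postoperative infection claims. All records, dates, and column names here are invented, and this does not reproduce the published analysis:

```python
# Toy claims-data sketch: pre-surgery exposure window and 30-day
# postoperative infection flag. All records and field names are invented.
import pandas as pd

claims = pd.DataFrame([
    ("pt-101", "2018-03-02", "2018-03-20", "2018-06-25"),  # infection well past 30 days
    ("pt-102", "2018-01-10", "2018-03-15", None),          # no infection claim
], columns=["patient_id", "last_infusion", "surgery", "infection_date"])

for col in ("last_infusion", "surgery", "infection_date"):
    claims[col] = pd.to_datetime(claims[col])

claims["days_pre_op"] = (claims.surgery - claims.last_infusion).dt.days
claims["recent_exposure"] = claims.days_pre_op <= 28  # infusion within 4 weeks of surgery
days_post = (claims.infection_date - claims.surgery).dt.days
claims["infection_30d"] = (days_post >= 0) & (days_post <= 30)

# Compare 30-day serious infection rates across exposure windows
print(claims.groupby("recent_exposure").infection_30d.mean())
```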
Other sources that can help generate real-world effectiveness data include traditional or single-center registries and EHR-based registries.
“The EHR registries are, I think, the newest that many are part of in our field,” he said, noting that “a number of groups are aggregating that,” including the ACR RISE registry and some physician groups, for example.
“What we’re really after is to have a clinically integrated network and a learning health care environment,” he explained, adding that the goal is to develop care pathways.
The approach represents a shift from evidence-based practice to practice-based evidence, he noted.
“When you and I practice, we’re generating that evidence and now we just need to harness that data to get smarter to take care of patients,” he said, adding that the lack of randomization behind much of this data isn’t necessarily a problem.
“Do you have to randomize? I would argue that you don’t necessarily have to randomize if the source of variability in how we treat patients is largely unrelated to patients’ characteristics,” he said.
If the evidence for a specific approach is weak, or a decision is based on physician preference, physician practice, or insurance company considerations instead of patient characteristics, randomization may not be necessary, he explained.
In fact, insurance company requirements often create “natural experiments” that can be used to help identify better practices. For example, if one plan covers only adalimumab for first-line TNF inhibition, and another has a “different fail-first policy and that’s not first line and everybody gets some other TNF inhibitor, then I can probably compare those quite reasonably,” he said.
“That’s a great setting where you might not need randomization.”
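A minimal sketch of exploiting such a natural experiment, assuming plan formulary effectively assigns the first-line TNF inhibitor; the data and names are invented, and a real analysis would still have to check that the plans’ patient populations are comparable:

```python
# Toy "natural experiment": two plans' fail-first policies assign
# different first-line TNF inhibitors, letting us compare outcomes
# across plans. All data and column names are invented.
import pandas as pd

cohort = pd.DataFrame([
    ("plan_A", "adalimumab", 1), ("plan_A", "adalimumab", 0),
    ("plan_A", "adalimumab", 1), ("plan_B", "etanercept", 1),
    ("plan_B", "etanercept", 0), ("plan_B", "etanercept", 1),
], columns=["plan", "first_line_tnfi", "responded_6mo"])

# If plan membership is unrelated to patient characteristics, this crude
# comparison approximates what randomized assignment would show.
print(cohort.groupby("first_line_tnfi").responded_6mo.mean())
```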
Of note, “having more data sometimes trumps smarter algorithms,” but that means finding and linking more data that “exist in the wild,” Dr. Curtis said.
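At its simplest, linking data that “exist in the wild” can mean joining two sources on a shared patient key, in practice usually a hashed or tokenized identifier; the tables below are invented for illustration:

```python
# Minimal record-linkage sketch: join an EHR extract and a claims extract
# on a shared patient key. In practice the key is typically a hashed or
# tokenized identifier; these tables and columns are invented.
import pandas as pd

ehr = pd.DataFrame({"patient_id": ["pt-1", "pt-2"],
                    "das28": [3.2, 5.1]})
claims = pd.DataFrame({"patient_id": ["pt-2", "pt-3"],
                       "hospitalized": [True, False]})

linked = ehr.merge(claims, on="patient_id", how="inner")
print(linked)  # only pt-2 appears in both sources
```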