Digital health technology is vastly expanding the real-world data pool for clinical and comparative effectiveness research, according to Jeffrey Curtis, MD.


The trick is to harness the power of that data to improve patient care and outcomes, and that can be achieved in part through linkage of data sources and through point-of-care access, Dr. Curtis, professor of medicine in the division of clinical immunology and rheumatology at the University of Alabama at Birmingham (UAB), said at the annual meeting of the Florida Society of Rheumatology.

“We want to take care of patients, but probably what you and I also want is to have real-world evidence ... evidence relevant for people [we] take care of on a day-to-day basis – not people in highly selected phase 3 or even phase 4 trials,” he said.

Real-world data include information from electronic health records (EHRs), health plan claims, traditional registries, and mobile health technology, explained Dr. Curtis, who also is codirector of the UAB Pharmacoepidemiology and Pharmacoeconomics Unit. Such data gained particular cachet through the 21st Century Cures Act, which permits the Food and Drug Administration to consider real-world evidence as part of the regulatory process and in post-marketing surveillance.


“And you and I want it because patients are different, and in medicine we only have about 20% of patients where there is direct evidence about what we should do,” he added. “Give me the trial that describes the 75-year-old African American smoker with diabetes and how well he does on biologic du jour; there’s no trial like that, and yet you and I need to make those kinds of decisions in light of patients’ comorbidities and other features.”

Generating real-world evidence, however, requires new approaches and new tools, he said, explaining that efficiency is key for applying the data in busy practices, as is compatibility with delivering an intervention and with randomization.

Imagine using the EHR at the point of care to look up what happened to “the last 10 patients like this” based on how they were treated by you or your colleagues, he said.

“That would be useful information to have. In fact, the day is not so far in the future where you could, perhaps, randomize within your EHR if you had a clinically important question that really needed an answer and a protocol attached,” he added.
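To make that idea concrete, here is a minimal, purely hypothetical sketch of a “last 10 patients like this” lookup, written in Python with pandas over an invented flat EHR extract; the table layout, column names, and values are illustrative assumptions rather than any real EHR schema.

```python
# Hypothetical sketch: a "last 10 patients like this" lookup against an
# invented flat EHR extract. Table layout and column names are assumptions.
import pandas as pd

# Pretend this came from an EHR export: one row per patient.
ehr = pd.DataFrame(
    {
        "patient_id": [101, 102, 103, 104, 105, 106],
        "age": [74, 58, 77, 69, 80, 72],
        "diagnosis": ["RA"] * 6,
        "has_diabetes": [True, False, True, True, False, True],
        "treatment": ["adalimumab", "etanercept", "adalimumab",
                      "tofacitinib", "adalimumab", "etanercept"],
        "visit_date": pd.to_datetime(
            ["2019-01-05", "2019-02-11", "2019-03-02",
             "2019-04-20", "2019-05-15", "2019-06-01"]),
        "reached_low_disease_activity": [True, False, True, False, True, False],
    }
)

def last_n_similar(df, n=10, min_age=65, diagnosis="RA", diabetic=True):
    """Return the n most recent patients matching simple 'like this' criteria."""
    similar = df[
        (df["diagnosis"] == diagnosis)
        & (df["age"] >= min_age)
        & (df["has_diabetes"] == diabetic)
    ]
    return similar.sort_values("visit_date", ascending=False).head(n)

# How did the most recent similar patients fare, by treatment?
cohort = last_n_similar(ehr)
print(cohort.groupby("treatment")["reached_low_disease_activity"].mean())
```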
 

Real-world data collection

Pragmatic trials offer one approach to garnering real-world data by addressing a simple question – usually with a hard outcome – using very few inclusion and exclusion criteria, Dr. Curtis said, describing the recently completed VERVE Zoster Vaccine trial.

He and his colleagues randomized 617 rheumatoid arthritis patients over age 50 years who were on any anti–tumor necrosis factor (anti-TNF) therapy, enrolled at 33 sites, to assess the safety of the live-virus Zostavax herpes zoster vaccine. Half of the patients received the vaccine and the other half received saline placebo; no cases of varicella zoster occurred in either group.



“So, to the extent that half of 617 people with zero cases was reassuring, we now have some evidence where heretofore there was none,” he said, noting that those results will be presented at the 2019 American College of Rheumatology annual meeting. “But the focus of this talk is not on vaccination, it’s really on how we do real-world effectiveness or safety studies in a way that doesn’t slow us way down and doesn’t require some big research operation.”

One way is through efficient recruitment, and depending on how complicated the study is, qualified patients may be easily identifiable through the EHR. In fact, numerous tools are available to codify and search both structured and unstructured data, Dr. Curtis said, noting that he and his colleagues used the web-based i2b2 Query Tool for the VERVE study.

The study sites that did the best with recruiting had the ability to search their own EHRs for patients who met the inclusion criteria, and those patients were then invited to participate. A short video was created to educate those who were interested, and a “knowledge review” quiz was administered afterward to ensure informed consent, which was provided via digital signature.
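i2b2 itself is driven through a point-and-click web interface, but the underlying idea, turning inclusion criteria into a structured query over coded EHR data, can be sketched in ordinary SQL. The schema, codes, and data below are invented and far simpler than real EHR data or the actual VERVE screening logic.

```python
# Hypothetical sketch: cohort identification as a structured query, run here
# against an invented in-memory schema. Not i2b2's actual API or data model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patients (patient_id INTEGER, birth_year INTEGER);
    CREATE TABLE diagnoses (patient_id INTEGER, icd10 TEXT);
    CREATE TABLE medications (patient_id INTEGER, drug TEXT);

    INSERT INTO patients VALUES (1, 1950), (2, 1985), (3, 1945);
    INSERT INTO diagnoses VALUES (1, 'M05.79'), (2, 'M05.79'), (3, 'M06.9');
    INSERT INTO medications VALUES (1, 'adalimumab'), (2, 'methotrexate'),
                                   (3, 'etanercept');
""")

# VERVE-style criteria, roughly: RA diagnosis (ICD-10 M05/M06), age over 50,
# and current anti-TNF therapy.
rows = conn.execute("""
    SELECT DISTINCT p.patient_id
    FROM patients p
    JOIN diagnoses d ON d.patient_id = p.patient_id
    JOIN medications m ON m.patient_id = p.patient_id
    WHERE (d.icd10 LIKE 'M05%' OR d.icd10 LIKE 'M06%')
      AND (2019 - p.birth_year) > 50
      AND m.drug IN ('adalimumab', 'etanercept', 'infliximab',
                     'golimumab', 'certolizumab')
""").fetchall()
print([r[0] for r in rows])  # candidate patients to invite
```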

Health plan claims and other “big data” can also be very useful for answering certain questions. One example: How soon should biologics be stopped before elective orthopedic surgery? Dr. Curtis and colleagues looked at this question using claims data for nearly 4,300 patients undergoing elective hip or knee arthroplasty and found no evidence that administering infliximab within 4 weeks of surgery increased the risk of serious infection within 30 days or of prosthetic joint infection within 1 year.

“Where else are you going to go run a prospective study of 4,300 elective hips and knees?” he said, stressing that it wouldn’t be easy.
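As a toy illustration of how such an exposure window can be defined in claims data, the sketch below flags patients whose last infliximab infusion fell within 4 weeks of surgery. All field names and values are invented; the published analysis involved far larger numbers and adjustment for confounding.

```python
# Hypothetical sketch: defining a preoperative exposure window in claims data.
# Field names and values are invented; no adjustment for confounding is shown.
import pandas as pd

claims = pd.DataFrame(
    {
        "patient_id": [1, 2, 3],
        "last_infliximab_infusion": pd.to_datetime(
            ["2018-03-01", "2018-01-10", "2018-02-20"]),
        "surgery_date": pd.to_datetime(
            ["2018-03-20", "2018-03-15", "2018-04-25"]),
        "serious_infection_30d": [False, False, True],
    }
)

# Exposure: infliximab administered within 4 weeks (28 days) before surgery.
gap_days = (claims["surgery_date"] - claims["last_infliximab_infusion"]).dt.days
claims["infused_within_4wk"] = gap_days <= 28

# Crude 30-day serious infection risk by exposure group.
print(claims.groupby("infused_within_4wk")["serious_infection_30d"].mean())
```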

Other sources that can help generate real-world effectiveness data include traditional or single-center registries and EHR-based registries.

“The EHR registries are, I think, the newest that many are part of in our field,” he said, noting that “a number of groups are aggregating that,” including the ACR RISE registry and some physician groups, for example.



“What we’re really after is to have a clinically integrated network and a learning health care environment,” he explained, adding that the goal is to develop care pathways.

The approach represents a shift from evidence-based practice to practice-based evidence, he noted.

“When you and I practice, we’re generating that evidence and now we just need to harness that data to get smarter to take care of patients,” he said, adding that the lack of randomization in much of this data isn’t necessarily a problem.

“Do you have to randomize? I would argue that you don’t necessarily have to randomize if the source of variability in how we treat patients is very related to patients’ characteristics,” he said.

If the evidence for a specific approach is weak, or a decision is based on physician preference, physician practice, or insurance company considerations instead of patient characteristics, randomization may not be necessary, he explained.

In fact, insurance company requirements often create “natural experiments” that can be used to help identify better practices. For example, if one plan covers only adalimumab for first-line TNF inhibition, and another has a “different fail-first policy and that’s not first line and everybody gets some other TNF inhibitor, then I can probably compare those quite reasonably,” he said.

“That’s a great setting where you might not need randomization.”
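A minimal sketch of how such a formulary-driven natural experiment might be summarized, again with invented data: because first-line drug choice is set by plan policy rather than by patient characteristics, a between-plan comparison is less vulnerable to confounding by indication than the usual observational contrast.

```python
# Hypothetical sketch: comparing outcomes across two plans whose formularies
# force different first-line TNF inhibitors. All data are invented.
import pandas as pd

patients = pd.DataFrame(
    {
        "plan": ["A", "A", "A", "B", "B", "B"],
        # Plan A covers only adalimumab first line; plan B's fail-first policy
        # steers patients to a different TNF inhibitor.
        "first_line_tnfi": ["adalimumab"] * 3 + ["etanercept"] * 3,
        "remission_at_12m": [True, False, True, True, True, False],
    }
)

# Treatment assignment tracks plan policy, not patient characteristics, so a
# crude between-plan comparison is more defensible here than usual.
print(patients.groupby("plan")["remission_at_12m"].mean())
```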

Of note, “having more data sometimes trumps smarter algorithms,” but that means finding and linking more data that “exist in the wild,” Dr. Curtis said.

Linking data sources

When he and his colleagues wanted to assess the cost of not achieving RA remission, no single data source provided all of the information they needed. They used both CORRONA registry data and health claims data to look at various outcome measures across disease activity categories and with adjustment for comorbidity clusters. They previously reported on the feasibility and validity of the approach.
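A stripped-down sketch of the linkage step, with invented fields: registry and claims extracts are joined on shared quasi-identifiers and then summarized by disease activity category. Real linkage projects add privacy safeguards and formal validation of match quality; this shows only the mechanical core.

```python
# Hypothetical sketch: deterministic linkage of a registry extract to a claims
# extract on shared quasi-identifiers. All field names and values are invented.
import pandas as pd

registry = pd.DataFrame(
    {
        "dob": ["1955-04-02", "1948-09-17", "1960-01-30"],
        "sex": ["F", "M", "F"],
        "zip5": ["35294", "33101", "35294"],
        "cdai_category": ["remission", "high", "moderate"],
    }
)
claims = pd.DataFrame(
    {
        "dob": ["1955-04-02", "1948-09-17", "1960-01-30"],
        "sex": ["F", "M", "F"],
        "zip5": ["35294", "33101", "35294"],
        "annual_cost_usd": [8200, 41500, 19700],
    }
)

# Exact-match join on date of birth, sex, and 5-digit ZIP, then cost by
# disease activity category (e.g., the cost of not achieving remission).
linked = registry.merge(claims, on=["dob", "sex", "zip5"], how="inner")
print(linked.groupby("cdai_category")["annual_cost_usd"].mean())
```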

“We’re currently doing another project where one of the local Blue Cross plans said ‘I’m interested to support you to see how efficient you are; we will donate or loan you our claims data [and] let you link it to your practice so you can actually tell us ... cost conditional on [a patient’s] disease activity,’ ” he said.

Another example involves a recent study looking at biomarker-based cardiovascular disease risk prediction in RA using data from nearly 31,000 Medicare patients linked with multibiomarker disease activity (MBDA) test results, with which they “basically built and validated a risk prediction model,” he said.

The point is that such data linkage provided tools for use at the point of care that can predict CVD risk using “some simple things that you and I have in our EHR,” he said. “But you couldn’t do this if you had to assemble a prospective cohort of tens of thousands of arthritis patients and then wait years for follow-up.”
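To illustrate the general shape of building and validating such a risk prediction model (not the published model itself; the features and data below are synthetic stand-ins), one might fit and test a simple logistic regression:

```python
# Hypothetical sketch: fit and validate a simple CVD risk model on synthetic
# data. Features, coefficients, and outcome are invented stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(65, 8, n)
mbda_score = rng.normal(35, 12, n)   # multibiomarker disease activity score
diabetes = rng.binomial(1, 0.25, n)
X = np.column_stack([age, mbda_score, diabetes])

# Synthetic outcome: event risk rises with age, MBDA score, and diabetes.
logit = -9.0 + 0.08 * age + 0.03 * mbda_score + 0.6 * diabetes
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```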

Patient-reported outcomes collected at the point of care and by patients at home between visits, such as digital data collected via wearable technology, can provide additional information to help improve patient care and management.

“My interest is not to think about [these data sources] in isolation, but really to think about how we bring these together,” he said. “I’m interested in maximizing value for both patients and clinicians, and not having to pick only one of these data sources, but really to harness several of them if that’s what we need to take better care of patients and to answer important questions.”

Doing so is increasingly important given the workforce shortage in rheumatology, he noted.

“The point is that we’re going to need to be a whole lot more efficient as a field because there are going to be fewer of us even at a time when more of us are needed,” he said.

It’s a topic in which the ACR has shown a lot of interest, he said, noting that he cochaired a preconference course on mobile health technologies at the 2018 ACR annual meeting and is involved with a similar course on “big data” ahead of the 2019 meeting.

The thought of making use of the various digital health and “big data” sources can be overwhelming, but the key is to start with the question that needs an answer or the problem that needs to be solved.

“Don’t start with the data,” he explained. “Start with [asking] ... ‘What am I trying to do?’ ”

Dr. Curtis reported funding from the National Institute of Arthritis and Musculoskeletal and Skin Diseases and the Patient-Centered Outcomes Research Institute. He has also consulted for or received research grants from Amgen, AbbVie, Bristol-Myers Squibb, CORRONA, Lilly, Janssen, Myriad, Novartis, Roche, Pfizer, and Sanofi/Regeneron.
