Cardioprotective Effect of Metformin in Patients with Decreased Renal Function

Study Overview

Objective. To assess whether metformin use is associated with a lower risk of fatal or nonfatal major adverse cardiovascular events (MACE), as compared with sulfonylurea use, among patients with diabetes and reduced kidney function.

Design. Retrospective cohort study of US Veterans receiving care within the Veterans Health Administration, with data supplemented by linkage to Medicare, Medicaid, and National Death Index data from 2001 through 2016.

Setting and participants. A retrospective cohort of Veterans Health Administration (VHA) patients, aged 18 years and older. Pharmacy data included medication, date filled, days supplied, and number of pills dispensed. For Medicare and Medicaid patients, enrollees’ claims files and prescription (Part D) data were obtained. In addition, dates and cause of death were obtained from vital status and the National Death Index files.

Patients with new-onset type 2 diabetes were identified by selecting new users of metformin, glipizide, glyburide, or glimepiride. These patients were followed longitudinally, and the date of cohort entry and start of follow-up was the day they reached a reduced kidney function threshold, defined as either an estimated glomerular filtration rate (eGFR) of less than 60 mL/min/1.73 m2 or a serum creatinine level of 1.5 mg/dL or greater for men or 1.4 mg/dL or greater for women. Follow-up ended at nonpersistence, defined as 90 days without an antidiabetic drug; censoring, defined as the 181st day without VHA contact; or the study end date of December 31, 2016.

Main outcome measures. The primary outcome was a composite MACE endpoint that included hospitalization for acute myocardial infarction (AMI), ischemic or hemorrhagic stroke, or transient ischemic attack (TIA), as well as cardiovascular death. The secondary outcome excluded TIA from the composite MACE event because not all patients who sustain a TIA are admitted to the hospital.

Main results. From January 1, 2002, through December 30, 2015, 67,749 new metformin users and 28,976 new sulfonylurea users who persisted with treatment were identified. After propensity score-weighted matching, 24,679 metformin users and 24,799 sulfonylurea users entered the final analysis. The cohort was 98% male and 81.8% white. Metformin users were younger than sulfonylurea users, with a median age of 61 years versus 71 years.

For the main outcome, there were 1048 composite MACE events among metformin patients with reduced kidney function and 1394 MACE events among sulfonylurea patients, yielding 23.0 (95% confidence interval [CI], 21.7-24.4) versus 29.2 (95% CI, 27.7-30.7) events per 1000 person-years of use, respectively, after propensity score weighting. After covariate adjustment, the cause-specific adjusted hazard ratio (aHR) for MACE was 0.80 (95% CI, 0.75-0.86) among metformin users compared with sulfonylurea users. The adjusted incidence rate difference was 5.8 (95% CI, 4.1-7.3) fewer events per 1000 person-years for metformin compared with sulfonylurea users. Results were also consistent for each component of the primary outcome, including cardiovascular hospitalizations (aHR, 0.87; 95% CI, 0.80-0.95) and cardiovascular deaths (aHR, 0.70; 95% CI, 0.63-0.78).
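
For readers less accustomed to person-time rates, the reported figures reduce to simple arithmetic: a rate per 1000 person-years is the event count divided by the group's total follow-up time, scaled by 1000, and the crude gap between the two weighted rates is close to the covariate-adjusted difference quoted above:

$$ \text{rate per 1000 person-years} = \frac{\text{events}}{\text{person-years of follow-up}} \times 1000, \qquad 29.2 - 23.0 = 6.2 $$

The unadjusted gap of 6.2 events per 1000 person-years narrows to 5.8 once covariates are adjusted for.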

Analysis of the secondary outcome (AMI, stroke, and cardiovascular death, excluding TIA) demonstrated similar results, with a cause-specific aHR of 0.78 (95% CI, 0.72-0.84) among metformin users compared with sulfonylurea users. The adjusted incidence rate difference was 5.9 (95% CI, 4.3-7.6) fewer events per 1000 person-years for metformin compared with sulfonylurea users.

Conclusion. For patients with diabetes and reduced kidney function, treatment with metformin monotherapy, as compared with a sulfonylurea, was associated with a lower risk of MACE.

Commentary

There are approximately 30 million US adults with a diagnosis of type 2 diabetes (T2DM), of whom 20% also have impaired kidney function or chronic kidney disease (CKD).1 Metformin hydrochloride has remained the preferred first-line treatment for T2DM based on safety, effectiveness, and low cost.2 Metformin is eliminated by the kidneys and can accumulate as eGFR declines. Because of concern for lactic acidosis with this accumulation, the US Food and Drug Administration (FDA) issued a safety warning restricting metformin use in patients with serum creatinine levels of 1.5 mg/dL or greater for men or 1.4 mg/dL or greater for women. The FDA recommended against starting metformin therapy in patients with CKD and an eGFR between 30 and 45 mL/min/1.73 m2, although patients already taking metformin can continue with caution in that setting.1,3

There are several limitations in conducting observational studies comparing metformin to other glucose-lowering medications. First, metformin trials typically excluded patients with CKD due to the FDA warnings. Second, there is usually a time-lag bias in which patients who initiate glucose-lowering medications other than metformin are at a later stage of disease. Third, there is often an allocation bias, as there are substantial differences in baseline characteristics between metformin and sulfonylurea monotherapy users, with metformin users usually being younger and healthier.4

In this retrospective cohort study by Roumie et al, the authors used propensity score-weighted matching to reduce the impact of time-lag and allocation bias. However, several major limitations remained. First, the study design excluded those who began diabetes treatment after the onset of reduced kidney function; therefore, the findings cannot be generalized to patients who already have a reduced eGFR at the time of metformin initiation. Second, cohort entry and the start of follow-up was the date of either an elevated serum creatinine level or an eGFR below 60 mL/min/1.73 m2, so the cohort may have included some patients who had an acute kidney injury event, rather than progression to CKD, and later recovered. Third, the study population was mostly older white men; together with the lack of a dose analysis, this limits generalizability to other populations.

Applications for Clinical Practice

The current study demonstrated that metformin use, as compared with sulfonylurea use, was associated with a lower risk of fatal or nonfatal major adverse cardiovascular events among patients with reduced kidney function. When clinicians are managing hyperglycemia in patients with type 2 diabetes, it is important to keep in mind that all medications have adverse effects. There are now 11 drug classes for treating diabetes, in addition to multiple insulin options, and the challenge for clinicians is to present clear information and, through shared decision making based on each patient’s clinical circumstances and preferences, to guide patients toward individualized glycemic target ranges.

–Ka Ming Gordon Ngai, MD, MPH

References

1. Geiss LS, Kirtland K, Lin J, et al. Changes in diagnosed diabetes, obesity, and physical inactivity prevalence in US counties, 2004-2012. PLoS One. 2017;12:e0173428.

2. Good CB, Pogach LM. Should metformin be first-line therapy for patients with type 2 diabetes and chronic kidney disease? JAMA Intern Med. 2018;178:911-912.

3. US Food and Drug Administration. FDA revises warnings regarding use of the diabetes medicine metformin in certain patients with reduced kidney function. https://www.fda.gov/downloads/Drugs/DrugSafety/UCM494140.pdf. Accessed September 30, 2019.

4. Wexler DJ. Sulfonylureas and cardiovascular safety: the final verdict? JAMA. 2019;322:1147-1149.


Delayed Cardioversion Noninferior to Early Cardioversion in Recent-Onset Atrial Fibrillation

Study Overview

Objective. To assess whether immediate restoration of sinus rhythm is necessary in hemodynamically stable patients presenting to the emergency department with recent-onset (< 36 hr), symptomatic atrial fibrillation.

Design. Multicenter, randomized, open-label, noninferiority trial, RACE 7 ACWAS (Rate Control versus Electrical Cardioversion Trial 7: Acute Cardioversion versus Wait and See).

Setting and participants. 15 hospitals in the Netherlands, including 3 academic hospitals, 8 nonacademic teaching hospitals, and 4 nonteaching hospitals. Patients 18 years of age or older with recent-onset (< 36 hr), symptomatic atrial fibrillation without signs of myocardial ischemia or a history of persistent atrial fibrillation who presented to the emergency department were randomized in a 1:1 ratio to either a wait-and-see approach or early cardioversion. The wait-and-see approach consisted of the administration of rate-control medication, including intravenous or oral beta-adrenergic-receptor blocking agents, nondihydropyridine calcium-channel blockers, or digoxin, to achieve a heart rate of 110 beats per minute or less and symptomatic relief. Patients were then discharged with an outpatient visit scheduled for the next day and a referral for cardioversion as close as possible to 48 hours after the onset of symptoms. The early cardioversion group received pharmacologic cardioversion with flecainide unless it was contraindicated, in which case electrical cardioversion was performed.

Main outcome measures. The primary outcome was the presence of sinus rhythm on an electrocardiogram (ECG) recorded at the 4-week trial visit. Secondary endpoints included the duration of the index visit in the emergency department, emergency department visits related to atrial fibrillation, cardiovascular complications, and time until recurrence of atrial fibrillation.

Main results. From October 2014 through September 2018, 437 patients underwent randomization, with 218 patients assigned to the delayed cardioversion group and 219 to the early cardioversion group. Mean age was 65 years, and a majority of the patients (60%) were men (n = 261). The primary end point, the presence of sinus rhythm on the ECG recorded at the 4-week visit, was met in 193 of 212 patients (91%) in the delayed cardioversion group and in 202 of 215 patients (94%) in the early cardioversion group. The between-group difference of –2.9 percentage points (95% confidence interval [CI], –8.2 to 2.2; P = 0.005 for noninferiority) met the criterion for noninferiority of the wait-and-see approach.
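
The reported difference follows directly from the counts above; as a quick arithmetic check:

$$ \frac{193}{212} = 91.0\%, \qquad \frac{202}{215} = 94.0\%, \qquad 91.0\% - 94.0\% = -2.9 \ \text{percentage points} $$

The noninferiority conclusion rests on the lower bound of the confidence interval (–8.2 percentage points) staying within the trial's prespecified margin, which is not restated in this summary.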

For secondary outcomes, the median duration of the index visit was 120 minutes (range, 60 to 253) in the delayed cardioversion group and 158 minutes (range, 110 to 228) in the early cardioversion group. The median difference between the 2 groups was 30 minutes (95% CI, 6 to 51 minutes). There was no significant difference in cardiovascular complications between the 2 groups. Fourteen of 212 patients (7%) in the delayed cardioversion group and 14 of 215 patients (7%) in the early cardioversion group had subsequent visits to the emergency department because of a recurrence of atrial fibrillation. Telemetric ECG recordings were available for 335 of the 437 patients. Recurrence of atrial fibrillation occurred in 49 of the 164 (30%) patients in the delayed cardioversion group and 50 of the 171 (29%) patients in the early cardioversion group.

In terms of treatment, conversion to sinus rhythm within 48 hours occurred spontaneously in 150 of 218 patients (69%) in the delayed cardioversion group after receiving rate-control medications only. Of the 218 patients, 61 (28%) had delayed cardioversion (9 by pharmacologic and 52 by electrical cardioversion) as per protocol and achieved sinus rhythm within 48 hours. In the early cardioversion group, conversion to sinus rhythm occurred spontaneously in 36 of 219 patients (16%) before the initiation of the cardioversion and in 171 of 219 (78%) after cardioversion (83 by pharmacologic and 88 by electrical).

Conclusion. For patients with recent-onset, symptomatic atrial fibrillation, allowing a short time for spontaneous conversion to sinus rhythm is reasonable as demonstrated by this noninferiority study.

Commentary

Atrial fibrillation accounts for nearly 0.5% of all emergency department visits, and this number is increasing.1,2 Patients commonly undergo immediate restoration of sinus rhythm by means of pharmacologic or electrical cardioversion. However, it is questionable whether immediate restoration of sinus rhythm is necessary, as spontaneous conversion to sinus rhythm occurs frequently. In addition, the safety of cardioversion between 12 and 48 hours after the onset of atrial fibrillation is questionable.3,4

In this pragmatic trial, the findings suggest that rate-control therapy alone achieved prompt symptom relief in almost all eligible patients, carried a low risk of complications, and reduced the median length of stay in the emergency department to about 2 hours. Independent of cardioversion strategy, the authors stressed the importance of managing stroke risk when patients present to the emergency department with atrial fibrillation. In this trial, 2 patients had cerebral embolism even though both were started on anticoagulation at the index visit. One patient, from the delayed cardioversion group, was on dabigatran after spontaneous conversion to sinus rhythm and had an event 5 days after the index visit. The other patient, from the early cardioversion group, was on rivaroxaban and had an event 10 days after electrical cardioversion. For the results of this trial to be broadly applicable, exclusion of intraatrial thrombus on transesophageal echocardiography may be necessary when the time of onset of atrial fibrillation is less clear.

There are several limitations of this study. First, the study included only 171 of the 3706 patients (4.6%) screened systematically at the 2 academic centers, while including 266 patients from 13 centers without systematic screening; the large proportion of screened patients who were excluded makes the results less generalizable. Second, the reported incidence of recurrent atrial fibrillation within 4 weeks after randomization likely underestimates the true recurrence rate, since the trial used intermittent monitoring. Although the incidence of about 30% was similar between the 2 groups, the authors suggested that the probability of recurrence of atrial fibrillation was not affected by the management approach during the acute event. Finally, for these results to be applicable in the general population, defined treatment algorithms and access to prompt follow-up are needed, and these may not be practical in other clinical settings.2,5

Applications for Clinical Practice

The current study demonstrated that immediate cardioversion is not necessary for patients with recent-onset, symptomatic atrial fibrillation in the emergency department. Allowing a short time for spontaneous conversion to sinus rhythm is reasonable as long as the total time in atrial fibrillation is less than 48 hours. Special consideration of anticoagulation is critical because stroke has been associated with atrial fibrillation duration between 24 and 48 hours.

—Ka Ming Gordon Ngai, MD, MPH

References

1. Rozen G, Hosseini SM, Kaadan MI, et al. Emergency department visits for atrial fibrillation in the United States: trends in admission rates and economic burden from 2007 to 2014. J Am Heart Assoc. 2018;7(15):e009024.

2. Healey JS, McIntyre WF. The RACE to treat atrial fibrillation in the emergency department. N Engl J Med. 2019 Mar 18.

3. Andrade JM, Verma A, Mitchell LB, et al. 2018 Focused update of the Canadian Cardiovascular Society guidelines for the management of atrial fibrillation. Can J Cardiol. 2018;34:1371-1392.

4. Nuotio I, Hartikainen JE, Grönberg T, et al. Time to cardioversion for acute atrial fibrillation and thromboembolic complications. JAMA. 2014;312:647-649.

5. Baugh CW, Clark CL, Wilson JW, et al. Creation and implementation of an outpatient pathway for atrial fibrillation in the emergency department setting: results of an expert panel. Acad Emerg Med. 2018;25:1065-1075.


Does Oral Chemotherapy Venetoclax Combined with Rituximab Improve Survival in Patients with Relapsed or Refractory Chronic Lymphocytic Leukemia?

Study Overview

Objective. To assess whether a combination of venetoclax with rituximab, compared to standard chemoimmunotherapy (bendamustine with rituximab), improves outcomes in patients with relapsed or refractory chronic lymphocytic leukemia.

Design. International, randomized, open-label, phase 3 clinical trial (MURANO).

Setting and participants. Patients were eligible for the study if they were 18 years of age or older with a diagnosis of relapsed or refractory chronic lymphocytic leukemia that required therapy, had received 1 to 3 previous treatments (including at least 1 chemotherapy-containing regimen), had an Eastern Cooperative Oncology Group performance status score of 0 or 1, and had adequate bone marrow, renal, and hepatic function. Patients were randomly assigned to receive either venetoclax plus rituximab or bendamustine plus rituximab. Randomization was stratified by geographic region, responsiveness to previous therapy, and the presence or absence of chromosome 17p deletion.

Main outcome measures. The primary outcome was investigator-assessed progression-free survival, defined as the time from randomization to the first occurrence of disease progression, relapse, or death from any cause, whichever occurred first. Secondary efficacy endpoints included independent review committee-assessed progression-free survival (stratified by chromosome 17p deletion), independent review committee-assessed overall response rate and complete response rate, overall survival, rates of clearance of minimal residual disease, the duration of response, event-free survival, and the time to the next treatment for chronic lymphocytic leukemia.

Main results. From 31 March 2014 to 23 September 2015, a total of 389 patients were enrolled at 109 sites in 20 countries and were randomly assigned to receive venetoclax plus rituximab (n = 194), or bendamustine plus rituximab (n = 195). Median age was 65 years (range, 22–85) and a majority of the patients (73.8%) were men. Overall, the demographic and disease characteristics of the 2 groups were similar at baseline.

The median follow-up period was 23.8 months (range, 0–37.4). Investigator-assessed progression-free survival was significantly longer in the venetoclax-rituximab group (median not reached; 32 events of progression or death among 194 patients) than in the bendamustine-rituximab group (median, 17 months; 114 events among 195 patients). The 2-year rate of investigator-assessed progression-free survival was 84.9% (95% confidence interval [CI], 79.1–90.5) in the venetoclax-rituximab group and 36.3% (95% CI, 28.5–44.0) in the bendamustine-rituximab group (hazard ratio for progression or death, 0.17; 95% CI, 0.11–0.25; P < 0.001). The benefit was consistent in favor of the venetoclax-rituximab group in all prespecified subgroup analyses, including patients with or without chromosome 17p deletion.
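
As a rough reading of the effect size (an illustrative interpretation, assuming the proportional-hazards model behind the reported estimate holds), the hazard ratio translates into a relative reduction in the instantaneous risk of progression or death:

$$ 1 - 0.17 = 0.83, $$

that is, roughly an 83% lower hazard of progression or death with venetoclax-rituximab at any given time during follow-up.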

The rate of overall survival was higher in the venetoclax-rituximab group than in the bendamustine-rituximab group, with 24-month rates of 91.9% and 86.6%, respectively (hazard ratio 0.58, 95% CI 0.25–0.90). Assessments of minimal residual disease were available for 366 of the 389 patients (94.1%). On the basis of peripheral-blood samples, the venetoclax-rituximab group had a higher rate of clearance of minimal residual disease than the bendamustine-rituximab group (121 of 194 patients [62.4%] vs. 26 of 195 patients [13.3%]). In bone marrow aspirate, higher rates of clearance of minimal residual disease were likewise seen in the venetoclax-rituximab group (53 of 194 patients [27.3%]) as compared with the bendamustine-rituximab group (3 of 195 patients [1.5%]).
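
The minimal residual disease figures quoted above are straightforward proportions of the randomized groups:

$$ \frac{121}{194} = 62.4\% \ \text{vs}\ \frac{26}{195} = 13.3\% \ \text{(peripheral blood)}, \qquad \frac{53}{194} = 27.3\% \ \text{vs}\ \frac{3}{195} = 1.5\% \ \text{(bone marrow)} $$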

In terms of safety, the most common adverse event reported was neutropenia (60.8% of the patients in the venetoclax-rituximab group vs. 44.1% of the patients in the bendamustine-rituximab group). This contributed to the overall higher rate of grade 3 or 4 adverse events in the venetoclax-rituximab group (159 of 194 patients, or 82.0%) as compared with the bendamustine-rituximab group (132 of 188 patients, or 70.2%). The incidence of serious adverse events, as well as of adverse events that resulted in death, was similar in the 2 groups.

Conclusion. For patients with relapsed or refractory chronic lymphocytic leukemia, venetoclax plus rituximab resulted in significantly higher rates of progression-free survival than standard therapy with bendamustine plus rituximab.

Commentary

Despite advances in treatment, chronic lymphocytic leukemia remains incurable with conventional chemoimmunotherapy regimens, and almost all patients relapse after initial therapy. Following relapse of the disease, the goal is to provide durable progression-free survival, which may extend overall survival [1]. In the subset of chronic lymphocytic leukemia patients with deletion or mutation of the TP53 locus on chromosome 17p13, the disease responds especially poorly to conventional treatment, and median survival is less than 3 years from the time of initiating first treatment [2].

Apoptosis is a process of programmed cell death that proceeds through extrinsic and intrinsic cellular pathways. The B-cell lymphoma/leukemia 2 (BCL-2) protein is a key regulator of the intrinsic apoptotic pathway, and almost all chronic lymphocytic leukemia cells elude apoptosis through overexpression of BCL-2. Venetoclax is an orally administered, highly selective, potent BCL-2 inhibitor approved by the FDA in 2016 for the treatment of chronic lymphocytic leukemia patients with 17p deletion who have received at least 1 prior therapy [3]. There has been great interest in combining venetoclax with other agents active in chronic lymphocytic leukemia, such as chemotherapy, monoclonal antibodies, and B-cell receptor inhibitors. The combination of venetoclax with the CD20 antibody rituximab was found to overcome microenvironment-induced resistance to venetoclax [4].

In this analysis of the phase 3 MURANO trial of venetoclax plus rituximab in relapsed or refractory chronic lymphocytic leukemia, Seymour et al demonstrated a significantly higher rate of progression-free survival with venetoclax plus rituximab than with standard chemoimmunotherapy with bendamustine plus rituximab. In addition, secondary efficacy measures, including the complete response rate, the overall response rate, and overall survival, were also better with venetoclax plus rituximab than with bendamustine plus rituximab.

There are several limitations of this study. First, the study was terminated early at the time of the data review on 6 September 2017. The independent data monitoring committee recommended that the primary analysis be conducted at that time because the prespecified statistical boundaries for early stopping had been crossed for progression-free survival on the basis of stratified log-rank tests. In a letter to the editor, Alexander et al questioned the validity of results when design stages are violated: in immunotherapy trials, progression-free survival curves often separate late rather than at a constant rate, which violates the key assumption of proportional hazards. When a study is terminated early, post hoc confirmatory analyses and evaluations of the robustness of the statistical plan can be used; however, prespecified analyses are critical to reproducibility in trials that are meant to be practice-changing [5]. Second, complete response rates were lower when responses were assessed by the independent review committee than when assessed by the investigators. While this suggests a certain degree of investigator bias, the overall results were similar, and the effect of venetoclax plus rituximab remained significantly better than that of bendamustine plus rituximab.

 

 

Applications for Clinical Practice

The current study demonstrated that venetoclax is safe and effective when combined with rituximab in the treatment of chronic lymphocytic leukemia patients, with or without 17p deletion, who have received at least one prior therapy. The most common serious adverse event was neutropenia, and tumor lysis syndrome is a recognized risk of venetoclax. Careful monitoring, slow dose ramp-up, and adequate prophylaxis can mitigate some of these adverse effects.

—Ka Ming Gordon Ngai, MD, MPH

References

1. Tam CS, Stilgenbauer S. How best to manage patients with chronic lymphocytic leukemia with 17p deletion and/or TP53 mutation? Leuk Lymphoma 2015;56:587–93.

2. Zenz T, Eichhorst B, Busch R, et al. TP53 mutation and survival in chronic lymphocytic leukemia. J Clin Oncol 2010;28:4473–9.

3. FDA news release. FDA approves new drug for chronic lymphocytic leukemia in patients with a specific chromosomal abnormality. 11 April 2016. Accessed 9 May 2018 at www.fda.gov/newsevents/newsroom/pressannouncements/ucm495253.htm.

4. Thijssen R, Slinger E, Weller K, et al. Resistance to ABT-199 induced by micro-environmental signals in chronic lymphocytic leukemia can be counteracted by CD20 antibodies or kinase inhibitors. Haematologica 2015;100:e302-e306.

5. Alexander BM, Schoenfeld JD, Trippa L. Hazards of hazard ratios—deviations from model assumptions in immunotherapy. N Engl J Med 2018;378:1158–9.

Which Herpes Zoster Vaccine is Most Cost-Effective?

Article Type
Changed
Wed, 04/29/2020 - 11:15

Study Overview

Objective. To assess the cost-effectiveness of the new adjuvanted herpes zoster subunit vaccine (HZ/su) as compared with that of the current live attenuated herpes zoster vaccine (ZVL), or no vaccine.

Design. Markov decision model evaluating 3 strategies from a societal perspective: (1) no vaccination, (2) vaccination with single dose ZVL, and (3) vaccination with 2-dose series of HZ/su.

Setting and participants. Data for the model were extracted from the US medical literature using PubMed through January 2015. Data were derived from studies of fewer than 100 patients to more than 30,000 patients, depending on the variable assessed. Variables included epidemiologic parameters, vaccine efficacy and adverse events, quality-adjusted life-years (QALYs), and costs. Because there is no standard willingness-to-pay (WTP) threshold for cost-effectiveness in the United States, $50,000 per QALY was chosen.

Main outcome measures. Total costs and QALYs.

Main results. At all ages, no vaccination was always the least expensive and least effective option, while HZ/su was always the most effective option and was less expensive than ZVL. At a proposed price of $280 per series ($140 per dose), HZ/su was more effective and less expensive than ZVL at all ages. The incremental cost-effectiveness ratios compared with no vaccination ranged from $20,038 to $30,084 per QALY, depending on vaccination age. The cost-effectiveness of HZ/su was insensitive to the waning rate of either vaccine because of its high efficacy, with an initial level of protection close to 90% even among people 70 years or older.
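
A back-of-the-envelope sketch may help make the decision logic explicit: given per-person discounted costs and QALYs for each strategy (the numbers below are placeholders chosen only to reproduce the qualitative pattern, not values reported by Le and Rothberg), a strategy that is both cheaper and more effective dominates its comparator, and the remaining comparison is judged against the willingness-to-pay threshold.

```python
# Placeholder per-person discounted totals (illustrative only).
strategies = {
    "no vaccination": {"cost": 1000.0, "qaly": 10.000},
    "ZVL":            {"cost": 1240.0, "qaly": 10.004},
    "HZ/su":          {"cost": 1180.0, "qaly": 10.007},
}
WTP = 50_000  # willingness-to-pay threshold, dollars per QALY

def icer(a, b):
    """Incremental cost-effectiveness ratio of strategy a relative to strategy b."""
    return (strategies[a]["cost"] - strategies[b]["cost"]) / \
           (strategies[a]["qaly"] - strategies[b]["qaly"])

# HZ/su costs less AND yields more QALYs than ZVL, so ZVL is dominated and drops out.
dominated = (strategies["HZ/su"]["cost"] < strategies["ZVL"]["cost"]
             and strategies["HZ/su"]["qaly"] > strategies["ZVL"]["qaly"])
print("ZVL dominated by HZ/su:", dominated)

ratio = icer("HZ/su", "no vaccination")
print(f"ICER, HZ/su vs no vaccination: ${ratio:,.0f} per QALY "
      f"({'cost-effective' if ratio < WTP else 'not cost-effective'} at ${WTP:,}/QALY)")
```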

Conclusion. At a manufacturer-suggested price of $280 per series ($140 per dose), HZ/su would cost less than ZVL and has a high probability of offering good value.

Commentary

Herpes zoster is a localized, usually painful, cutaneous eruption resulting from reactivation of latent varicella zoster virus. It is a common disease, with approximately one million cases occurring each year in the United States [1]. The incidence increases with age, from 5 cases per 1000 population in adults aged 50–59 years to 11 cases per 1000 population in persons aged ≥ 80 years. Postherpetic neuralgia, commonly defined as persistent pain for at least 90 days following resolution of the herpes zoster rash, is the most common complication and occurs in 10% to 13% of herpes zoster cases in persons aged > 50 years [2,3].

In 2006, the US Food and Drug Administration (FDA) approved the ZVL vaccine Zostavax (Merck) for the prevention of herpes zoster. By 2016, 33% of adults aged ≥ 60 years reported receipt of the vaccine [4]. However, ZVL does not prevent all herpes zoster, particularly among the elderly, and its efficacy wanes to negligible levels after approximately 10 years [5]. To address these shortcomings, a 2-dose HZ/su vaccine (Shingrix; GlaxoSmithKline), containing recombinant glycoprotein E in combination with a novel adjuvant (AS01B), was approved by the FDA in 2017 for adults aged ≥ 50 years. In randomized controlled trials, HZ/su has an efficacy of close to 97%, even after age 70 years [6].

With the approval of the new adjuvanted subunit herpes zoster vaccine, clinicians and patients face the question of which vaccine to get and when. The cost-effectiveness analysis by Le and Rothberg compares the value of HZ/su, ZVL, and a no-vaccine strategy for individuals 60 years or older from the US societal perspective. The results suggest that, at $140 per dose, HZ/su vaccination compared with no vaccination would cost between $20,038 and $30,084 per QALY and is therefore cost-effective at the chosen willingness-to-pay threshold. The deterministic sensitivity analysis indicates that the overall results do not change under different assumptions about model input parameters, even if patients are nonadherent to the second dose of HZ/su.
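
For readers less familiar with the method, the minimal Markov cohort sketch below shows how a model of this general type accumulates discounted costs and QALYs for a vaccination strategy versus no vaccination. The states, transition probabilities, costs, utilities, and vaccine efficacy are invented placeholders for illustration; they are not Le and Rothberg's model inputs.

```python
import numpy as np

# States: healthy, acute zoster, postherpetic neuralgia (PHN), dead (absorbing).
states = ["healthy", "zoster", "phn", "dead"]
P = np.array([
    [0.985, 0.008, 0.000, 0.007],   # healthy: stay healthy, develop zoster, or die
    [0.800, 0.000, 0.150, 0.050],   # zoster: resolve, progress to PHN, or die
    [0.300, 0.000, 0.650, 0.050],   # PHN: resolve, persist, or die
    [0.000, 0.000, 0.000, 1.000],   # dead
])
cost = np.array([0.0, 750.0, 2000.0, 0.0])     # annual cost per state ($, illustrative)
utility = np.array([0.84, 0.60, 0.50, 0.00])   # annual utility per state (illustrative)

def run_cohort(p_matrix, vaccine_cost, cycles=20, discount=0.03):
    """Return total discounted cost and QALYs per person over `cycles` annual cycles."""
    dist = np.array([1.0, 0.0, 0.0, 0.0])      # entire cohort starts healthy
    total_cost, total_qaly = vaccine_cost, 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t
        total_cost += d * (dist @ cost)
        total_qaly += d * (dist @ utility)
        dist = dist @ p_matrix                  # advance the cohort one cycle
    return total_cost, total_qaly

# Vaccination is modeled by scaling the healthy -> zoster probability by (1 - efficacy).
efficacy = 0.9                                  # illustrative, not a trial estimate
P_vacc = P.copy()
P_vacc[0, 1] *= (1 - efficacy)
P_vacc[0, 0] = 1.0 - P_vacc[0, 1:].sum()

c0, q0 = run_cohort(P, vaccine_cost=0.0)
c1, q1 = run_cohort(P_vacc, vaccine_cost=280.0)
print(f"ICER of vaccination vs no vaccination: ${(c1 - c0) / (q1 - q0):,.0f} per QALY")
```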

 

As with any simulation study, the major limitation is the accuracy of the model and the assumptions on which it is based. The body of evidence for the benefits of ZVL is large, including multiple pre-licensure and post-licensure randomized controlled trials as well as observational effectiveness studies. In contrast, the evidence for the benefits of HZ/su was informed primarily by one high-quality randomized controlled trial that studied vaccine efficacy through 4 years post-vaccination [4,6]. Three other cost-effectiveness analyses are currently available: the Centers for Disease Control and Prevention model estimated a HZ/su cost per QALY of $31,000 when vaccination occurred at age ≥ 50 years; the model from GlaxoSmithKline, the manufacturer of HZ/su, estimated $12,000 per QALY; and the model from Merck, the manufacturer of ZVL, estimated $107,000 per QALY [4]. In addition to the model variables, the key assumptions by Le and Rothberg are a HZ/su cost of $140 per dose and a ZVL cost of $213. The results should be interpreted cautiously if vaccine prices turn out to differ in the future.
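
The pricing caveat can be illustrated with a one-way sensitivity sketch: holding the other (placeholder) incremental quantities fixed, the cost per QALY moves directly with the series price, so a different market price would shift where HZ/su lands relative to the willingness-to-pay threshold.

```python
# Illustrative one-way sensitivity analysis on the HZ/su series price. The incremental
# QALYs and the averted-care cost offset are placeholders, not the study's estimates.
incremental_qalys = 0.007   # per-person QALY gain vs no vaccination (placeholder)
averted_care_cost = 100.0   # per-person care costs avoided by vaccination (placeholder, $)
WTP = 50_000                # willingness-to-pay threshold, dollars per QALY

for series_price in (180, 280, 380, 480):
    icer = (series_price - averted_care_cost) / incremental_qalys
    verdict = "cost-effective" if icer < WTP else "exceeds WTP"
    print(f"HZ/su at ${series_price}/series: ${icer:,.0f} per QALY ({verdict})")
```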

Applications for Clinical Practice

The current study by Le and Rothberg demonstrated the cost-effectiveness of the new HZ/su vaccine. Since the study's publication, the CDC has updated its recommendations on immunization practices for use of herpes zoster vaccines [4]. HZ/su, also known as the recombinant zoster vaccine (RZV), is now preferred over ZVL for the prevention of herpes zoster and related complications. RZV is recommended for immunocompetent adults aged 50 years or older, 10 years earlier than previously recommended for ZVL. In addition, RZV is recommended for adults who previously received ZVL. Finally, RZV can be administered concomitantly with other adult vaccines, does not require screening for a history of varicella, and is likely safe for immunocompromised persons.

—Ka Ming Gordon Ngai, MD, MPH

References

1. Insinga RP, Itzler RF, Pellissier JM, et al. The incidence of herpes zoster in a United States administrative database. J Gen Intern Med 2005;20:748–53.

2. Yawn BP, Saddier P, Wollan PC, et al. A population-based study of the incidence and complication rates of herpes zoster before zoster vaccine introduction. Mayo Clin Proc 2007;82:1341–9.

3. Oxman MN, Levin MJ, Johnson GR, et al; Shingles Prevention Study Group. A vaccine to prevent herpes zoster and postherpetic neuralgia in older adults. N Engl J Med 2005;352:2271–84.

4. Dooling KL, Guo A, Patel M, et al. Recommendations of the Advisory Committee on Immunization Practices for use of herpes zoster vaccines. MMWR Morb Mortal Wkly Rep 2018;67:103–8.

5. Morrison VA, Johnson GR, Schmader KE, et al; Shingles Prevention Study Group. Long-term persistence of zoster vaccine efficacy. Clin Infect Dis 2015;60:900–9.

6. Lai H, Cunningham AL, Godeaux O, et al; ZOE-50 Study Group. Efficacy of an adjuvanted herpes zoster subunit vaccine in older adults. N Engl J Med 2015;372:2087–96.

EMR-Based Tool for Identifying Type 2 Diabetic Patients at High Risk for Hypoglycemia

Article Type
Changed
Tue, 05/03/2022 - 15:22

Study Overview

Objective. To develop and validate a risk stratification tool to categorize 12-month risk of hypoglycemia-related emergency department (ED) or hospital use among patients with type 2 diabetes (T2D).

Design. Prospective cohort study.

Setting and participants. Patients with T2D from Kaiser Permanente Northern California were identified using electronic medical records (EMR). Patients had to be 21 years of age or older as of the baseline date of 1 January 2014, with continuous health plan membership for 24 months prebaseline and pharmacy benefits for 12 months prebaseline. Of the 233,330 adults identified, 24,719 were excluded for unknown diabetes type, and 3614 were excluded for type 1 diabetes. The remaining 206,435 eligible patients with T2D were randomly split into an 80% derivation sample (n = 165,148) for tool development and 20% internal validation sample (n = 41,287). Using similar eligibility criteria, 2 external validation samples were derived from the Veterans Administration Diabetes Epidemiology Cohort (VA) (n = 1,335,966 adults) as well as from Group Health Cooperative (GH) (n = 14,972).

Main outcome measure. The primary outcome was the occurrence of any hypoglycemia-related ED visit or hospital use during the 12 months postbaseline. A primary diagnosis of hypoglycemia was ascertained using the following International Classification of Diseases, Ninth Revision (ICD-9) codes: 251.0, 251.1, 251.2, 962.3, or 250.8, without concurrent 259.3, 272.7, 681.xx, 686.9x, 707.1-707.9, 709.3, 730.0-730.2, or 731.8 codes [1]. Secondary discharge diagnoses for hypoglycemia were not used because they are often attributable to events that occurred during the ED or hospital encounter.
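
A claims-based case definition of this kind is straightforward to operationalize. The sketch below encodes the include and exclude code lists described above as a simple lookup; the function name and data handling are hypothetical, and it is an illustration rather than the validated algorithm from Ginde et al.

```python
# Primary-diagnosis codes counted as hypoglycemia, per the definition above.
HYPO_PRIMARY = {"251.0", "251.1", "251.2", "962.3", "250.8"}

# Concurrent codes that exclude the encounter (e.g., ulcer/infection codes suggesting
# 250.8 was coded for another diabetes manifestation). Prefix matching covers x-suffixes.
EXCLUDE_PREFIXES = ("259.3", "272.7", "681.", "686.9", "707.1", "707.2", "707.3",
                    "707.4", "707.5", "707.6", "707.7", "707.8", "707.9", "709.3",
                    "730.0", "730.1", "730.2", "731.8")

def is_hypoglycemia_visit(primary_dx, all_dx):
    """Flag an ED/hospital encounter as hypoglycemia-related (illustrative)."""
    if primary_dx not in HYPO_PRIMARY:
        return False
    return not any(code.startswith(EXCLUDE_PREFIXES) for code in all_dx)

print(is_hypoglycemia_visit("251.0", ["251.0"]))            # True
print(is_hypoglycemia_visit("250.8", ["250.8", "707.15"]))  # False: co-coded foot ulcer
```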

Main results. Beginning with 156 (122 categorical and 34 continuous) candidate clinical, demographic, and behavioral predictor variables for model development, the final classification tree was based on 6 patient-specific variables: total number of prior episodes of hypoglycemia-related ED or hospital utilization (0, 1–2, ≥ 3 times), number of ED encounters for any reason in the prior 12 months (< 2, ≥ 2 times), insulin use (yes/no), sulfonylurea use (yes/no), presence of severe or end-stage kidney disease (dialysis or chronic kidney disease stage 4 or 5, determined by an estimated glomerular filtration rate ≤ 29 mL/min/1.73 m²) (yes/no), and age younger than 77 years (yes/no). This classification tree resulted in 10 mutually exclusive leaf nodes, each yielding an estimated annual risk of hypoglycemia-related utilization, categorized as high (> 5%), intermediate (1%–5%), or low (< 1%).

The above classification model was then transcribed into a checklist-style hypoglycemia risk stratification tool by mapping the combination of risk factors to high, intermediate, or low risk of having any hypoglycemia-related utilization in the following 12 months.
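
Because the 10 leaf nodes of the published tree are not reproduced in this summary, the sketch below is a hypothetical rendering of how a checklist-style rule over the 6 inputs might be coded. The branch conditions are invented for illustration and do not reproduce the actual tool.

```python
def hypoglycemia_risk_category(prior_hypo_util, ed_visits_12mo, insulin,
                               sulfonylurea, severe_ckd, age):
    """Return 'high' (>5%), 'intermediate' (1%-5%), or 'low' (<1%) 12-month risk.
    Hypothetical cut-points for illustration only."""
    if prior_hypo_util >= 3:
        return "high"
    if prior_hypo_util in (1, 2):
        return "high" if (insulin or severe_ckd) else "intermediate"
    # no prior hypoglycemia-related utilization
    if insulin:
        return "intermediate" if (ed_visits_12mo >= 2 or severe_ckd or age >= 77) else "low"
    if sulfonylurea:
        return "intermediate" if severe_ckd else "low"
    return "low"

print(hypoglycemia_risk_category(3, 0, insulin=False, sulfonylurea=True,
                                 severe_ckd=False, age=70))   # high
print(hypoglycemia_risk_category(0, 1, insulin=False, sulfonylurea=True,
                                 severe_ckd=False, age=68))   # low
```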

Regarding patient characteristics, there were no significant differences in the distribution of the 6 predictors between the Kaiser derivation vs. validation samples, but there were significant differences across external validation samples. For example, the VA sample was predominantly men, with a higher proportion of patients older than 77 years, and had the highest proportion of patients with severe or end-stage kidney disease. Regarding model validation, the tool performed well in both internal validation (C statistic = 0.83) and external validation samples (VA C statistic = 0.81; GH C statistic = 0.79).
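
The C statistics quoted above summarize discrimination. The short sketch below shows one way such a statistic can be computed for a tiered risk tool, mapping each tier to its predicted event probability and comparing against observed outcomes; the data are simulated, and the scikit-learn roc_auc_score function stands in for the study's own validation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Predicted 12-month event probabilities per risk tier (illustrative values).
tier_risk = {"low": 0.005, "intermediate": 0.03, "high": 0.08}
tiers = rng.choice(list(tier_risk), size=5000, p=[0.7, 0.2, 0.1])
pred = np.array([tier_risk[t] for t in tiers])
observed = rng.binomial(1, pred)   # simulate outcomes occurring at the predicted rates

print(f"C statistic: {roc_auc_score(observed, pred):.2f}")
```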

Conclusion. This hypoglycemia risk stratification tool categorizes the 12-month risk of hypoglycemia-related utilization in patients with T2D using 6 easily obtained inputs. This tool can facilitate efficient targeting of population management interventions to reduce hypoglycemia risk and improve patient safety.

Commentary

It is estimated that 25 million people in the United States have diabetes [2]. Hypoglycemia is a frequent adverse event in patients with T2D and is more common than acute hyperglycemic emergencies such as hyperosmolar hyperglycemic state [3]. Iatrogenic hypoglycemia due to glucose-lowering medication can result in a hypoglycemic crisis requiring administration of carbohydrates, glucagon, or other resuscitative actions in the ED or hospital [4,5]. Total annual direct medical costs of hypoglycemia-related utilization in the United States were estimated at approximately $1.8 billion in 2009.

The risk of hypoglycemia varies widely in patients with T2D and there are no validated methods to target interventions to the at-risk population. In this article, Karter and colleagues developed and validated a pragmatic hypoglycemia risk stratification tool that uses 6 factors to categorize the 12-month risk of hypoglycemia-related ED or hospital utilization.

Identifying patients at high risk for hypoglycemia-related utilization provides an opportunity to mobilize resources to target this minority of patients with T2D, including deintensifying or simplifying medication regimens, prescribing glucagon kits or continuous glucose monitors, making referrals to clinical pharmacists or nurse care managers, and regularly asking about hypoglycemia events occurring outside the medical setting. This is important, as more than 95% of severe hypoglycemia events may go clinically unrecognized because they do not result in ED or hospital use [6]. In addition, because the 6 inputs are drawn from the EMR, interventions can include automated clinical alert flags in the EMR and automated messaging to patients at elevated risk.

Several limitations exist. The study excluded secondary discharge diagnoses for hypoglycemia, as these may be attributable to sepsis, acute renal failure, trauma, or other causes. In addition, the external validation populations had different distributions of disease severity and case mix. The authors attribute some of the inconsistent findings to sparse data in the GH validation sample (n = 14,972). Finally, the tool was developed to stratify the population into 3 levels of risk, and it should not be used to estimate the probability of hypoglycemia-related utilization for an individual patient.

Applications for Clinical Practice

The EMR-based hypoglycemia risk stratification tool categorizes the 12-month risk of hypoglycemia-related utilization in patients with T2D using 6 easily obtained inputs. This tool can facilitate efficient targeting of population management interventions, including integration into existing EMRs as a clinical decision aid, to reduce hypoglycemia risk and improve patient safety.

—Ka Ming Gordon Ngai, MD, MPH

References

1. Ginde AA, Blanc PG, Lieberman RM, et al. Validation of ICD-9-CM coding algorithm for improved identification of hypoglycemia visits. BMC Endocr Disord 2008;8:4.

2. Gregg EW, Li Y, Wang J, et al. Change in diabetes-related complications in the United States, 1990-2010. N Engl J Med 2014; 370:1514–23.

3. Lipska KJ, Ross JS, Wang Y, et al. National trends in US hospital admissions for hyperglycemia and hypoglycemia among Medicare beneficiaries, 1999 to 2011. JAMA Intern Med 2014;174:1116–24.

4. Pogach L, Aron D. Balancing hypoglycemia and glycemic control: a public health approach for insulin safety. JAMA 2010;303:2076–7.

5. Lee SJ. So much insulin, so much hypoglycemia. JAMA Intern Med 2014;174:686–8.

6. Sarkar U, Karter AJ, Liu JY, et al. Hypoglycemia is more common among type 2 diabetes patients with limited health literacy: the Diabetes Study of Northern California (DISTANCE). J Gen Intern Med 2010;25:962–8.

Article PDF
Issue
Journal of Clinical Outcomes Management - 24(10)
Publications
Topics
Sections
Article PDF
Article PDF

Study Overview

Objective. To develop and validate a risk stratification tool to categorize 12-month risk of hypoglycemia-related emergency department (ED) or hospital use among patients with type 2 diabetes (T2D).

Design. Prospective cohort study.

Setting and participants. Patients with T2D from Kaiser Permanente Northern California were identified using electronic medical records (EMR). Patients had to be 21 years of age or older as of the baseline date of 1 January 2014, with continuous health plan membership for 24 months prebaseline and pharmacy benefits for 12 months prebaseline. Of the 233,330 adults identified, 24,719 were excluded for unknown diabetes type, and 3614 were excluded for type 1 diabetes. The remaining 206,435 eligible patients with T2D were randomly split into an 80% derivation sample (n = 165,148) for tool development and 20% internal validation sample (n = 41,287). Using similar eligibility criteria, 2 external validation samples were derived from the Veterans Administration Diabetes Epidemiology Cohort (VA) (n = 1,335,966 adults) as well as from Group Health Cooperative (GH) (n = 14,972).

Main outcome measure. The primary outcome was the occurrence of any hypoglycemia-related ED visit or hospital use during the 12 months postbaseline. A primary diagnosis of hypoglycemia was ascertained using the following International Classification of Diseases, Ninth Revision (ICD-9) codes: 251.0, 251.1, 251.2, 962.3, or 250.8, without concurrent 259.3, 272.7, 681.xx, 686.9x, 707.a-707.9, 709.3, 730.0-730.2, or 731.8 codes [1]. Secondary discharge diagnoses for hypoglycemia were not used because they are often attributable to events that occurred during the ED or hospital encounter.

Main results. Beginning with 156 (122 categorical and 34 continuous) candidate clinical, demographic, and behavioral predictor variables for model development, the final classification tree was based on 6 patient-specific variables: total number of prior episodes of hypoglycemia-related ED or hospital utilization (0, 1–2, ≥ 3 times), number of ED encounters for any reason in the prior 12 months (< 2, ≥ 2 times), insulin use (yes/no), sulfonylurea use (yes/no), presence of severe or end-stage kidney disease (dialysis or chronic kidney disease stage 4 or 5 determined by estimated glomerular filtration rate of ≤ 29 mL/min/1.73 m² (yes/no), and age younger than 77 years (yes/no). This classification tree resulted in 10 mutually exclusive leaf nodes, each yielding an estimated annual risk of hypoglycemia-related utilization, which were categorized as high (> 5%), intermediate (1%–5%), or low (< 1%).

The above classification model was then transcribed into a checklist-style hypoglycemia risk stratification tool by mapping the combination of risk factors to high, intermediate, or low risk of having any hypoglycemia-related utilization in the following 12 months.

Regarding patient characteristics, there were no significant differences in the distribution of the 6 predictors between the Kaiser derivation vs. validation samples, but there were significant differences across external validation samples. For example, the VA sample was predominantly men, with a higher proportion of patients older than 77 years, and had the highest proportion of patients with severe or end-stage kidney disease. Regarding model validation, the tool performed well in both internal validation (C statistic = 0.83) and external validation samples (VA C statistic = 0.81; GH C statistic = 0.79).

Conclusion. This hypoglycemia risk stratification tool categorizes the 12-month risk of hypoglycemia-related utilization in patients with T2D using 6 easily obtained inputs. This tool can facilitate efficient targeting of population management interventions to reduce hypoglycemia risk and improve patient safety.

Commentary

It is estimated that 25 million people in the United States have diabetes [2]. Hypoglycemia is a frequent adverse event in patients with T2D, being more common than acute hyperglycemic emergencies such as hyperosmolar hyperglycemic state [3]. Iatrogenic hypoglycemia due to glucose-lowering medication can result in hypoglycemic crisis that requires administration of carbohydrates, glucagon, or other resuscitative actions in the ED or in hospital [4,5]. The estimated total annual direct medical costs of hypoglycemia-related utilization were estimated at approximately $1.8 billion in the United States in 2009.

The risk of hypoglycemia varies widely in patients with T2D and there are no validated methods to target interventions to the at-risk population. In this article, Karter and colleagues developed and validated a pragmatic hypoglycemia risk stratification tool that uses 6 factors to categorize the 12-month risk of hypoglycemia-related ED or hospital utilization.

Identifying patients at high-risk for hypoglycemia-related utilization provides an opportunity to mobilize resources to target this minority of patients with T2D, including deintensifying or simplifying medication regimens, prescribing glucagon kits or continuous glucose monitors, making referrals to clinical pharmacists or nurse care managers, and regularly asking about hypoglycemia events occurring outside the medical setting. This is important, as more than 95% of severe hypoglycemia events may go clinically unrecognized because they did not result in ED or hospital use [6]. In addition, as the 6 inputs were identified by EMR, intervention can include automated clinical alert flags in the EMR and automated messaging to patients with elevated risk.

Several limitations exist. The study excluded secondary discharge diagnoses for hypoglycemia as these may occur due to sepsis, acute renal failure, trauma, or other causes. In addition, the external validation populations had different distributions of disease severity and case mix. The authors attribute some of the inconsistent findings to sparse data in the GH validation sample (n = 14,972). Finally, this tool was developed to stratify the population into 3 levels of risk, and it should not be used to estimate the probability of hypoglycemic-related utilization for an individual patient.

Applications for Clinical Practice

The EMR-based hypoglycemia risk stratification tool categorizes the 12-month risk of hypoglycemia-related utilization in patients with T2D using 6 easily obtained inputs. This tool can facilitate efficient targeting of population management interventions, including integration into existing EMR as clinical decision aid, to reduce hypoglycemia risk and improve patient safety.

—Ka Ming Gordon Ngai, MD, MPH

Study Overview

Objective. To develop and validate a risk stratification tool to categorize 12-month risk of hypoglycemia-related emergency department (ED) or hospital use among patients with type 2 diabetes (T2D).

Design. Prospective cohort study.

Setting and participants. Patients with T2D from Kaiser Permanente Northern California were identified using electronic medical records (EMR). Patients had to be 21 years of age or older as of the baseline date of 1 January 2014, with continuous health plan membership for 24 months prebaseline and pharmacy benefits for 12 months prebaseline. Of the 233,330 adults identified, 24,719 were excluded for unknown diabetes type, and 3614 were excluded for type 1 diabetes. The remaining 206,435 eligible patients with T2D were randomly split into an 80% derivation sample (n = 165,148) for tool development and 20% internal validation sample (n = 41,287). Using similar eligibility criteria, 2 external validation samples were derived from the Veterans Administration Diabetes Epidemiology Cohort (VA) (n = 1,335,966 adults) as well as from Group Health Cooperative (GH) (n = 14,972).

Main outcome measure. The primary outcome was the occurrence of any hypoglycemia-related ED visit or hospital use during the 12 months postbaseline. A primary diagnosis of hypoglycemia was ascertained using the following International Classification of Diseases, Ninth Revision (ICD-9) codes: 251.0, 251.1, 251.2, 962.3, or 250.8, without concurrent 259.3, 272.7, 681.xx, 686.9x, 707.1-707.9, 709.3, 730.0-730.2, or 731.8 codes [1]. Secondary discharge diagnoses for hypoglycemia were not used because they are often attributable to events that occurred during the ED or hospital encounter.
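For illustration, this coding rule can be written as a short screening function. The sketch below is not the authors' code; applying the concurrent-code exclusion only to 250.8 is one reading of the rule, and all function and variable names are assumptions.

```python
# Minimal sketch, not the authors' code: flag a hypoglycemia-related ED or
# hospital encounter from its ICD-9 codes per the rule described above.
# Applying the concurrent-code exclusion only to 250.8 is an assumption.

HYPO_PRIMARY = {"251.0", "251.1", "251.2", "962.3", "250.8"}

# Exclusion codes are matched as prefixes so that 681.xx, 686.9x,
# 707.1-707.9, and 730.0-730.2 cover all of their subcodes.
EXCLUSION_PREFIXES = (
    "259.3", "272.7", "681.", "686.9", "707.1", "707.2", "707.3", "707.4",
    "707.5", "707.6", "707.7", "707.8", "707.9", "709.3", "730.0", "730.1",
    "730.2", "731.8",
)

def is_hypoglycemia_encounter(primary_dx, all_dx):
    """True if the encounter counts as hypoglycemia-related utilization."""
    if primary_dx not in HYPO_PRIMARY:
        return False
    if primary_dx == "250.8":
        # 250.8 counts only when none of the exclusion codes is also present
        return not any(code.startswith(EXCLUSION_PREFIXES) for code in all_dx)
    return True

# A primary 251.0 counts; a primary 250.8 with concurrent 681.02 does not.
print(is_hypoglycemia_encounter("251.0", ["251.0"]))            # True
print(is_hypoglycemia_encounter("250.8", ["250.8", "681.02"]))  # False
```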

Main results. Beginning with 156 (122 categorical and 34 continuous) candidate clinical, demographic, and behavioral predictor variables for model development, the final classification tree was based on 6 patient-specific variables: total number of prior episodes of hypoglycemia-related ED or hospital utilization (0, 1–2, ≥ 3 times), number of ED encounters for any reason in the prior 12 months (< 2, ≥ 2 times), insulin use (yes/no), sulfonylurea use (yes/no), presence of severe or end-stage kidney disease, defined as dialysis or chronic kidney disease stage 4 or 5 by an estimated glomerular filtration rate of ≤ 29 mL/min/1.73 m² (yes/no), and age younger than 77 years (yes/no). This classification tree resulted in 10 mutually exclusive leaf nodes, each yielding an estimated annual risk of hypoglycemia-related utilization, which were categorized as high (> 5%), intermediate (1%–5%), or low (< 1%).

The above classification model was then transcribed into a checklist-style hypoglycemia risk stratification tool by mapping the combination of risk factors to high, intermediate, or low risk of having any hypoglycemia-related utilization in the following 12 months.
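To make the checklist structure concrete, the sketch below encodes the 6 inputs at their published cut points. The 2 classification rules inside it are hypothetical placeholders for illustration only, because the tool's actual assignment of input combinations to the 10 leaf nodes is not reproduced in this summary.

```python
# Illustrative sketch of the checklist-style tool. The input cut points follow
# the article; the rules inside risk_category() are hypothetical placeholders,
# not the published leaf-node assignments.

from dataclasses import dataclass

@dataclass
class PatientInputs:
    prior_hypo_utilization: int   # lifetime hypoglycemia-related ED/hospital episodes
    ed_visits_last_12mo: int      # ED encounters for any reason, prior 12 months
    insulin_use: bool
    sulfonylurea_use: bool
    severe_ckd: bool              # dialysis or CKD stage 4-5 (eGFR <= 29 mL/min/1.73 m2)
    age: int

def encode(p):
    """Map raw values onto the cut points used by the tool."""
    prior = "0" if p.prior_hypo_utilization == 0 else ("1-2" if p.prior_hypo_utilization <= 2 else ">=3")
    ed = "<2" if p.ed_visits_last_12mo < 2 else ">=2"
    return (prior, ed, p.insulin_use, p.sulfonylurea_use, p.severe_ckd, p.age < 77)

def risk_category(p):
    """Placeholder classifier; the real tool maps encode(p) to 1 of 10 leaf nodes."""
    prior, ed, insulin, sulfonylurea, severe_ckd, under_77 = encode(p)
    if prior == ">=3":                      # hypothetical rule, illustration only
        return "high (>5%)"
    if not insulin and not sulfonylurea:    # hypothetical rule, illustration only
        return "low (<1%)"
    return "intermediate (1%-5%)"

print(risk_category(PatientInputs(3, 1, True, False, False, 68)))  # high (>5%)
```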

Regarding patient characteristics, there were no significant differences in the distribution of the 6 predictors between the Kaiser derivation and internal validation samples, but there were significant differences across the external validation samples. For example, the VA sample was predominantly male, with a higher proportion of patients older than 77 years, and had the highest proportion of patients with severe or end-stage kidney disease. Regarding model validation, the tool performed well in both the internal validation sample (C statistic = 0.83) and the external validation samples (VA C statistic = 0.81; GH C statistic = 0.79).

Conclusion. This hypoglycemia risk stratification tool categorizes the 12-month risk of hypoglycemia-related utilization in patients with T2D using 6 easily obtained inputs. This tool can facilitate efficient targeting of population management interventions to reduce hypoglycemia risk and improve patient safety.

Commentary

It is estimated that 25 million people in the United States have diabetes [2]. Hypoglycemia is a frequent adverse event in patients with T2D, being more common than acute hyperglycemic emergencies such as hyperosmolar hyperglycemic state [3]. Iatrogenic hypoglycemia due to glucose-lowering medication can result in hypoglycemic crisis that requires administration of carbohydrates, glucagon, or other resuscitative actions in the ED or in hospital [4,5]. The total annual direct medical cost of hypoglycemia-related utilization in the United States was estimated at approximately $1.8 billion in 2009.

The risk of hypoglycemia varies widely in patients with T2D and there are no validated methods to target interventions to the at-risk population. In this article, Karter and colleagues developed and validated a pragmatic hypoglycemia risk stratification tool that uses 6 factors to categorize the 12-month risk of hypoglycemia-related ED or hospital utilization.

Identifying patients at high risk for hypoglycemia-related utilization provides an opportunity to mobilize resources for this minority of patients with T2D, including deintensifying or simplifying medication regimens, prescribing glucagon kits or continuous glucose monitors, making referrals to clinical pharmacists or nurse care managers, and regularly asking about hypoglycemia events occurring outside the medical setting. This is important, as more than 95% of severe hypoglycemia events may go clinically unrecognized because they did not result in ED or hospital use [6]. In addition, because the 6 inputs are identified from the EMR, interventions can include automated clinical alert flags in the EMR and automated messaging to patients with elevated risk.

Several limitations exist. The study excluded secondary discharge diagnoses for hypoglycemia, as these may occur due to sepsis, acute renal failure, trauma, or other causes. In addition, the external validation populations had different distributions of disease severity and case mix. The authors attribute some of the inconsistent findings to sparse data in the GH validation sample (n = 14,972). Finally, this tool was developed to stratify the population into 3 levels of risk, and it should not be used to estimate the probability of hypoglycemia-related utilization for an individual patient.

Applications for Clinical Practice

The EMR-based hypoglycemia risk stratification tool categorizes the 12-month risk of hypoglycemia-related utilization in patients with T2D using 6 easily obtained inputs. This tool can facilitate efficient targeting of population management interventions, including integration into existing EMRs as a clinical decision aid, to reduce hypoglycemia risk and improve patient safety.

—Ka Ming Gordon Ngai, MD, MPH

References

1. Ginde AA, Blanc PG, Lieberman RM, et al. Validation of ICD-9-CM coding algorithm for improved identification of hypoglycemia visits. BMC Endocr Disord 2008;8:4.

2. Gregg EW, Li Y, Wang J, et al. Change in diabetes-related complications in the United States, 1990-2010. N Engl J Med 2014;370:1514–23.

3. Lipska KJ, Ross JS, Wang Y, et al. National trends in US hospital admissions for hyperglycemia and hypoglycemia among Medicare beneficiaries, 1999 to 2011. JAMA Intern Med 2014;174:1116–24.

4. Pogach L, Aron D. Balancing hypoglycemia and glycemic control: a public health approach for insulin safety. JAMA 2010;303:2076–7.

5. Lee SJ. So much insulin, so much hypoglycemia. JAMA Intern Med 2014;174:686–8.

6. Sarkar U, Karter AJ, Liu JY, et al. Hypoglycemia is more common among type 2 diabetes patients with limited health literacy: the Diabetes Study of Northern California (DISTANCE). J Gen Intern Med 2010;25:962–8.

Is MRI Safe in Patients with Implanted Cardiac Devices?


Study Overview

Objective. To assess the risks associated with magnetic resonance imaging (MRI) in patients with a pacemaker or implantable cardioverter-defibrillator (ICD) that is “non–MRI-conditional.”

Design. Prospective cohort study using the multicenter MagnaSafe Registry.

Setting and participants. Patients were included in the registry if they were 18 years of age or older and had a non–MRI-conditional pacemaker or ICD generator, from any manufacturer, that was implanted after 2001, with leads from any manufacturer, and if the patient’s physician determined that nonthoracic MRI at 1.5 tesla was clinically indicated. Exclusion criteria included an abandoned or inactive lead that could not be interrogated, an MRI-conditional pacemaker, a device implanted in a nonthoracic location, or a device with a battery near the end of its life. In addition, pacing-dependent patients with an ICD were also excluded.

Main outcome measures. The primary outcomes of the study were death, generator or lead failure requiring immediate replacement, loss of capture (for pacing-dependent patients with pacemakers), new-onset arrhythmia, and partial or full generator electrical reset. The secondary outcomes were changes in device settings, including a battery voltage decrease of 0.04 V or more, a pacing lead threshold increase of 0.5 V or more, a P-wave amplitude decrease of 50% or more, an R-wave amplitude decrease of 25% or more and of 50% or more, a pacing lead impedance change of 50 ohms or more, and a high-voltage (shock) lead impedance change of 3 ohms or more.
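As a rough illustration, the secondary-outcome thresholds can be expressed as a simple pre/post interrogation comparison. This is a sketch, not the registry's software; the field names and example values are assumptions.

```python
# Sketch only (not the registry's software): flag which pre-specified device
# parameter changes occurred between pre- and post-MRI interrogations, using
# the thresholds listed above. Field names are assumptions.

def parameter_changes(pre, post, is_icd=False):
    flags = []
    if pre["battery_v"] - post["battery_v"] >= 0.04:
        flags.append("battery voltage decrease >= 0.04 V")
    if post["pacing_threshold_v"] - pre["pacing_threshold_v"] >= 0.5:
        flags.append("pacing lead threshold increase >= 0.5 V")
    if pre["p_wave_mv"] and (pre["p_wave_mv"] - post["p_wave_mv"]) / pre["p_wave_mv"] >= 0.50:
        flags.append("P-wave amplitude decrease >= 50%")
    r_drop = (pre["r_wave_mv"] - post["r_wave_mv"]) / pre["r_wave_mv"]
    if r_drop >= 0.25:
        flags.append("R-wave amplitude decrease >= 25%")
    if r_drop >= 0.50:
        flags.append("R-wave amplitude decrease >= 50%")
    if abs(post["pacing_impedance_ohm"] - pre["pacing_impedance_ohm"]) >= 50:
        flags.append("pacing lead impedance change >= 50 ohms")
    if is_icd and abs(post["shock_impedance_ohm"] - pre["shock_impedance_ohm"]) >= 3:
        flags.append("high-voltage (shock) lead impedance change >= 3 ohms")
    return flags

pre = dict(battery_v=2.79, pacing_threshold_v=0.75, p_wave_mv=3.0,
           r_wave_mv=10.0, pacing_impedance_ohm=520, shock_impedance_ohm=45)
post = dict(battery_v=2.74, pacing_threshold_v=0.75, p_wave_mv=2.9,
            r_wave_mv=9.5, pacing_impedance_ohm=530, shock_impedance_ohm=45)
print(parameter_changes(pre, post, is_icd=True))  # only the battery flag fires
```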

Main results. Between April 2009 and April 2014, clinically indicated nonthoracic MRI was performed in a total of 1000 pacemaker cases (818 patients) and 500 ICD cases (428 patients) across 19 centers in the United States. The majority (75%) of the MRI examinations were performed on the brain or the spine. The mean time patients spent within the magnetic field was 44 minutes. Four patients reported symptoms of generator-site discomfort; one patient with an ICD described a sensation of heating at the site of the implanted generator, was removed from the scanner, and did not complete the examination.

Regarding primary outcomes, no deaths, lead failures, losses of capture, or ventricular arrhythmias occurred during MRI. One ICD device was left in the active mode for anti-tachycardia therapy (a protocol violation) and the generator could not be interrogated after MRI and required immediate replacement. Four patients had atrial fibrillation and 2 patients had atrial flutter during or immediately after the MRI. All 6 patients returned to sinus rhythm within 49 hours after MRI. No ventricular arrhythmias were noted. There were also 6 cases of partial generator electrical reset with no clinical significance.

Regarding secondary outcomes, a decrease of 50% or more in P-wave amplitude was detected in 0.9% of pacemaker leads and in 0.3% of ICD leads; a decrease of 25% or more in R-wave amplitude was detected in 3.9% of pacemaker leads and in 1.5% of ICD leads, and a decrease of 50% or more in R-wave amplitude was detected in no pacemaker leads and in 0.2% of ICD leads. An increase in pacing lead threshold of 0.5 V or more was detected in 0.7% of pacemaker leads and in 0.8% of ICD leads. A pacing lead impedance change of 50 ohms or more was noted in 3.3% of pacemakers and in 4.2% of ICDs.

Conclusion. Device or lead failure did not occur in any patient with a non–MRI-conditional pacemaker or ICD who underwent clinically indicated nonthoracic MRI at 1.5 tesla when patients were appropriately screened and had the cardiac device reprogrammed in accordance with the protocol. Substantial changes in device settings were infrequent and did not result in clinical adverse events.

Commentary

It is estimated that 2 million people in the United States and an additional 6 million worldwide have an implanted non–MRI-conditional cardiac pacemaker or ICD [1]. At least half of patients with such devices are predicted to have a clinical indication for MRI during their lifetime after device implantation [2]. The use of MRI poses concerns due to the potential for magnetic field–induced cardiac lead heating, which could result in myocardial thermal injury and detrimental changes in pacing properties [3,4].

In this study, Russo and colleagues assessed the risks for patients with a non–MRI-conditional pacemaker or ICD undergoing MRI under a standardized pre-scanning protocol. If the patient was asymptomatic and had an intrinsic heart rate of at least 40 beats per minute, the device was programmed to a no-pacing mode (ODO or OVO). Symptomatic patients or those with an intrinsic heart rate of less than 40 beats per minute were determined to be pacing-dependent, and the device was reprogrammed to an asynchronous pacing mode (DOO or VOO). All bradycardia and tachycardia therapies were inactivated before the MRI. Under this standardized protocol, no major adverse outcomes occurred. All pacemaker and ICD devices were reprogrammed in accordance with the prespecified protocol except in one case, in which the ICD was left in active mode for anti-tachycardia therapy (a protocol violation); the generator could not be interrogated after MRI and required immediate replacement. In addition to patient safety, the authors also measured device function before and after MRI. One of these measurements was battery voltage: as expected, a small decrease was noted for both pacemakers and ICDs. The radiofrequency energy generated during MRI scanning produces a temporary decrease in battery voltage, which resolved in all pacemaker cases, although some ICD voltage decreases of 0.04 V or more had not resolved by the end of the 6-month post-MRI follow-up.
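The programming decision in this protocol can be summarized in a few lines. The sketch below is illustrative only; actual device programming depends on the specific device and the treating clinician, and the field names are assumptions.

```python
# Minimal sketch of the pre-scanning programming decision described above
# (illustration only; not a substitute for device-specific programming).

def pre_mri_programming(symptomatic, intrinsic_hr_bpm):
    pacing_dependent = symptomatic or intrinsic_hr_bpm < 40
    return {
        "pacing_dependent": pacing_dependent,
        # asynchronous pacing for pacing-dependent patients, otherwise no pacing
        "mode": "DOO/VOO" if pacing_dependent else "ODO/OVO",
        # bradycardia and tachycardia therapies are inactivated before the scan
        "tachy_therapies": "off",
    }

print(pre_mri_programming(symptomatic=False, intrinsic_hr_bpm=62))
# {'pacing_dependent': False, 'mode': 'ODO/OVO', 'tachy_therapies': 'off'}
```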

Several limitations exist. The study registry included devices and leads from different manufacturers, but did not report outcomes by manufacturer. While overall it appears to be safe to perform MRI in patients with non–MRI-conditional devices, this study did not provide enough information about patients younger than 18 years of age, patients requiring repeat MRI studies, MRI examinations of the thorax, or higher MRI field strengths, such as the newer 3-tesla high-resolution scanners.

Applications for Clinical Practice

This multicenter prospective cohort study provides strong evidence that patients with a non–MRI-conditional pacemaker or defibrillator can safely undergo nonthoracic MRI at 1.5 tesla when pre-scanning device interrogation and reprogramming are performed per the standardized protocol.

—Ka Ming Gordon Ngai, MD, MPH

References

1. Nazarian S, Hansford R, Roguin A, et al. A prospective evaluation of a protocol for magnetic resonance imaging of patients with implanted cardiac devices. Ann Intern Med 2011;155:415–24.

2. Kalin R, Stanton MS. Current clinical issues for MRI scanning of pacemaker and defibrillator patients. Pacing Clin Electrophysiol 2005;28:326–8.

3. Beinart R, Nazarian S. Effects of external electrical and magnetic fields on pacemakers and defibrillators: from engineering principles to clinical practice. Circulation 2013;128:2799–809.

4. Luechinger R, Zeijlemaker VA, Pedersen EM, et al. In vivo heating of pacemaker leads during magnetic resonance imaging. Eur Heart J 2005;26:376–83.

Can Cardiovascular Magnetic Resonance, Myocardial Perfusion Scintigraphy, or NICE Guidelines Prevent Unnecessary Angiography?


Study Overview

Objective. To assess whether noninvasive functional imaging strategies reduced unnecessary angiography compared with UK national guidelines–directed care.

Design. 3-parallel-group, multicenter randomized clinical trial using a pragmatic comparative effectiveness design.

Setting and participants. Participants were patients from 6 UK centers (Leeds, Glasgow, Leicester, Bristol, Oxford, London) aged 30 years or older with suspected angina pectoris and a coronary heart disease (CHD) pretest likelihood of 10% to 90% who were suitable for revascularization. They were randomly assigned at a 1:2:2 allocation ratio to the UK NICE (National Institute for Health and Care Excellence) guidelines or to care guided by the results of cardiovascular magnetic resonance (CMR) or myocardial perfusion scintigraphy (MPS).

Main outcome measures. The primary outcome of the study was protocol-defined unnecessary coronary angiography occurring within 12 months, defined by a normal FFR (fractional flow reserve) > 0.8 or quantitative coronary angiography (QCA) showing no diameter stenosis ≥ 70% in 1 view or ≥ 70% in 2 orthogonal views in any vessel 2.5 mm or more in diameter. Because of the study design, this included any unnecessary angiography occurring after a false-positive test result, patients with high CHD pretest likelihood sent directly to coronary angiography in the NICE guidelines group, and imaging results that were either inconclusive or negative but overruled by the responsible physician.
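For illustration, the protocol label can be expressed as a small classifier. The thresholds below follow the text above verbatim; the function and parameter names, and the choice to fall back to QCA only when no FFR was measured, are assumptions rather than the trial's adjudication logic.

```python
# Sketch of the protocol's "unnecessary angiography" label as described above.
# Thresholds follow the text; names and the FFR-first fallback are assumptions.

from typing import Optional

def unnecessary_angiography(ffr: Optional[float],
                            max_stenosis_one_view_pct: float,
                            max_stenosis_two_orthogonal_views_pct: float) -> bool:
    """True if the angiogram met the protocol definition of unnecessary."""
    if ffr is not None:
        return ffr > 0.8  # normal fractional flow reserve
    # QCA criterion: no qualifying stenosis in any vessel >= 2.5 mm in diameter
    return (max_stenosis_one_view_pct < 70 and
            max_stenosis_two_orthogonal_views_pct < 70)

print(unnecessary_angiography(0.86, 40, 35))  # True: normal FFR
print(unnecessary_angiography(None, 75, 30))  # False: >= 70% stenosis in 1 view
```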

Secondary endpoints included positive angiography rates, a composite of major adverse cardiovascular events (MACEs: cardiovascular death, myocardial infarction, unplanned coronary revascularization, and hospital admission for cardiovascular cause), and procedural complications.

Main results. Among 2205 patients assessed for eligibility between 23 November 2012 and 13 March 2015, 1202 patients (55% of eligible) were recruited and allocated to NICE guidelines–directed care (n = 240), or management by CMR (n = 481) or MPS (n = 481). While there were no statistical differences between the 3 groups in terms of baseline characteristics, the study population had a substantial burden of cardiovascular risk factors: 150 patients (12.5%) had diabetes, 458 patients (38.1%) had hypertension, 702 patients (58.4%) were past or current tobacco users, 483 patients (40.2%) had dyslipidemia, and 651 patients (54.2%) had a family history of premature CHD. All patients were symptomatic, with 401 patients (33.4%) reporting typical chest pain and 801 patients (66.6%) reporting atypical chest pain as their primary symptom. Overall, 265 patients (22.0%) underwent at least 1 coronary angiogram and 10 patients underwent 2 angiograms.

The numbers of patients undergoing invasive coronary angiography within 12 months were as follows: 102 of the 240 patients in the NICE guidelines group (42.5% [95% confidence interval {CI} 36.2%–49.0%]), 85 of the 481 patients in the CMR group (17.7% [95% CI 14.4%–21.4%]), and 78 of the 481 patients in the MPS group (16.2% [95% CI 13.0%–19.8%]). The primary endpoint of unnecessary angiography occurred in 69 patients (28.8%) in the NICE guidelines group, 36 patients (7.5%) in the CMR group, and 34 patients (7.1%) in the MPS group. Using the CMR group as reference, the adjusted odds ratio (AOR) of unnecessary angiography for the CMR group vs. the NICE guidelines group was 0.21 (95% CI 0.12–0.34, P < 0.001), and the AOR for the CMR group vs. the MPS group was 1.27 (95% CI 0.79–2.03, P = 0.32).

For the secondary endpoints, positive angiography was observed in 29 patients (12.1% [95% CI 8.2%–16.9%]) in the NICE guidelines group, 47 patients (9.8% [95% CI 7.3%–12.8%]) in the CMR group, and 42 patients (8.7% [95% CI 6.4%–11.6%]) in the MPS group (overall P = 0.36). Annualized MACE rates were 1.6% in the NICE guidelines group, 2.0% in the CMR group, and 2.0% in the MPS group. Adjusted hazard ratios for MACE were 1.37 (95% CI 0.52–3.57, P = 0.52) for the CMR group vs. the NICE guidelines group and 0.95 (95% CI 0.46–1.95, P = 0.88) for the CMR group vs. the MPS group.

Conclusion. In patients with suspected CHD, investigation with CMR or MPS resulted in a lower probability of unnecessary angiography within 12 months than NICE guidelines–directed care. There was no difference in adverse outcomes as measured by MACE among the NICE guidelines, CMR, and MPS strategies.

Commentary

Coronary heart disease is a leading cause of morbidity and mortality worldwide. Despite advances in noninvasive imaging and recommendations in international guidelines, invasive coronary angiography is still commonly used early in diagnostic pathways in patients with suspected CHD [1]. Previous studies demonstrated that the majority of patients presenting with chest pain will not have significant obstructive coronary disease; a large US study reported that approximately 60% of elective cardiac catheterizations found no obstructive CHD [2]. Thus, avoiding unnecessary angiography should reduce patient risk and provide significant financial savings. Current guidelines for the investigation of stable chest pain rely on the pretest likelihood of CHD. These pretest likelihood models can overestimate CHD risk, increasing the probability of referral for invasive coronary angiography [1,3].

The current study by Greenwood et al investigated whether CMR-guided care is superior to MPS or NICE guidelines–directed care in reducing the occurrence of unnecessary angiography within 12 months. Overall, rates of disease detection based on positive angiogram were comparable for the 3 strategies. In addition, there was no difference in adverse events as measured by a composite of MACE.

While this was a well-conducted multicenter study, there were several major limitations. First, the study population was predominantly white northern European (92% were classified ethnically as white), and therefore the results may not translate to other populations. Second, the NICE guidelines for estimation of high-risk CHD changed after initiation of the study due to overestimation, and recent guidelines have adopted a recalibrated risk model [4,5]. Finally, MACE is not a proxy for a missed diagnosis or treatment. It remains debatable whether revascularization for stable angina has prognostic benefit over optimal medical therapy.

Applications for Clinical Practice

This multicenter randomized clinical trial provides strong evidence that either cardiovascular magnetic resonance–guided care or myocardial perfusion scintigraphy–guided care, rather than NICE guidelines–directed care, reduces unnecessary angiography in symptomatic patients with suspected CHD.

—Ka Ming Gordon Ngai, MD, MPH

References

1. 2012 ACCF/AHA/ACP/AATS/PCNA/SCAI/STS guideline for the diagnosis and management of patients with stable ischemic heart disease. Circulation 2012;126:e354–e471.

2. Patel MR, Peterson ED, Dai D, et al. Low diagnostic yield of elective coronary angiography. N Engl J Med 2010;362:886–95.

3. Fox KA, McLean S. NICE guidance on the investigation of chest pain. Heart 2010;96:903–6.

4. Montalescot G, Sechtem U, Achenbach S, et al. 2013 ESC guidelines on the management of stable coronary artery disease. Eur Heart J 2013;34:2949–3003.

5. Genders TSS, Steyerberg EW, Alkadhi H, et al. A clinical prediction rule for the diagnosis of coronary artery disease. Eur Heart J 2011;32:1316–30.

Betamethasone Before All Late Preterm Deliveries?


Study Overview

Objective. To determine whether the administration of betamethasone to women who are likely to deliver in the late preterm period would decrease respiratory and other neonatal complications.

Design. Randomized controlled trial.

Setting and participants. Participants were women with a singleton pregnancy at 34 weeks 0 days to 36 weeks 5 days of gestation and a high probability of delivery in the late preterm period (which extends to 36 weeks 6 days) within the 17 university-based clinical centers participating in the Maternal Fetal Medicine Units Network of the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD). Eligible women were randomly assigned in a 1:1 ratio to a course of 2 intramuscular injections of either 12 mg betamethasone or matching placebo administered 24 hours apart. After administration of the study medications, the women were treated clinically according to local practice, including discharge to home if delivery did not occur.

Main outcome measures. The primary outcome was a composite endpoint consisting of need for respiratory support, stillbirth, or neonatal death within 72 hours after delivery. Need for respiratory support was defined as one or more of the following: the use of continuous positive airway pressure (CPAP) or high-flow nasal cannula for at least 2 consecutive hours, supplemental oxygen with a fraction of inspired oxygen of at least 0.30 for at least 4 continuous hours, extracorporeal membrane oxygenation (ECMO), or mechanical ventilation. Secondary outcomes included 2 composite outcomes: (1) respiratory distress syndrome, transient tachypnea of the newborn, or apnea; and (2) respiratory distress syndrome, intraventricular hemorrhage, or necrotizing enterocolitis.
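As a concrete illustration, the respiratory-support component of the primary outcome can be checked with a few comparisons. This is a sketch with assumed parameter names, and it simplifies the duration requirements to summary values; it is not the trial's adjudication code.

```python
# Sketch with assumed parameter names (not the trial's adjudication code):
# "need for respiratory support" per the criteria described above.

def needs_respiratory_support(cpap_or_hfnc_hours, max_fio2, fio2_hours,
                              ecmo, mechanical_ventilation):
    return (cpap_or_hfnc_hours >= 2                      # CPAP/high-flow >= 2 consecutive h
            or (max_fio2 >= 0.30 and fio2_hours >= 4)    # FiO2 >= 0.30 for >= 4 continuous h
            or ecmo
            or mechanical_ventilation)

print(needs_respiratory_support(0, 0.35, 5, False, False))  # True
print(needs_respiratory_support(1, 0.25, 6, False, False))  # False
```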

Main results. Among 24,133 women assessed for eligibility, 2831 women underwent randomization, with 1429 assigned to the betamethasone group and 1402 to the placebo group. A total of 860 (60.2%) in the betamethasone group and 826 (58.9%) in the placebo group received the prespecified 2 doses of study medication. Of the 1145 women who did not receive a second dose, 1083 (94.6%) delivered before 24 hours. Two women in each study group were lost to follow-up, with outcome information available for 2827 neonates.

The rate of the primary outcome was lower in the betamethasone group (11.6%) than in the placebo group (14.4%), with a relative risk of 0.80 (95% CI 0.66 to 0.97; P = 0.02); the number needed to treat to prevent 1 case of the primary outcome was 35 women. In regard to secondary outcomes, the rate of the composite outcome of severe respiratory complications was also lower in the betamethasone group than in the placebo group (8.1% vs. 12.1%; relative risk 0.67; 95% CI 0.53 to 0.84; P < 0.001). Of note, the betamethasone group had a higher incidence of neonatal hypoglycemia than the placebo group (24.0% vs. 15.0%; relative risk 1.60; 95% CI 1.37 to 1.87; P < 0.001).
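A quick back-of-the-envelope check of the effect size from the quoted rates is shown below; the published figures (relative risk 0.80, number needed to treat 35) are based on exact event counts, so they differ slightly from this rounded calculation.

```python
# Relative risk and number needed to treat implied by the quoted primary-outcome
# rates; small differences from the published 0.80 and 35 reflect rounding.

betamethasone_rate = 0.116   # 11.6%
placebo_rate = 0.144         # 14.4%

relative_risk = betamethasone_rate / placebo_rate            # ~0.81
absolute_risk_reduction = placebo_rate - betamethasone_rate  # 0.028
nnt = 1 / absolute_risk_reduction                            # ~36

print(round(relative_risk, 2), round(nnt))
```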

Conclusion. Administration of antenatal betamethasone in women at risk for late preterm delivery significantly decreased the rate of respiratory complications in newborns.

Commentary

Use of antenatal glucocorticoids for early preterm delivery has been a widely accepted practice, with strong evidence that glucocorticoids reduce adverse neonatal outcomes when administered to women who are likely to deliver before 34 weeks of gestation [1,2]. In addition, the Antenatal Steroids for Term Elective Caesarean Section (ASTECS) trial demonstrated that glucocorticoids given at the time of elective cesarean delivery at term reduced the rate of admission to neonatal intensive care units for respiratory complications in the betamethasone group compared with placebo [3]. However, the evidence for glucocorticoid use in the late preterm period to prevent adverse neonatal respiratory outcomes remained inconclusive after 2 smaller randomized trials [4,5].

In the current study, Gyamfi-Bannerman and colleagues addressed the question of whether the use of glucocorticoids, specifically betamethasone, in the late preterm period can prevent adverse neonatal respiratory outcomes. Although only 60.2% of the betamethasone group and 58.9% of the placebo group received the prespecified 2 doses of study medication, administration of betamethasone decreased the need for substantial respiratory support during the first 72 hours after birth as well as the rates of other respiratory complications.

There were no clinically significant adverse neonatal effects other than a 60% higher relative risk of neonatal hypoglycemia in the betamethasone group. There were no reported adverse events related to the hypoglycemia, and infants with hypoglycemia were discharged on average 2 days earlier than those without, which suggests that the condition was self-limited. The authors suggested monitoring neonatal blood glucose after betamethasone exposure in the late preterm period. It will be important to answer questions about the long-term outcomes of this therapy, both benefits and risks, such as the potential reduction of chronic lung disease or the risk of developmental delay due to hypoglycemia [6].

Applications for Clinical Practice

This multicenter randomized controlled study provides strong evidence for administering antenatal glucocorticoids, such as betamethasone, in women at risk for late preterm delivery. Betamethasone administration significantly decreased the rate of respiratory complications in newborns, with the precaution that newborns should be monitored for hypoglycemia.

 —Ka Ming Gordon Ngai, MD, MPH

References

1. Effect of corticosteroids for fetal maturation on perinatal outcomes. NIH Consensus Development Panel on the Effect of Corticosteroids for Fetal Maturation on Perinatal Outcomes. JAMA 1995;273:413–8.

2. Leviton LC, Goldenberg RL, Baker CS, et al. Methods to encourage the use of antenatal corticosteroid therapy for fetal maturation: a randomized controlled trial. JAMA 1999;281:46–52.

3. Stutchfield PR, Whitaker R, Russell I. Antenatal betamethasone and incidence of neonatal respiratory distress after elective caesarean section: pragmatic randomised trial. BMJ 2005;331:662.

4. Balci O, Ozdemir S, Mahmoud AS, et al. The effect of antenatal steroids on fetal lung maturation between the 34th and 36th week of pregnancy. Gynecol Obstet Invest 2010;70:95–9.

5. Porto AM, Coutinho IC, Correia JB, Amorim MM. Effectiveness of antenatal corticosteroids in reducing respiratory disorders in late preterm infants: randomised clinical trial. BMJ 2011;342:d1696.

6. Kerstjens JM, Bocca-Tjeertes IF, de Winter AF, et al. Neonatal morbidities and developmental delay in moderately preterm-born children. Pediatrics 2012;130:e265–72.


Which Revascularization Strategy for Multivessel Coronary Disease?

Article Type
Changed
Thu, 03/01/2018 - 15:22
Display Headline
Which Revascularization Strategy for Multivessel Coronary Disease?

Study Overview

Objective. To compare percutaneous coronary intervention (PCI) using second-generation drug-eluting stents (everolimus-eluting stents) with coronary artery bypass grafting (CABG) among patients with multivessel coronary disease.

Design. Observational registry study with propensity-score matching.

Setting and participants. The study relied on patients identified from the Cardiac Surgery Reporting System (CSRS) and Percutaneous Coronary Intervention Reporting System (PCIRS) registries of the New York State Department of Health. These 2 registries were linked to the New York State Vital Statistics Death registry and to the Statewide Planning and Research Cooperative System (SPARCS) registry to obtain further information such as dates of admission, surgery, discharge, and death. Subjects were eligible for inclusion if they had multivessel disease (defined as severe stenosis [≥ 70%] in at least 2 diseased major epicardial coronary arteries) and if they had undergone either PCI with implantation of an everolimus-eluting stent or CABG. Subjects were excluded if they had revascularization within 1 year before the index procedure; previous cardiac surgery; severe left main coronary artery disease (degree of stenosis ≥ 50%); PCI with a stent other than an everolimus-eluting stent; myocardial infarction within 24 hours before the index procedure; or unstable hemodynamics or cardiogenic shock.

Main outcome measures. The primary outcome of the study was all-cause mortality. Secondary outcomes included rates of myocardial infarction, stroke, and repeat revascularization.

Main results. Among 116,915 patients assessed for eligibility, 82,096 were excluded. Among 34,819 who met inclusion criteria, 18,446 were included in the propensity score–matched analysis. With a 1:1 matching algorithm, 9223 were in the PCI with everolimus-eluting stent group and 9223 were in the CABG group. Short-term outcomes (in hospital or ≤ 30 days after the index procedure) favored PCI with everolimus-eluting stents over CABG, with a significantly lower risk of death (0.6% vs. 1.1%; hazard ratio [HR], 0.49; 95% confidence interval [CI], 0.35 to 0.69; P < 0.002) as well as stroke (0.2% vs 1.2%; HR, 0.18; 95% CI, 0.11 to 0.29; P < 0.001). The 2 groups had similar rates of myocardial infarction in the short-term (0.5% and 0.4%; HR, 1.37; 95% CI, 0.89 to 2.12; P = 0.16). After a mean follow-up of 2.9 years, there was a similar annual death rate between groups: 3.1% for PCI and 2.9% for CABG (HR, 1.04; 95% CI, 0.93 to 1.17; P = 0.50). PCI with everolimus-eluting stents was associated with a higher risk of a first myocardial infarction than was CABG (1.9% vs 1.1% per year; HR, 1.51; 95% CI, 1.29 to 1.77; P < 0.001). PCI with everolimus-eluting stents was associated with a lower risk of a first stroke than CABG (0.7% vs. 1.0% per year; HR, 0.62; 95% CI, 0.50 to 0.76; P < 0.001). Finally, PCI with everolimus-eluting stents was associated with a higher risk of a first repeat-revascularization procedure than CABG (7.2% vs. 3.1% per year; HR, 2.35; 95% CI, 2.14 to 2.58; P < 0.001).
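
The propensity score–matched comparison at the core of this design can be illustrated with a minimal sketch. The code below shows generic 1:1 nearest-neighbor matching on a logistic-regression propensity score; it is not the authors' actual algorithm, and the covariate names and input data frame are hypothetical placeholders.

```python
# Minimal sketch of 1:1 nearest-neighbor propensity-score matching, to
# illustrate the general technique. Covariate names and the input data
# frame are hypothetical placeholders; the study's actual algorithm
# (covariate set, caliper, matching without replacement) is not reproduced.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_one_to_one(df: pd.DataFrame, treatment_col: str, covariates: list) -> pd.DataFrame:
    """Pair each treated subject (treatment_col == 1) with the control
    (treatment_col == 0) whose estimated propensity score is closest.
    For simplicity this matches with replacement."""
    # 1. Estimate the propensity score: P(treatment = 1 | covariates).
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treatment_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treatment_col] == 1]
    control = df[df[treatment_col] == 0]

    # 2. For each treated subject, find the control with the nearest score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_controls = control.iloc[idx.ravel()]

    # 3. The matched cohort is the treated subjects plus their matched controls.
    return pd.concat([treated, matched_controls])

# Hypothetical usage with placeholder column names:
# cohort = match_one_to_one(registry_df, "pci", ["age", "ejection_fraction", "diabetes"])
```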

Conclusion. In the setting of newer stent technology with second-generation everolimus-eluting stents, the risk of death associated with PCI was similar to that associated with CABG for multivessel coronary artery disease. In the long term, PCI was associated with a higher risk of myocardial infarction and repeat revascularization, whereas CABG was associated with an increased risk of stroke. In the short term, PCI had lower risks of both death and stroke.

Commentary

Coronary artery disease is a major public health problem. For patients for whom revascularization is deemed to be appropriate, a choice must be made between PCI and CABG. In previous studies that compared PCI and CABG, CABG was shown to require fewer repeat revascularizations and to confer mortality benefits [1–3]. However, these prior studies compared CABG with older generations of stents. In the past decade, stent technologies have improved, as the bare-metal stent era gave way to the first generation of drug-eluting stents (with sirolimus or paclitaxel), followed by second-generation drug-eluting stents (with everolimus or zotarolimus) [4].

In this article, Bangalore and colleagues addressed whether the use of second-generation drug-eluting stents closes the outcome gap that favors CABG over PCI in patients with multivessel coronary artery disease. In patients who were considered to have had complete revascularization during PCI (ie, revascularization of all major vessels with clinically significant stenosis), they noted mitigation of the outcome differences between the PCI group and the CABG group. They conclude that the decision-making process by patients and their providers regarding revascularization should be placed in the context of individual values and preferences.

One major limitation is that this is an observational study based on registry data. Despite the use of sophisticated statistical techniques, including propensity score matching, to adjust for the confounding that is implicit in any nonrandomized comparison of treatment strategies, observational studies cannot definitively establish causality. This limitation is especially important when the two groups being compared have modest differences in outcome.

Applications for Clinical Practice

This observational study, together with the recent BEST randomized clinical trial in which CABG was compared with PCI using everolimus-eluting stents [5], provides new insights into the 2 revascularization strategies. Clinicians should engage and empower patients with a shared decision-making approach. The early hazards of CABG with respect to stroke and death may be unacceptable to some patients, whereas others may prefer to avoid the later hazards of PCI, namely repeat revascularization or myocardial infarction. Until a definitive study is available, patients should be informed of the best current knowledge of the pros and cons of the two revascularization strategies.

 —Ka Ming Gordon Ngai, MD, MPH

 

References

1. Farooq V, van Klaveren D, Steyerberg EW, et al. Anatomical and clinical characteristics to guide decision making between coronary artery bypass surgery and percutaneous coronary intervention for individual patients: development and validation of SYNTAX score II. Lancet 2013;381:639–50.

2. Hannan EL, Racz MJ, Arani DT, et al. A comparison of short- and long-term outcomes for balloon angioplasty and coronary stent placement. J Am Coll Cardiol 2000;36:395–403.

3. Hannan EL, Racz MJ, Walford G, et al. Long-term outcomes of coronary-artery bypass grafting versus stent implantation. N Engl J Med 2005;352:2174–83.

4. Harrington RA. Selecting revascularization strategies in patients with coronary disease. N Engl J Med 2015;372:1261–3.

5. Park SJ, Ahn JM, Kim YH, et al. Trial of everolimus-eluting stents or bypass surgery for coronary disease. N Engl J Med 2015;372:1204–12.


Epidural Steroid Injections for Spinal Stenosis Back Pain Simply Don’t Work

Article Type
Changed
Thu, 03/08/2018 - 12:48
Display Headline
Epidural Steroid Injections for Spinal Stenosis Back Pain Simply Don’t Work

Study Overview

Objective. To determine the effectiveness of epidural injections of glucocorticoids plus anesthetic compared with injections of anesthetic alone in patients with lumbar spinal stenosis.

Design. The LESS (Lumbar Epidural Steroid Injection for Spinal Stenosis) trial—a double-blind, multisite, randomized controlled trial.

Setting and participants. The study was conducted at 16 sites in the United States and enrolled 400 patients between April 2011 and June 2013. Patients at least 50 years of age with spinal stenosis as evidenced by magnetic resonance imaging (MRI) or computed tomography (CT) were invited to participate. Additional eligibility criteria included an average pain rating of more than 4 on a scale of 0 to 10 (0 being the lowest score) for back, buttock, or leg pain. Patients were excluded if they did not have stenosis of the central canal, had spondylolisthesis requiring surgery, or had received epidural glucocorticoid injections within the previous 6 months. Patients were randomly assigned to receive a standard epidural injection of glucocorticoids plus lidocaine or lidocaine alone. At the 3-week follow-up they could choose to receive a repeat injection. At the 6-week assessment they were allowed to cross over to the other treatment group. Patients were blinded throughout the study. The treating physicians were also blinded through the use of 2 opaque prefilled syringes provided by the study staff—one marked “inject” and one marked “discard.”

Main outcome measures. The 2 primary outcomes, measured at 6 weeks, were the Roland-Morris Disability Questionnaire (RMDQ) score (range, 0 to 24, with higher scores indicating greater physical disability) and the patient’s rating of average buttock, hip, or leg pain in the previous week (scale of 0 to 10, with 0 indicating no pain and 10 indicating “pain as bad as you can imagine”).

Eight secondary patient-oriented outcomes were also measured: (1) at least minimal clinically meaningful improvement (≥ 30%), (2) substantial clinically meaningful improvement (≥ 50%), (3) average back pain in the previous week, and scores on the (4) Brief Pain Inventory (BPI) interference scale, (5) 8-question Patient Health Questionnaire (PHQ-8), (6) Generalized Anxiety Disorder 7 scale (GAD-7), (7) EQ-5D (a health status measure) and (8) Swiss Spinal Stenosis Questionnaire (SSSQ).

Main results. The 2 groups were similar with respect to baseline characteristics, except that the duration of pain was shorter in the lidocaine-alone group. At 6 weeks, both groups had improved RMDQ scores (–4.2 points with glucocorticoid vs. –3.1 points without). However, the difference in RMDQ score between the 2 groups was not statistically significant (–1.0 points [95% CI, –2.1 to 0.1]; P = 0.07). In addition, there was no difference in treatment effect at 6 weeks as measured by patients’ reported leg pain (–0.2 points [95% CI, –0.8 to 0.4]; P = 0.48). Furthermore, there were no significant differences in the secondary outcomes of clinically meaningful improvement, BPI, SSSQ symptoms and physical function, EQ-5D, and GAD-7 scales at 6 weeks. Among the secondary outcomes, only symptoms of depression and patient satisfaction showed a statistically significant improvement in the glucocorticoid plus lidocaine group. Of note, though not statistically significant, there were more adverse events in the glucocorticoid plus lidocaine group than in the lidocaine-alone group (21.5% vs. 15.5%). Finally, the glucocorticoid plus lidocaine group also had a significantly higher proportion of patients with serum cortisol suppression than the lidocaine-alone group.
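
For readers checking the reported precision, an approximate standard error and two-sided P value can be recovered from a point estimate and its 95% confidence interval under a normal approximation. The sketch below is only a back-of-the-envelope check of the arithmetic, not the trial's covariate-adjusted analysis.

```python
# Recovering an approximate standard error and two-sided P value from a
# reported estimate and its 95% CI, under a normal approximation. This is
# only a check on the arithmetic, not the trial's adjusted analysis.
from scipy import stats

def p_from_ci(estimate: float, ci_low: float, ci_high: float) -> float:
    se = (ci_high - ci_low) / (2 * 1.96)   # a 95% CI spans about 3.92 standard errors
    z = estimate / se
    return 2 * stats.norm.sf(abs(z))       # two-sided P value

# RMDQ difference: -1.0 points (95% CI, -2.1 to 0.1)
print(round(p_from_ci(-1.0, -2.1, 0.1), 2))   # ~0.07 (reported P = 0.07)

# Leg-pain difference: -0.2 points (95% CI, -0.8 to 0.4)
print(round(p_from_ci(-0.2, -0.8, 0.4), 2))   # ~0.51 (reported P = 0.48)
```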

Conclusion. The authors concluded that there was no difference in pain-related functional disability (as measured by the RMDQ score) or pain intensity between patients receiving fluoroscopically guided epidural injections of glucocorticoids plus lidocaine and those receiving lidocaine alone for lumbar spinal stenosis. The injection of glucocorticoid should be avoided because of its potential systemic effects, including suppression of the hypothalamic-pituitary axis and reduction in bone mineral density, which may increase the risk of fracture.

Commentary

Lumbar spinal stenosis is one of the most common causes of spine-related back and leg pain; it disproportionately affects older adults because degenerative changes narrow the spinal canal and nerve root canals. Epidural injections containing a glucocorticoid and an anesthetic are commonly used to relieve symptoms of lumbar stenosis. Although this treatment approach is controversial, more than 2.2 million lumbar epidural glucocorticoid injections are performed in the Medicare population each year [1,2]. Previous uncontrolled studies suggest that epidural glucocorticoid injections provide short-term pain relief for some patients with spinal stenosis [3]. While complications from the procedure are rare, a multistate outbreak of fungal meningitis due to contaminated glucocorticoid injections affected at least 751 patients, with 64 deaths, in 2012 [4].

The purpose of the current study by Friedly et al was to determine whether adding a glucocorticoid to an anesthetic in epidural spinal injections is superior to anesthetic alone for symptom relief and functional improvement in patients with lumbar spinal stenosis. In contrast to previous studies, the authors defined short-term results as 3 weeks after injection and long-term results as 6 weeks after injection. Despite the shorter follow-up period, results were similar to previous studies: adding a glucocorticoid to the anesthetic reduced pain and improved patients' function in the short term, but the improvements were not sustained in the long term. Based on these results, the authors concluded that there is no benefit to adding a glucocorticoid to epidural anesthetic injections for back pain arising from lumbar spinal stenosis.

One major limitation of this study is the lack of a placebo arm; as a result, it cannot be ascertained whether epidural injection with lidocaine alone conferred a benefit. Nevertheless, this study provides robust evidence that epidural steroid injections are not beneficial for the treatment of back and leg pain associated with lumbar spinal stenosis.

Applications for Clinical Practice

Epidural steroid injection has long been accepted in the medical community as a safe and effective treatment for the symptoms of lumbar spinal stenosis. In light of the potential dangers of epidural steroid injections, including meningitis, together with the procedure's increasing cost, its other potential side effects, and the demonstrated ineffectiveness of the treatment, providers should stop recommending epidural steroid injections for lumbar spinal stenosis.

—Ka Ming Gordon Ngai, MD, MPH

References

1. Manchikanti L, Pampati V, Boswell MV, et al. Analysis of the growth of epidural injections and costs in the Medicare population: a comparative evaluation of 1997, 2002, and 2006 data. Pain Physician 2010;13:199–212.

2. Manchikanti L, Pampati V, Falco FJ, et al. Assessment of the growth of epidural injections in the Medicare population from 2000 to 2011. Pain Physician 2013;16:E349–364.

3. Shamliyan TA, Staal JB, Goldmann D, et al. Epidural steroid injections for radicular lumbosacral pain: a systematic review. Phys Med Rehabil Clin North Am 2014;25:471–89.

4. CDC. Multistate outbreak of fungal meningitis and other infections. 23 Oct 2013. Accessed 9 Jul 2014 at www.cdc.gov/hai/outbreaks/meningitis.html.

Issue
Journal of Clinical Outcomes Management - AUGUST 2014, VOL. 21, NO. 8
Publications
Topics
Sections

Study Overview

Objective. To determine the effectiveness of epidural injections of glucocorticoids plus anesthetic compared with injections of anesthetic alone in patients with lumbar spinal stenosis.

Design. The LESS (Lumbar Epidural Steroid Injection for Spinal Stenosis) trial—a double-blind, multisite, randomized controlled trial.

Setting and participants. The study was conducted at 16 sites in the United States and enrolled 400 patients between April 2011 and June 2013. Patients at least 50 years of age with spinal stenosis as evidenced by magnetic resonance imaging (MRI) or computed tomography (CT) were invited to participate. Additional eligibility criteria included an average pain rating of more than 4 on a scale of 0 to 10 (0 being the lowest score) for back, buttock, or leg pain. Patients were excluded if they did not have stenosis of the central canal, had spondylolisthesis requiring surgery, or had received epidural glucocorticoid injections within the previous 6 months. Patients were randomly assigned to receive a standard epidural injection of glucocorticoids plus lidocaine or lidocaine alone. At the 3-week follow-up they could choose to receive a repeat injection. At the 6-week assessment they were allowed to cross over to the other treatment group. Patients were blinded throughout the study. The treating physicians were also blinded through the use of 2 opaque prefilled syringes provided by the study staff—one marked “inject” and one marked “discard.”

Main outcome measures. The 2 outcomes, measured at 6 weeks, were the Roland-Morris Disability Questionnaire (RMDQ) score (range, 0 to 24, with higher scores indicating greater physical disability) and the patient’s rating of average buttock, hip, or leg pain in the previous week (scale of 0 to 10 with 0 indicating no pain and 10 indicating “pain as bad as you can imagine”).

Eight secondary patient-oriented outcomes were also measured: (1) at least minimal clinically meaningful improvement (≥ 30%), (2) substantial clinically meaningful improvement (≥ 50%), (3) average back pain in the previous week, and scores on the (4) Brief Pain Inventory (BPI) interference scale, (5) 8-question Patient Health Questionnaire (PHQ-8), (6) Generalized Anxiety Disorder 7 scale (GAD-7), (7) EQ-5D (a health status measure) and (8) Swiss Spinal Stenosis Questionnaire (SSSQ).

Main results. The 2 groups were similar with respect to baseline characteristics, except that the duration of pain was shorter in the lidocaine-alone group. At 6 weeks, both groups had improved RMDQ scores (glucocorticoid –4.2 points vs. no glucocorticoid –3.1 points, respectively). However, the difference in RMDQ score between the 2 groups was not statistically significant (–1.0 points [95% CI, –2.1 to 0.1]; P = 0.07). In addition, there was no difference in treatment effect at 6 weeks as measured by patient’s reported leg pain (–0.2 points [95% CI, –0.8 to 0.4]; P = 0.48). Furthermore, there were no significant differences in the secondary outcomes of clinically meaningful improvement, BPI, SSSQ symptoms and physical function, EQ-5D, and GAD-7 scales at 6 weeks. Among the secondary outcomes, only symptoms of depression and patient satisfaction showed a statistically significant improvement in the glucocorticoid plus lidocaine group. Of note, though not statistically significant, there were more adverse events in the glucocorticoid plus lidocaine group compared to the lidocaine alone group (21.5% vs. 15.5%, respectively). Finally, the glucocorticoid plus lidocaine group also had a significantly higher proportion of patients with cortisol serum suppression compared to the lidocaine alone group.

Conclusion. The authors concluded that there was no difference in pain-related functional disability (as measured by the RMDQ score) and pain intensity between patients receiving fluoroscopically guided epidural injections with glucocorticoids plus lidocaine compared with lidocaine alone for lumbar spinal stenosis. The injection of glucocorticoid should be avoided due to its potentially systemic effects, including suppression of the hypothalamic-pituitary axis and reduction in bone mineral density, which may increase the risk of fracture.

Commentary

Lumbar spinal stenosis is one of the most common causes of spine-related back and leg pain; it disproportionally affects older adults due to degenerative changes resulting in narrowing of the spinal canal and nerve-root. Epidural glucocorticoid injections containing a glucocorticoid and an anesthetic are commonly used to relieve symptoms of lumbar stenosis. While this treatment approach is controversial, more than 2.2 million lumbar epidural glucocorticoid injections are performed in the Medicare population each year [1,2]. Previous uncontrolled studies suggest that epidural glucocorticoid injections provide short-term pain relief for some patients with spinal stenosis [3]. While complications from the procedure are rare, a multistate outbreak of fungal meningitis due to contaminated glucocorticoid injections affected at least 751 patients with 64 deaths in 2012 [4].

The purpose of the current study by Friedly et al was to determine whether adding a glucocorticoid to an anesthetic in epidural spinal injections is superior to anesthetic alone for symptom relief and functional improvement in patients with lumbar spinal stenosis. In contrast to previous studies, the authors defined short-term results as 3 weeks after injection, and long-term results as 6 weeks after injection. Despite the shorter follow-up period, results were similar to previous studies, in that adding glucocorticoid to anesthetic in epidural spinal injection reduced pain and improved patient’s functionality short-term, but improvements were not sustained long-term. Based on these results, the authors concluded that there is no benefit in adding glucocorticoid epidural injections for back pain arising from lumbar spinal stenosis.

One major limitation of this study is the lack of a placebo arm. Because of the lack of a placebo arm, it cannot be ascertained whether epidural injection with lidocaine alone conferred a benefit. However, this study provides robust evidence that epidural steroid injections are not beneficial for treatment of back and leg pain associated with lumbar spinal stenosis.

Applications for Clinical Practice

Epidural steroid injection is long accepted in medical communities as a safe and effective treatment for lumbar spinal stenosis symptoms. In light of the potential dangers of epidural steroid injections, including meningitis, coupled with the increasing cost of the procedure, other potential side effects, and demonstrated ineffectiveness of the treatment, providers should stop recommending epidural steroid injections for lumbar spinal stenosis.

—Ka Ming Gordon Ngai, MD, MPH

Study Overview

Objective. To determine the effectiveness of epidural injections of glucocorticoids plus anesthetic compared with injections of anesthetic alone in patients with lumbar spinal stenosis.

Design. The LESS (Lumbar Epidural Steroid Injection for Spinal Stenosis) trial—a double-blind, multisite, randomized controlled trial.

Setting and participants. The study was conducted at 16 sites in the United States and enrolled 400 patients between April 2011 and June 2013. Patients at least 50 years of age with spinal stenosis as evidenced by magnetic resonance imaging (MRI) or computed tomography (CT) were invited to participate. Additional eligibility criteria included an average pain rating of more than 4 on a scale of 0 to 10 (0 being the lowest score) for back, buttock, or leg pain. Patients were excluded if they did not have stenosis of the central canal, had spondylolisthesis requiring surgery, or had received epidural glucocorticoid injections within the previous 6 months. Patients were randomly assigned to receive a standard epidural injection of glucocorticoids plus lidocaine or lidocaine alone. At the 3-week follow-up they could choose to receive a repeat injection. At the 6-week assessment they were allowed to cross over to the other treatment group. Patients were blinded throughout the study. The treating physicians were also blinded through the use of 2 opaque prefilled syringes provided by the study staff—one marked “inject” and one marked “discard.”

Main outcome measures. The 2 outcomes, measured at 6 weeks, were the Roland-Morris Disability Questionnaire (RMDQ) score (range, 0 to 24, with higher scores indicating greater physical disability) and the patient’s rating of average buttock, hip, or leg pain in the previous week (scale of 0 to 10 with 0 indicating no pain and 10 indicating “pain as bad as you can imagine”).

Eight secondary patient-oriented outcomes were also measured: (1) at least minimal clinically meaningful improvement (≥ 30%), (2) substantial clinically meaningful improvement (≥ 50%), (3) average back pain in the previous week, and scores on the (4) Brief Pain Inventory (BPI) interference scale, (5) 8-question Patient Health Questionnaire (PHQ-8), (6) Generalized Anxiety Disorder 7 scale (GAD-7), (7) EQ-5D (a health status measure) and (8) Swiss Spinal Stenosis Questionnaire (SSSQ).

Main results. The 2 groups were similar with respect to baseline characteristics, except that the duration of pain was shorter in the lidocaine-alone group. At 6 weeks, both groups had improved RMDQ scores (glucocorticoid –4.2 points vs. no glucocorticoid –3.1 points, respectively). However, the difference in RMDQ score between the 2 groups was not statistically significant (–1.0 points [95% CI, –2.1 to 0.1]; P = 0.07). In addition, there was no difference in treatment effect at 6 weeks as measured by patient’s reported leg pain (–0.2 points [95% CI, –0.8 to 0.4]; P = 0.48). Furthermore, there were no significant differences in the secondary outcomes of clinically meaningful improvement, BPI, SSSQ symptoms and physical function, EQ-5D, and GAD-7 scales at 6 weeks. Among the secondary outcomes, only symptoms of depression and patient satisfaction showed a statistically significant improvement in the glucocorticoid plus lidocaine group. Of note, though not statistically significant, there were more adverse events in the glucocorticoid plus lidocaine group compared to the lidocaine alone group (21.5% vs. 15.5%, respectively). Finally, the glucocorticoid plus lidocaine group also had a significantly higher proportion of patients with cortisol serum suppression compared to the lidocaine alone group.

Conclusion. The authors concluded that, for patients with lumbar spinal stenosis, fluoroscopically guided epidural injection of glucocorticoid plus lidocaine offered no benefit over lidocaine alone with respect to pain-related functional disability (as measured by the RMDQ score) or pain intensity. Injection of glucocorticoid should be avoided because of its potential systemic effects, including suppression of the hypothalamic-pituitary-adrenal axis and reduction in bone mineral density, which may increase the risk of fracture.

Commentary

Lumbar spinal stenosis is one of the most common causes of spine-related back and leg pain; it disproportionately affects older adults because degenerative changes narrow the spinal canal and nerve-root canals. Epidural injections containing a glucocorticoid and an anesthetic are commonly used to relieve the symptoms of lumbar stenosis. Although this treatment approach is controversial, more than 2.2 million lumbar epidural glucocorticoid injections are performed in the Medicare population each year [1,2]. Previous uncontrolled studies suggest that epidural glucocorticoid injections provide short-term pain relief for some patients with spinal stenosis [3]. Although complications from the procedure are rare, a multistate outbreak of fungal meningitis caused by contaminated glucocorticoid injections in 2012 affected at least 751 patients and resulted in 64 deaths [4].

The purpose of the current study by Friedly et al was to determine whether adding a glucocorticoid to an anesthetic in epidural spinal injections is superior to anesthetic alone for symptom relief and functional improvement in patients with lumbar spinal stenosis. In contrast to previous studies, the authors defined short-term results as those measured 3 weeks after injection and longer-term results as those measured 6 weeks after injection. Despite this shorter follow-up period, the results were similar to those of previous studies: adding a glucocorticoid to the anesthetic reduced pain and improved function in the short term, but the improvements were not sustained at 6 weeks. Based on these results, the authors concluded that there is no benefit to adding a glucocorticoid to epidural anesthetic injections for back pain arising from lumbar spinal stenosis.

One major limitation of this study is the lack of a placebo arm; without one, it cannot be determined whether epidural injection of lidocaine alone conferred any benefit. Nevertheless, this study provides robust evidence that adding a glucocorticoid to epidural injections offers no meaningful benefit over anesthetic alone for the treatment of back and leg pain associated with lumbar spinal stenosis.

Applications for Clinical Practice

Epidural steroid injection has long been accepted as a safe and effective treatment for the symptoms of lumbar spinal stenosis. In light of the potential dangers of the procedure, including meningitis from contaminated injections, its rising cost, other potential adverse effects such as cortisol suppression, and the lack of demonstrated benefit over anesthetic alone, providers should stop recommending epidural steroid injections for lumbar spinal stenosis.

—Ka Ming Gordon Ngai, MD, MPH

References

1. Manchikanti L, Pampati V, Boswell MV, et al. Analysis of the growth of epidural injections and costs in the Medicare population: a comparative evaluation of 1997, 2002, and 2006 data. Pain Physician 2010;13:199–212.

2. Manchikanti L, Pampati V, Falco FJ, et al. Assessment of the growth of epidural injections in the Medicare population from 2000 to 2011. Pain Physician 2013;16:E349–364.

3. Shamliyan TA, Staal JB, Goldmann D, et al. Epidural steroid injections for radicular lumbosacral pain: a systematic review. Phys Med Rehabil Clin North Am 2014;25:471–89.

4. CDC. Multistate outbreak of fungal meningitis and other infections. 23 Oct 2013. Accessed 9 Jul 2014 at www.cdc.gov/hai/outbreaks/meningitis.html.

Issue
Journal of Clinical Outcomes Management - AUGUST 2014, VOL. 21, NO. 8
Display Headline
Epidural Steroid Injections for Spinal Stenosis Back Pain Simply Don’t Work