Tighter rules for ad hoc PCI
The increased frequency in recent years of what has been termed "ad hoc" percutaneous coronary intervention is of concern to both interventional cardiologists and third-party payers.
The definition of ad hoc PCI that accompanies the recent guidelines on the subject in a statement by the Society for Cardiovascular Angiography and Interventions (SCAI) is "diagnostic catheterization followed in the same session or same sitting by PCI." Much of this increase has occurred in patients without symptoms and with minimal if any evidence of ischemia. Convenience and economics also play a role. As a result, cardiologists presume that they can do no harm, without asking whether they are doing any good.
A recent report on 144,737 nonacute PCIs in the National Cardiovascular Data Registry indicated that almost 30,000 (24.4%) were performed in patients with no symptoms or only class I angina, and 30% of patients were at low risk by noninvasive testing. Of these nonacute PCIs, 67% were considered either inappropriate or of uncertain appropriateness (JAMA 2011;306:53-61). The rate of inappropriate PCI varied among hospitals from 6% to 16%; a number of hospitals had rates exceeding 25%, and some had rates as high as 48%. The registry does not report the number of ad hoc procedures performed, but one might presume that many of these patients would have met the entry criteria of the COURAGE trial (N. Engl. J. Med. 2007;356:1503-16), in which patients with stable coronary disease, 43% of whom had either no angina or class I angina, did as well with medical treatment as with PCI.
Angiographers have admitted having difficulty assessing the severity of stenosis and therefore often proceed to ad hoc PCI. The recent FAME study suggests that measurement of fractional flow reserve (FFR) can define coronary lesions that are clinically significant (N. Engl. J. Med. 2009;360:213-24). However, the conclusions of FAME have been challenged in regard to the clinical importance of FFR measurement.
Included in the recent SCAI guidelines is the requirement that, before ad hoc PCI is performed, patients should be given information about the appropriateness, relative risk, and benefit of the procedure as well as therapeutic alternatives to PCI. For patients with ongoing symptoms and positive diagnostic tests for ischemia, this consent is easily obtained prior to intervention. But patients without symptoms and without evidence of ischemia on stress testing may not be given the real story before the procedure. For these patients, SCAI advises that a "time-out" be called and that they be given time to consider the alternatives for treatment of their disease (Catheter. Cardiovasc. Interv. 2012 Nov. 29 [doi: 10.1002/ccd.24701]).
Unfortunately for all of us, the federal government is also concerned about the issue of appropriateness. A recent whistleblower lawsuit in Ohio was resolved with the payment of $3 million in fines by the hospital and more than $500,000 by the physician group involved. According to press reports, the physicians defended their "high rates as a result of their aggressive style of medicine" and stood by the care they provided, although it "might not have met the government's guidelines of reimbursement" (New York Times, Jan. 5, 2013, sec. B1).
Unless we adhere to good practice guidelines, the federal government will force our adherence, whether we like it or not.
Dr. Goldstein, medical editor of Cardiology News, is professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies. This column, "Heart of the Matter," appears regularly in Cardiology News.
Who Should Run Our Hospitals?
Over the past century, the management of American hospitals has changed dramatically. These changes have occurred as a result of major shifts in the social and financial environment and have had a major effect on how medicine was practiced in the past and how it will be practiced in the future.
The American hospital as we know it today was established in the late 19th and early 20th centuries largely by the Catholic, Protestant, and Jewish communities to provide care for the elderly and chronically ill patients who otherwise would not be cared for at home.
With the development of surgical techniques, from tonsillectomies to cholecystectomies, in the mid-20th century, hospitals became the workshop of general surgeons, who largely controlled them. With the subsequent development of antibiotics and medical treatment for cardiovascular disease, the internal medicine specialties demanded a larger role in institutional management.
The increasing complexity and expense of medical care raised concerns about how to finance it, leading to the establishment of private medical insurance programs and, ultimately, to Medicare and Medicaid. Hospitals were no longer merely a health resource; they suddenly became profit centers. Community hospitals expanded to meet the needs of new technologies with the support of grants and loans from the federal government.
With this growth, the management of the hospital of the 20th century required the creation of a new breed of hospital staff: the hospital administrator, hired to manage the financial and administrative aspects of these new and growing organizations. Although the hospital administration was structured to provide equipoise between the medical and financial priorities of the hospital, that balance was not easily maintained; as the financial aspects became central, the hospital administrator became supreme and physicians lost control.
Today, the American hospital has become central to the support of nonprofit and for-profit regional and national health care conglomerates, and control has become the province of boards of directors with little medical input and greater community representation. As a consequence, the physician has now become a real or quasi-employee of the hospital.
In a recent perspective paper, Dr. Richard Gunderman (Acad. Med. 2009;84:1348-51) emphasizes the need to train physicians to provide leadership for the future management of the hospital. He points out that in 1935, physicians were in charge of 35% of the nation's hospitals, but that number has shrunk to 4% of our current 6,500 U.S. hospitals. The academic medical community has largely ignored its role in preparing medical students for administrative leadership as it focused on the clinical knowledge required for medical competence.
Dr. Gunderman, of Indiana University in Indianapolis, advocates identifying potential leaders during the selection of medical students and proposes including courses on medical finance and social issues in the medical school curriculum to prepare them for a leadership role in redefining the future of medical care and hospital management.
Amanda Goodall, Ph.D., a senior research fellow at the Institute for the Study of Labor in Bonn, Germany, provides an even more challenging analysis of the importance of physician leadership of hospitals (Soc. Sci. Med. 2011;73:535-9). She notes that these changes in leadership are not unique to the United States but have also taken place in European hospitals. Using a quality scoring system, she analyzed the performance of 100 of the U.S. News and World Report's Best Hospitals 2009 in the fields of cancer, digestive disorders, and heart and heart surgery. She found a positive correlation between hospital quality ranking and physician CEO leadership.
Those of us who have grown up through this management evolution have seen its real impact on the care of hospital patients. Some of the changes have been positive, while others have proved frustrating for both patients and physicians who practice in the new environment.
Leadership by those of us who have direct patient care responsibilities is essential for an inclusive decision-making process. When patient care comes up for discussion at the board meeting, physicians and nurses bring to the process a perspective that only they can provide. It is essential that their voices be heard.
Obsessing on Atrial Fib
The recent Registry on Cardiac Rhythm Disorders Assessing the Control of Atrial Fibrillation (RECORD AF) provides further data to belie our obsession with obtaining or maintaining normal sinus rhythm in patients with intermittent or paroxysmal AF (J. Am. Coll. Cardiol. 2011;58:493-501).
Registry studies fail to provide the randomized data that we demand of controlled trials, but they can often yield data about real-world therapy. This registry, which included 5,604 patients from around the world and whose authors were either consultants or employees of Sanofi-Aventis, the maker of dronedarone, confirms much of what has already been said on the issue: there is little or no benefit associated with rhythm-control therapy compared with a rate-control strategy in this community-based, unselected population.
Because patients in this study were not randomized to a particular therapy, participating doctors could use either strategy. Unfortunately, patients in the rate-control arm were older and more often had AF, heart failure, and valvular heart disease at baseline. Despite this imbalance, the rate-control strategy was as good as rhythm control. Both groups experienced an 18% incidence of adverse clinical events, which were determined by the clinical characteristics of the patient and not by the therapeutic strategy used or the heart rate achieved. Success was defined as the presence of normal sinus rhythm in the rhythm-control patients or a heart rate of less than 80 bpm in the rate-control patients at 1-year follow-up, and was achieved in 60% and 47%, respectively. With a less stringent heart rate target of below 85 bpm, success rates were 60% and 52%, respectively. These observations are consistent with previous studies comparing rhythm- and rate-control strategies.
This obsession with the maintenance of normal sinus rhythm in patients with AF has spawned a whole industry associated with the technology and application of catheter ablation, atrial defibrillation, left atrial occlusive devices, and the continued development of anti-arrhythmic drugs. All of these interventions have achieved some success but have been associated with significant drug and device adverse events.
The most recently approved anti-arrhythmic drug, dronedarone (Multaq), has been extensively studied in AF. Three major clinical trials have examined the drug in paroxysmal, persistent, and permanent AF. The most recent trial, Permanent Atrial Fibrillation Outcome Study Using Dronedarone (PALLAS), compared dronedarone with placebo in 3,000 patients with permanent AF who also had a number of comorbidities, including symptomatic heart failure and a decreased ejection fraction, but excluded New York Heart Association class III heart failure. Only an electrophysiologist is able to make the distinction between these two clinical heart failure settings. The study was prematurely stopped because of a significant increase in cardiovascular events, including mortality.
Dronedarone was approved in 2009 for patients with paroxysmal and persistent AF and atrial flutter by the Food and Drug Administration based on the ATHENA trial, which reported a decrease in recurrent AF in patients treated with the drug. In addition, dronedarone decreased the combined cardiovascular end point of mortality and rehospitalization, achieved mostly by a decrease in rehospitalization. However, its approval included a boxed warning that it is “contraindicated in patients with NYHA Class IV heart failure or NYHA Class II-III heart failure with a recent decompensation requiring hospitalization,” because of the increased risks observed in the previous Trial with Dronedarone in Moderate to Severe CHF Evaluating Morbidity Decrease (ANDROMEDA). That trial, which included mostly patients with NYHA class III-IV, was stopped prematurely because of the increase in heart failure mortality.
Dr. Stuart Connolly, the co–primary investigator of PALLAS, emphasized the difference between ATHENA, which randomized patients with nonpermanent AF, and PALLAS, which randomized patients with permanent AF. He thought that it was “reasonable” for patients with nonpermanent AF to continue with dronedarone, because “they will still benefit from it in terms of reduced CV hospitalization.”
Although there are surely some patients in whom AF causes significant symptoms that warrant aggressive therapy, the vast majority of patients, as indicated in RECORD AF, tolerate AF quite well. Much of the quest for rhythm control is related to the need to prevent systemic emboli and the attendant requirement for anticoagulation with vitamin K antagonists. The development of the new direct thrombin and factor Xa inhibitors now provides a safer and more effective alternative. It is time to relax our obsessive approach to atrial fibrillation and become more realistic about our long-term goals for its therapy.
The recent Registry on Cardiac Rhythm Disorders Assessing the Control of Atrial Fibrillation (RECORD AF) provides further data to belie our obsession with obtaining or maintaining normal sinus rhythm in patients with intermittent or paroxysmal AF (J. Am. Coll. Cardiol. 2011;58:493-501).
Registry studies fail to provide the randomized data that we demand in control trials, but can often yield data about real-world therapy. This registry, which included 5,604 patients from around the world and whose authors were either consultants or employees of Sanofi-Aventis, the makers of dronedarone, confirms much of what has already been said on the issue. There is little or no benefit associated with the rhythm control therapy compared to a heart rate strategy when examined in this community-based unselected population.
Because patients in this study were not randomized to a particular therapy, participating doctors could use either strategy. Unfortunately, patients in the rate control arm were older and more often had AF, heart failure, and valvular heart disease at baseline. Despite this imbalance, the heart rate strategy was as good as rhythm control. Both groups experienced an 18% incidence of adverse clinical events that were determined by the clinical characteristics of the patient and not the therapeutic strategy used or heart rate achieved. Success was measured by the presence of normal sinus rhythm in the rhythm-controlled patients or a heart rate of less than 80 bpm in the rate-controlled patients at 1 year follow-up, which was achieved in 60% and 47%, respectively. If the heart rate target was below 85 bpm, the success was achieved in 60% vs. 52%, respectively. These observations are consistent with previous studies comparing rhythm and rate control strategies.
This obsession with the maintenance of normal sinus rhythm in patients with AF has spawned a whole industry associated with the technology and application of catheter ablation, atrial defibrillation, left atrial occlusive devices, and the continued development of anti-arrhythmic drugs. All of these interventions have achieved some success but have been associated with significant drug and device adverse events.
The most recently approved antiarrhythmic drug, dronedarone (Multaq), has been extensively studied in AF. Three major clinical trials have examined the drug in paroxysmal, persistent, and permanent AF. The most recent trial, the Permanent Atrial Fibrillation Outcome Study Using Dronedarone (PALLAS), compared dronedarone with placebo in 3,000 patients with permanent AF who also had a number of comorbidities, including symptomatic heart failure and a decreased ejection fraction, but excluded New York Heart Association class III heart failure. Only an electrophysiologist is able to make the distinction between these two clinical heart failure settings. The study was stopped prematurely because of a significant increase in cardiovascular events, including mortality, in the dronedarone arm.
Dronedarone was approved by the Food and Drug Administration in 2009 for patients with paroxysmal and persistent AF and atrial flutter, based on the ATHENA trial, which reported a decrease in recurrent AF in patients treated with the drug. In addition, dronedarone decreased the combined cardiovascular end point of mortality and rehospitalization, achieved mostly by a decrease in rehospitalization. However, its approval included a boxed warning that it is “contraindicated in patients with NYHA Class IV heart failure or NYHA Class II-III heart failure with a recent decompensation requiring hospitalization,” because of the increased risks observed in the previous Trial with Dronedarone in Moderate to Severe CHF Evaluating Morbidity Decrease (ANDROMEDA). That trial, which included mostly patients with NYHA class III-IV heart failure, was stopped prematurely because of an increase in heart failure mortality.
Dr. Stuart Connolly, the co–primary investigator of PALLAS, emphasized the difference between ATHENA, which randomized patients with nonpermanent AF, and PALLAS, which randomized patients with permanent AF. He thought that it was “reasonable” for patients with nonpermanent AF to continue with dronedarone, because “they will still benefit from it in terms of reduced CV hospitalization.”
Although there are surely some patients in whom AF causes significant symptoms that warrant aggressive therapy, the vast majority of patients, as indicated in RECORD AF, tolerate AF quite well. Much of the quest for rhythm control is driven by the need to prevent systemic emboli and the attendant requirement for anticoagulation with vitamin K antagonists. The development of new direct thrombin and factor Xa inhibitors now provides a safer and more effective alternative. It is time to relax our obsessive approach to atrial fibrillation therapy and become more realistic about our long-term therapeutic goals.
Angiography in Asymptomatic Patients
They came for a second opinion. They were both in their 50s; she a lawyer, he a stockbroker. He had had insulin-dependent diabetes for 20 years but was otherwise well. She was concerned that her husband would die suddenly, just as his father had at age 70. He was without symptoms but had undergone a nuclear exercise stress test at the behest of his local medical doctor because of his diabetes.
The test was said to be abnormal, but three subsequent in-house readers found the results normal. He was advised to have an angiogram by another cardiologist. “What should we do?”
I told her that an angiogram or a stent would not prevent him from dying suddenly. I outlined all the pros and cons and advised against it. The wife was very anxious and wanted an angiogram so that her husband wouldn't die suddenly. They both left my office, never to be seen again.
A recent report by Dr. William B. Borden and colleagues (JAMA 2011;305:1882-9) examined the change in clinical practice in regard to percutaneous coronary intervention before and after the report of the COURAGE trial 4 years ago (N. Engl. J. Med. 2007;356:1503-16), which indicated that there was no mortality or morbidity benefit in patients with stable angina who received PCI when compared to optimal medical therapy.
Dr. Borden and colleagues presumed that the results of the COURAGE trial would transform clinical practice, and that most of the 293,795 patients in their study who went on to PCI in the COURAGE-like population would receive optimal medical therapy before PCI.
In fact, optimal medical therapy (defined as therapy with aspirin, a beta-blocker, an ACE inhibitor, and a statin) was used in 43.4% of the patients before COURAGE and in 45.0% after the COURAGE report. In COURAGE, 32% had diabetes, 12% of the patients were asymptomatic, and 30% had class I angina.
In the most recent analysis by Dr. Borden, one-third of patients (more than 70,000) had no angina prior to PCI. One must wonder what the perceived patient benefit was that led to the performance of a PCI in those patients.
My patient's other cardiologist advised angiography partly because of a concern for the early identification of ischemic heart disease in diabetic patients. Indeed, this concern had led the American Diabetes Association to recommend that, in addition to standard secondary prevention therapy for both diabetes and coronary artery disease, patients with two or more risk factors for coronary artery disease undergo early screening (Diabetes Care 1998;21:1551-9).
These recommendations, however, were not evidence based, but made on the recommendation of an expert panel. The DIAD (Detection of Ischemia in Asymptomatic Diabetics) trial has since provided further insight into the issue of screening asymptomatic diabetic patients (JAMA 2009;301:1547-55), an issue that remains controversial.
Although not a randomized trial, DIAD indicates that the event rate among asymptomatic diabetic patients is low in general, and that a positive myocardial perfusion stress test did not identify patients who were at increased risk of ischemic events.
Of the 522 asymptomatic patients screened, 409 (78%) had normal results, 50 (10%) had a small perfusion defect, and 33 (6%) had moderate or large perfusion defects. Although there was no significantly increased risk of cardiac events in patients with small defects compared with those who had no perfusion defect, there was a sixfold increased risk in patients with moderate to large defects on myocardial perfusion imaging. Only 4.4% of patients went on to angiography, a decision driven by the clinical judgment of the patient's physician.
Of course, in my example, the greatest pressure for angiography came from the patient's wife, who was convinced, on the basis of conventional wisdom, that myocardial perfusion imaging–guided PCI would identify a critical lesion that, when treated with PCI, would prolong her husband's life. And in fact, in order to confirm the absence of coronary artery disease suggested by the normal perfusion test, I agreed to arrange an angiogram should they need reassurance that the test was correct. What would have ensued had we found a lesion I leave to your conjecture.
But it is clear that an overabundance of angiograms is being performed in asymptomatic patients, which more than likely leads to the performance of unnecessary PCIs in those patients. Angiography has become the “carpenter's hammer,” wielded with little regard for its benefit.
A more reasonable and effective approach to diabetes patients (as well as other asymptomatic patients) is the institution of adequate primary prevention, which has been shown to have both morbidity and mortality benefits.
On Transcatheter Aortic Valves
The natural history and pathology of aortic stenosis have been well described since the mid-18th century, beginning with Giovanni Battista Morgagni. Its latency period usually runs 6-7 decades before the classic symptoms appear. Once the symptoms of heart failure, angina, and syncope occur, the life span of patients is measured in 1-2 years.
Because of the increased number of octogenarians around these days, aortic stenosis has become a larger therapeutic problem for cardiologists. Unfortunately, when octogenarians come to the doctor with the symptoms of aortic stenosis, they usually bring a number of other comorbidities, such as coronary artery disease, diabetes, pulmonary insufficiency, and renal dysfunction, to name a few. Surgical intervention in these patients carries high risk, and both patient and surgeon are reluctant to proceed with high-risk surgery in such a complex medical environment.
The recent development of a percutaneous aortic valve that can be implanted either transvenously or transapically has provided interesting options for these elderly patients. Several transcatheter aortic valves are now available in Europe, but until the last few months there had been no randomized clinical trials evaluating their efficacy.
The two most recent trials, the PARTNER trials, using the SAPIEN heart valve system (Edwards Lifesciences), have provided an opportunity to consider the potential benefits of transcatheter aortic-valve replacement (TAVR). The first reported trial compared TAVR with standard medical therapy in patients with severe aortic stenosis deemed inoperable for traditional aortic valve replacement (AVR). A second group of patients with severe aortic stenosis was randomized to either TAVR or AVR. Both studies have provided optimism that these percutaneous devices can provide significant benefit.
The initial PARTNER study randomized 358 patients with severe aortic stenosis who were considered inoperable to either TAVR or standard medical therapy, including in some cases balloon aortic valvuloplasty (N. Engl. J. Med. 2010;363:1597-607). That trial reported 30-day mortality of 5.0% and 2.8%, and 1-year mortality of 30.7% and 50.7%, in the TAVR and standard medical therapy groups, respectively. Along with this mortality benefit, TAVR-treated patients had both symptomatic improvement and a decrease in hospitalization. There was, however, an increased occurrence of major stroke: 5.0% in the TAVR patients compared with 1.1% in the medically treated patients.
The most recent PARTNER trial, reported at the annual meeting of the American College of Cardiology, compared TAVR with standard surgical AVR in high-risk patients with severe aortic stenosis. In that trial, 699 patients with a mean aortic valve area of 0.6-0.7 cm² were randomized to either TAVR or surgical AVR, and TAVR proved noninferior to surgery with respect to 1-year mortality.
The device used in PARTNER is currently approved for use in Europe and will soon be available in the United States. Several other transcatheter valve systems are in development by device companies, and one, the CoreValve (Medtronic), is currently undergoing clinical trials in the United States. The devices included in the early trials have since been improved upon, and investigators using the Edwards Lifesciences device are currently testing the fourth generation of that valve, which is smaller and easier to pass through the femoral artery.
In addition, protection devices are being developed to deal with the observed increase in stroke morbidity. Although stroke remains a problem, emboli are not limited to the brain; some reports suggest evidence of intracoronary embolism as well.
The development of these valves is obviously on the fast track, but unfortunately little is known about their long-term durability. There are some follow-up data from Europe, where the valve has been in use for about 2 years. When weighed against the years of experience with, and the excellent durability of, current surgical AVR, there should be some reticence about applying these valves in patients at lower surgical risk.
Although the operative risks of either TAVR or AVR are acceptable considering the natural history of the disease, the long-term risk for elderly patients with aortic stenosis unfortunately remains high even after successful valve replacement.
The natural history and pathology of aortic stenosis has been well described since the mid-18th century by John Baptist Morgagni. Its latency period usually runs 6-7 decades before expressing its classic symptoms. Once the symptoms of heart failure, angina, and syncope occur, the life span of patients is measured in 1-2 years.
Because of the increased number of octogenarians around these days, aortic stenosis has become a larger therapeutic problem to cardiologists. Unfortunately, when octogenarians come to the doctor with the symptoms of aortic stenosis, they usually bring a number of other comorbidities, such as coronary artery disease, diabetes, pulmonary insufficiency, and renal dysfunction, just to name a few. Surgical intervention in these patients carries high risk and both the patient and surgeon are reluctant to proceed with high-risk surgery in such a complex medical environment.
The recent development of a percutaneous aortic valve that can be implanted either transvenously or transapically has provided interesting options for these elderly patients. Several transcatheter aortic valves are now available in Europe, but until the last few months there have been no randomized clinical trials evaluating there efficacy.
The two most recent trials, the PARTNER trials, using a SAPIEN heart valve system (Edwards Lifesciences) have provided an opportunity to consider the potential benefits of transcatheter aortic-valve replacement (TAVR). The first reported trial compared TAVR to standard medical therapy in patients with severe aortic stenosis deemed inoperable for traditional aortic valve replacement (AVR). A second group of patient with severe aortic stenosis was randomized to either TAVR or AVR. Both studies have provided optimism that these percutaneous devices can provided significant benefit.
The initial PARTNER study randomized 358 stenosis patients who were considered to be inoperable, to either TAVR or standard medical therapy including in some case balloon aortic valvulotomy (N. Engl. J. Med. 2010; 363:1597-607). That trial reported a 30-day mortality of 5.0% and 2.8% and a 1-year mortality of 30.7% and 50.7% in the TAVR and standard medical therapy groups, respectively. Associated with this improvement in mortality, there was both symptomatic improvement and decrease in hospitalization in the TAVR treated patients. There was, however, an increase occurrence of major strokes, at 5.0% in the TAVR patients compared with 1.1% in the medical patients.
The most recent PARTNER trial reported at the annual meeting of the American Cardiology compared TAVR to standard surgical AVR in patients with severe aortic stenosis. In that trial, 699 patients with mean aortic valve area of 0.6-0.7 cm
The device used in PARTNER is currently approved for use in Europe and soon to be available in the United States. Several other transcatheter valve systems are currently in development by device companies, and one, the CoreValve (Medtronics) is currently undergoing clinical trials in the United States. The devices included in the early trials have been improved upon and investigators using the Edwards Lifesciences device are currently testing the fourth generation of that valve, which is smaller and easier to pass through the femoral artery.
In addition, protection devices are being developed to deal with the observed increased stroke morbidity. Although stroke remains a problem, emboli have not been limited to the brain but some reports suggest that, there is evidence for intracoronary embolism.
The development of these valves are obviously on the fast track but unfortunately little is known about their long-term durability. There are some follow-up data from Europe where the valve has been in use for about 2 years. When weighed against the years of experience and the excellent durability of the current AVR there should be some reticence to the application of these valves in patients at better surgical risks.
Although the operative risks for either TAVR or AVR are acceptable, considering the natural history of the disease, unfortunately the long-term risks of the elderly patients with aortic stenosis remains high even after successful valve replacement.
The natural history and pathology of aortic stenosis has been well described since the mid-18th century by John Baptist Morgagni. Its latency period usually runs 6-7 decades before expressing its classic symptoms. Once the symptoms of heart failure, angina, and syncope occur, the life span of patients is measured in 1-2 years.
Because of the increased number of octogenarians around these days, aortic stenosis has become a larger therapeutic problem to cardiologists. Unfortunately, when octogenarians come to the doctor with the symptoms of aortic stenosis, they usually bring a number of other comorbidities, such as coronary artery disease, diabetes, pulmonary insufficiency, and renal dysfunction, just to name a few. Surgical intervention in these patients carries high risk and both the patient and surgeon are reluctant to proceed with high-risk surgery in such a complex medical environment.
The recent development of a percutaneous aortic valve that can be implanted either transvenously or transapically has provided interesting options for these elderly patients. Several transcatheter aortic valves are now available in Europe, but until the last few months there have been no randomized clinical trials evaluating there efficacy.
The two most recent trials, the PARTNER trials, using a SAPIEN heart valve system (Edwards Lifesciences) have provided an opportunity to consider the potential benefits of transcatheter aortic-valve replacement (TAVR). The first reported trial compared TAVR to standard medical therapy in patients with severe aortic stenosis deemed inoperable for traditional aortic valve replacement (AVR). A second group of patient with severe aortic stenosis was randomized to either TAVR or AVR. Both studies have provided optimism that these percutaneous devices can provided significant benefit.
The initial PARTNER study randomized 358 patients with severe aortic stenosis who were considered inoperable to either TAVR or standard medical therapy, including in some cases balloon aortic valvulotomy (N. Engl. J. Med. 2010;363:1597-607). That trial reported a 30-day mortality of 5.0% and 2.8%, and a 1-year mortality of 30.7% and 50.7%, in the TAVR and standard medical therapy groups, respectively. Along with this improvement in mortality, there were both symptomatic improvement and a decrease in hospitalization in the TAVR-treated patients. There was, however, an increased occurrence of major stroke, at 5.0% in the TAVR patients compared with 1.1% in the medical patients.
The most recent PARTNER trial, reported at the annual meeting of the American College of Cardiology, compared TAVR with standard surgical AVR in patients with severe aortic stenosis. In that trial, 699 high-risk patients with a mean aortic valve area of 0.6-0.7 cm² were randomized to either TAVR or surgical AVR; TAVR proved noninferior to surgery for all-cause mortality at 1 year.
The device used in PARTNER is currently approved for use in Europe and will soon be available in the United States. Several other transcatheter valve systems are in development, and one, the CoreValve (Medtronic), is currently undergoing clinical trials in the United States. The devices included in the early trials have since been improved upon; investigators using the Edwards Lifesciences device are currently testing the fourth generation of that valve, which is smaller and easier to pass through the femoral artery.
In addition, protection devices are being developed to deal with the observed stroke morbidity. Although stroke remains the major concern, emboli have not been limited to the brain; some reports suggest evidence of intracoronary embolism as well.
The development of these valves is obviously on a fast track, but unfortunately little is known about their long-term durability. There are some follow-up data from Europe, where the valve has been in use for about 2 years. Weighed against the years of experience with, and the excellent durability of, current surgical AVR, there should be some reticence about applying these valves in patients at lower surgical risk.
Although the operative risks of either TAVR or AVR are acceptable considering the natural history of the disease, the long-term risk for elderly patients with aortic stenosis unfortunately remains high even after successful valve replacement.
Coronary Revascularization in Ischemic Heart Disease
Coronary revascularization using bypass grafting with arterial or venous conduits has been with us since 1968 when Dr. Rene Favaloro performed the first saphenous venous graft for the treatment of angina pectoris (J. Thorac. Cardiovasc. Surg. 1969;58:178-85). Although it is clear that coronary artery bypass grafting (CABG) has been effective in decreasing symptomatic angina, with few exceptions there has been little to support its benefit in prolonging life. One of those exceptions was identified in a subgroup of the initial Coronary Artery Surgery Study carried out in the 1980s and sponsored by the National Heart, Lung, and Blood Institute (N. Engl. J. Med. 1985;312:1665-71). Of the 780 patients with chronic stable angina randomized to medicine only or CABG, there was a significant decrease in both angina and mortality in a subgroup of 160 patients with ejection fractions below 50%, primarily in patients with triple-vessel disease.
Since that report in 1985, there have been no clinical mortality trials examining the clinical benefit of CABG surgery in patients with ischemic heart failure. A randomized trial to evaluate the benefit of surgical ventricular reconstruction plus CABG, compared with CABG alone, failed to observe any benefit (N. Engl. J. Med. 2009;360:1705-17).
The suggestion that CABG could improve ventricular function is based on observations by Dr. Shahbudin Rahimtoola in the 1980s, in studies showing improved function in patients before and after CABG (Am. Heart J. 1989;117:211-21). He proposed the concept that areas of “hibernating myocardium” exist in the ischemic ventricle that can be revived by restoring their blood supply through CABG. But to a large degree, patients with ischemic heart failure have not been a prime target for CABG, and the potential for symptomatic improvement in heart failure has been little explored.
The recent report of the Surgical Treatment for Ischemic Heart Failure (STICH) trial has provided important information supporting the mortality and morbidity benefit of revascularization in patients with symptomatic ischemic heart failure (N. Engl. J. Med. 2011;364:1607-16). This study, also supported by the NHLBI, was carried out in 26 countries. Among the 1,212 patients randomized to standard medical therapy alone or to medical therapy plus CABG, there was no significant benefit for CABG in all-cause mortality, but there was a 19% decrease in cardiovascular mortality (P = .05) over a 3-year mean follow-up, and a 26% decrease in all-cause mortality and cardiovascular hospitalization (P less than .001). When the patients who received CABG, either by random assignment or by crossover to surgery (620), were compared with those who remained on medical therapy (592), the effects of surgery were even more impressive, with a 30% decrease in all-cause mortality (P less than .001). The patients included in STICH were severely symptomatic, almost all with significant angina, 37% in NYHA heart failure class III/IV, and a mean ejection fraction of 27%. Surgery carried an early up-front mortality risk of approximately 4%, which took about 2 years to overcome.
One interesting additional aspect of STICH was the viability study carried out in a subset of 601 patients using either dobutamine echocardiograms or SPECT stress testing. Although patients who demonstrated viability had a better outcome, viability did not define those patients who would benefit by CABG (N. Engl. J. Med. 2011;364:1617-25).
The “backstory” of the STICH trial was the failure of the U.S. cardiothoracic surgery centers to participate in it in a significant way. A total of 26 countries were required to achieve the 2,136 patients enrolled in the total STICH trial, and only 307 patients (14%) were American. The failure of the academic and large clinical centers to grasp the importance of this trial, and their reluctance to participate, was unfortunate.
The results of STICH indicate that the addition of CABG to patients already receiving optimal medical therapy provides a significant mortality and morbidity benefit. Unfortunately, viability studies do not provide helpful information in regard to the optimal selection of patients for CABG in ischemic heart failure. That decision appears to depend upon the availability of acceptable target vessels. But the data do support CABG, performed with an acceptable risk in experienced hands, as providing long-term benefits for heart failure patients.
Revascularization provides an additional mode of therapy for the treatment of patients with symptomatic ischemic heart failure, which could become a potential therapeutic target for percutaneous intervention in patients with the appropriate anatomy.
Doctor Shortage and Caribbean Medical Schools
Thirty years ago, the Graduate Medical Education National Advisory Committee predicted a surplus of 145,000 physicians, including cardiologists, by the year 2000, and recommended a limitation of the number of entering positions in U.S. medical schools and the number of international graduates coming to the United States.
Although there was no restriction placed on international graduates coming to the United States, the number of positions available for students to enter U.S. medical schools has remained static until the last 2 years. This obstruction to medical school entry led many students to seek education at offshore medical schools (OMS), particularly in the Caribbean.
The flawed predictions of a surplus of doctors were made in anticipation of an expanded role of health maintenance organizations as gatekeepers for access to both family and specialty doctors. GMENAC also failed to foresee the expansion of the elderly population as a result of the baby boomer generation and the increased availability of new diagnostic and therapeutic technologies.
It is now estimated that by 2020 or 2025 there will be a shortage of almost 200,000 doctors in the United States (J. Gen. Intern. Med. 2007;22:264–8). U.S. medical schools now graduate about 16,000 doctors annually, and that number is expected to increase by 30% by 2015, unless proposed congressional restrictions on education budgets take effect. Even so, this increase will fall short of national requirements once physician retirement is factored into the estimates.
I recently had an opportunity to visit one of the Caribbean medical schools and to observe the students in the classroom. I also learned a great deal about the role that the OMS play in mitigating the doctor shortage in the United States. The students in these schools are clearly different from those who attend American medical schools. They are distinguished not so much by their MCAT scores – as though those really matter – as by being very motivated to become doctors. Many had been out of undergraduate programs for some time – some as long as 15 years – and had tested other careers before coming to the realization that medicine was what they really wanted.
Most of these students will spend 2 years in the Caribbean and then move to clinical training in hospitals throughout the United States, ultimately entering residency programs and practice in mainland America.
One of the first hurdles that the OMS students will face is passing the United States Medical Licensing Examination taken by both U.S. and International Medical Graduates (IMGs). Measured against U.S. medical school graduates, who have a first-time passing rate of about 95%, they unfortunately fall short: The rate for non-U.S. IMGs is 73%, and that for American IMGs is lower still, at 60% (Health Aff. 2009;28:1226–33).
Upon the completion of their training, although they may go into subspecialties as do U.S. students, more of the Caribbean students enter family practice, a fact that has not been lost on health planners.
There have been some recent attempts to limit the number of training slots available for OMS students in New York City hospitals because of the presumed lack of total residency positions.
However, the state legislators, aware of current needs, have been reluctant to erect any barriers for physicians interested in family practice.
There are currently 40 OMS in the Caribbean basin (including Mexico), 24 of them started in the last 10 years. Together they graduate more than 4,000 students annually, in three classes per year varying in size from 60 to 600 students. Tuition is similar to that of U.S. schools, ranging from $47,500 to $186,085 for the 4 years. U.S. medical schools must be accredited by the Liaison Committee on Medical Education (LCME), but there is no comparable accreditation process for OMS.
The LCME is now partnering with the Caribbean Accreditation Authority for Education in Medicine and Other Health Professions to establish similar accreditation processes. Federally supported scholarships are available to U.S. citizens in the OMS just as they are for students enrolled in U.S. schools. As a result of the high tuition and relatively low overhead, some of these schools have been targets for venture capitalists.
Of the 800,000 actively practicing doctors in the United States, 23.7% are IMGs, a percentage that is sure to increase. Approximately 60% of the IMGs are from the offshore medical schools.
It is clear that the United States has become increasingly dependent on OMS to meet our doctor supply. It is also clear that a vigorous attempt to improve the certification process for OMS would go a long way to ensure the quality of our future doctors.
Guidelines and ICDs
A recent analysis of the ICD Registry from the National Cardiovascular Data Registry raises significant concerns about the effectiveness of the treatment guidelines for implantable cardioverter defibrillators for the primary prevention of sudden cardiac arrhythmic death.
That study indicates that, although the guidelines proposed by the sponsoring societies were adhered to in most implantations, a disappointing one-quarter of implantations fell outside the guidelines. Of the 111,707 ICDs implanted between 2006 and 2009, 25,145 (22.5%) were implanted in patients outside the recommended guidelines. One can certainly argue that these are only guidelines and that doctors should have the prerogative to make clinical decisions, but few guidelines have been as carefully explored as those for ICD implantation. They emphasize not only the benefit of devices implanted within the guidelines, but also the hazards of implantation outside them (JAMA 2011;305:43-9).
Four guideline deviations were identified: implantation carried out within 40 days of an acute MI, in 37%; within 3 months of coronary artery bypass surgery, in 3%; in patients with New York Heart Association class IV symptoms, in 12%; and in newly diagnosed heart failure, in 62%. Adequate randomized clinical trials clearly show the lack of benefit, or the increased risk, of implantation in these four classes of patients.
Both the cardiology community and the device manufacturers have emphasized the importance of providing ICD therapy to as many individuals who fit the implantation criteria as possible. Despite this effort, implantation of ICDs in potential candidates with systolic heart failure has lagged significantly. These results may have a further cooling effect on the rate of future implantation.
The delay in implantation in new heart failure patients has been dictated by the observation that many individuals improve cardiac function after an acute event. In addition, a number of studies have shown that implantation at the time of surgery is without merit and carries some risk, and we have learned that the mode of death in NYHA class IV patients is dominated by progressive heart failure, not primary arrhythmias.
The use of ICDs within 40 days of an acute MI is of particular importance. Several studies have raised concern about increased heart failure mortality in patients who have experienced either appropriate or inappropriate ICD discharge for arrhythmias. It is unclear whether the increased heart failure precedes or results from ICD discharge. Similar observations were made in patients in whom an ICD was implanted early after an acute MI for the primary prevention of arrhythmic death. A recent analysis of the Defibrillation in Acute Myocardial Infarction Trial (DINAMIT) re-examines the observation that heart failure mortality increased in patients who received an ICD shock (Circulation 2010;122:2645-52). DINAMIT enrolled patients with a left ventricular ejection fraction of less than 35% between 6 and 40 days after an acute MI. Although ICD therapy decreased arrhythmic deaths, there was an increase in heart failure mortality in patients who had an ICD shock, resulting in a net increase in mortality.
Both the American College of Cardiology and the Heart Rhythm Society have expressed concern about the recent report and plan to further emphasize the ICD guidelines. The HRS is cooperating with the Department of Justice in an investigation of inappropriate Medicare payments for ICDs “to lend expertise concerning the proper guidelines for clinical decision making,” according to a statement. Should the Justice Department become involved in the appropriate use of medical guidelines, an entirely new and disturbing dimension would be introduced into guideline application. In the meantime, both cardiologists and noncardiologists should rethink their decisions regarding the use of ICDs. ICDs clearly represent an important medical advance that has saved the lives of many of our patients, but their use carries significant risks that must be balanced against their benefit.
Deflating Door-to-Balloon Time
Both the American Heart Association and the American College of Cardiology have made a special effort to shorten door-to-balloon time in patients with ST-segment elevation MI in order to decrease the mortality of this high-risk group of patients.
Improvement in the logistics and quality of hospital systems has led to a significant decrease in door-to-balloon time (DBT). Systems have been initiated to effect rapid referral of ST-segment elevation MI (STEMI) patients to primary percutaneous coronary intervention (PCI) centers, either directly or by expeditious transfer of STEMI patients from facilities without interventional capability to PCI centers.
This process has been vigorously advocated by the AHA through its “Mission: Lifeline” program, aimed at improving and shortening the time to arrival of STEMI patients at PCI hospitals.
Several recent reports provide insight into the importance of shortening DBT and give us reason to re-evaluate the nuances of our strategies. A recent report from Michigan examined the Blue Cross Blue Shield database between 2003 and 2008, which included 8,771 patients. The report indicated that DBT decreased from an average of 113 minutes to 76 minutes over this period, without any impact on mortality. The proportion of patients with a DBT of less than 90 minutes was 28.5% in 2003 and 67.2% in 2008, with an observed hospital mortality of 4.1% and 3.8%, respectively (Arch. Intern. Med. 2010;170:1842-9).
The authors suggested that the failure to affect mortality by shortening the DBT was due in part to the fact that the higher-risk patients accounted for most deaths and experienced the longest symptom-to-door time.
It has been clear for some time that although expeditious hospital therapy is important, the duration between symptom onset and eventual arrival in a medical facility represents the major delay to therapy, compared with DBT. As the patient wrestles with the significance of his or her indigestion or chest pressure, valuable minutes fly by that have critical effects on patient survival.
Major efforts have been made to acquaint patients with the importance of acting on symptoms, to little avail. But the time from initial contact with the emergency care system to the patient's arrival at the hospital is a time frame that we should be able to address, according to strategies proposed by Mission: Lifeline.
In a study of 6,209 Danish patients who were followed in a registry from 2002 to 2008 in a highly structured emergency care system – unlike that of the United States, which is a system in name only – the investigators observed that the elapsed time from the call for emergency care to the ultimate arrival in the hospital had the largest impact on patient survival, and had greater importance on survival than did DBT (JAMA 2010;304:763-71). A system delay of up to 60 minutes was associated with a long-term mortality of 15.4%, whereas a delay of up to 360 minutes doubled that risk to 30.8%.
The investigators indicated that programs focusing on the time from first contact with the health system to the initiation of reperfusion will have the greatest impact on mortality.
The importance of the timeliness of early therapy (either PCI or fibrinolysis) was emphasized in a similar registry study carried out in Quebec in 80 hospitals during 2006-2007 (JAMA 2010;303:2148-55). PCI was the predominant mode of therapy for STEMI, either by direct transport to a PCI center or a transfer from a non-PCI center to a PCI center.
Delay in either therapy had a major effect on mortality, and was of particular importance in patients who were transferred from a non-PCI hospital to a PCI center. DBT in patients admitted directly to PCI centers was 83 minutes, compared with 123 minutes for transferred patients. The most striking observation was that regardless of the mode of therapy – fibrinolysis administered within 30 minutes or PCI within 90 minutes – the 30-day mortality benefit of early therapy was similar (3.3% with fibrinolysis and 3.4% with PCI).
Timing, therefore, trumps intervention.
These recent observations, developed exclusively from registry databases and not from randomized clinical trials, should give us pause to rethink our strategies. Registry data often can provide information that more closely represents actual community care.
The overemphasis on PCI for STEMI therapy has led to delay in treatment, when fibrinolysis could be just as effective. This pertains particularly to the patients who are transferred from non-PCI centers to PCI centers. More importantly, these studies emphasize the importance of developing better emergency care systems for the treatment of all patients, including those with STEMI.
CABG Volume vs. Performance
Thanks to the joint effort of the Society of Thoracic Surgeons (STS) and Consumer Reports, we can now learn about the quality of coronary artery bypass graft (CABG) surgical centers both locally and throughout the United States. The report lists how well each participating center follows the guidelines established by the STS.
Not all centers participate in the Consumer Reports review, since it is voluntary. If we don't find our local center listed, we might assume that the center is either too busy doing other things or is an outlier. Each center is rated on pre- and postoperative care and surgical mortality. Like the Michelin Guide for restaurants, each CABG center is scored on a three-star scale, with three stars for above-average performance, two for average, and one for performance below the STS standards (N. Engl. J. Med. 2010;363:1593-5). Quality measures graded, in addition to surgical mortality, include postoperative renal failure, the need for reoperation, the use of beta-blockers before and after the operation and at discharge, lipid-lowering treatment at discharge, the occurrence of stroke, duration of intubation, wound infection, and the use of the internal thoracic artery for bypass. Using this methodology, 29% of the centers were outliers, receiving only one star based on their performance during the last 3 years.
The volume of CABG surgery has leveled off over the last few years as a result of the wider use of percutaneous coronary intervention. According to the voluntary STS database, 163,149 isolated CABG procedures were performed at 955 operative sites in the United States in 2009 (DCRI executive summary), compared with 146,384 at 365 sites in 2000. The number of CABG centers has thus increased 2.5-fold in the past decade, while the number of CABG procedures has increased by only about 11% over the same period. This has resulted in a significant dilution of procedures and an increase in the number of low-volume CABG centers.
Several studies have examined the relationship between volume and CABG mortality. In the most recent, Dr. David M. Shahian of Harvard Medical School, Boston, and colleagues looked at the association of CABG volume with process of care, mortality, and morbidity in the STS database (J. Thorac. Cardiovasc. Surg. 2010;139:273-82). Of the 733 centers included in the STS voluntary reports in 2007, 18% performed fewer than 100 procedures and 38% performed fewer than 150 procedures. Surgical mortality varied from 2.6% in the low-volume centers to 1.7% (a highly significant difference) in centers performing 450 procedures or more. Previous studies have reported a variable relationship between volume and mortality. According to the authors of this most recent study, “high volume does not guarantee a better outcome in any specific program,” despite the significant difference between the high- and low-volume centers reported. It is quite possible that low-performing centers did not report their results, as only 733 of the 866 centers performing CABG surgery in 2007 were included in the study by Dr. Shahian. It is quite likely that there were more low-volume centers among the 133 centers not included in the report, and that they could have affected the mortality rates reported for low-volume centers. Because much of these data are provided on a voluntary basis, Consumer Reports' rankings may fall short of providing a complete picture of current CABG quality.
Much of the development of new open-heart surgery programs is driven by both the perception and, in some states, the requirement that surgical backup is needed in order to perform PCI. In many states, however, the availability of surgical backup is no longer a requirement. With better stents and intravascular support technology, the need for an available open-heart surgical program may no longer be relevant. The other driving force for the development of cardiosurgical programs is their marketing cachet for community hospitals, in view of the intense competition between hospitals in many communities.
The development of improved technology and the increased skills of our interventional colleagues have led to much more aggressive PCI. As a result, patients who are referred for surgery have more complex coronary artery disease that is often associated with left ventricular failure and concomitant valvular disease. It is reasonable to question the advisability of the initiation and continuation of low-volume centers. As more low-volume centers enter the CABG surgical arena, it is possible that the marginal differences previously reported might become more significant. With the availability of almost 1,000 CABG centers nationwide, it would seem reasonable to call a halt to further expansion.
Thanks to the joint effort of the Society for Thoracic Surgery and Consumer Reports, we can now learn about the quality of coronary artery bypass graft surgical centers both locally and throughout the United States. The report lists how each participating center is following the guidelines established by the STS.
Not all centers are participating in the Consumer Reports review, since it is voluntary. If we don't find our local center listed, we might assume that that center is either too busy doing other things or it is an outlier. Each center is rated on pre- and postoperative care and surgical mortality. Like the Michelin Guide for restaurants, each CABG center is scored on a three-star scale, with three stars for above average, two for average, and one for performance below the STS standards (N. Engl. J. Med. 2010;363:1593-5). Quality measures that are graded, in addition to surgical mortality, include postoperative renal failure, the need for reoperation, the use of beta-blockers before and after the operation and at discharge, lipid-lowering treatment at discharge, the occurrence of stroke, duration of intubations, wound infection, and the use of internal thoracic artery for bypass. Using this methodology, 29% of the centers were outliers, receiving only one star based on their performance during the last 3 years.
The volume of CABG surgery has leveled off over the last few years as a result of the wider use of percutaneous coronary intervention (PCI). According to the voluntary STS database, 163,149 isolated CABG procedures were performed at 955 operative sites in the United States in 2009 (DCRI executive summary), compared with 146,384 at 365 sites in 2000. The number of CABG centers has thus increased roughly 2.5-fold in the past decade, while the number of CABG procedures has grown by only about 11% over the same period. The result has been a significant dilution of procedures and an increase in the number of low-volume CABG centers.
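The dilution claim follows directly from the figures quoted above; a back-of-the-envelope check (using only the 2000 and 2009 STS totals cited in the text, with rounding assumptions noted in the comments) makes the per-center drop explicit:

```python
# Procedures-per-center check using the STS database figures quoted
# in the text: 146,384 CABGs at 365 sites in 2000 vs. 163,149 at
# 955 sites in 2009.
procedures = {2000: 146_384, 2009: 163_149}
centers = {2000: 365, 2009: 955}

per_center = {yr: procedures[yr] / centers[yr] for yr in (2000, 2009)}
growth = procedures[2009] / procedures[2000] - 1   # total growth over the decade
fold = centers[2009] / centers[2000]               # ~2.6-fold (the text rounds to 2.5)

print(f"2000: {per_center[2000]:.0f} procedures/center")  # ~401
print(f"2009: {per_center[2009]:.0f} procedures/center")  # ~171
print(f"procedure growth over the decade: {growth:.1%}")  # ~11.5%
```

The average center's annual volume falls by more than half even as total procedures grow about 11%, which is the "dilution" the text describes.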
Several studies have examined the relationship between volume and CABG mortality. In the most recent, Dr. David M. Shahian of Harvard Medical School, Boston, and colleagues examined the association of CABG volume with process of care, mortality, and morbidity in the STS database (J. Thorac. Cardiovasc. Surg. 2010;139:273-82). Of the 733 centers included in the STS voluntary reports in 2007, 18% performed fewer than 100 procedures and 38% performed fewer than 150. Surgical mortality ranged from 2.6% in the low-volume centers to 1.7% (a highly significant difference) in centers performing 450 procedures or more. Previous studies have reported a variable relationship between volume and mortality, and according to the authors of this most recent study, “high volume does not guarantee a better outcome in any specific program,” despite the significant difference reported between the high- and low-volume centers. It is quite possible that low-performing centers did not report their results, as only 733 of the 866 centers performing CABG surgery in 2007 were included in Dr. Shahian's study. It is quite likely that there were more low-volume centers in operation among the 133 centers not included in the report, and that they could have affected the mortality rates observed in the low-volume centers. Because these data are provided on a voluntary basis, Consumer Reports' rankings may fall short of providing a complete picture of current CABG quality.
Much of the development of new open-heart surgery programs is driven by the perception, and in some states the requirement, that surgical backup is needed in order to perform PCI. In many states, however, the availability of surgical backup is no longer a requirement, and with better stents and intravascular support technology, the need for on-site open-heart surgical programs may no longer be relevant. The other driving force for the development of cardiac surgical programs is their marketing cachet for community hospitals, given the intense competition between hospitals in many communities.
The development of improved technology and the increased skills of our interventional colleagues have led to much more aggressive PCI. As a result, patients who are referred for surgery have more complex coronary artery disease, often associated with left ventricular failure and concomitant valvular disease. It is reasonable to question the advisability of opening new low-volume centers and of continuing existing ones. As more low-volume centers enter the CABG surgical arena, the marginal differences previously reported might well become more significant. With almost 1,000 CABG centers already available nationwide, it would seem reasonable to call a halt to further expansion.