Apathy Persists After First Psychotic Episode

Roughly 30% of first-episode psychosis patients will experience apathy symptoms 10 years later, investigators reported in the April issue of Schizophrenia Research.

Moreover, the investigators’ findings suggest that "it is difficult to determine at baseline which patients will experience enduring levels of apathy," and that apathy at follow-up is accompanied by higher levels of psychopathology and poorer functioning, complicating efforts at targeted treatment (Schizophrenia Res. 2012;136:19-24).

Apathy, a common neuropsychiatric symptom in first-episode psychosis, is associated with "dysfunction of the prefrontal cortex and its subcortical connections," Dr. Julie Evensen and her colleagues wrote. The symptom has been associated with functional decline, worse disease course and outcome, and poor executive functioning among patients.

In the study, Dr. Evensen and her colleagues looked at 178 subjects from the TIPS study (Early Treatment and Intervention in Psychosis), a large, longitudinal study of first-episode psychosis patients from four Scandinavian sites.

All patients were adults aged 18-65 years with IQ scores greater than 70, and all completed the 12-item abridged self-report Apathy Evaluation Scale (AES-S-Apathy) at 10 years after the first psychotic episode.

Scores of 27 or greater were considered consistent with clinical apathy.

The researchers then attempted to correlate those scores with objective baseline characteristics, including symptom levels according to the Positive and Negative Syndrome Scale (PANSS).

PANSS negative component items N2 (emotional withdrawal) and N4 (passive/apathetic social withdrawal) were used as proxy measures of apathy at the assessments prior to the 10-year follow-up (baseline, 3 months, 1 year, 2 years, and 5 years), wrote Dr. Evensen of the University of Oslo.

The authors found that overall, 53 patients (29.8%) showed clinical levels of self-assessed apathy at the 10-year follow-up.

The mean score of patients with self-assessed apathy was 30.9, compared with a mean score among the nonapathy group of 18.9.

Neither patient age, nor baseline years of education, nor the duration of untreated psychosis before the first episode was predictive of 10-year apathy.

Nor did scores on the premorbid assessment of functioning scale correlate with apathy status, including on the childhood academic function, last academic function, childhood social function, and the last social function domains.

Indeed, at baseline, "Only the PANSS negative symptoms component correlated significantly with AES-S-Apathy at 10 years," wrote the authors, though they added that "this variable did not, however, survive as a significant predictor of AES-S-Apathy when entered into regression analyses."

The authors did find that, using the proxy scores of PANSS items N2 (emotional withdrawal) and N4 (passive/apathetic social withdrawal), "The nonapathy group showed a steady decrease in proxy apathy scores over the follow-up period. The apathy group, on the other hand, showed a fairly stable level."

By the 10-year mark, "Higher apathy was associated with less employment, less contact with friends and daily activities, and lower [Global Assessment of Functioning] score."

They added: "Apathy, measured by both AES-S-Apathy and PANSS items N2 and N4, showed a strong correlation with poor subjective quality of life."

Dr. Evensen postulated that one reason for the lack of any clear predictive factors for 10-year apathy could be that "the subgroup with lasting apathy becomes evident only later in the course of the illness. Our longitudinal data on proxy apathy scores support this explanation."

Another reason could be that "there might be a subgroup in our sample with enduring apathy. In this group, apathy appears to be more trait than state."

The findings showing the persistence of apathy among patients with psychotic disorders might help clinicians care for these patients, Dr. Evensen and her colleagues said. Also, they could "be a useful starting point for rehabilitative efforts."

The authors declared that they had no conflicts of interest to disclose. They wrote that the study was supported by funding from Lundbeck Pharma, Eli Lilly, and Janssen-Cilag Pharmaceuticals, as well as several nonprofit groups and municipalities.

FROM SCHIZOPHRENIA RESEARCH

Vitals

Major Finding: Ten years following their first psychotic episode, nearly one-third of patients experience enduring apathy, regardless of baseline demographics and functioning scores.

Data Source: Data were taken from the TIPS study (Early Treatment and Intervention in Psychosis), a large, longitudinal study of consecutively admitted first-episode psychosis patients in Scandinavia.

Disclosures: The authors declared that they had no conflicts of interest to disclose. They wrote that the study was supported by funding from Lundbeck Pharma, Eli Lilly, Janssen-Cilag Pharmaceuticals, and several nonprofit groups and municipalities.

Ten Biomarkers May Aid Lung Cancer Detection

A panel of 10 serum biomarkers for lung cancer could offer more accurate interpretation of nodules detected on computed tomography, potentially avoiding invasive biopsies and radiographic follow-up.

"CT-screening detection of an indeterminate pulmonary nodule, a nonspecific but frequent finding in high-risk subjects with a smoking history, creates a diagnostic dilemma," wrote investigator William L. Bigbee, Ph.D., and his colleagues in the April issue of the Journal of Thoracic Oncology.

"Although the biomarker model we described could not detect every lung cancer, it offers a significant clinical improvement over CT imaging alone ... Also, patients with nodules not identified as cancer by the model would continue to receive follow up clinical monitoring and would be biopsied if the nodules grew in size, which is the current standard of care," (J. Thorac. Oncol. 2012;7:698-708).

Dr. Bigbee of the University of Pittsburgh and his colleagues cite results of the National Lung Screening Trial (NLST), published in June 2011, which showed for the first time that low-dose CT screening of heavy smokers could reduce lung cancer mortality by 20%. But, as Dr. Bigbee et al. note in the current study, the "vast majority" of positive results in the NLST program turned out to be false after diagnostic evaluation. Moreover, smaller nodules are least likely to be malignant and least likely to be considered for biopsy or surgery.

For the current study, the researchers initially looked at a "training" set of 56 patients with non–small cell lung cancer in the University of Pittsburgh Cancer Institute Georgia Cooper Lung Research Registry. These cases were matched with 56 controls from the Pittsburgh Lung Screening Study (PLuSS), a volunteer cohort at high risk for lung cancer. All controls were known to be cancer free. The authors then analyzed serum samples from both groups for the presence of 70 potential cancer-associated biomarkers.

"Together, these biomarkers incorporate a wide range of host and tumor derived factors that allow a broad analysis of the lung cancer/host interaction, and includes a number of previously described epithelial cell cancer-associated serological markers," wrote the investigators. "The initial goal of this discovery study was to identify the most robust subset of these biomarkers to discriminate lung cancer and matched control samples."

The researchers, using a rule-learning algorithm, whittled the field of potential biomarkers down to eight: prolactin, transthyretin, thrombospondin-1, E-selectin, C-C motif chemokine 5, macrophage migration inhibitory factor, plasminogen activator inhibitor 1, and receptor tyrosine-protein kinase erbB-2.

"This rule model distinguished the lung cancer case samples from the control samples in the training set with a sensitivity of 92.9% and specificity of 87.5%," they reported.

Ultimately, two additional biomarkers were added to the panel – cytokeratin fragment 19-9 and serum amyloid A protein – and an additional set of 30 cases and 30 controls was assessed in a blinded "verification" set.

In this set, the authors calculated an overall classification performance of 73.3% sensitivity and 93.3% specificity. Only 10 misclassifications occurred among 60 predictions made. Moreover, when looking at accuracy according to patient demographic factors, the researchers found that the 10-biomarker panel was equally good at distinguishing males and females as either cases or controls and that neither current smoking status nor airway obstruction skewed the results.
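As a rough consistency check (assuming the verification set comprised 30 cases and 30 controls, as described above), the reported figures agree with the misclassification count: a sensitivity of 73.3% corresponds to 22 of 30 cancers correctly identified (8 missed), and a specificity of 93.3% corresponds to 28 of 30 controls correctly identified (2 false positives), giving 8 + 2 = 10 misclassifications among 60 predictions.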

"Age overall was not a significant factor in misclassification of cases or controls, although two of three cases aged 38-44 [years] were misclassified as controls by the 10-biomarker model," the authors concede. "This inaccuracy may result from the absence of younger subjects in the training set that included no cases younger than 46 years at diagnosis and no controls younger than 50 years."

Nor did the presence of nodules visible on CT scan confound the biomarkers’ predictive value. "In fact, those PLuSS subjects with a suspicious nodule were more often correctly classified as controls than those with no nodule or a benign nodule," wrote the authors.

They added that all nodules found in controls remained clinically noncancerous at least 3 years after initial detection, with either resolution or no further growth on subsequent CT scans.

Finally, Dr. Bigbee assessed the model’s accuracy when confronted with early- vs. late-stage tumors.

"Among stage I/II lung tumors, the 10-biomarker panel misclassified 15% of stage I/II tumors in the verification set, compared to 50% of the stage III/IV tumors, suggesting the model performs well in discriminating early-stage lung cancer," he wrote. "With a specificity of 93.3%, the 10-biomarker model [balanced accuracy] was 89.2% in stage I/II disease."

The authors conceded that the biomarker panel presented here would not suffice for general population screening. However, in a clinical context, among high-risk patients, the model "may provide clinical utility in guiding interpretation of screening CT scans, even in tobacco-exposed persons with COPD or emphysema," they wrote. "Formal validation in larger patient cohorts will be needed to confirm these initial findings."

The authors disclosed that funding for this study was supplied by grants from the National Cancer Institute. Dr. Bigbee stated that there were no personal disclosures.

FROM THE JOURNAL OF THORACIC ONCOLOGY

Vitals

Major Finding: A panel of 10 serum biomarkers for lung cancer had a 73.3% sensitivity and 93.3% specificity in a blinded verification set, with the best performance in early, stage I/II cases.

Data Source: The study compared the panel in cases from the University of Pittsburgh Cancer Institute Lung Research Registry and controls from the Pittsburgh Lung Screening Study.

Disclosures: The authors disclosed that funding for this study was supplied by grants from the National Cancer Institute. Dr. Bigbee stated that there were no personal disclosures.

Arthritis Plus Hypothyroidism Ups Women's CVD Risk

Women with both hypothyroidism and inflammatory arthritis have a nearly fourfold greater risk of developing heart disease, compared with controls.

The finding adds to "sparse" data on the long-hypothesized association between hypothyroidism and inflammatory arthritis, and also "emphasizes the need for cardiovascular risk management in this case," wrote Dr. Hennie G. Raterman and colleagues in Annals of the Rheumatic Diseases, published online March 14.

The association with an increased risk for cardiovascular disease was found in women with inflammatory arthritis and hypothyroidism but not in men.

Dr. Raterman of the department of rheumatology at the VU University Medical Center in Amsterdam and colleagues looked at more than 175,000 patients from the Netherlands Information Network of General Practice, retrieved from the electronic medical records of a representative sample of 69 general practices with 360,000 registered patients in 2006 (Ann. Rheum. Dis. 2012 [doi: 10.1136/annrheumdis-2011-200836]).

Patients younger than 30 years of age were excluded, given that cardiovascular disease is uncommon in this population. Morbidity data were derived from diagnostic coding, with codes for myocardial infarction, transient ischemic attack, and stroke/cerebrovascular accident indicating cardiovascular disease.

Overall, 1,518 patients (0.9%) had inflammatory arthritis, including 973 women.

"In both male and female patients with inflammatory arthritis, hypothyroidism prevalence rates were significantly higher than in controls: 2.4% vs. 0.8% in male patients and 6.5% vs. 3.9% in female patients," wrote the authors, with an overall prevalence of hypothyroidism among inflammatory arthritis patients of 5.01% (vs. an overall 2.39 in controls, with P less than .0005).

The authors then calculated the prevalence of cardiovascular disease in this cohort. The analysis was restricted to female patients, however, "as there were too few men with both hypothyroidism and inflammatory arthritis to yield meaningful estimates."

They found that, after adjustment for age, hypertension, diabetes, and hypercholesterolemia, women with hypothyroidism plus inflammatory arthritis had an odds ratio of 3.72 for heart disease, compared with controls (95% confidence interval, 1.74-7.95).

That compared with an odds ratio of 1.48 for inflammatory arthritis alone (95% CI, 1.10-2.00) and 1.19 for hypothyroidism alone (95% CI, 0.99-1.43); the latter confidence interval crosses 1.0, so hypothyroidism by itself was not significantly associated with heart disease.

According to Dr. Raterman, hypothyroidism acts on the cardiovascular system in several ways: it increases oxidative stress, impairs endothelial function, and decreases nitric oxide production while increasing platelet activity, thereby stimulating atherogenesis.

Additionally, "it is noteworthy that functional polymorphisms of protein tyrosine phosphatase N22 – a susceptibility factor for several autoimmune diseases such as hypothyroidism, inflammatory arthritis, and diabetes – accelerate atherosclerosis," wrote Dr. Raterman and his associates.

The authors cautioned that their study included several important limitations.

For one, "several CVD risk factors, such as lifestyle factors, family history of CVD, socioeconomic status, and ethnic background, were unavailable and could not be adjusted for in this study," they wrote.

Moreover, "we cannot exclude that part of our observed findings may be explained by an increased frequency of testing for thyroid disorders, as we and others previously described an increased prevalence of hypothyroidism in secondary care patients with RA."

Finally, they pointed out that the coding system used to identify inflammatory arthritis could not distinguish among its different types, such as rheumatoid arthritis, psoriatic arthritis, and ankylosing spondylitis.

"This association needs further elaboration in prospective studies."

The investigators stated that they had no competing interests in relation to this study and received no outside funding.

FROM ANNALS OF THE RHEUMATIC DISEASES

Vitals

Major Finding: Women who have both inflammatory arthritis and hypothyroidism have 3.72-fold greater odds of developing cardiovascular disease, compared with women with neither condition.

Data Source: Researchers used the Netherlands Information Network of General Practice, a database of more than 360,000 Dutch patients in general practice.

Disclosures: The investigators stated that they had no competing interests in relation to this study and received no outside funding.

Second Round of Fecal Testing Reveals Fewer Cancers

The positive predictive value for colorectal cancer of second-round fecal immunochemical testing was half that of first-round testing among average-risk patients, according to a report by Dr. Maaike J. Denters and colleagues in the March issue of Gastroenterology.

Moreover, there was no significant difference in positive predictive value (PPV) between participants who had performed a guaiac fecal occult blood test in the first round and those who had performed a fecal immunochemical test (FIT) in the first round (Gastroenterology 2012 March [doi:10.1053/j.gastro.2011.11.024]).

The researchers wrote that the differences between the two tests should not be overemphasized, and "no large difficulties are to be expected should a switch from a guaiac-based program to a FIT-based program be desired in screening programs currently using guaiac tests."

Dr. Denters, of the Academic Medical Centre, University of Amsterdam, and colleagues randomized 4,990 average-risk persons aged 50-74 years to a guaiac test (n = 2,119) or a FIT (n = 2,871).

Tests were sent to participants and mailed back to the researchers. No dietary instructions were included with the guaiac test. Two years later, FIT kits were sent to all participants with negative results on the first round of testing.

Participants with a positive test result after either round of testing received an invitation for a consultation at the screening center, where a colonoscopy was recommended, barring contraindications.

Overall, 293 participants tested positive on the first round – 233 in the FIT cohort and 60 in the guaiac group. Thus, the positivity rate was 8.1% for the FIT test and 2.8% for the guaiac test.
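Those positivity rates follow directly from the counts reported: 233 of the 2,871 FIT participants is about 8.1%, and 60 of the 2,119 guaiac participants is about 2.8%.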

A total of 239 of the positive patients underwent colonoscopy. In the guaiac group (n = 53), 24 had advanced adenomas as their most advanced finding. These were defined as any adenoma 10 mm or greater, or with a villous component greater than 20%, or with high-grade dysplasia. That meant there was a PPV of 45% for advanced adenomas. Eight patients had colorectal cancer (CRC), for a PPV for cancer of 15%.
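In this context, the PPV is simply the proportion of test-positive patients undergoing colonoscopy in whom the lesion in question was found; with the guaiac-group counts, 24 of 53 colonoscopied patients is about 45% for advanced adenomas, and 8 of 53 is about 15% for cancer.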

Among the FIT-positive group (n = 186), there were 88 advanced adenomas (PPV = 47%) and 12 cancers (PPV = 6%).

Overall, there were 20 cancers, representing a PPV for cancer of 8% for all first-round positive patients.

In the second round of testing, among the FIT-after-guaiac patients who underwent colonoscopy following a positive second-round test result (n = 122), there were 53 advanced adenomas (PPV = 43%) and 5 cancers (PPV = 4%).

Similarly, the FIT-after-FIT cohort had 50 advanced adenomas detected (PPV = 38%) and 4 cancers (PPV = 3%).

Overall, this totaled nine cancers, yielding a PPV for cancer of just 4% in the second round. "In the second round, fewer cases of advanced neoplasia were detected after a positive test result, while the chances of finding CRC were halved," the authors wrote.

However, "despite a significant decrease in the PPV for CRC in a second round of screening, a substantial number of significant lesions are detected in a second screening round," independent of the type of test used in the first round, although this finding applies more to advanced adenomas than to cancer, they wrote.

The authors pointed to one potential limitation: They chose a low hemoglobin cut-off level for FIT positivity (50 ng Hb/mL), compared with other studies that have used a value of 75 ng Hb/mL or even 100 ng Hb/mL.

"It is very well possible that in further screening rounds [at this level,] positivity rates will stay relatively high, resulting in many colonoscopy procedures, whereas PPV will decrease further," they wrote. "The choice for a cut-off level will be a fine balance between these two parameters and will be influenced by economic, behavioral, and other parameters, and differ per country."

The study was funded by the Netherlands Organization for Health Research and Development (ZonMw). The authors had no conflicts to disclose.

Author and Disclosure Information

Publications
Topics
Legacy Keywords
colorectal cancer, fecal immunochemical testing, positive predictive value, guaiac fecal occult blood test, FIT fecal
Author and Disclosure Information

Author and Disclosure Information

The positive predictive value for colorectal cancer of second-round fecal immunochemical testing was half that of first-round testing among average-risk patients, according to a report by Dr. Maaike J. Denters and colleagues in the March issue of Gastroenterology.

Moreover, there was no significant difference in positive predictive value (PPV) between participants who had performed a guaiac fecal occult blood test in the first round and those who had performed a fecal immunochemical test (FIT) in the first round (Gastroenterology 2012 March [doi:10.1053/j.gastro.2011.11.024]).

The researchers wrote that the differences between the two tests should not be overemphasized, and "no large difficulties are to be expected should a switch from a guaiac-based program to a FIT-based program be desired in screening programs currently using guaiac tests."

Dr. Denters, of the Academic Medical Centre, University of Amsterdam, and colleagues randomized 4,990 average-risk persons aged 50-74 years to a guaiac (n = 2,119) or FIT test (n = 2,871).

Tests were sent to participants and mailed back to the researchers. No dietary instructions were included with the guaiac test. Two years later, FIT kits were sent to all participants with negative results on the first round of testing.

Participants with a positive test result after either round of testing received an invitation for a consultation at the screening center, where a colonoscopy was recommended, barring contraindications.

Overall, 293 participants tested positive on the first round – 233 in the FIT cohort and 60 in the guaiac group. Thus, the positivity rate was 8.1% for the FIT test and 2.8% for the guaiac test.

A total of 239 of the positive patients underwent colonoscopy. In the guaiac group (n = 53), 24 had advanced adenomas as their most advanced finding. These were defined as any adenoma 10 mm or greater, or with a villous component greater than 20%, or with high-grade dysplasia. That meant there was a PPV of 45% for advanced adenomas. Eight patients had colorectal cancer (CRC), for a PPV for cancer of 15%.

Among the FIT-positive group (n = 186), there were 88 advanced adenomas (PPV = 47%) and 12 cancers (PPV = 6%).

Overall, there were 20 cancers, representing a PPV for cancer of 8% for all first-round positive patients.

In the second round of testing, among the FIT-after-guaiac patients who underwent colonoscopy following a positive second-round test result (n = 122), there were 53 advanced adenomas (PPV = 43%) and 5 cancers (PPV = 4%).

Similarly, the FIT-after-FIT cohort had 50 advanced adenomas detected (PPV = 38%) and 4 cancers (PPV = 3%).

Overall, this totaled nine cancers, yielding a PPV for cancer of just 4% in the second round. "In the second round, fewer cases of advanced neoplasia were detected after a positive test result, while the chances of finding CRC were halved," the authors wrote.

However, "despite a significant decrease in the PPV for CRC in a second round of screening, a substantial number of significant lesions are detected in a second screening round," independent of the type of test used in the first round, although this finding applies more to advanced adenomas than to cancer, they wrote.

The authors pointed to one potential limitation: They chose a low hemoglobin cut-off level for FIT positivity (50 ng Hb/mL) compared with other studies that have used a value of 75 ng Hg/mL or even 100 ng Hg/mL.

"It is very well possible that in further screening rounds [at this level,] positivity rates will stay relatively high, resulting in many colonoscopy procedures, whereas PPV will decrease further," they wrote. "The choice for a cut-off level will be a fine balance between these two parameters and will be influenced by economic, behavioral, and other parameters, and differ per country."

The study was funded by the Netherlands Organization for Health Research and Development (ZonMw). The authors had no conflicts to disclose.

The positive predictive value for colorectal cancer of second-round fecal immunochemical testing was half that of first-round testing among average-risk patients, according to a report by Dr. Maaike J. Denters and colleagues in the March issue of Gastroenterology.

Moreover, there was no significant difference in positive predictive value (PPV) between participants who had performed a guaiac fecal occult blood test in the first round and those who had performed a fecal immunochemical test (FIT) in the first round (Gastroenterology 2012 March [doi:10.1053/j.gastro.2011.11.024]).

The researchers wrote that the differences between the two tests should not be overemphasized, and "no large difficulties are to be expected should a switch from a guaiac-based program to a FIT-based program be desired in screening programs currently using guaiac tests."

Dr. Denters, of the Academic Medical Centre, University of Amsterdam, and colleagues randomized 4,990 average-risk persons aged 50-74 years to a guaiac (n = 2,119) or FIT test (n = 2,871).

Tests were sent to participants and mailed back to the researchers. No dietary instructions were included with the guaiac test. Two years later, FIT kits were sent to all participants with negative results on the first round of testing.

Participants with a positive test result after either round of testing received an invitation for a consultation at the screening center, where a colonoscopy was recommended, barring contraindications.

Overall, 293 participants tested positive on the first round – 233 in the FIT cohort and 60 in the guaiac group. Thus, the positivity rate was 8.1% for the FIT test and 2.8% for the guaiac test.

A total of 239 of the positive patients underwent colonoscopy. In the guaiac group (n = 53), 24 had advanced adenomas as their most advanced finding. These were defined as any adenoma 10 mm or greater, or with a villous component greater than 20%, or with high-grade dysplasia. That meant there was a PPV of 45% for advanced adenomas. Eight patients had colorectal cancer (CRC), for a PPV for cancer of 15%.

Among the FIT-positive group (n = 186), there were 88 advanced adenomas (PPV = 47%) and 12 cancers (PPV = 6%).

Overall, there were 20 cancers, representing a PPV for cancer of 8% for all first-round positive patients.
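
For readers who want to verify the arithmetic, the positivity rates and positive predictive values above follow directly from dividing findings by the number of participants tested or scoped. The short sketch below simply reproduces those quotients from the counts quoted in the article; it is an illustrative check, not part of the published analysis.

```python
# Illustrative arithmetic check using only the counts quoted in the article.

def pct(numerator, denominator):
    """Return a percentage rounded to one decimal place."""
    return round(100 * numerator / denominator, 1)

# First-round positivity rates
print(pct(233, 2871))  # FIT: 8.1%
print(pct(60, 2119))   # guaiac: 2.8%

# First-round positive predictive values among patients who underwent colonoscopy
print(pct(24, 53))     # guaiac group, advanced adenomas: ~45%
print(pct(8, 53))      # guaiac group, colorectal cancer: ~15%
print(pct(88, 186))    # FIT group, advanced adenomas: ~47%
print(pct(12, 186))    # FIT group, colorectal cancer: ~6%
print(pct(20, 239))    # all first-round positives, cancer: ~8%
```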

In the second round of testing, among the FIT-after-guaiac patients who underwent colonoscopy following a positive second-round test result (n = 122), there were 53 advanced adenomas (PPV = 43%) and 5 cancers (PPV = 4%).

Similarly, the FIT-after-FIT cohort had 50 advanced adenomas detected (PPV = 38%) and 4 cancers (PPV = 3%).

Overall, this totaled nine cancers, yielding a PPV for cancer of just 4% in the second round. "In the second round, fewer cases of advanced neoplasia were detected after a positive test result, while the chances of finding CRC were halved," the authors wrote.

However, "despite a significant decrease in the PPV for CRC in a second round of screening, a substantial number of significant lesions are detected in a second screening round," independent of the type of test used in the first round, although this finding applies more to advanced adenomas than to cancer, they wrote.

The authors pointed to one potential limitation: They chose a low hemoglobin cut-off level for FIT positivity (50 ng Hb/mL) compared with other studies that have used a value of 75 ng Hb/mL or even 100 ng Hb/mL.

"It is very well possible that in further screening rounds [at this level,] positivity rates will stay relatively high, resulting in many colonoscopy procedures, whereas PPV will decrease further," they wrote. "The choice for a cut-off level will be a fine balance between these two parameters and will be influenced by economic, behavioral, and other parameters, and differ per country."

The study was funded by the Netherlands Organization for Health Research and Development (ZonMw). The authors had no conflicts to disclose.

Vitals

Major Finding: The positive predictive value for colorectal cancer was halved, from 8% in a first round of fecal testing to 4% in the second round (P = .024), with fecal immunochemical testing (FIT) as the second round and either FIT or guaiac testing in the first round.

Data Source: A population-based screening study in which 4,990 asymptomatic, average-risk adults aged 50-74 years were randomized to a guaiac test or FIT in the first round, with FIT offered to all first-round negatives 2 years later.

Disclosures: The study was funded by the Netherlands Organization for Health Research and Development (ZonMw). The authors had no conflicts to disclose.

High Vitamin D Linked to 50% Lower Crohn's Risk in Women

Article Type
Changed
Fri, 12/07/2018 - 14:39
Display Headline
High Vitamin D Linked to 50% Lower Crohn's Risk in Women

Higher predicted plasma levels of vitamin D may reduce the risk for incident Crohn’s disease by half in women, reported Dr. Ashwin N. Ananthakrishnan and colleagues in the March 1 issue of Gastroenterology.

"Our results strengthen the rationale for considering vitamin D supplementation both for treatment of active CD or prevention of disease flares," wrote the authors (Gastroenterology 2012 [doi:10.1053/j.gastro.2011.11.040]).

Additionally, "our data suggest a possible role for routine screening for vitamin D deficiency or vitamin D supplementation among individuals at high risk for development of CD."

Dr. Ananthakrishnan of Harvard Medical School, Boston, analyzed data from 1,492,811 person-years of follow-up in the Nurses' Health Study, a prospective cohort begun in 1976.

For the study, the authors focused on data from the 72,719 women (median age at baseline, 53 years) who returned the 1986 questionnaire, which included data on dietary intake and physical activity.

Subjects had no prior history of CD, ulcerative colitis (UC), or cancer (except nonmelanoma skin cancer). Plasma 25(OH)D was derived using a previously published, validated regression model that includes vitamin D intake from diet and supplements, sun exposure, race, and body mass index (J. Natl. Cancer Inst. 2006;98:451-9).

When a new diagnosis of either CD or UC occurred, subjects were contacted and sent a supplemental questionnaire, and their medical records were reviewed by the researchers.

Overall, there were 122 cases of CD and 123 cases of UC documented from 1986 through June 30, 2008, with a median age at diagnosis of 64 years for CD and 63.5 years for UC.

The authors then stratified the women into quartiles based on predicted plasma 25(OH)D levels.

They found that, compared with women in the lowest quartile, women in the highest two quartiles had a significantly lower risk of CD, with a multivariate hazard ratio (HR) of 0.50 in the highest quartile (95% confidence interval, 0.28-0.90), and an HR of 0.55 in the second-highest quartile (95% CI, 0.30-1.00).

For UC, there was also a lower risk among women in the highest quartile, although it did not reach significance (HR, 0.68; 95% CI, 0.5-1.31).

Next, the authors examined the associations between risk for CD and UC and predefined 25(OH)D levels. Values greater than or equal to 30 ng/mL were defined as vitamin D sufficiency; values between 20 ng/mL and 30 ng/mL were defined as insufficiency, and values less than 20 ng/mL were classified as deficiency.
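
Those cutoffs amount to a simple three-way classification. The snippet below is a minimal sketch of the categories as defined in the study; the function name is hypothetical and is shown only to make the thresholds explicit.

```python
def vitamin_d_status(predicted_25ohd_ng_ml):
    """Classify predicted plasma 25(OH)D using the study's cutoffs."""
    if predicted_25ohd_ng_ml >= 30:
        return "sufficient"
    if predicted_25ohd_ng_ml >= 20:
        return "insufficient"
    return "deficient"

print(vitamin_d_status(32))  # sufficient
print(vitamin_d_status(25))  # insufficient
print(vitamin_d_status(15))  # deficient
```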

"Compared to women who were predicted to be vitamin D deficient, the multivariate HR of CD was 0.38 (95% CI, 0.15-0.97) for women predicted to be vitamin D sufficient," wrote the authors.

"The corresponding multivariate HR of UC was 0.57 (95% CI, 0.19-1.70) for women predicted to be vitamin D sufficient," they added.

Finally, looking at the findings another way, the authors calculated that for each 1 ng/mL increase in plasma 25(OH)D, there was a 6% relative reduction in risk of CD (multivariate HR, 0.94; 95% CI 0.89-0.99) and a nonsignificant 4% reduction in risk of UC (multivariate HR, 0.96; 95% CI 0.91-1.02).
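
Because the 0.94 figure is a per-unit hazard ratio, its effect compounds multiplicatively if risk is modeled as log-linear in 25(OH)D. As a purely illustrative example (the 10 ng/mL difference below is hypothetical, not a figure from the study):

$$\mathrm{HR}_{\Delta = 10\ \mathrm{ng/mL}} = 0.94^{10} \approx 0.54,$$

which is of the same order as the hazard ratio of 0.50 reported for the highest versus lowest quartile.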

Vitamin D intake from diet and supplements was also related to risk of CD and UC, with a 10% reduction in UC risk and a 7% reduction in CD risk for every 100 IUs consumed per day.

The authors conceded the possibility that women who take vitamin D supplements may be more likely to engage in other healthy behaviors that would minimize their likelihood of developing UC or CD.

"However, vitamin D supplement intake is only a minor determinant of vitamin D status," the authors wrote, with more than 50% of plasma 25(OH)D levels attributable to conversion in the skin.

Moreover, "compared to the lowest level of vitamin D supplement intake, intake of greater than 400 IU/day of vitamin D supplements increased plasma 25(OH)D by less than 1 ng/mL," they added.

"Thus, our overall associations with vitamin D status are unlikely to be attributable to differing propensity to use dietary supplements."

The authors reported that the study was supported by a grant from the American Gastroenterological Association and the Broad Medical Research Foundation. Two authors disclosed financial relationships with companies including Procter & Gamble, Shire Pharmaceuticals, Cytokine Pharma, Warner Chilcott, Bayer HealthCare, and Millennium Pharmaceuticals.

Vitals

Major Finding: Women in the highest quartile of predicted plasma 25(OH)D levels had a hazard ratio of 0.50 for developing Crohn’s disease, compared with women in the lowest quartile (95% confidence interval, 0.28-0.90).

Data Source: A total of 1,492,811 person-years of follow-up from the Nurses’ Health Study, a prospective cohort begun in 1976.

Disclosures: The authors reported that the study was supported by a grant from the American Gastroenterological Association and the Broad Medical Research Foundation. Two authors disclosed financial relationships with companies including Procter & Gamble, Shire Pharmaceuticals, Cytokine Pharma, Warner Chilcott, Bayer HealthCare, and Millennium Pharmaceuticals.

Ablative Lasers Effective, but Painful for Treating Acne Scars

Article Type
Changed
Fri, 06/11/2021 - 10:21
Display Headline
Ablative Lasers Effective, but Painful for Treating Acne Scars

Ablative fractional laser resurfacing is more effective than nonablative therapy for the treatment of acne scarring, albeit with greater side effects and pain, according to a new review.

Ms. Michal Wen Sheue Ong and Dr. Saquib Bashir, of King’s College Hospital NHS Foundation Trust in London, conducted literature searches using the PubMed and Scopus databases for English-language articles published between 2003 and January 2011 that reported on "acne scars" and "fractional photothermolysis."

A total of 26 papers, published between 2008 and 2011, met the criteria – 13 papers on ablative fractional lasers and an equal number on nonablative fractional lasers (Br. J. Dermatol. 2012 Feb. 1 [doi: 10.1111/j.1365-2133.2012.10870.x]).

"Most ablative studies reported a percentage of improvement within the range of 26% to 75%," they wrote. In two cases, studies claimed 79.8% and 83% mean improvement, although the reviewers questioned the appropriateness of using mean values rather than medians, given that "the properties of the ordinal scales were unknown and points on the scale were not necessarily equidistant."

The nonablative studies reported an improvement range of 26% to 50%.

Only four studies were split-face randomized controlled trials, and most had follow-up of just 1-6 months; only one study included a 2-year follow-up. Moreover, the methods and rating scales used to measure improvement varied widely.

Only five studies analyzed the histological degree of scar improvement, but in these, new collagen formation was noted with both ablative and nonablative fractional photothermolysis.

In one of the nonablative studies included in the review, an increase in the elastic fiber framework of the papillary dermis, as well as the upper and mid dermis, was noted 12 weeks after the final treatment (Photodermatol. Photoimmunol. Photomed. 2009;25:138-42).

Similarly, using 3D optical profiling, a study of ablative laser resurfacing showed a marked, statistically significant improvement in skin smoothness and scar volume 1 month after treatment. "However, there were no further improvements of skin smoothness or scar volume at 3- and 6-months follow-up," wrote the authors (J. Am. Acad. Dermatol. 2010;63:274-83).

Looking at side effects, "A higher proportion of patients (up to 92.3%) who undertake ablative FP [fractional photothermolysis] experience post inflammatory hyperpigmentation (PIH) than those who have nonablative FP (up to 13%)," wrote the reviewers, with a maximum duration of PIH of up to 6 months in ablative FP, vs. 1 week in nonablative treatment.

Pain ratings for nonablative procedures were also lower, compared with ablative procedures. "The mean pain score reported across ablative FP studies ranged from 5.9-8.1 (scale of 10)," reported the authors. In contrast, the mean pain scores for nonablative FP procedures were rated 3.9-5.7.

The authors pointed out that they found no evidence regarding the effects of FP lasers on patients’ psychological status and quality of life. "This information can be useful and should be obtained before and after treatment," they wrote.

Among the limitations of the review, the authors noted that none of the methods used to assess clinical outcomes had been evaluated for validity or reliability. For the most part, however, the results were promising.

"Most studies had clinicians/dermatologists to assess overall scar improvement, and there were some studies which had patient assessment," they wrote. In many cases the evaluators were blinded, but at least three studies used evaluators who were not.

Fractional photothermolysis laser resurfacing improves facial acne scarring, despite dramatic methodological variability in efficacy studies. Nevertheless, more studies are needed, especially split-face, randomized controlled trials using objective assessment measures of improvement, like histological or 3D optical profiling, they concluded.

The review authors reported having no outside funding and stated that they had no conflicts of interest to declare.

Vitals

Major Finding: Most ablative studies reported a percentage of improvement within the range of 26% to 75%, compared with a reported range of improvement of 26% to 50% for the nonablative studies.

Data Source: A review of 26 studies on ablative and nonablative fractional photothermolysis for facial acne scars.

Disclosures: The review authors reported having no outside funding and stated that they had no conflicts of interest to declare.

Accelerated Decrease in HBsAg Seen Before HBV Clearance

Article Type
Changed
Fri, 12/07/2018 - 14:39
Display Headline
Accelerated Decrease in HBsAg Seen Before HBV Clearance

Serum hepatitis B surface antigen (HBsAg) seroclearance is preceded by significant, accelerated decreases in serum HBsAg levels during the 3 years leading up to seroclearance, wrote Dr. Yi-Cheng Chen and colleagues in the March issue of Clinical Gastroenterology and Hepatology.

Moreover, serum HBsAg levels of less than 200 IU/mL are highly predictive of spontaneous seroclearance.

"It is therefore recommended to quantify HBsAg every 2 years in hepatitis B e antigen-negative HBsAg carriers with persistently normal alanine aminotransferase levels to detect who requires yearly HBsAg assay for the anticipated HBsAg seroclearance" (Clin. Gastroenterol. Hepatol. 2012 [doi:10.1016/j.cgh.2011.08.029]).

Dr. Chen of the Liver Research Unit at Chang Gung University, Taipei, Taiwan, studied 46 patients who had undergone spontaneous HBsAg seroclearance, defined as "loss of serum HBsAg documented on two occasions at least 6 months apart and maintained to the last visit."

Prior to clearance, all patients had been hepatitis B e antigen (HBeAg)–negative for between 8 and 28 years, and all but seven patients, who had mild ALT elevation, had persistently normal alanine aminotransferase (PNALT) levels for a mean of 14.2 years.

Their median age at clearance was 48 years, and 87% were male.

"Considering that sustained remission is a prerequisite of HBsAg seroclearance, a group of 46 HBeAg-negative, noncirrhotic HBV carriers with PNALT for greater than 10 years were selected as [a] control group for comparison," wrote the researchers.

Controls were matched for age, gender, and genotype and all had PNALT for greater than 10 years but remained HBsAg seropositive.

Levels of HBsAg were assessed in saved serum specimens collected at 5 years, 3 years, and 1 year before HBsAg seroclearance (or, for controls, the last examination).

The authors found that median HBsAg levels at all time points were significantly lower in the seroclearance group, compared with controls.

For example, at 5 years before seroclearance, the study group’s median HBsAg level was 154.8 IU/mL, vs. 1,361 IU/mL for controls; at 3 years, the median values were 56.3 IU/mL vs. 1,063.3 IU/mL; and at 1 year prior to HBsAg seroclearance, levels were 1.6 IU/mL vs. 642.6 IU/mL (P less than .0001 for all time points).

Not only were the values much lower in the study group at all time points, but also the rate of change from year to year was significantly greater in the seroclearance group, compared with controls.

Indeed, the authors calculated the estimated annual decline of HBsAg to be 0.53 log10 IU/mL for the study group, significantly higher than 0.09 log10 IU/mL per year in the matched controls (P less than .0001).
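
That slope is roughly what the group medians quoted above imply. As a back-of-the-envelope check based on the medians rather than the authors' patient-level estimate:

$$\frac{\log_{10}(154.8) - \log_{10}(1.6)}{5 - 1} \approx \frac{2.19 - 0.20}{4} \approx 0.50\ \log_{10}\ \mathrm{IU/mL\ per\ year},$$

consistent with the reported estimate of 0.53 log10 IU/mL per year.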

Finally, the authors looked at the link between the serum HBsAg levels themselves and achievement of clearance.

"In the study group, HBsAg levels declined to less than 200 and less than 100 IU/mL in 55% and 37% of the patients, respectively, at 5 years prior to HBsAg seroclearance," wrote the authors.

"The proportion of patients reaching these levels of HBsAg increased to 81% and 64% at 3 years; [and to] 100% and 98% at 1 year prior to HBsAg seroclearance."

Taken together, "A greater than 1 log10 IU/mL decline in HBsAg during a 2-year period, combined with a single time point HBsAg level less than or equal to 200 IU/mL, can best predict HBsAg seroclearance in 1 and 3 years," with a positive predictive value for clearance in 1 year of 97%, and the positive predictive value for clearance at 3 years of 100%.

The authors mentioned several limitations. "Matching age, gender, and HBV genotype for this study might mask the potential influence of these factors on HBsAg seroclearance," they wrote.

Additionally, some stored serum specimens were of insufficient volume for analysis.

"Nevertheless, more than 90% of stored serum specimens were available at the 1- and 3-year time points, with 83% being available at the 5-year time point."

The authors stated that the study was supported by grants from the Chang Gung Medical Research Fund and the Prosperous Foundation. One of the researchers disclosed serving as a global advisory board member or being involved with clinical trials sponsored by Roche, Bristol-Myers Squibb, Novartis, and Gilead Sciences.

Vitals

Major Finding: In the 3 years before hepatitis B surface antigen seroclearance, the estimated annual decline of the surface antigen was 0.53 log10 IU/mL, significantly higher than the 0.09 log10 IU/mL annual decline seen in matched controls (P less than .0001).

Data Source: A retrospective study of 46 patients who underwent spontaneous seroclearance of HBsAg.

Disclosures: The authors stated that the study was supported by grants from the Chang Gung Medical Research Fund and the Prosperous Foundation. One of the researchers disclosed serving as a global advisory board member or being involved with clinical trials sponsored by Roche, Bristol-Myers Squibb, Novartis and Gilead Sciences.

Demand for Screening Colonoscopy Down During Recession

Article Type
Changed
Wed, 05/26/2021 - 14:04
Display Headline
Demand for Screening Colonoscopy Down During Recession

Screening colonoscopy rates dropped significantly among insured Americans during the recession of December 2007 to June 2009, compared with prerecession rates, according to a report published in the March issue of Clinical Gastroenterology and Hepatology.

"These findings reflect the intimate link between socioeconomic factors and health care use," wrote Dr. Spencer D. Dorn of the University of North Carolina at Chapel Hill and his colleagues.

"When faced with economic insecurity, asymptomatic individuals may be unable to afford screening colonoscopy, or may perceive it to be less important than competing demands for their more limited resources," they noted.

"Down the line, this may increase health care costs and the proportion of individuals diagnosed with late-stage colorectal cancer," the authors added (Clin. Gastroenterol. Hepatol. 2012 March [doi: 10.1016/j.cgh.2011.11.020]).

Dr. Dorn and his colleagues analyzed data from a random sample of all persons aged 50-64 years across 106 commercial health plans in the United States with at least 6 months of continuous health plan enrollment.

The investigators then compared trends in the monthly rate of screening colonoscopies (per 1 million eligible beneficiaries) prior to the recession (January 2005 to November 2007) with rates during the recession (December 2007 to June 2009).

The time periods were defined by the National Bureau of Economic Research.

"We hypothesized a priori that there would not be an abrupt decline in screenings given the recession’s gradual onset and therefore chose to only model changes in the trend of screenings before and during the recession," they wrote.

Indeed, Dr. Dorn and his coauthors found that prior to the recession, screening colonoscopies increased at a rate of 38.2 more colonoscopies per 1 million insured individuals per month (95% confidence interval, 32.4-43.9).

However, during the recession, screening colonoscopy use declined at a rate of 30.7 fewer colonoscopies per 1 million insured individuals per month (95% CI, –42.2 to –19.1).

"In sum, compared to what would have been expected based on prerecession trends, during the recession, screening colonoscopy use declined at a rate of 68.9 (95% CI, –84.6 to –53.1) fewer colonoscopies per 1 million insured individuals per month (P less than .001)," the authors wrote.

They also looked at utilization rates stratified according to low versus high out-of-pocket cost plans.

They found that during the recession, among low-cost plans ($50 or less per month), screening colonoscopies declined at a rate of 58.1 fewer colonoscopies than expected per 1 million insured patients.

However, among high-cost plans ($300 or more per month), the decline was even more pronounced, with 81.5 fewer colonoscopies than expected per 1 million insured patients (P = .035).

"Applying the decreased utilization documented here to the 39.5 million commercially insured, 50- to 64-year-old Americans, over the entire 19-month recession period, this would have resulted in 516,309 fewer colonoscopies than what would have been expected based on prerecession trends," they added.

"Policies to reduce patient cost sharing for colonoscopy and other recommended, cost-effective preventive services should be considered," Dr. Dorn and his colleagues concluded.

The study was supported in part by grants from the National Institutes of Health and the Crohn’s and Colitis Foundation of America. The authors stated that they had no relevant financial disclosures.

Vitals

Major Finding: During the recent recession (December 2007 to June 2009), screening colonoscopy use declined at a rate of 68.9 fewer colonoscopies per 1 million insured individuals per month, compared with what would have been expected based on prerecession trends.

Data Source: A time-series analysis using health insurance claims data from 106 commercial health plans across the United States.

Disclosures: The study was supported in part by grants from the National Institutes of Health and the Crohn’s and Colitis Foundation of America. The authors stated that they had no relevant financial disclosures.

Long-Term Mortality No Higher for Living Liver Donors

Article Type
Changed
Fri, 01/18/2019 - 11:39
Display Headline
Long-Term Mortality No Higher for Living Liver Donors

Long-term mortality risk following live donor liver donation was nearly identical to that of matched live kidney donors as well as healthy, demographically matched controls, according to a report by Dr. Abimereki D. Muzaale and colleagues published in the February issue of Gastroenterology.

"Donor-related complications not resulting in immediate death or acute liver failure do not seem to result in decreased long-term longevity," they wrote.

Dr. Muzaale, of Johns Hopkins University, Baltimore, followed the 4,111 patients in the United States who had donated a portion of their livers between April 1, 1994, and March 31, 2011. The donors were followed for a median of 7.6 years (Gastroenterology 2012 February [doi:10.1053/j.gastro.2011.11.015]).

Most livers (77%) were donated to a biological relative of the donor. Nonspousal, nonbiologically related donations made up 17% of the total, and spousal donations made up the remaining 6%.

"All living donors had reportedly excellent hepatic and renal function" at the time of transplant, the researchers wrote. Body mass index was greater than 30 in 15% of the donors. At some point, 22% had smoked cigarettes, and 90% of the donors were under 50 years old.

This cohort was compared with live kidney donors, who were matched to the liver donors according to year of donation, age, gender, race, educational background, and BMI. Finally, liver donors also were compared with a third cohort of healthy adults, similarly matched, from the National Health and Nutrition Examination Survey III (NHANES III). None of the NHANES III participants had comorbidities that might have deemed them ineligible for liver donation, according to the authors.

All deaths were determined from the Social Security Death Master File.

Overall, the researchers found that in the first 90 days after donation, 7 living liver donors died, for a rate of 1.7 deaths per 1,000 donors (95% confidence interval, 0.7-3.5). This rate was higher than that among matched kidney donors (0.5 deaths per 1,000 donors; 95% CI, 0.1-1.8), but the difference was not statistically significant (P = .09).
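
As a quick check of that rate (assuming it is simply deaths divided by donors), 7 deaths among 4,111 donors works out to 7/4,111 ≈ 0.0017, or about 1.7 deaths per 1,000 donors.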

"By contrast, as would be anticipated, living liver donors had a significantly higher risk of early death than matched NHANES III participants who likely did not undergo a surgical procedure in their first 90 days of follow-up (P = .008)," wrote the authors.

Among the decedents, neither donor age, recipient age, portion of liver donated, center volume at time of donor death, nor cause of death correlated with 90-day mortality.

Causes of death included anaphylaxis, multiorgan failure, infection, drug overdose and suicide, and cardiovascular and respiratory arrest.

Beyond 90 days, however, the differences in mortality among the cohorts narrowed further.

For example, at 2 years, live liver donors had a cumulative mortality of 0.3%, compared with 0.2% among kidney donors and 0.3% among healthy NHANES III participants.

At 5 years, the results were similar, with all three cohorts registering a 0.4% cumulative mortality.

By 9 years, liver donors could expect a 0.9% mortality rate, compared with 1.0% among the kidney donors and 0.8% among healthy controls.

Finally, at 11 years out, the respective mortality rates were once again similar or identical among cohorts, with 1.2% for liver donors, 1.2% for kidney donors, and 1.4% for healthy controls.

The authors conceded that the study was limited by the fact that live liver donors were "meticulously screened and likely more healthy than NHANES-III participants.

"The ideal comparison group would have included healthy individuals who were cleared for donation but did not proceed to donation," they wrote. "However, this comparison group was not available."

Dr. Muzaale and the authors of this study stated that they had no financial disclosures. They added that the Organ Procurement and Transplantation Network is supported by the U.S. Health Resources and Services Administration.

Vitals

Major Finding: At 11 years, the respective mortality rates were 1.2% for liver donors, 1.2% for kidney donors, and 1.4% for controls.

Data Source: A cohort comparison study of all 4,111 live liver donors in the United States over a 17-year period, matched kidney donors, and matched healthy adults from the National Health and Nutrition Examination Survey III (NHANES III).

Disclosures: Dr. Muzaale and the authors of this study stated that they had no financial disclosures. The Organ Procurement and Transplantation Network is supported by the U.S. Health Resources and Services Administration.

Fatigue in Cirrhosis Linked to Psychosocial Factors

Article Type
Changed
Fri, 12/07/2018 - 14:34
Display Headline
Fatigue in Cirrhosis Linked to Psychosocial Factors

Cirrhosis patients have significantly more fatigue than do matched controls from the general population, and this fatigue often persists 1 year after liver transplantation, reported Dr. Evangelos Kalaitzakis and colleagues in the February issue of Clinical Gastroenterology and Hepatology.

Moreover, that fatigue is highly correlated with depression and anxiety, and it impairs quality of life, the authors wrote.

"Psychological distress was found to be a major determinant of fatigue in cirrhosis," the authors concluded.

Dr. Kalaitzakis of the University of Gothenburg (Sweden) studied 108 cirrhosis patients seen at a single institution between May 2004 and April 2007. The patients’ mean age was 52 years, and 36 of the 108 were women.

At study entry, all patients completed the Fatigue Impact Scale (FIS), which assesses fatigue in the physical, psychosocial, and cognitive domains and gives a total fatigue score. Patients were compared with a matched, random sample of the general Swedish population who were mailed identical surveys.

Patients scoring greater than two standard deviations above the general population cohort on the FIS were classified as being fatigued (Clin. Gastro. Hepatol. 2012 [doi:10.1016/j.cgh.2011.07.029]).
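
Expressed as a rule (a minimal formalization, assuming the cutoff was computed from the control cohort's mean and standard deviation of total FIS scores):

\[
\text{fatigued} \iff \mathrm{FIS}_{\text{patient}} > \mu_{\text{controls}} + 2\,\sigma_{\text{controls}},
\]

where \(\mu_{\text{controls}}\) and \(\sigma_{\text{controls}}\) denote the mean and standard deviation of the matched general-population sample.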

At baseline, cirrhosis patients scored significantly higher than controls on the total FIS score, as well as on the physical, psychosocial, and cognitive domains (P less than .001 for all).

Study participants also completed the Hospital Anxiety and Depression Scale (HAD). Significant depression and anxiety on the HAD were both highly correlated with total fatigue as measured by the FIS (P less than .001).

Indeed, compared with controls, cirrhosis patients were more likely to have borderline or significant anxiety (12% vs. 21% and 8% vs. 16%, respectively, P = .034) and borderline or significant depression (9% vs. 23% and 6% vs. 14%, respectively, P = .001), Dr. Kalaitzakis and associates reported.

Clinical factors played a role as well, according to the analysis. In univariate analysis, higher Child-Pugh class was significantly correlated with overall fatigue, as were current ascites or history of ascites (P less than .001 for all).

Current overt hepatic encephalopathy also was significantly correlated with overall fatigue, although somewhat less so than the other factors (P less than .05).

Factors not significantly related to total fatigue included liver disease etiology, existence of stable or bleeding varices, and malnutrition, the investigators said.

In fact, in multivariate analysis, FIS scores were related only to depression, anxiety, Child-Pugh score, and low serum cortisol levels, they wrote.

Overall, 66 of the 108 patients underwent liver transplantation, and follow-up data at 1 year were available for 60 of them.

"FIS domain and total scores had improved 1 year post-transplant, but transplant recipients still had higher physical fatigue compared to controls," Dr. Kalaitzakis and associates noted.

Of the 37 patients whose FIS scores before transplant had classified them as physically fatigued, 17 (46%) continued to be fatigued after transplant, wrote the authors.

Compared with the patients whose fatigue levels dropped post transplant, these 17 patients once again were more likely to have significant or borderline depression at baseline, according to the HAD (15% vs. 35% and 15% vs. 41%, respectively; P = .019).

"Psychological distress was found to be a major determinant of fatigue in cirrhosis," the authors concluded.

Although some previous studies have found that antidepressants do not improve fatigue, at least in cancer patients, "our findings ... indicate that patients with cirrhosis and significant anxiety or depression confirmed by a psychiatrist may benefit from specific treatment for these disorders, which could lead to improvement in fatigue," they wrote. "However, this would need to be formally tested in interventional trials."

Dr. Kalaitzakis and associates stated that they had no conflicts of interest to disclose and no grant support for this study.

Vitals

Major Finding: Depression and anxiety were both highly correlated (P less than .001) with fatigue in cirrhosis patients, even 1 year post transplantation.

Data Source: A prospective, observational study of 108 cirrhosis patients, 66 of whom received a liver transplant, at a single center in Sweden.

Disclosures: The authors stated that they had no conflicts of interest and no grant support for this study.